Turning data into actionable insights: a data-driven SEO strategy

Modern SEO is all about data. Rankings can change overnight, user behavior shifts just as fast, and search engines increasingly use AI to power their results. To respond effectively, your decisions should be guided by real, measurable insights. This article offers a practical way to turn SEO data into actionable insights.

The role of data in modern SEO

The search landscape is more complex than ever, so you need all the help you can get. By analyzing data, SEOs and business owners can understand what works and what doesn’t. Metrics from tools like Google Analytics and Search Console provide glimpses of visitor behavior, keyword usage, and page performance. Using data to make decisions takes the guesswork out of SEO work.

Good data gives you a clear picture of user engagement. For instance, tracking engagement time, engagement rates, and click-through rates will reveal whether content meets audience needs. These are crucial data insights that uncover gaps that might hinder performance. Data-driven insights help you understand what to focus on and what to prioritize.

Data doesn’t just identify issues; it also reveals opportunities. Trends in keyword performance or a shift in traffic sources can lead to new content ideas or a new market to target. This is data-driven marketing: you are making decisions based on evidence instead of hunches. These insights lead to strategies focused on real user behavior, which should lead to better results.


The goal isn’t to find interesting stats — it’s to find what you can do next. In SEO and AI-driven search, the data that matters is the data that leads to action: fix this page, shift that content, change how you’re showing up. If your insights don’t lead to decisions, they’re just noise.

Carolyn Shelby – Principal SEO at Yoast


A Yoast example

Let’s take a simple example from Yoast. We noticed one of our articles (What is SEO?) was gradually losing traffic and slipping in the rankings for key terms. The content hadn’t been updated for a while, so we took a closer look. We analyzed the search results and compared our article with those from competitors. We looked at intent, structures, relevance, and freshness. It was easy to see that our article lacked depth and context in key areas.

We wrote a good brief for the article and detailed the work needed. Then, we rewrote sections, updated examples, improved internal linking, and made it generally easier to read. We also added new custom graphics and on-topic expert quotes from our in-house Principal SEO, Alex Moss.

After republishing, the article quickly regained visibility. Plus, it climbed back towards the top of the search results, which brought in extra traffic. This was a clear reminder for us: when data shows a drop, improving the quality of the content, backed by a good analysis, can still win.

An example of going from data to actionable insights to results

Turning data into insights

You need a process to quickly and systematically turn raw data into valuable insights. Eventually, you’ll get these insights once you ask the right SEO questions, gather the data, analyze it, and plan accordingly. 


Start with your goals, then ask: what’s holding us back? Actionable insights live in the gap between where you are and where you’re trying to go. That gap is different for every site and that’s what makes good analysis so powerful.

Carolyn Shelby – Principal SEO at Yoast


Step 1: What do you want to know?

Start by writing down the SEO questions you want answered. Do you want to improve performance, get more organic traffic, or achieve better engagement? Analyze a traffic drop? For instance, an online store owner might want to understand why certain product pages don’t convert as well as expected. Thinking these things through before you start digging into the data makes it easier to focus on the metrics that matter.

Step 2: Gather the relevant data

Collect the data you need using tools like Google Analytics, Semrush, Wincher, Ahrefs, or other platforms that can power your data-driven SEO strategy. If you’d like to investigate a product page with subpar performance, look at page views, click-through rates, average engagement times, and engagement rates in GA4. Data like this should point you toward the issues to find and address.
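
If you prefer to pull this data programmatically, the GA4 Data API can return the same metrics for every page in one request. Here is a minimal sketch; the property ID is a placeholder, and the metric list is just one reasonable starting point for the product-page question above.

```python
# A minimal sketch of pulling page-level engagement metrics via the GA4
# Data API. Requires the google-analytics-data package and credentials;
# GA4_PROPERTY_ID below is a placeholder, not a real property.
from google.analytics.data_v1beta import BetaAnalyticsDataClient
from google.analytics.data_v1beta.types import (
    DateRange, Dimension, Metric, RunReportRequest,
)

GA4_PROPERTY_ID = "123456789"  # hypothetical property ID

client = BetaAnalyticsDataClient()
request = RunReportRequest(
    property=f"properties/{GA4_PROPERTY_ID}",
    dimensions=[Dimension(name="pagePath")],
    metrics=[
        Metric(name="screenPageViews"),
        Metric(name="engagementRate"),
        Metric(name="userEngagementDuration"),
    ],
    date_ranges=[DateRange(start_date="30daysAgo", end_date="today")],
)
response = client.run_report(request)

for row in response.rows:
    print(row.dimension_values[0].value,
          [m.value for m in row.metric_values])
```

From there, filtering to the underperforming product pages is a simple comparison against your site-wide averages.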

Step 3: Analyze and spot trends

Dive into the data and try to spot patterns and trends. For example, an educational site might notice that articles on a particular topic get a lot of traffic but low engagement. Digging deeper might reveal that the titles attract visitors, but for some reason, the content doesn’t keep them interested. Trends like these help turn data into insights you can act upon. You can also use techniques like segmentation to find differences between groups, such as visitors from specific regions, who may engage very differently with your content.
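
A quick way to surface that “high traffic, low engagement” pattern is to sort exported page data by the gap between the two. A small sketch, assuming a CSV export with hypothetical column names:

```python
# A small sketch: flag pages that attract traffic but fail to hold
# attention. Assumes pages.csv is an analytics export with the
# (hypothetical) columns page_path, pageviews, and engagement_rate.
import pandas as pd

df = pd.read_csv("pages.csv")

# "Popular" here means top quartile by pageviews; "weak" means an
# engagement rate below the site-wide median. Tune both thresholds.
popular = df["pageviews"] >= df["pageviews"].quantile(0.75)
weak = df["engagement_rate"] < df["engagement_rate"].median()

candidates = df[popular & weak].sort_values("pageviews", ascending=False)
print(candidates[["page_path", "pageviews", "engagement_rate"]].head(10))
```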

Step 4: Turn findings into actions

Once you’ve pinpointed the issues, it’s time to decide what to do. For instance, if you’ve found that an article has a low engagement rate because the page loads slowly, you could optimize the images and scripts on the page. Or, if you find that some keywords get traffic but no conversions, you might need to improve the CTA on the page, or fix a search intent mismatch. This step is what turns data insights into actionable insights.

This is a nicely structured way of getting the insights needed to inform your data-driven SEO strategy. You can use every piece of information you find to improve your work as you go. This will not only help you understand the data but also make it easier to make the improvements needed to reach your SEO and business goals. 

An example: Addressing brand performance in LLMs

For this example, think of a tech publisher named Digital Mosaic. It’s a reputable source for in-depth news from the tech industry. Recently, their marketing team noticed something off. Users interacting with AI search engines and large language models (LLMs) like Google Gemini or ChatGPT rarely saw mentions of the Digital Mosaic brand. In other words, even when asked for the latest tech insights, the AI-driven sources and answers often omitted Digital Mosaic in favor of other options. 

After finding the issue, the team started analyzing data from various analytics platforms, brand mention trackers, and user surveys. They found their SEO and content work was pretty good, but the content was not properly optimized to help LLMs surface it. The data showed that their content lacked the language and brand signals needed to help LLMs understand the brand’s authority. 

When they found this, the team got to work to improve how LLMs perceive their content:

Improving brand signals

The content team added clearer brand signals to their content, and each post received better metadata and structured data. The goal was to clearly tie the brand to the content to help LLMs recognize the source.
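
Structured data is one concrete way to make those brand signals machine-readable. Below is a minimal sketch that generates schema.org Article markup with an explicit publisher; the brand and URLs belong to the fictional example, and the author name is a placeholder.

```python
# A minimal sketch: generate schema.org Article JSON-LD that explicitly
# ties a post to its publisher. Digital Mosaic is this article's fictional
# brand; the URLs and author are placeholders.
import json

article_jsonld = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Today's Tech Insights by Digital Mosaic",
    "author": {"@type": "Person", "name": "Jane Doe"},  # hypothetical
    "publisher": {
        "@type": "Organization",
        "name": "Digital Mosaic",
        "url": "https://www.digitalmosaic.example",
        "logo": {
            "@type": "ImageObject",
            "url": "https://www.digitalmosaic.example/logo.png",
        },
    },
    "datePublished": "2025-05-22",
}

# Embed the result in the page's <head> as a JSON-LD script tag.
print('<script type="application/ld+json">')
print(json.dumps(article_jsonld, indent=2))
print("</script>")
```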

Changes in content

Next, the team restructured certain articles to include branded segments, such as “Digital Mosaic Exclusive Analysis” or “Today’s Tech Insights by Digital Mosaic”. This makes the brand more visible to users and gives LLMs a chance to associate the content with the brand as a trusted source.

Investing in partnerships and collaboration 

The publisher set up a series of collaborations with well-known tech influencers and other outlets. They made co-branded content and were mentioned in many podcasts and webinars. This helped improve the brand’s presence in online conversations, which matters because LLMs lean heavily on what third-party sites say about brands when generating responses.

Rinse and repeat 

The team reviewed the changes’ performance to see whether brand mentions in LLMs would improve. They used AI brand monitoring tools to track and simulate LLM outputs and check whether the work was effective. Based on their findings, they fine-tuned their work and continued to improve performance.

Within a few months, the results were encouraging. LLMs were increasingly showing content from and mentioning Digital Mosaic, and the brand’s footprint in LLMs was steadily improving. This not only helped visibility and increased the brand’s authority in the industry, but also opened up a new source of traffic from AI search interfaces.

This fictional example shows how a publisher can use data insights to overcome a very specific challenge. Mixing traditional SEO solutions with new technologies helped Digital Mosaic turn data into actionable insights. Not only did it help the brand’s visibility right now, but it also prepared it for the AI-powered future.

Read more: How to optimize content for AI LLM comprehension using Yoast’s tools. 

Tools and technologies to use

You need the right tools to turn data into actionable insights. This will be a mix of the tools we all know and love, and more specific ones to understand user behavior and site performance.

We all start with Google Analytics 4 and Search Console. GA4 tracks many metrics, including user engagement, event counts, and traffic sources. Properly set up, it gives you a good overview of how visitors use your site. Search Console shows how your site performs in the SERPs, including keyword rankings, indexing status, and crawl errors.

Tools like Ahrefs and Semrush provide information about backlinks, rankings, and search trends. These search marketing tools also have many features for competitive analysis and keyword research. You’ll get a big database of historical data, so you can spot and interpret trends over time. This data helps you with your data-driven marketing on all fronts. 

Looker Studio is a great tool to tie various data sources together and build dashboards

Advanced techniques and technologies

There are so many options to dive ever deeper into your data to find the insights you need. Beyond the basics, you can use:

  • Segmentation: It can help to break your data into specific audience segments. For instance, you could look at visitor behavior based on demographics, location, or the type of device they use. Segmenting data helps you understand why certain groups behave differently. For instance, if mobile users show lower engagement than desktop users, there might be something wrong with your mobile site (see the sketch after this list).
  • Trend analysis: Don’t just focus on looking at data for a specific day. It’s often better to look at metrics over different time periods. Look at the monthly or quarterly performance. This gives you an idea of the long-term impact of changes. 
  • Build dashboards to visualize data: Make a dashboard with data from various sources. Use tools like Looker Studio to combine Google data with SEO tools like Semrush and Ahrefs. This will give you reports that will show all key data at a glance. A dashboard makes it easier to understand data and communicate it with other team members or management. 
  • Big data: Big data is becoming increasingly important for data-driven SEO. Huge data sets can provide insights that smaller sets can overlook. They allow you to examine user behavior, search trends, and site performance at scale. With machine learning and automation, you can use big data to get better and faster results to inform your SEO strategy.
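
As a concrete illustration of the segmentation idea above, an analytics export broken down by device category can be compared directly. A minimal sketch, assuming a CSV with hypothetical column names:

```python
# A minimal sketch of device segmentation: compare engagement by device
# to see whether mobile underperforms desktop. Assumes sessions.csv is an
# analytics export with (hypothetical) columns device_category, sessions,
# and engagement_rate.
import pandas as pd

df = pd.read_csv("sessions.csv")

by_device = (
    df.groupby("device_category")
      .agg(sessions=("sessions", "sum"),
           avg_engagement=("engagement_rate", "mean"))
      .sort_values("avg_engagement")
)
print(by_device)

# A large gap between mobile and desktop engagement is a cue to audit
# mobile page speed, layout, and intrusive interstitials.
```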

Iterative optimization and reporting

SEO is an ongoing process, and you’ll have to adjust course regularly. Don’t treat your site’s performance as a snapshot, but as something dynamic that evolves over time. Regularly looking at your data keeps you on top of things, from changes in user behavior to emerging search trends. 

Make it a routine

Schedule when you review data. This might be daily checks for urgent work or weekly reviews to track short-term changes. For long-term trends, do monthly or quarterly deep dives. Routine analysis helps you spot patterns that might not be so obvious at first glance.

Test and experiment

With an iterative optimization approach, you test what works. For example, you could A/B test different page layouts, CTA buttons, or various meta titles. You might also try different content formats to see what gets more engagement. These tests will get you the data and insights needed to make the most of your SEO work.   
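
Before acting on a test result, it’s worth checking that the difference isn’t just noise. A minimal sketch using a two-proportion z-test from statsmodels, on hypothetical click counts for two meta titles:

```python
# A minimal sketch: check whether variant B's click-through rate beats
# variant A's by more than chance. All counts below are hypothetical.
from statsmodels.stats.proportion import proportions_ztest

clicks = [130, 168]         # clicks for variants A and B
impressions = [4800, 4750]  # impressions for variants A and B

stat, p_value = proportions_ztest(clicks, impressions)
print(f"z = {stat:.2f}, p = {p_value:.4f}")

# A common (if arbitrary) threshold: treat p < 0.05 as a real difference.
if p_value < 0.05:
    print("The variants differ; consider adopting the winner.")
else:
    print("No clear winner yet; keep the test running.")
```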

Feedback loop

A true feedback loop helps validate your improvements. After turning data into actionable insights, implement the changes in your content or technical SEO work. Keep updating your data to see if you need to refine your strategy. If a new tactic works, adopt it as a standard practice. But if it doesn’t work as intended, find out why and try a variation of it. Measuring trial and error and adapting your tactics makes you flexible and responsive.

Internet marketing tools like Wincher give key data points about your content’s performance, like rankings

Towards a data-driven SEO strategy

Using the knowledge you gain from turning data into actionable insights can greatly improve your SEO performance. Be sure to structure the data-gathering process: ask the right questions, collect the right data, analyze the trends, and create a system that turns those insights into action. 

Exactly what you change on your site matters less; it might be updating metadata, improving content, or diving into technical SEO aspects. What matters is that the work correctly answers the questions you set out to answer.

Every insight can lead to big improvements in rankings and user engagement. Use this data-driven marketing approach to make the right decisions that will keep your SEO strategy effective in the future.

Three takeaways about AI’s energy use and climate impacts

This week, we published Power Hungry, a package all about AI and energy. At the center of this package is the most comprehensive look yet at AI’s growing power demand, if I do say so myself. 

This data-heavy story is the result of over six months of reporting by me and my colleague James O’Donnell (and the work of many others on our team). Over that time, with the help of leading researchers, we quantified the energy and emissions impacts of individual queries to AI models and tallied what it all adds up to, both right now and for the years ahead. 

There’s a lot of data to dig through, and I hope you’ll take the time to explore the whole story. But in the meantime, here are three of my biggest takeaways from working on this project. 

1. The energy demands of AI are anything but constant. 

If you’ve heard estimates of AI’s toll, it’s probably a single number associated with a query, likely one to OpenAI’s ChatGPT. One popular estimate is that writing an email with ChatGPT uses 500 milliliters (or roughly a bottle) of water. But as we started reporting, I was surprised to learn just how much the details of a query can affect its energy demand. No two queries are the same—for several reasons, including their complexity and the particulars of the model being queried.

One key caveat here is that we don’t know much about “closed source” models—for these, companies hold back the details of how they work. (OpenAI’s ChatGPT and Google’s Gemini are examples.) Instead, we worked with researchers who measured the energy it takes to run open-source AI models, for which the source code is publicly available. 

But using open-source models, it’s possible to directly measure the energy used to respond to a query rather than just guess. The researchers we worked with generated text, images, and video and measured the energy the chips running the models required to perform each task.

Even just within the text responses, there was a pretty large range of energy needs. A complicated travel itinerary consumed nearly 10 times as much energy as a simple request for a few jokes, for example. An even bigger difference comes from the size of the model used. Larger models with more parameters used up to 70 times more energy than smaller ones for the same prompts. 

As you might imagine, there’s also a big difference between text, images, and video. Videos generally took hundreds of times more energy to generate than text responses.

2. What’s powering the grid will greatly affect the climate toll of AI’s energy use. 

As the resident climate reporter on this project, I was excited to take the expected energy toll and translate it into an expected emissions burden. 

Powering a data center with a nuclear reactor or a whole bunch of solar panels and batteries will not affect our planet the same way as burning mountains of coal. To quantify this idea, we used a figure called carbon intensity, a measure of how dirty a unit of electricity is on a given grid. 

We found that the same exact query, with the same exact energy demand, will have a very different climate impact depending on what the data center is powered by, and that depends on the location and the time of day. For example, querying a data center in West Virginia could cause nearly twice the emissions of querying one in California, according to calculations based on average data from 2024.
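
The underlying arithmetic is simple: emissions are energy multiplied by the grid’s carbon intensity. Here’s a sketch with purely illustrative round numbers, not figures from our reporting:

```python
# Illustrative only: the same 1 Wh query produces different emissions
# depending on the grid's carbon intensity (grams of CO2 per kWh).
# The intensities below are made-up round numbers, not measured values.
energy_kwh = 1.0 / 1000  # a hypothetical 1 Wh query

grids = {
    "coal-heavy grid": 700,         # gCO2 per kWh (illustrative)
    "mixed grid": 370,              # gCO2 per kWh (illustrative)
    "mostly-renewables grid": 100,  # gCO2 per kWh (illustrative)
}

for name, intensity in grids.items():
    grams = energy_kwh * intensity
    print(f"{name}: {grams:.2f} g CO2 per query")
```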

This point shows why it matters where tech giants are building data centers, what the grid looks like in their chosen locations, and how that might change with more demand from the new infrastructure. 

3. There is still so much that we don’t know when it comes to AI and energy. 

Our reporting resulted in estimates that are some of the most specific and comprehensive out there. But ultimately, we still have no idea what many of the biggest, most influential models are adding up to in terms of energy and emissions. None of the companies we reached out to were willing to provide numbers during our reporting. Not one.

Adding up our estimates can only go so far, in part because AI is increasingly everywhere. While today you might generally have to go to a dedicated site and type in questions, in the future AI could be stitched into the fabric of our interactions with technology. (See my colleague Will Douglas Heaven’s new story on Google’s I/O showcase: “By putting AI into everything, Google wants to make it invisible.”)

AI could be one of the major forces that shape our society, our work, and our power grid. Knowing more about its consequences could be crucial to planning our future. 

To dig into our reporting, give the main story a read. And if you’re looking for more details on how we came up with our numbers, you can check out this behind-the-scenes piece.

There are also some great related stories in this package, including one from James Temple on the data center boom in the Nevada desert, one from David Rotman about how AI’s rise could entrench natural gas, and one from Will Douglas Heaven on a few technical innovations that could help make AI more efficient. Oh, and I also have a piece on why nuclear isn’t the easy answer some think it is.

Find them, and the rest of the stories in the package, here.

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

A new atomic clock in space could help us measure elevations on Earth

In 2003, engineers from Germany and Switzerland began building a bridge across the Rhine River simultaneously from both sides. Months into construction, they found that the two sides did not meet. The German side hovered 54 centimeters above the Swiss side.

The misalignment occurred because the German engineers had measured elevation with a historic level of the North Sea as its zero point, while the Swiss ones had used the Mediterranean Sea, which was 27 centimeters lower. We may speak colloquially of elevations with respect to “sea level,” but Earth’s seas are actually not level. “The sea level is varying from location to location,” says Laura Sanchez, a geodesist at the Technical University of Munich in Germany. (Geodesists study our planet’s shape, orientation, and gravitational field.) While the two teams knew about the 27-centimeter difference, they mixed up which side was higher. Ultimately, Germany lowered its side to complete the bridge. 

To prevent such costly construction errors, in 2015 scientists in the International Association of Geodesy voted to adopt the International Height Reference Frame, or IHRF, a worldwide standard for elevation. It’s the third-dimensional counterpart to latitude and longitude, says Sanchez, who helps coordinate the standardization effort. 

Now, a decade after its adoption, geodesists are looking to update the standard—by using the most precise clock ever to fly in space.

That clock, called the Atomic Clock Ensemble in Space, or ACES, launched into orbit from Florida last month, bound for the International Space Station. ACES, which was built by the European Space Agency, consists of two connected atomic clocks, one containing cesium atoms and the other containing hydrogen, combined to produce a single set of ticks with higher precision than either clock alone. 

Pendulum clocks are only accurate to about a second per day, as the rate at which a pendulum swings can vary with humidity, temperature, and the weight of extra dust. Atomic clocks in current GPS satellites will lose or gain a second on average every 3,000 years. ACES, on the other hand, “will not lose or gain a second in 300 million years,” says Luigi Cacciapuoti, an ESA physicist who helped build and launch the device. (In 2022, China installed a potentially stabler clock on its space station, but the Chinese government has not publicly shared the clock’s performance after launch, according to Cacciapuoti.) 

From space, ACES will link to some of the most accurate clocks on Earth to create a synchronized clock network, which will support its main purpose: to perform tests of fundamental physics. 

But it’s of special interest for geodesists because it can be used to make gravitational measurements that will help establish a more precise zero point from which to measure elevation across the world.

Alignment over this “zero point” (basically where you stick the end of the tape measure to measure elevation) is important for international collaboration. It makes it easier, for example, to monitor and compare sea-level changes around the world. It is especially useful for building infrastructure involving flowing water, such as dams and canals. In 2020, the international height standard even resolved a long-standing dispute between China and Nepal over Mount Everest’s height. For years, China said the mountain was 8,844.43 meters; Nepal measured it at 8,848. Using the IHRF, the two countries finally agreed that the mountain was 8,848.86 meters. 

A worker performs tests on ACES in a cleanroom at the Kennedy Space Center in Florida.
ESA-T. PEIGNIER

To create a standard zero point, geodesists create a model of Earth known as a geoid. Every point on the surface of this lumpy, potato-shaped model experiences the same gravity, which means that if you dug a canal at the height of the geoid, the water within the canal would be level and would not flow. Distance from the geoid establishes a global system for altitude.

However, the current model lacks precision, particularly in Africa and South America, says Sanchez. Today’s geoid has been built using instruments that directly measure Earth’s gravity. These have been carried on satellites, which excel at getting a global but low-resolution view, and have also been used to get finer details via expensive ground- and airplane-based surveys. But geodesists have not had the funding to survey Africa and South America as extensively as other parts of the world, particularly in difficult terrain such as the Amazon rainforest and Sahara Desert. 

To understand the discrepancy in precision, imagine a bridge that spans Africa from the Mediterranean coast to Cape Town, South Africa. If it’s built using the current geoid, the two ends of the bridge will be misaligned by tens of centimeters. In comparison, you’d be off by at most five centimeters if you were building a bridge spanning North America. 

To improve the geoid’s precision, geodesists want to create a worldwide network of clocks, synchronized from space. The idea works according to Einstein’s theory of general relativity, which states that the stronger the gravitational field, the more slowly time passes. The 2014 sci-fi movie Interstellar illustrates an extreme version of this so-called time dilation: Two astronauts spend a few hours in extreme gravity near a black hole to return to a shipmate who has aged more than two decades. Similarly, Earth’s gravity grows weaker the higher in elevation you are. Your feet, for example, experience slightly stronger gravity than your head when you’re standing. Assuming you live to be about 80 years old, over a lifetime your head will age tens of billionths of a second more than your feet. 

A clock network would allow geodesists to compare the ticking of clocks all over the world. They could then use the variations in time to map Earth’s gravitational field much more precisely, and consequently create a more precise geoid. The most accurate clocks today are precise enough to measure variations in time that map onto centimeter-level differences in elevation. 
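
For the record, the effect the clocks exploit is tiny but calculable. To first order, two clocks separated in height by \(\Delta h\) differ in rate by the fraction \(g\,\Delta h/c^{2}\). A back-of-the-envelope sketch for a one-centimeter height difference:

\[
\frac{\Delta f}{f} \approx \frac{g\,\Delta h}{c^{2}} = \frac{(9.8\ \mathrm{m/s^{2}})(0.01\ \mathrm{m})}{(3.0\times 10^{8}\ \mathrm{m/s})^{2}} \approx 1\times 10^{-18}
\]

So mapping elevation at the centimeter level means comparing clock rates to about one part in 10^18, which is why only the most precise clocks are up to the job.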

“We want to have the accuracy level at the one-centimeter or sub-centimeter level,” says Jürgen Müller, a geodesist at Leibniz University Hannover in Germany. Specifically, geodesists would use the clock measurements to validate their geoid model, which they currently do with ground- and plane-based surveying techniques. They think that a clock network should be considerably less expensive.

ACES is just a first step. It is capable of measuring altitudes at various points around Earth with 10-centimeter precision, says Cacciapuoti. But the point of ACES is to prototype the clock network. It will demonstrate the optical and microwave technology needed to use a clock in space to connect some of the most advanced ground-based clocks together. In the next year or so, Müller plans to use ACES to connect to clocks on the ground, starting with three in Germany. Müller’s team could then make more precise measurements at the location of those clocks.

These early studies will pave the way for work connecting even more precise clocks than ACES to the network, ultimately leading to an improved geoid. The best clocks today are some 50 times more precise than ACES. “The exciting thing is that clocks are getting even stabler,” says Michael Bevis, a geodesist at Ohio State University, who was not involved with the project. A more precise geoid would allow engineers, for example, to build a canal with better control of its depth and flow, he says. However, he points out that in order for geodesists to take advantage of the clocks’ precision, they will also have to improve their mathematical models of Earth’s gravitational field. 

Even starting to build this clock network has required decades of dedicated work by scientists and engineers. It took ESA three decades to make a clock as small as ACES that is suitable for space, says Cacciapuoti. This meant miniaturizing a clock the size of a laboratory into the size of a small fridge. “It was a huge engineering effort,” says Cacciapuoti, who has been working on the project since he began at ESA 20 years ago. 

Geodesists expect they’ll need at least another decade to develop the clock network and launch more clocks into space. One possibility would be to slot the clocks onto GPS satellites. The timeline depends on the success of the ACES mission and the willingness of government agencies to invest, says Sanchez. But whatever the specifics, mapping the world takes time.

The Download: the desert data center boom, and how to measure Earth’s elevations

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

The data center boom in the desert

In the high desert east of Reno, Nevada, construction crews are flattening the golden foothills of the Virginia Range, laying the foundations of a data center city.

Google, Tract, Switch, EdgeCore, Novva, Vantage, and PowerHouse are all operating, building, or expanding huge facilities nearby. Meanwhile, Microsoft has acquired more than 225 acres of undeveloped property, and Apple is expanding its existing data center just across the Truckee River from the industrial park.

The corporate race to amass computing resources to train and run artificial intelligence models and store information in the cloud has sparked a data center boom in the desert—and it’s just far enough away from Nevada’s communities to elude wide notice and, some fear, adequate scrutiny. Read the full story.

—James Temple

This story is part of Power Hungry: AI and our energy future—our new series shining a light on the energy demands and carbon costs of the artificial intelligence revolution. Check out the rest of the package here.

A new atomic clock in space could help us measure elevations on Earth

In 2003, engineers from Germany and Switzerland began building a bridge across the Rhine River simultaneously from both sides. Months into construction, they found that the two sides did not meet. The German side hovered 54 centimeters above the Swiss one.

The misalignment happened because they measured elevation from sea level differently. To prevent such costly construction errors, in 2015 scientists in the International Association of Geodesy voted to adopt the International Height Reference Frame, or IHRF, a worldwide standard for elevation.

Now, a decade after its adoption, scientists are looking to update the standard—by using the most precise clock ever to fly in space. Read the full story.

—Sophia Chen

Three takeaways about AI’s energy use and climate impacts

—Casey Crownhart

This week, we published Power Hungry, a package all about AI and energy. At the center of this package is the most comprehensive look yet at AI’s growing power demand, if I do say so myself.

This data-heavy story is the result of over six months of reporting by me and my colleague James O’Donnell (and the work of many others on our team). Over that time, with the help of leading researchers, we quantified the energy and emissions impacts of individual queries to AI models and tallied what it all adds up to, both right now and for the years ahead.

There’s a lot of data to dig through, and I hope you’ll take the time to explore the whole story. But in the meantime, here are three of my biggest takeaways from working on this project. Read the full story.

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

MIT Technology Review Narrated: Congress used to evaluate emerging technologies. Let’s do it again.

Artificial intelligence comes with a shimmer and a sheen of magical thinking. And if we’re not careful, politicians, employers, and other decision-makers may accept at face value the idea that machines can and should replace human judgment and discretion.

One way to combat that might be resurrecting the Office of Technology Assessment, a Congressional think tank that detected lies and tested tech until it was shuttered in 1995.

This is our latest story to be turned into a MIT Technology Review Narrated podcast, which we’re publishing each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 OpenAI is buying Jony Ive’s AI startup
The former Apple design guru will work with Sam Altman to design an entirely new range of devices. (NYT $)
+ The deal is worth a whopping $6.5 billion. (Bloomberg $)
+ Altman gave OpenAI staff a preview of its AI ‘companion’ devices. (WSJ $)
+ AI products to date have failed to set the world alight. (The Atlantic $)

2 Microsoft has blocked employee emails containing ‘Gaza’ or ‘Palestine’
Although the term ‘Israel’ does not trigger such a block. (The Verge)
+ Protest group No Azure for Apartheid has accused the company of censorship. (Fortune $)

3 DOGE needs to do its work in secret
That’s what the Trump administration is claiming to the Supreme Court, at least. (Ars Technica)
+ It’s trying to avoid being forced to hand over internal documents. (NYT $)
+ DOGE’s tech takeover threatens the safety and stability of our critical data. (MIT Technology Review)

4 US banks are racing to embrace cryptocurrency
Ahead of new stablecoin legislation. (The Information $)
+ Attendees at Trump’s crypto dinner paid over $1 million for the privilege. (NBC News)
+ Bitcoin has surged to an all-time peak yet again. (Reuters)

5 China is making huge technological leaps
Thanks to the billions it’s poured into narrowing the gap between it and the US. (WSJ $)
+ Nvidia’s CEO has branded America’s chip curbs on China ‘a failure.’ (FT $)
+ There can be no winners in a US-China AI arms race. (MIT Technology Review)

6 Disordered eating content is rife on TikTok
But a pocket of creators are dedicated to debunking the worst of it. (Wired $)

7 The US military is interested in the world’s largest aircraft
The gigantic WindRunner plane will have an 80-metre wingspan. (New Scientist $)
+ Phase two of military AI has arrived. (MIT Technology Review)

8 How AI is shaking up animation
New tools are slashing the costs of creating episodes by up to 90%. (NYT $)
+ Generative AI is reshaping South Korea’s webcomics industry. (MIT Technology Review)

9 Tesla’s Cybertruck is a flop
Sorry, Elon. (Fast Company $)
+ The vehicles’ resale value is plummeting. (The Daily Beast)

10 Google’s new AI video generator loves this terrible joke
Which appears to originate from a Reddit post. (404 Media)
+ What happened when 20 comedians got AI to write their routines. (MIT Technology Review)

Quote of the day

“It feels like we are marching off a cliff.”

—An unnamed software engineering vice president jokes that future developers conferences will be attended by the AI agents companies like Microsoft are racing to deploy, Semafor reports.

One more thing

What does GPT-3 “know” about me?

One of the biggest stories in tech is the rise of large language models that produce text that reads like a human might have written it.

These models’ power comes from being trained on troves of publicly available human-created text hoovered up from the internet. If you’ve posted anything even remotely personal in English on the internet, chances are your data might be part of some of the world’s most popular LLMs.

Melissa Heikkilä, MIT Technology Review’s former AI reporter, wondered what data these models might have on her—and how it could be misused. So she put OpenAI’s GPT-3 to the test. Read about what she found.

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Don’t shoot the messenger, but it seems like there’s a new pizza king in town 🍕 ($)
+ Ranked: every Final Destination film, from worst to best.
+ Who knew that jelly could help to preserve coral reefs? Not I.
+ A new generation of space archaeologists are beavering away to document our journeys to the stars.

Anthropic’s new hybrid AI model can work on tasks autonomously for hours at a time

Anthropic has announced two new AI models that it claims represent a major step toward making AI agents truly useful.

AI agents trained on Claude Opus 4, the company’s most powerful model to date, raise the bar for what such systems are capable of by tackling difficult tasks over extended periods of time and responding more usefully to user instructions, the company says.

Claude Opus 4 has been built to execute complex tasks that involve completing thousands of steps over several hours. For example, it created a guide for the video game Pokémon Red while playing it for more than 24 hours straight. The company’s previously most powerful model, Claude 3.7 Sonnet, was capable of playing for just 45 minutes, says Dianne Penn, product lead for research at Anthropic.

Similarly, the company says that one of its customers, the Japanese technology company Rakuten, recently deployed Claude Opus 4 to code autonomously for close to seven hours on a complicated open-source project. 

Anthropic achieved these advances by improving the model’s ability to create and maintain “memory files” to store key information. This enhanced ability to “remember” makes the model better at completing longer tasks.
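
Anthropic hasn’t published the mechanics, so as a loose illustration only, the general “memory file” pattern looks something like this: the agent persists key facts outside the context window and reloads them at each step. Everything below is a hypothetical sketch, not Anthropic’s implementation.

```python
# A hypothetical sketch of the agent "memory file" pattern: persist key
# facts between steps and reload them when context runs out. This is a
# generic illustration, not Anthropic's implementation.
import json
from pathlib import Path

MEMORY_PATH = Path("agent_memory.json")  # hypothetical location

def load_memory() -> dict:
    if MEMORY_PATH.exists():
        return json.loads(MEMORY_PATH.read_text())
    return {"facts": [], "progress": ""}

def save_memory(memory: dict) -> None:
    MEMORY_PATH.write_text(json.dumps(memory, indent=2))

def run_step(task: str, memory: dict) -> dict:
    # In a real agent, the model would be prompted with the task plus the
    # stored facts, then asked to return updated notes for the next step.
    memory["facts"].append(f"observation recorded while working on: {task}")
    memory["progress"] = f"last worked on: {task}"
    return memory

memory = load_memory()
memory = run_step("document the route to Cerulean City", memory)  # toy task
save_memory(memory)
```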

“We see this model generation leap as going from an assistant to a true agent,” says Penn. “While you still have to give a lot of real-time feedback and make all of the key decisions for AI assistants, an agent can make those key decisions itself. It allows humans to act more like a delegator or a judge, rather than having to hold these systems’ hands through every step.”

While Claude Opus 4 will be limited to paying Anthropic customers, a second model, Claude Sonnet 4, will be available for both paid and free tiers of users. Opus 4 is being marketed as a powerful, large model for complex challenges, while Sonnet 4 is described as a smart, efficient model for everyday use.  

Both of the new models are hybrid, meaning they can offer a swift reply or a deeper, more reasoned response depending on the nature of a request. While they calculate a response, both models can search the web or use other tools to improve their output.

AI companies are currently locked in a race to create truly useful AI agents that are able to plan, reason, and execute complex tasks both reliably and free from human supervision, says Stefano Albrecht, director of AI at the startup DeepFlow and coauthor of Multi-Agent Reinforcement Learning: Foundations and Modern Approaches. Often this involves autonomously using the internet or other tools. There are still safety and security obstacles to overcome. AI agents powered by large language models can act erratically and perform unintended actions—which becomes even more of a problem when they’re trusted to act without human supervision.

“The more agents are able to go ahead and do something over extended periods of time, the more helpful they will be, if I have to intervene less and less,” he says. “The new models’ ability to use tools in parallel is interesting—that could save some time along the way, so that’s going to be useful.”

As an example of the sorts of safety issues AI companies are still tackling, agents can end up taking unexpected shortcuts or exploiting loopholes to reach the goals they’ve been given. For example, they might book every seat on a plane to ensure that their user gets a seat, or resort to creative cheating to win a chess game. Anthropic says it managed to reduce this behavior, known as reward hacking, in both new models by 65% relative to Claude Sonnet 3.7. It achieved this by more closely monitoring problematic behaviors during training, and improving both the AI’s training environment and the evaluation methods.

New Ecommerce Tools: May 22, 2025

We publish a rundown each week of new products from companies offering services to ecommerce merchants. This installment includes updates on virtual shopping, agentic commerce, analytics, dropshipping, omnichannel payments, logistics, loyalty programs, promotions, and gamification.

Got an ecommerce product release? Email releases@practicalecommerce.com.

New Tools for Merchants

Roblox launches APIs for in-game shopping of physical items. Roblox, an online gaming platform and virtual universe, is expanding its selling capabilities with Commerce APIs and Shopify as its first integrated partner. Eligible creators and brands can sell physical products within Roblox experiences. Roblox’s new Approved Merchandiser Program connects physical shopping with virtual goods and benefits.

Screenshot of a Roblox game

Talon.One brings advanced promotions and loyalty capabilities to Shopify Enterprise. Talon.One, an incentive engine for loyalty, promotions, and gamification, has partnered with Shopify. Per Talon.One, brands that use both Talon.One and Shopify Enterprise have a single point of access to over 30 incentive tools, including personalized offers, gamified rewards, and advanced loyalty programs.

Zoho enters U.S. payments market. Technology platform Zoho has launched a unified payment solution for online businesses using multiple payment methods such as cards and ACH. The solution offers native payment capabilities within Zoho, enabling companies to manage payments directly within their existing financial workflows. Pricing for U.S. domestic cards is 2.9% + 30¢ per transaction, including Visa, Mastercard, AmEx, and Discover. Pricing for international cards is 1.5% plus the domestic card fee.

Nuvei joins the European Payments Initiative to launch Wero for ecommerce. Nuvei, a Canadian fintech company, has announced its membership in the European Payments Initiative, becoming one of the first payment services providers to enable ecommerce businesses to accept Wero, EPI’s digital wallet, in their checkouts. Wero facilitates instant account-to-account transfers using SEPA Instant Credit Transfer protocols, allowing users to send money within 10 seconds via mobile numbers, email addresses, or QR codes. Wero has attracted over 40 million users since its launch in 2024.

Nuvei’s Wero on a smartphone

Perplexity selects PayPal to power agentic commerce. Perplexity, an AI answer engine, has partnered with PayPal to power agentic commerce across its Pro platform. Starting this summer in the U.S., consumers can check out instantly with PayPal or Venmo when they ask Perplexity to find products, book travel, or buy tickets. The process — payment, shipping, tracking, invoicing — will occur behind the scenes with PayPal’s account linking, secure tokenized wallet, and emerging passkey checkout flows.

WooCom Made Easy achieves HIPAA compliance for healthcare integration. HIC Global Solutions, a Salesforce consulting company, has announced WooCom Made Easy, an integration software connecting WooCommerce and Salesforce, is now HIPAA compliant. The update enables healthcare providers, companies, and businesses handling sensitive patient data to sync their ecommerce and customer-management platforms without compromising privacy. All data transfers are now encrypted and compliant with HIPAA guidelines, protecting personal health information while powering real-time sync and multi-store operations.

Amplitude and Twilio launch partnership for co-selling and deeper integration. Amplitude, a digital analytics platform, has partnered with Twilio Segment, a customer data platform. Amplitude recommends Twilio Segment as an optimal customer data platform, and Twilio recommends Amplitude as the digital analytics provider of choice. Amplitude has also released ready-made dashboards that connect its platform to Twilio Segment’s CDP.

Home page of Amplitude

Kibo Commerce debuts unified marketplace and dropship solution. Kibo, a composable commerce platform, has introduced an integrated marketplace and dropship tool to help retailers and brands expand product assortments and reduce operational costs without managing physical inventory. With Kibo’s tools, clients can onboard third-party vendors using self-service registration portals or targeted invitation workflows. Kibo says its integrated tools help merchants upload product catalogs, manage pricing, and fulfill orders, while analytics help operators track KPIs, vendor performance, and gross merchandise value.

Global Payments launches Genius POS Platform. Global Payments, a provider of payment technology and software solutions, has announced the release of Genius, a point-of-sale command center for business operations. According to Global Payments, the Genius platform enables global expansion and vertical specialization at scale. It also unlocks opportunities in specialized retail segments such as age-regulated, higher education, and consumer service businesses.

Global-e and Shopify sign new multi-year strategic partnership agreement. Global-e, a direct-to-consumer ecommerce provider, and Shopify have announced a three-year renewal of their partnership for 1P (Shopify Managed Markets) and 3P MoR (merchant of record) solutions to empower international D2C transactions on the Shopify platform. For 1P, Global-e remains the exclusive provider of MoR services. Future versions of Managed Markets will leverage Shopify Payments and other elements of the Shopify suite of services.

Stord acquires Ware2Go, a specialty warehouse business, from UPS. Stord, a provider of high-volume fulfillment services and ecommerce technology, announced its acquisition of Ware2Go, a subsidiary of UPS. With this acquisition, Stord adds 21 fulfillment centers to its network and tech platform (warehouse management, order management, customer service). Ware2Go brings an additional 2.5 million square feet of fulfillment centers, complementing Stord’s existing North American footprint and partner network.

Home page of Stord

Google Claims AI Search Delivers ‘Quality Clicks’ Despite Traffic Loss via @sejournal, @MattGSouthern

Google executives are trying to reframe the conversation about AI-powered search features as industry data reveals significant website traffic reductions.

During a recent Google Marketing Live press session, executives indicated that while clicks may be down, the visits that do happen are supposedly of higher quality.

The session featured a panel including Jenny Cheng, Vice President and General Manager of Google’s Merchant Shopping organization; Sean Downey, President of Americas & Global Partners at Google; and Nicky Rettke, YouTube Vice President of Product Management.

Photo: Matt G. Southern / Search Engine Journal

Traffic Quality vs. Quantity Debate

Independent studies have documented that pages with AI overviews in search results receive significantly fewer clicks on organic listings than traditional search results.

When confronted with this issue, a Google executive sidestepped direct traffic concerns by shifting focus to user behavior, stating:

“What we’re seeing is people asking more questions. So they’ll ask a first question, they’ll get information and then go and ask a different question. So they’re refining and getting more information and then they’re making a decision of what website to go to.”

Google pointed to a 10% increase in queries from AI-enhanced search.

Google’s narrative suggests these changes benefit everyone:

“When they get to a decision to click out, it’s a more highly qualified click… What we hope to see over time—and we don’t have any data to share on this—is more time spent on site, which is what we see organically in a much more highly qualified visitor for the website.”

The notable admission that Google has “no data to share” on these quality improvements leaves their claims unverified.

Ads Perform Differently Than Organic Content

While publishers grapple with declining traffic, Google insists that ad performance remains largely unchanged in AI-enhanced search:

“When we run ads on AI overviews versus ads on standard search, we see pretty much the same level of monetization capabilities, which would indicate most factors are the same and they’re producing really the same results for advertisers to date.”

This favorable situation suggests that Google’s ad revenue may stay stable while organic traffic patterns shift, potentially pressuring more publishers to adopt paid strategies to maintain visibility.

New Search Patterns Demand Content Adaptation

Google executives characterized the evolution of search as a response to user preferences for more conversational and multimodal queries, stating:

“What we’re trying to do when we release things like AI overviews or AI mode is we’re trying to give consumers new ways to discover information and get answers to their most important questions… Most humans have unbound curiosity and their context strings or their query strings are much more conversational.”

For SEO professionals, Google recommends accommodating these changes by:

  • Creating content that directly answers user questions
  • Adding more video content
  • Developing detailed FAQs and Q&A sections

AI Mode Creates New Discovery Opportunities?

Google also presented its AI mode as a potential way to increase content discovery through what they termed a “fanning technique.”

They explained:

“When we get into AI mode, it’s a similar functionality because we are also doing the fanning technique where you’re having many more queries go out. If you ask the question, it’s looking at a variety of different versions of that, which is giving more websites a chance to be considered.

We’re researching more sites, pulling in more information from more sites and summarizing. And that’s more linked opportunities for the publishers as well as the sites that are pushing the content to have access to it.”

Whether these theoretical opportunities translate to actual traffic remains to be seen.

Measurement Challenges

For marketers, the situation is complicated because Google’s reporting systems don’t differentiate between clicks from traditional search, AI overviews, and AI mode.

When asked if these different placements are shown separately in ad reporting, the Google representatives confirmed:

“We do not. Within the search term reporting, they’re not specifically broken out by the placement in that way. And that’s because the reporting is tied to what’s actionable for advertisers.”

This lack of transparency makes it impossible for publishers to verify Google’s claims independently.

The Road Ahead

While Google presents an optimistic view of traffic quality from AI-enhanced search, the lack of specific data places marketers in a precarious position.

Publishers and SEO professionals must now create their own measurement methods to assess whether these allegedly “more qualified clicks” truly offer greater value despite their reduced numbers.

For now, content creators are being asked to adjust their strategies to align with Google’s vision while having little choice but to accept the company’s quality claims on faith alone.

What’s Draining Your PPC Budget and How to Stop It [Webinar] via @sejournal, @hethr_campbell

You’ve crafted the perfect ad, fine-tuned the targeting, and even carved out a healthy budget. The clicks are rolling in, but the conversions just aren’t there. What’s going wrong?

For many businesses, the problem isn’t the ad. It’s what happens after the click.

Where PPC Performance Falls Apart

Missed calls. Slow follow-ups. Confusing handoffs between marketing and sales. 

These are the quiet killers of campaign ROI, and they often go unnoticed until leads have already slipped through the cracks.

That’s why we’re bringing you a must-attend session that tackles this head-on.

How to Fix the Number One Reason PPC Campaigns Fail

In this webinar, you’ll learn how to identify and patch lead leaks at every stage of your funnel. 

It’s designed for marketing teams that want to stop wasting ad spend and start converting more of the traffic they’ve already paid for.

What you’ll walk away with
✅ Actionable steps to improve PPC lead follow-up
✅ A framework to spot weak points in your funnel
✅ Tools and tips to drive better ROI from your existing campaigns

Meagan McLoughlin, Principal Marketing Manager at CallRail, will walk you through strategies that turn interest into action. You’ll also get a behind-the-scenes look at VoiceAssist, CallRail’s new AI-powered tool that qualifies calls around the clock.

And don’t miss insights from Einstein Industries, a top-performing agency partner, who will share real-world PPC lessons you can apply right away.

If you can’t attend live, no worries. Register now, and we’ll send you the full recording so you can watch when it works best for you.

Explaining Agentic SEO To The C-Level via @sejournal, @TaylorDanRW

AI has quickly carved out a central role in SEO, with widespread adoption across tasks like content generation and image and video creation.

Generative AI makes headlines and is exciting, but the operational use of AI can be just as impactful.

This evolution is now being referred to as Agentic SEO. It’s not just about chasing rankings, keywords, or pleasing Google’s algorithms.

It’s about using AI to get more done with less friction. While this article focuses on the business case rather than a full definition of Agentic SEO, Vincent Terrasi offers a great deep dive if you want to explore further.

What Is Agentic SEO?

Agentic SEO relies on AI agents powered by large language models from providers such as OpenAI, Anthropic (Claude), Perplexity, Google (Gemini), or Meta (Llama).

These agents autonomously or semi-autonomously take on tasks that have traditionally demanded heavy manual effort from humans. They are virtual team members helping cut down repetitive, low-value work.

These AI agents are used for various tasks that are becoming standard in SEO today. They can:

  • Research and generate content ideas at scale.
  • Analyze data pulled from third-party tools.
  • Compare the similarity of content across large sets of webpages (see the sketch after this list).
  • Generate metadata and suggest internal linking strategies across massive content libraries.
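
As one concrete example of the similarity task above, a few lines of scikit-learn can score how close pages are to one another, a common first pass for spotting near-duplicate or cannibalizing content. A minimal sketch with hypothetical page text:

```python
# A minimal sketch: score pairwise content similarity across pages with
# TF-IDF vectors and cosine similarity. The page texts are hypothetical;
# in practice, you would load crawled or exported page content.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

pages = {
    "/what-is-seo": "SEO is the practice of improving organic visibility...",
    "/seo-basics": "Learn the basics of SEO and improving organic rankings...",
    "/technical-seo": "Technical SEO covers crawling, indexing, and speed...",
}

paths = list(pages)
tfidf = TfidfVectorizer(stop_words="english").fit_transform(pages.values())
scores = cosine_similarity(tfidf)

# Report page pairs above a (tunable) similarity threshold.
THRESHOLD = 0.5
for i in range(len(paths)):
    for j in range(i + 1, len(paths)):
        if scores[i, j] > THRESHOLD:
            print(f"{paths[i]} <-> {paths[j]}: {scores[i, j]:.2f}")
```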

Agentic SEO isn’t a substitute for SEO professionals. It acts more like a power-up, amplifying what your team can accomplish without replacing the strategic thinking and expertise they bring.

Human insight still drives the engine. AI clears the path for better focus.

The key message for C-suite executives and business leaders is this: Agent-based workflows offer a real operational advantage.

As competitors adopt these technologies, staying still may mean quietly losing your edge. Embracing AI in your SEO strategy doesn’t just keep you in the game. It pushes you forward.

The Benefits Of Agentic SEO

From a leadership perspective, it’s essential to recognize that Agentic SEO isn’t just another tool to add to the stack; it’s a shift in how SEO teams operate.

This new operational model supports faster execution, broader experimentation, and a more intelligent allocation of resources.

While it might not directly drive growth overnight, the boost in efficiency makes it easier to scale impact and hit key performance indicators (KPIs) with fewer roadblocks.

Increased Productivity

With Agentic SEO, teams can manage larger workloads without a proportional increase in staff.

That doesn’t mean you eliminate headcount or stop growing; it means the people on the team can handle significantly more: analyzing big datasets, cutting down on menial tasks, and spending more time on high-impact work.

This is especially valuable for in-house teams under pressure to deliver results on lean budgets.

It also strengthens the case for expanding your team’s scope and potential by replacing the mundane with critical thinking and strategic analysis.

Faster Execution And More Experimentation

By automating routine tasks, Agentic SEO enables SEO professionals to shift their energy toward strategy, creative problem-solving, and SEO experimentation.

With the busywork handled, they can test more ideas, iterate on content faster, and adapt quickly to shifting trends.

Over time, this makes teams more effective and helps individuals grow their skill sets and adjust in real-time.

Improved Consistency And Quality Control

AI agents can follow structure, apply formatting standards, and flag inconsistencies across the content of hundreds of pages, keyword clusters, or multiple datasets overlaid on each other.

This reduces human error and boosts quality, which is particularly important at the enterprise level.

It also allows team members who may not be data-savvy to perform and share insights without always relying on analysts, reducing bottlenecks and speeding up workflows.

Tighter Alignment With Broader AI Strategy

Agentic SEO sits at the intersection of AI, data, and marketing operations. It’s a logical and relatively low-risk next step for companies already exploring AI across other departments.

Tools are readily available, costs are reasonable, and results are measurable. This makes Agentic SEO a practical way to extend your AI investment into an area with clear return on investment (ROI).

What This Means For Your Organization

Agentic SEO changes how SEO teams function. For leadership, that means rethinking staffing, budgets, and how you scale operations.

You’ll be able to run larger campaigns with leaner teams, experiment more often, and shift hiring priorities toward support roles that strengthen your overall marketing engine.

Engineers and product teams will spend less time on repetitive site audits and documentation, while AI agents help with QA and standard operating procedures.

Data teams can use agents to uncover trends faster by correlating them with external factors. I always use the example of spotting the link between rainy days and umbrella sales, without the delay of manual analysis.
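The correlation behind that umbrella example is a one-liner with pandas; the daily figures below are made up purely for illustration.

    # Minimal sketch: correlate sales with an external factor (made-up numbers).
    import pandas as pd

    df = pd.DataFrame({
        "rainfall_mm":    [0, 12, 3, 25, 0, 18, 7],
        "umbrella_sales": [4, 31, 9, 58, 6, 40, 15],
    })

    # Pearson correlation: values near 1.0 mean the two series move together
    print(round(df["rainfall_mm"].corr(df["umbrella_sales"]), 2))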

The good news is you don’t need to overhaul your entire system. Many existing workflows can be restructured into agents. Start small. Scale as you see results.

Final Thoughts

Agentic SEO isn’t just another passing trend. It’s a fundamental shift in how SEO work gets done.

As AI agents grow more capable and easier to implement, teams that learn to work alongside them will outperform those clinging to old methods.

This isn’t about replacing people.

It’s about breaking through the time sinks and bottlenecks that limit your SEO team’s potential.

With the busywork out of the way, your experts can do what they do best: Think bigger, move faster, and deliver results at a scale traditional methods can’t match.


CMS Market Share Trends: Top Content Management Systems (May 2025)

WordPress has dominated the content management system (CMS) space since launching in 2003, and while it’s still the leader, the market is shifting in ways worth paying attention to.

As of May 2025, WordPress powers 43.5% of websites surveyed and holds a 61.2% share among sites that use a CMS, according to W3Techs. That’s still a commanding lead, but down nearly four percentage points from its 2022 peak.

For the first time in its two-decade reign, WordPress’s market share has shown a sustained decline, and competitors like Wix, Squarespace, and Shopify are quietly chipping away at its dominance.

At the same time, the share of websites not using any CMS has dropped from 32.3% in 2023 to 28.9% in 2025, showing continued adoption of managed platforms and hosted builders.

In this report, we’ll break down the current CMS landscape, compare major platforms, and explore what these shifts mean for SEO professionals and digital marketers.

How Large Is The CMS Market?

According to W3Techs, 71.1% of websites have a CMS, and Netcraft reports just under 200 million live websites.

From this, we can estimate the current market for content management systems at roughly 142 million websites (71.1% of just under 200 million).

Top 10 CMS By Market Share (Globally), May 2025

Rank  CMS            Launched  Type         Market share (CMS sites)  Usage (all sites)
-     No CMS         -         -            -                         28.9%
1     WordPress      2003      Open source  61.2%                     43.5%
2     Shopify        2006      SaaS         6.7%                      4.7%
3     Wix            2006      SaaS         5.3%                      3.87%
4     Squarespace    2004      SaaS         3.3%                      2.3%
5     Joomla         2005      Open source  2.1%                      1.5%
6     Drupal         2001      Open source  1.2%                      0.8%
7     Webflow        2013      SaaS         1.2%                      0.8%
8     Adobe Systems  2013      Open source  1.1%                      0.8%
9     Tilda          2014      SaaS         0.9%                      0.7%
10    Duda           2008      SaaS         0.9%                      0.7%

Data from W3Techs, May 2025. (WooCommerce and Elementor are not listed in the table above since they’re WordPress plugins and not standalone CMS platforms.)
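As a rough sanity check, the usage column can be converted into absolute site counts against Netcraft’s figure of just under 200 million live websites. The snippet below is a back-of-the-envelope estimate, not W3Techs data.

    # Back-of-the-envelope: usage share (% of all sites) to estimated site counts,
    # assuming ~200 million live websites (Netcraft). Estimates only.
    LIVE_SITES = 200_000_000

    usage = {"WordPress": 43.5, "Shopify": 4.7, "Wix": 3.87, "Squarespace": 2.3}

    for cms, pct in usage.items():
        print(f"{cms}: ~{pct / 100 * LIVE_SITES / 1_000_000:.1f}M sites")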

What Is The Most Widely Used CMS?

[Charts: WordPress vs. other CMS market share over time. Graphs are separated due to the dominance of the WordPress market share.]

WordPress

WordPress has held the dominant market share almost since its launch in 2003.

Between 2014 and 2022, its usage across all websites grew by 105.7% – a surge that helped cement its position as the default choice for much of the web.

However, its market share has declined by nearly five percentage points over the last three years.

This marks the first sustained dip in its growth trajectory. It’s a trend that could continue as easier-to-use platforms gain ground and some users report frustrations with plugin compatibility, core updates, and security management.

Read more: Should You Still Use WordPress?

Shopify

As the second-most popular CMS today, though a long way behind WordPress, Shopify’s market share currently stands at 6.7%.

It is used by 4.7% of all websites surveyed.

Wix

Wix continues its slow and steady climb, powering 3.8% of all websites.

Its growth could be attributed to the company’s strong investment in branding.

Squarespace

Squarespace has shown steady growth over the past decade, with its CMS market share growing from 0.3% in 2014 to 3.3% in 2025, and usage across all websites rising to 2.3%.

Its growth could be attributed to the increasing demand for low-maintenance, design-forward platforms.

Read more: WordPress Vs. Squarespace – Which One Is Better?

Joomla And Drupal

Joomla and Drupal have seen a steady decline in market share, dropping from the top 3 to positions 5 and 6.

This shift likely reflects a broader trend where more user-friendly, SaaS-based platforms are capturing the attention of small businesses and non-technical users.

No CMS

Between 2024 and May 2025, websites operating without a CMS dropped by 8%, continuing a trend away from custom-coded solutions.

During the same period, websites using WordPress grew by just under 1%.

The decline in “no CMS” websites signals an ongoing trend toward more structured, manageable platforms for site creation.

[Chart: No CMS vs. WordPress usage over time]

WordPress Vs. Joomla Vs. Drupal Market Share

Screenshot from W3Techs, May 2025

Since 2024, Joomla has decreased its market share by 16%, while Drupal has declined by 25%.

Together, they once held 14.8% of the CMS market share in 2014 – now that figure sits at just 3.3%.

They’ve slipped from the No. 2 and No. 3 spots to No. 5 and No. 6, overtaken by faster-growing platforms like Wix and Squarespace in 2022.

Joomla, in particular, had strong momentum early on, even surpassing WordPress in search interest until around 2008, according to Google Trends, but it hasn’t kept pace with modern platform demands.

Screenshot from Google Trends, May 2025

Why did these popular content management systems decline so much?

It’s most likely due to the strength of third-party support for WordPress with plugins and themes, making it much more accessible.

The growth of website builders, such as Wix and Squarespace, indicates that small businesses want a more straightforward managed solution, and these platforms have started to nibble away at market share from the bottom.

Website Builders Market Share: Wix Vs. Squarespace

Screenshot from W3Techs, May 2025

From January to May this year, Wix’s market share grew by 13%, while Squarespace rose by 3.1%.

If we look at the website builders, their growth is a strong indication of where the market might go in the future.

Zooming out, between May 2024 and May 2025, the market share of:

  • Shopify grew by 6.3%.
  • Wix grew by 35.9%.
  • Squarespace grew by 10%.

When we compare the 2.7% market share contraction of WordPress over the last year to the other players, we have to ask, “What’s driving the shift?”

SaaS web builders such as Wix and Squarespace don’t require coding knowledge and offer a hosted website that makes it more accessible for a small business to get a web presence quickly.

No need to arrange a hosting solution, install a website, and set up your own email. A web builder neatly does all this for you.

WordPress is not an especially complicated platform to use, but it does require some coding knowledge and an understanding of how websites are built.

On the other hand, a website builder is a much easier route to market, without the need to understand what is happening in the back end.

Read more: Wix Changed How Websites Are Built And Why You Should Pay Attention

Elementor

Elementor is the most widely used WordPress page builder, installed on 17.3% of all websites with a known CMS and 12.3% of all sites surveyed (not shown below), more than Wix and Squarespace combined, though it functions as a plugin within WordPress rather than as a standalone CMS.

Screenshot from W3Techs, May 2025

While not a CMS on its own, it’s a major player in shaping how WordPress is used.

However, because it’s a third-party plugin and not a CMS, it isn’t listed in the top 10 CMS above.

If we compare the volume of traffic against the number of CMS installs, we can see that WordPress is in the golden section, up and to the right, clearly favored by sites with more traffic.

Based on usage among higher-ranked domains, Joomla fits into a niche of fewer installs but more high-traffic sites, indicating that more professional sites are using it.

Squarespace and Wix are to the left and down, highlighting that they are installed on fewer sites with less traffic. It strongly indicates that they are used more by small websites and small businesses.

Elementor bridges the gap between the two and has the weight of the WordPress market share, but is used by sites with less traffic.

The appetite is growing for drag-and-drop, plug-and-play solutions that make having a web presence accessible for anyone. This is the space to watch.

Ecommerce CMS Market Share: WooCommerce Vs. Shopify

Screenshot from W3Techs, May 2025

WooCommerce has a market share of 12.7%, while Shopify has 6.7%.

The ecommerce CMS space echoes a pattern similar to that of website builders.

WooCommerce powers 9% of all existing websites, making it the most widely adopted ecommerce plugin by far.

It doesn’t appear in W3Techs’ top CMS list because it is a WordPress plugin, but it’s a key factor in WordPress’s enduring popularity.

Looking at the distribution, we can see a clear pattern emerge. In comparison to other ecommerce CMS platforms, WooCommerce is dominant.

It has more market share than its competitors combined: Shopify (6.7%) + PrestaShop (0.9%) + OpenCart (0.6%) + Magento (0.5%) = 8.7% market share.

Screenshot from W3Techs, May 2025

Smaller sites might favor WooCommerce, and, much like Elementor, it benefits from the weight of the WordPress platform for market access, which translates into more installs.

Shopify surged during the pandemic, with market share growing by 52.9% from 2020 to 2021 and then 26.9% from 2021 to 2022 – far more than any other platform.

It dipped in 2023 and recovered in 2024. By 2025, it held steady at 6.7%.

Why Does CMS Market Share Matter To Someone Working In SEO?

For SEO professionals, keeping a close eye on CMS market share trends is essential because they influence site architecture, plugin availability, and technical SEO flexibility.

As the market fragments, one-size-fits-all optimization strategies no longer cut it.

WordPress continues to lead, but website builders such as Wix, Squarespace, and Shopify gain traction, indicating where market growth lies, especially for small businesses.

If more SMBs are switching to website builders, understanding the limitations and intricacies of these platforms for SEO could be a competitive advantage.
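One quick way to check which platform a prospect is running is to read the page’s generator meta tag, sketched below. Many sites strip or spoof this tag, so treat it as a first pass rather than a reliable method, and note the URL is a placeholder.

    # Minimal sketch: guess a site's CMS from its generator meta tag.
    # Many sites remove this tag, so real detection needs more signals
    # (asset paths, response headers, cookies). The URL is a placeholder.
    import requests
    from bs4 import BeautifulSoup

    def detect_cms(url):
        html = requests.get(url, timeout=10).text
        soup = BeautifulSoup(html, "html.parser")
        tag = soup.find("meta", attrs={"name": "generator"})
        return tag["content"] if tag and tag.has_attr("content") else "unknown"

    print(detect_cms("https://example.com"))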

Shopify now runs on 4.7% of all websites surveyed (not just sites with a CMS), which translates into roughly 9 million of the web’s ~200 million live sites, a substantial potential market.

With their increasing market share, specializing in Shopify SEO could be a strategic move for an SEO professional.

Wix and Squarespace are growing, too. As more small businesses adopt these platforms, getting fluent in their ecosystems could set you apart in a crowded market.

WordPress might be dominant now, but it’s also where the most competition is. Sometimes, aligning with a more niche CMS can be a strategic move for new client opportunities.

All data collected from W3Techs, May 2025, unless otherwise indicated. 

See the W3Techs methodology page for where the data is gathered from.

