Web Almanac Data Reveals CMS Plugins Are Setting Technical SEO Standards (Not SEOs)

If more than half the web runs on a content management system, then the majority of technical SEO standards are being shaped before an SEO even starts work on a site. That’s the lens I took into the 2025 Web Almanac SEO chapter (for transparency: I co-authored the chapter referenced in this article).

Rather than asking how individual optimization decisions influence performance, I wanted to understand something more fundamental: How much of the web’s technical SEO baseline is determined by CMS defaults and the ecosystems around them?

SEO often feels intensely hands-on – perhaps too much so. We debate canonical logic, structured data implementation, crawl control, and metadata configuration as if each site were a bespoke engineering project. But when 50%+ of pages in the HTTP Archive dataset sit on CMS platforms, those platforms become the invisible standard-setters. Their defaults, constraints, and feature rollouts quietly define what “normal” looks like at scale.

This piece explores that influence using 2025 Web Almanac and HTTP Archive data, specifically:

  • How CMS adoption trends track with core technical SEO signals.
  • Where plugin ecosystems appear to shape implementation patterns.
  • And how emerging standards like llms.txt are spreading as a result.

The question is not whether SEOs matter. It’s whether we’ve been underestimating who sets the baseline for the modern web.

The Backbone Of Web Design

The 2025 CMS chapter of the Web Almanac marked a milestone for CMS adoption: over 50% of pages now run on a CMS. In case you were unsold on how much of the web is carried by CMSs, consider that the dataset covers roughly 16 million websites; more than half of that is a significant amount.

Screenshot from Web Almanac, February 2026

As for which CMSs are the most popular, the ranking may not be surprising, but it is worth reflecting on which platforms have the most impact.

Image by author, February 2026

WordPress is still the most used CMS, by a long way, even if it has dropped marginally in the 2024 data. Shopify, Wix, Squarespace, and Joomla trail a long way behind, but they still have a significant impact, especially Shopify, on ecommerce specifically.

SEO Functions That Ship As Defaults In CMS Platforms

CMS platform defaults are important because, I believe, a lot of basic technical SEO standards exist either as default setups or thanks to the relatively small number of websites that have dedicated SEOs, or at least people who build to and work with SEO best practice.

When we talk about “best practice,” we’re on slightly shaky ground for some, as there isn’t a universal, prescriptive view on this one, but I would consider:

  • Descriptive “SEO-friendly” URLs.
  • Editable titles and meta descriptions.
  • XML sitemaps.
  • Canonical tags.
  • Editable meta robots directives.
  • Structured data, at least at a basic level.
  • Robots.txt editing.

Of the main CMS platforms, here is what they – self-reportedly – have as “default.” Note: Some platforms – like Shopify – would say they’re SEO-friendly (and to be honest, it’s “good enough”), but many SEOs would argue that they’re not friendly enough to pass this test. I’m not weighing in on those nuances, but I’d say both Shopify and those SEOs make some good points.

| CMS | SEO-friendly URLs | Title & meta description UI | XML sitemap | Canonical tags | Robots meta support | Basic structured data | Robots.txt editing |
| --- | --- | --- | --- | --- | --- | --- | --- |
| WordPress | Yes | Partial (theme-dependent) | Yes | Yes | Yes | Limited (Article, BlogPosting) | No (plugin or server access required) |
| Shopify | Yes | Yes | Yes | Yes | Limited | Product-focused | Limited (editable via robots.txt.liquid, constrained) |
| Wix | Yes | Guided | Yes | Yes | Limited | Basic | Yes (editable in UI) |
| Squarespace | Yes | Yes | Yes | Yes | Limited | Basic | No (platform-managed, no direct file control) |
| Webflow | Yes | Yes | Yes | Yes | Yes | Manual JSON-LD | Yes (editable in settings) |
| Drupal | Yes | Partial (core) | Yes | Yes | Yes | Minimal (extensible) | Partial (module or server access) |
| Joomla | Yes | Partial | Yes | Yes | Yes | Minimal | Partial (server-level file edit) |
| Ghost | Yes | Yes | Yes | Yes | Yes | Article | No (server/config level only) |
| TYPO3 | Yes | Partial | Yes | Yes | Yes | Minimal | Partial (config or extension-based) |

Based on the above, I would say that most SEO basics can be covered by most CMSs “out of the box.” Whether they work well for you, and whether you can achieve the exact configuration your specific circumstances require, are two other important questions, ones I am not taking on here. However, it often comes down to these points:

  1. It is possible for these platforms to be used badly.
  2. It is possible that the business logic you need will break/not work with the above.
  3. There are many more advanced SEO features that aren’t available out of the box but are just as important.

We are talking about foundations here, but when I reflect on what shipped as “default” 15+ years ago, progress has been made.

Fingerprints Of Defaults In The HTTP Archive Data

Given that a lot of CMSs ship with these standards, do these SEO defaults correlate with CMS adoption? In many ways, yes. Let’s explore this in the HTTP Archive data.

Canonical Tag Adoption Correlates With CMS

Combining canonical tag adoption data with (all) CMS adoption over the last four years, we can see that for both mobile and desktop, the trends seem to follow each other pretty closely.

Image by author, February 2026
Image by author, February 2026

Running a simple Pearson correlation over these metrics makes the relationship even clearer, for both canonical tag implementation and the presence of self-referencing canonical URLs.

Image by author, February 2026

What differs is the correlation for canonicalized URLs: it is negative on mobile and lower (but still positive) on desktop. A drop in canonicalized pages largely drives the negative mobile correlation, and the reasons behind it could be many (and are harder to be sure of).

Canonical tags are a crucial element for technical SEO; their continued adoption does certainly seem to track the growth in CMS use, too.
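The correlation itself is only a few lines of pandas. The yearly percentages below are illustrative placeholders, not the actual HTTP Archive figures, but the method is the same:

```python
# Pearson correlation of technical SEO signals against CMS adoption.
# NOTE: the yearly percentages here are made-up placeholders; swap in the
# real HTTP Archive / Web Almanac figures to reproduce the charts.
import pandas as pd

data = pd.DataFrame({
    "cms_adoption":   [45.0, 47.2, 48.9, 50.3],  # % of pages on a CMS
    "canonical_tags": [58.0, 60.1, 62.4, 64.0],  # % of pages with a canonical
    "self_canonical": [50.2, 52.0, 54.1, 55.8],  # % with a self-canonical
}, index=[2022, 2023, 2024, 2025])

# Correlate each SEO signal with CMS adoption over the four-year window.
corr = data.corr(method="pearson")["cms_adoption"].drop("cms_adoption")
print(corr)
```

With only four yearly data points, a Pearson coefficient is indicative at best, which is why I treat it as supporting evidence rather than proof.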

Schema.org Data Types Correlate With CMS

Schema.org types plotted against CMS adoption show similar trends, but are less definitive overall. There are many different Schema.org types, but if we plot CMS adoption against those most common to SEO concerns, we can observe a broadly rising picture.

Image by author, February 2026

With the exception of Schema.org WebSite, we can see CMS growth and structured data following similar trends.

But we must note that Schema.org adoption is considerably lower than CMS adoption overall. This could be because most CMS defaults are far less comprehensive with Schema.org. When we look at specific CMS examples (shortly), we’ll see far stronger links.

Schema.org implementation is still mostly intentional, specialist, and not as widespread as it could be. If I were a search engine or creating an AI Search tool, would I rely on universal adoption of these, seeing the data like this? Possibly not.

Robots.txt

Given that robots.txt is a single file that has some agreed standards behind it, its implementation is far simpler, so we could anticipate higher levels of adoption than Schema.org.

The presence of a robots.txt is pretty important, mostly to limit search engine crawling to specific areas of the site. We are also starting to see an evolution, noted in the 2025 Web Almanac SEO chapter: robots.txt is increasingly used as a governance tool rather than just housekeeping, a key sign that we are using familiar tools differently in the AI search world.

But before we consider the more advanced implementations, how much of a part does a CMS play in ensuring a robots.txt is present? It looks like, over the last four years, CMS platforms have driven a significantly higher share of robots.txt files serving a 200 response:

Image by author, February 2026

What is more curious, however, is the size of the robots.txt files themselves. Non-CMS platforms have robots.txt files that are significantly larger.

Image by author, February 2026

Why could this be? Are robots.txt files on non-CMS platforms more advanced, with longer files and more bespoke rules? Probably, in some cases, but we’re also missing another impact of CMS standards: compliant (valid) robots.txt files.

A lot of robots.txt files serve a valid 200 response, but often they’re not txt files, or they’re redirecting to 404 pages or similar. When we limit this list to only files that contain user-agent declarations (as a proxy), we see a different story.

Image by author, February 2026

Approaching 14% of robots.txt files served on non-CMS platforms are likely not even robots.txt files.
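The user-agent proxy described above is simple to sketch. This is an illustrative check, not the exact logic behind the chart:

```python
import re

def looks_like_robots_txt(body: str, content_type: str = "") -> bool:
    """Proxy validity check: a plausible robots.txt declares at least one
    user-agent group and isn't an HTML error page served with a 200 status."""
    if "html" in content_type.lower():
        return False
    # Case-insensitive, multiline search for a "User-agent:" declaration.
    return bool(re.search(r"(?im)^\s*user-agent\s*:", body))

print(looks_like_robots_txt("User-agent: *\nDisallow: /admin/"))   # True
print(looks_like_robots_txt("<!doctype html><title>404</title>"))  # False
```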

A robots.txt is easy to set up, but it is a conscious decision. If it’s forgotten/overlooked, it simply won’t exist. A CMS makes it more likely to have a robots.txt, and what’s more, when it is in place, it makes it easier to manage/maintain – which IS key.

WordPress Specific Defaults

CMS platforms, it seems, cover the basics, but more advanced options, which arguably should also be defaults, often require additional SEO tools to enable.

Interrogating WordPress-specific sites in the HTTP Archive data is easiest, as we get the largest sample, and the Wappalyzer data gives a reliable way to judge the impact of WordPress-specific SEO tools.

From the Web Almanac, we can see which SEO tools are the most installed on WordPress sites.

Screenshot from Web Almanac, February 2026

For anyone working within SEO, this is unlikely to be surprising. If you are an SEO who has worked on WordPress, there is a high chance you have used one of the top three. What IS worth considering right now is that while Yoast SEO is by far the most prevalent within the data, it is seen on barely over 15% of sites. Even the most popular SEO plugin on the most popular CMS still has a relatively small share.

Of these top three plugins, let’s first consider how their “defaults” differ. These are similar to some of WordPress’s, but we can see many more advanced features that come as standard.

| SEO Capability | All-in-One SEO | Yoast SEO | Rank Math |
| --- | --- | --- | --- |
| Title tag control | Yes (global + per-post) | Yes | Yes |
| Meta description control | Yes | Yes | Yes |
| Meta robots UI | Yes (index/noindex etc.) | Yes | Yes |
| Default meta robots output | Explicit index,follow | Explicit index,follow | Explicit index,follow |
| Canonical tags | Auto self-canonical | Auto self-canonical | Auto self-canonical |
| Canonical override (per URL) | Yes | Yes | Yes |
| Pagination canonical handling | Limited | Historically opinionated | More configurable |
| XML sitemap generation | Yes | Yes | Yes |
| Sitemap URL filtering | Basic | Basic | More granular |
| Inclusion of noindex URLs in sitemap | Possible by default | Historically possible | Configurable |
| Robots.txt editor | Yes (plugin-managed) | Yes | Yes |
| Robots.txt comments/signatures | Yes | Yes | Yes |
| Redirect management | Yes | Limited (free) | Yes |
| Breadcrumb markup | Yes | Yes | Yes |
| Structured data (JSON-LD) | Yes (templated) | Yes (templated) | Yes (templated, broad) |
| Schema type selection UI | Yes | Limited | Extensive |
| Schema output style | Plugin-specific | Plugin-specific | Plugin-specific |
| Content analysis/scoring | Basic | Heavy (readability + SEO) | Heavy (SEO score) |
| Keyword optimization guidance | Yes | Yes | Yes |
| Multiple focus keywords | Paid | Paid | Free |
| Social metadata (OG/Twitter) | Yes | Yes | Yes |
| Llms.txt generation | Yes (enabled by default) | Yes (one-check enable) | Yes (one-check enable) |
| AI crawler controls | Via robots.txt | Via robots.txt | Via robots.txt |

Editable metadata, structured data, robots.txt, sitemaps, and, more recently, llms.txt are the most notable. It is worth noting that a lot of the functionality is more “back-end,” so not something we’d be as easily able to see in the HTTP Archive data.

Structured Data Impact From SEO Plugins

We can see (above) that structured data implementation and CMS adoption do correlate; what is more interesting here is to understand what the key drivers are.

Viewing the most recent HTTP Archive data with a simple segment (SEO plugins vs. no SEO plugins) paints a stark picture.

Image by author, February 2026

When we limit the Schema.org @types to those most associated with SEO, it is really clear that certain structured data types are pushed hard by SEO plugins. They are not completely absent elsewhere; people may be using lesser-known plugins or coding their own solutions, but the ease of implementation shows in the data.
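A segment like this can be approximated in pandas. The column names and rows below are hypothetical stand-ins for a per-site extract of detected technologies (via Wappalyzer) and emitted Schema.org types:

```python
import pandas as pd

# Hypothetical per-site extract; the real analysis runs over HTTP Archive data.
sites = pd.DataFrame({
    "url": ["a.example", "b.example", "c.example", "d.example"],
    "technologies": [["WordPress", "Yoast SEO"], ["WordPress"],
                     ["WordPress", "Rank Math"], ["WordPress"]],
    "schema_types": [["Article", "BreadcrumbList"], [],
                     ["Product", "FAQPage"], ["WebSite"]],
})

SEO_PLUGINS = {"Yoast SEO", "Rank Math", "All in One SEO"}

# Flag sites where any known SEO plugin was detected.
sites["has_seo_plugin"] = sites["technologies"].apply(
    lambda techs: bool(SEO_PLUGINS & set(techs)))
# Flag sites emitting any structured data at all.
sites["has_schema"] = sites["schema_types"].apply(lambda t: len(t) > 0)

# Share of sites with structured data, split by segment.
share = sites.groupby("has_seo_plugin")["has_schema"].mean()
print(share)
```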

Robots Meta Support

Another finding from the SEO Web Almanac 2025 chapter was that “follow” and “index” directives were the most prevalent, even though they’re technically redundant, as having no meta robots directives is implicitly the same thing.

Screenshot from Web Almanac 2025, February 2026

Within the chapter’s number crunching itself, I didn’t dig much deeper, but knowing that all major WordPress SEO plugins have “index,follow” as the default, I was eager to see if I could make a stronger connection in the data.

Where SEO plugins were present on WordPress, “index, follow” was set on over 75% of root pages, vs. under 5% of WordPress sites without SEO plugins.

Image by author, February 2026

Given the ubiquity of WordPress and SEO plugins, this is likely a huge contributor to this particular configuration. While the directive is redundant, it isn’t wrong, but it is, again, a key example of how, when one or more of the main plugins establishes a de facto standard like this, it shapes a significant portion of the web.
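Spotting the redundant directive on a page needs nothing beyond the standard library; a minimal sketch:

```python
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collects the content of the first <meta name="robots"> tag, if any."""
    def __init__(self):
        super().__init__()
        self.robots = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name", "").lower() == "robots" \
                and self.robots is None:
            self.robots = a.get("content", "")

def robots_directives(html: str):
    """Return the set of meta robots directives, or None when the tag is
    absent (which search engines treat as index, follow anyway)."""
    p = RobotsMetaParser()
    p.feed(html)
    if p.robots is None:
        return None
    return {d.strip().lower() for d in p.robots.split(",")}

page = '<head><meta name="robots" content="index, follow"></head>'
print(robots_directives(page))             # the redundant explicit default
print(robots_directives("<head></head>"))  # None: implicit index, follow
```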

Diving Into LLMs.txt

Another key area of change in the 2025 Web Almanac was the introduction of the llms.txt file. Its inclusion is not an explicit endorsement of the file, but rather a tacit acknowledgment that this is an important data point in the AI search age.

From the 2025 data, just over 2% of sites had a valid llms.txt file and:

  • 39.6% of llms.txt files are related to All-in-One SEO.
  • 3.6% of llms.txt files are related to Yoast SEO.

This is not necessarily an intentional act by all those involved, especially as All-in-One SEO enables this by default (not an opt-in like Yoast SEO and Rank Math).

Image by author, February 2026

The first data was gathered on July 25, 2025; taking a month-by-month view, we can see further growth since then. It is hard not to see this as growing confidence in the file or, at least, a sign that because it is so easy to enable, more people are hedging their bets.
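For context, “valid” here is a fairly low bar. A rough plausibility check, based on my reading of the llms.txt proposal (a Markdown file opening with an H1 title), might look like this; it is an assumption-laden sketch, not the Almanac’s methodology:

```python
def is_plausible_llms_txt(body: str) -> bool:
    """Heuristic: the llms.txt proposal expects Markdown whose first content
    line is an H1 title, optionally followed by a blockquote summary and
    H2-delimited link sections."""
    lines = [line.strip() for line in body.splitlines() if line.strip()]
    return bool(lines) and lines[0].startswith("# ")

sample = (
    "# Example Site\n\n"
    "> Short summary of the site.\n\n"
    "## Docs\n"
    "- [Guide](https://example.com/guide.md)\n"
)
print(is_plausible_llms_txt(sample))                # True
print(is_plausible_llms_txt("<!doctype html>..."))  # False
```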

Conclusion

The Web Almanac data suggests that SEO, at a macro level, moves less because of individual SEOs and more because WordPress, Shopify, Wix, or a major plugin ships a default.

  • Canonical tags correlate with CMS growth.
  • Robots.txt validity improves with CMS governance.
  • Redundant “index,follow” directives proliferate because plugins make them explicit.
  • Even llms.txt is already spreading through plugin toggles before it has reached anything like full consensus.

This doesn’t diminish the impact of SEO; it reframes it. Individual practitioners still create competitive advantage, especially in advanced configuration, architecture, content quality, and business logic. But the baseline state of the web, the technical floor on which everything else is built, is increasingly set by product teams shipping defaults to millions of sites.

Perhaps we should consider that if CMSs are the infrastructure layer of modern SEO, then plugin creators are de facto standards setters. They deploy “best practice” before it becomes doctrine.

This is arguably how it should work, but I am not entirely comfortable with it. Plugins normalize implementation and even create new conventions simply by making them zero-cost. Redundant standards endure simply because they can.

So the question is less about whether CMS platforms impact SEO. They clearly do. The more interesting question is whether we, as SEOs, are paying enough attention to where those defaults originate, how they evolve, and how much of the web’s “best practice” is really just the path of least resistance shipped at scale.

An SEO’s value should not be measured by the number of hours they spend discussing canonical tags, meta robots, and sitemap inclusion rules. That should be standard and default. If you want to have an outsized impact on SEO, lobby an existing tool, create your own plugin, or drive interest to influence change in one.



Featured Image: Prostock-studio/Shutterstock

Inside Chicago’s surveillance panopticon

Early on the morning of September 2, 2024, a Chicago Transit Authority Blue Line train was the scene of a random and horrific mass shooting. Four people were shot and killed on a westbound train as it approached the suburb of Forest Park. 

The police swiftly activated a digital dragnet—a surveillance network that connects thousands of cameras in the city. 

The process began with a quick review of the transit agency’s surveillance cameras, which captured the alleged gunman shooting the victims execution style. Law enforcement followed the suspect, through real-time footage, across the rapid-transit system. Police officials circulated the images to transit staff and to thousands of officers. An officer in the adjacent suburb of Riverdale recognized the suspect from a previous arrest. By the time he was captured at another train station, just 90 minutes after the shooting, authorities already had his name, address, and previous arrest history.

Little of this process would come as much surprise to Chicagoans. The city has tens of thousands of surveillance cameras—up to 45,000, by some estimates. That’s among the highest numbers per capita in the US. Chicago boasts one of the largest license plate reader systems in the country, and the ability to access audio and video surveillance from independent agencies such as the Chicago Public Schools, the Chicago Park District, and the public transportation system as well as many residential and commercial security systems such as Ring doorbell cameras. 

Law enforcement and security advocates say this vast monitoring system protects public safety and works well. But activists and many residents say it’s a surveillance panopticon that creates a chilling effect on behavior and violates guarantees of privacy and free speech. 

Black and Latino communities in Chicago have historically been targeted by excessive policing and surveillance, says Lance Williams, a scholar of urban violence at Northeastern Illinois University. That scrutiny has created new problems without delivering the promised safety, he suggests. In order to “solve the problem of crime or violence and make these communities safer,” he says, “you have to deal with structural problems,” such as the shortage of livable-wage jobs, affordable housing, and mental-health services across the city.

Recent years have seen some effective pushback against the surveillance. Until recently, for example, the city was the largest customer of ShotSpotter acoustic sensors, which are designed to detect gunfire and alert police. The system was introduced in a small area on the South Side in 2012. By 2018, an area of about 136 square miles—some 60% of the city—was covered by the acoustic surveillance network.

Critics questioned ShotSpotter’s effectiveness and objected that the sensors were installed largely in Black and Latino neighborhoods. Those critiques gained urgency with the fatal shooting in March 2021 of a 13-year-old, Adam Toledo, by police responding to a ShotSpotter alert. The tragedy became the touchstone of the #StopShotSpotter protest movement and one of the major issues in Brandon Johnson’s successful mayoral campaign in 2023. When he reached office, Johnson followed through, ending the city’s contract with SoundThinking, the San Francisco Bay Area company behind ShotSpotter. In total, it’s estimated, the city paid more than $53 million for the system. 

In response to a request for comment, SoundThinking said that ShotSpotter enables law enforcement “to reach the scene faster, render aid to victims, and locate evidence more effectively.” It said the company “plays no part in the selection of deployment areas” but added: “We believe communities experiencing the highest levels of gun violence deserve the same rapid emergency response as any other neighborhood.” 

While there has been successful resistance to police surveillance in the nation’s third-largest city, there are also countervailing forces: Governments and officials in Chicago and the surrounding suburbs are moving to expand the use of surveillance, also in response to public pressure. Even the victory against acoustic surveillance might be short-lived. Early last year, the city issued a request for proposals for gun violence detection technology. 

Many people in and around Chicago—digital privacy and surveillance activists, defense attorneys, law enforcement officials, and ordinary citizens—are part of this push and pull. Here are some of their stories. 


Alejandro Ruizesparza and Freddy Martinez
Cofounders, Lucy Parsons Labs

Oak Park, a quiet suburb at Chicago’s western border, is the birthplace of Ernest Hemingway. It includes the world’s largest collection of Frank Lloyd Wright–designed buildings and homes. 

Until recently, the village of Oak Park was also the center of a three-year-long campaign against an unwelcome addition to its manicured lawns and Prairie-style architecture: automated license plate readers from a company called Flock Safety. These are high-speed cameras that automatically scan license plates to look for stolen or wanted vehicles, or for drivers with outstanding warrants. 

Freddy Martinez (left) and Alejandro Ruizesparza (right) direct Lucy Parsons Labs, a charitable organization focused on digital rights.
AKILAH TOWNSEND

An Oak Park group called Freedom to Thrive—made up of parents, activists, lawyers, data scientists, and many others—suspected that this technology was not a good or equitable addition to their neighborhood. So the group engaged the Chicago-based nonprofit Lucy Parsons Labs to help navigate the often intimidating process of requesting license plate reader data under the Illinois Freedom of Information Act.

Lucy Parsons Labs, which is named for a turn-of-the-century Chicago labor organizer, investigates technologies such as license plate readers, gunshot detection systems, and police bodycams. 

LPL provides digital security and public records training to a variety of groups and is frequently called on to help community members audit and analyze surveillance systems that are targeting their neighborhoods. It’s led by two first-generation Mexican-Americans from the city’s Southwest Side. Alejandro Ruizesparza has a background in community organizing and data science. Freddy Martinez was also a community organizer and has a background in physics. 

The group is now approaching its 10th year, but it was an all-volunteer effort until 2022. That’s when LPL received its first unrestricted, multi-year operational grant from a large foundation: the Chicago-based John D. and Catherine T. MacArthur Foundation, known worldwide for its so-called “genius grants.” A grant from the Ford Foundation followed the next year. 

The additional resources—a significant amount compared with the previous all-volunteer budget, acknowledges Ruizesparza—meant the two cofounders and two volunteers became full-time employees. But the group is determined not to become “too comfortable” and lose its edge. There is a tenacity to Lucy Parsons Labs’ work—a “sense of scrappiness,” they say—because “we did so much of this work with no money.” 

One of LPL’s primary strategies is filing extensive FOIA requests for raw data sets of police surveillance. The process can take a while, but it often reveals issues. 

In the case of Oak Park, the FOIA requests were just one tool that Freedom to Thrive and LPL used to sort out what was going on. The data revealed that in the first 10 months of operation, the eight Flock license plate readers the town had deployed scanned 3,000,000 plates. But only 42 scans led to an alert, a yield of just 0.0014%, or 14 alerts per million scans.

At the same time, the impact was disproportionate. While Oak Park’s population of about 53,000 is only 19% Black, Black drivers made up 85% of those flagged by the Flock cameras, seemingly amplifying what were already concerning racial disparities in the village’s traffic stops. Flock did not respond to a request for comment.


LPL brings a mix of radical politics and critical theory to its mission. Most surveillance technologies are “largely extensions of the plantation systems,” says Ruizesparza. 

The comparison makes sense: Many slaveholding communities required enslaved persons to carry signed documents to leave plantations and wear badges with numbers sewn to their clothing. The group says it aims to empower local communities to push back against biased policing technologies through technical assistance, training, and litigation—and to demystify algorithms and surveillance tools in the process.

“When we talk to people, they realize that you don’t need to know how to run a regression to understand that a technology has negative implications on your life,” says Ruizesparza. “You don’t need to understand how circuits work to understand that you probably shouldn’t have all of these cameras embedded in only Black and brown regions of a city.”

The group came by some of its techniques through experimentation. “When LPL was first getting started, we didn’t really feel like FOIA would have been a good way of getting information. We didn’t know anything about it,” says Martinez. “Along the way, we were very successful in uncovering a lot of surveillance practices.” 

One of the covert surveillance practices uncovered by those aggressive FOIA requests, for example, was the Chicago Police Department’s use of “Stingray” equipment, portable surveillance devices deployed to track and monitor mobile phones. 

The contentious issue of Oak Park’s license plate readers was finally put to a vote in late August. The village trustees voted 5–2 to terminate the contract with Flock Safety. 

Since then, community-­based groups from across the country—as far away as California—have contacted LPL to say the Chicago collective’s work has inspired their own efforts, says Martinez: “We became almost de facto experts in navigating the process and the law. I think that sort of speaks to some of the DIY punk aesthetic.”


Brian Strockis
Chief, Oak Brook Police Department

If you drive about 20 miles west of Chicago, you’ll find Oakbrook Center, one of the nation’s leading luxury shopping destinations. The open-air mall includes Neiman-Marcus, Louis Vuitton, and Gucci and attracts high-end shoppers from across the region. It’s also become a destination for retail theft crews that coordinate “smash and grabs” and often escape with thousands of dollars’ worth of inventory that can be quickly sold, such as sunglasses or luxury handbags. 

In early December, police say, a Chicago man tried to lead officers on what could have been a dangerous high-speed chase from the mall. Patrol cars raced to the scene. So did a “first responder drone,” built by Flock Safety and deployed by the Oak Brook Police Department.  

The drone identified the suspect vehicle from the mall parking lot using its license plate reader and snapped high-definition photos that were texted to officers on the ground. The suspect was later tracked to Chicago, where he was arrested. 

Brian Strockis, chief of the Oak Brook Police Department, led the way in introducing drones as first responders in the state of Illinois.
AKILAH TOWNSEND

This was the type of outcome that Brian Strockis, chief of the Oak Brook Police Department, hoped for when he pioneered the “drone as first responder,” or DFR, program in Illinois. A longtime member of the force, he joined the department almost 25 years ago as a patrol officer, worked his way up the brass ladder, and was awarded the top job in 2022. 

Oak Brook was the first municipality in Illinois to deploy a drone as a first responder. One of the main reasons, says Strockis, was to reduce the number of high-speed chases, which are potentially dangerous to officers, suspects, and civilians. A drone is also a more effective and cost-efficient way to deal with suspects in fleeing vehicles, says Strockis.

“It’s a force multiplier in that we’re able to do more with less,” says the chief, who spoke with me in his office at Oak Brook’s Village Hall. 

The department’s drone autonomously launches from the roof of the building and responds to about 10 to 12 service calls per day, at speeds up to 45 miles per hour. It arrives at crime scenes before patrol officers in nine out of every 10 cases.

Next door to Village Hall is the Oak Brook Police Department’s real-time crime center, a large room with two video walls that integrates livestreams from the first-responder drone, handheld drones, traffic cameras, license plate readers, and about a thousand private security cameras. When I visited, the two DFR operators demonstrated how the machine can fly itself or be directed to locations from a destination entered on Google Maps. They sent it off to a nearby forest preserve and then directed it to return to the rooftop base, where it docks automatically, changes batteries, and charges. After the demo, one of the drone operators logged the flight, as required by state law.

Strockis says he is aware of the privacy concerns around using this technology but that protections are in place. 

For example, the drone cannot be used for random or mass surveillance, he says, because the camera is always pointed straight ahead during flight and does not angle down until it reaches its desired location. The drone’s payload does not include facial recognition technology, which is restricted by state law, he says. 

The drone video footage is invaluable, he adds, because “you are seeing the events as they’re transpiring from an angle that you wouldn’t otherwise be privy to.” 

It’s an extra layer of protection for the public as well as for the officers, says the chief: “For every incident that an officer responds to now, you have squad car and bodycam video. You likely have cell-phone video from the public, officers, complainants, from offenders. So adding this element is probably the best video source on a scene that the police are going to anyway.”


Mark Wallace
Executive director, Citizens to Abolish Red Light Cameras

Mark Wallace wears several hats. By day he is a real estate investor and mortgage lender. But he is probably best known to many Chicagoans—especially across the city’s largely African-American communities on the South and West Sides—as a talk radio host for the station WVON and one of the leading voices against the city’s extensive network of red-light and speed cameras. 

For the past two decades, city officials have maintained that the cameras—which are officially known as “automated enforcement”—are a crucial safety measure. They are also a substantial revenue stream, generating around $150 million a year and a total of some $2.5 billion since they were installed.

Urged on by a radio listener, Mark Wallace started organizing against Chicago’s red-light and speed cameras, a substantial revenue stream for the city that has been found to disproportionately burden majority Black and Latino areas.
AKILAH TOWNSEND

“The one thing that the cameras have the ability to do is generate a lot of money,” Wallace says. He describes the tickets as a “cash grab” that disproportionately affects Black and Latino communities.

A groundbreaking 2022 analysis by ProPublica found, in fact, that households in majority Black and Latino zip codes were ticketed at much higher rates than others, in part because the cameras in those areas were more likely to be installed near expressway ramps and on wider streets, which encourage faster speeds. The tickets, which can quickly rack up late fees, also created more of a financial burden in those communities, the report found.

These were some of the same concerns that many people expressed on the radio and in meetings, Wallace says. 

Chicago’s automated traffic enforcement began in 2003, and it became the most extensive—and most lucrative—such program in the country. About 300 red-light cameras and 200 speed cameras are set up near schools and parks. The cost of the tickets can quickly double if they are not paid or contested—providing a windfall for the city.  

Wallace began his advocacy against the cameras soon after arriving at the radio station in the early 2010s. A younger listener called in and said, he recalls, “that he enjoyed the information that came from WVON but that we didn’t do anything.” The comment stuck with him, especially in light of WVON’s storied history. The station was closely involved in the civil rights movement of the 1960s and broadcast Martin Luther King Jr.’s speeches during his Chicago campaign.

Wallace hoped to change the caller’s perception about the station. He had firsthand experience with red-light cameras, having been ticketed himself, and decided to take them on as a cause. He scheduled a meeting at his church for a Friday night, promoting it on his show. “More than 300 people showed up,” he remembers, chatting with me in the spacious project studio and office in the basement of his townhouse on the city’s South Side. “That said to me there are a lot of people who see this inequity and injustice.”

Wallace began using his platform on WVON—The People’s Show—to mobilize communities around social and economic justice, and many discussions revolved around the automated enforcement program. The cause gained traction after city and state officials were found to have taken thousands of dollars from technology and surveillance companies to make sure their cameras remained on the streets.

Wallace and his group, Citizens to Abolish Red Light Cameras, want to repeal the ordinances authorizing the city’s camera programs. That hasn’t happened so far, but political pressure from the group paved the way for a Chicago City Council ordinance that required public meetings before any red-light cameras are installed, removed, or relocated. The group hopes for more restrictions for speed cameras, too.

“It was never about me personally. It was about ensuring that we could demonstrate to people that you have power,” says Wallace. “If you don’t like something, as Barack Obama would say, get a pen and clipboard and go to work to fight to make these changes.” 


Jonathan Manes
Senior counsel, MacArthur Justice Center

Derick Scruggs, a 30-year-old father and licensed armed security guard, was working in the parking lot of an AutoZone on Chicago’s Southwest Side on April 19, 2021. That’s when he was detained, interrogated, and subjected to a “humiliating body search” by two Chicago police officers, Scruggs later attested. “I was just doing my job when police officers came at me, handcuffed me, and treated me like a criminal—just because I was near a ShotSpotter alert,” he says.

The officers found no evidence of a shooting and released Scruggs. But the next day, the police returned and arrested him for an alleged violation related to his security guard paperwork. Prosecutors later dismissed the charges, but he was held in custody overnight and was then fired from his job. “Because of what they did,” he says, “I lost my job, couldn’t work for months, and got evicted from my apartment.”

Jonathan Manes litigated cases related to detentions at Guantanamo Bay and the legality of drone strikes before turning his attention to Chicago’s implementation of gunshot detection technology.
AKILAH TOWNSEND

Scruggs is believed to be among thousands of Chicagoans who’ve been questioned, detained, or arrested by police because they were near the location of a ShotSpotter alert, according to an analysis by the City of Chicago Office of Inspector General. The case caught the attention of Jonathan Manes, a law professor at Northwestern and senior counsel at the MacArthur Justice Center, a public interest law firm. 

Manes previously worked in national security law, but when he joined the justice center about six years ago, he chose to focus squarely on the intersection of civil rights with police surveillance and technology. “My goal was to identify areas that weren’t well covered by other civil rights organizations but were a concern for people here in Chicago,” he says. 

“There is a need for much broader structural change to how the city chooses to use surveillance technology and then deploys it.”

Jonathan Manes, senior counsel, MacArthur Justice Center

And when he and his colleagues looked into ShotSpotter, they revealed a disturbing problem: The system generated alerts that yielded no evidence of gun-related crimes but were used by police as a pretext for other actions. There seemed to be “a pattern of people being stopped, detained, questioned, sometimes arrested, in response to a ShotSpotter alert—often resulting in charges that have nothing to do with guns,” Manes says.

The system also directed a “massive number of police deployments onto the South and West Sides of the city,” Manes says. Those regions are home to most of Chicago’s Black and Latino residents. The research showed that 80% of the city’s Black population but only 30% of its white population lived in districts covered by the system. 

Manes brought Scruggs’s case into a lawsuit that he was already developing against the city’s use of ShotSpotter. In late 2025, he and his colleagues reached a settlement that prohibits police officers from doing what they did in Scruggs’s case—stopping or searching people simply because they are near the location of a gunshot detection alert. 

Chicago had already decommissioned ShotSpotter in 2024, but the agreement will cover any future gunshot detection systems. Manes is carefully watching to see what happens next.

Though Manes is pleased with the settlement, he points out that it narrowly focused on how police resources were used after the gunshot detection system was operational. “There is a need for much broader structural change to how the city chooses to use surveillance technology and then deploys it,” he adds. He supports laws that require disclosure from local officials and law enforcement about what technologies are being proposed and how civil rights could be affected.  

More than two dozen jurisdictions nationwide have adopted surveillance transparency laws, including San Francisco, Seattle, Boston, and New York City. But so far Chicago is not on that list. 

Rod McCullom is a Chicago-based science and technology writer whose focus areas include AI, biometrics, cognition, and the science of crime and violence.  

The Download: Chicago’s surveillance network, and building better bras

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Inside Chicago’s surveillance panopticon

Chicago has tens of thousands of surveillance cameras—up to 45,000, by some estimates. 

That’s among the highest numbers per capita in the US. Chicago boasts one of the largest license plate reader systems in the country, and the ability to access audio and video surveillance from independent agencies such as the Chicago Public Schools, the Chicago Park District, and the public transportation system as well as many residential and commercial security systems such as Ring doorbell cameras.

Law enforcement and security advocates say this vast monitoring system protects public safety and works well. 

But activists and many residents say it’s a surveillance panopticon that creates a chilling effect on behavior and violates guarantees of privacy and free speech. Read the full story.

—Rod McCullom

Job titles of the future: Breast biomechanic

Twenty years ago, Joanna Wakefield-Scurr was having persistent pain in her breasts. Her doctor couldn’t diagnose the cause but said a good, supportive bra could help. A professor of biomechanics, Wakefield-Scurr thought she could do a little research and find a science-backed option. Two decades later, she’s still looking.

Wakefield-Scurr now leads an 18-person team at the Research Group in Breast Health at the University of Portsmouth in the UK. And as more women take up high-impact sports, the need to understand what makes a good bra grows; she says her lab can’t keep up with demand. Read the full story.

—Sara Harrison

These stories are both from the next print issue of MIT Technology Review magazine, which is all about crime. If you haven’t already, subscribe now to receive future issues once they land. 

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Inside ICE’s plans to build huge detention centers across the US
The identities of the personnel who authorized it have been revealed in metadata. (Wired $)
+ A UK tourist with a valid visa was detained by ICE for six weeks. (The Guardian)

2 The UAE says it was targeted by a wave of AI-backed cyberattacks
Authorities said the attacks marked a major shift in methods, but didn’t elaborate. (Bloomberg $)
+ New cybersecurity rules are hobbling small defense suppliers. (Reuters)+ AI is already making online crimes easier. It could get much worse. (MIT Technology Review)

3 What does the public really think about AI?
Tech leaders are worried they might not be fully onboard with their missions. (NYT $)
+ How social media encourages the worst of AI boosterism. (MIT Technology Review)

4 It looks like X really is pushing its users further to the right
As well as attracting more conservative thinkers in the first place. (NY Mag $)
+ The platform is currently disputing a major European fine. (Politico $)

5 Meet the farmers standing up to data center builders
They’re turning down deals worth millions for the land they’ve worked for decades. (The Guardian)
+ A data center venture launched at the White House isn’t delivering on its promises. (The Information $)
+ Data centers are amazing. Everyone hates them. (MIT Technology Review)

6 America has a plan to fight back against China’s AI
It hopes to send Tech Corps volunteers around the world to promote its own national efforts. (Rest of World)
+ China’s plan to lure in new AI customers? Bubble tea. (FT $)
+ The State of AI: Is China about to win the race? (MIT Technology Review)

7 Clouds are a major climate problem ☁
They’re making it harder for scientists to model the weather accurately. (Quanta Magazine)
+ The building legal case for global climate justice. (MIT Technology Review)

8 AI is still hopeless at reading PDFs
But companies keep deploying it across work systems anyway. (The Verge)

9 A “Fitbit for farts” could help analyze your gastrointestinal health
If you don’t mind wearing a sensor tucked into your underwear, that is. (WSJ $)

10 Gen Z is fascinated by corporate culture ​​💼
TikTok’s “WorkTok” videos are very effective at romanticizing the daily grind. (FT $) 

Quote of the day

“It also takes a lot of energy to train a human. It takes like 20 years of life and all of the food you eat during that time before you get smart.”

—Sam Altman, CEO of OpenAI, compares the environmental impact of training AI’s vast models to the effort required to train a human during an event in India, TechCrunch reports.

One more thing

How one mine could unlock billions in EV subsidies

On a pine farm north of the tiny town of Tamarack, Minnesota, Talon Metals has uncovered one of America’s densest nickel deposits—and now it wants to begin extracting it.

If regulators approve the mine, it could mark the starting point in what the company claims would become the country’s first complete domestic nickel supply chain, running from the bedrock beneath the Minnesota earth to the batteries in electric vehicles across the nation.

MIT Technology Review wanted to provide a clearer sense of the law’s on-the-ground impact by zeroing in on a single project and examining how these rich subsidies could be unlocked at each point along the supply chain. Take a look at what we found out.

—James Temple

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Alysa Liu’s gold medal-winning Winter Olympics figure skating routine is truly amazing.
+ Mmm, delicious ancient Roman pizza.
+ It’s not every day you find 2,000 year-old footprints while walking your dog 👣
+ Nature is full of surprises, and so are the winners of this year’s Sony World Photography Awards.

The human work behind humanoid robots is being hidden

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

In January, Nvidia’s Jensen Huang, the head of the world’s most valuable company, proclaimed that we are entering the era of physical AI, when artificial intelligence will move beyond language and chatbots into physically capable machines. (He also said the same thing the year before, by the way.)

The implication—fueled by new demonstrations of humanoid robots putting away dishes or assembling cars—is that mimicking human limbs with single-purpose robot arms is the old way of automation. The new way is to replicate the way humans think, learn, and adapt while they work. The problem is that the lack of transparency about the human labor involved in training and operating such robots leaves the public both misunderstanding what robots can actually do and failing to see the strange new forms of work forming around them.

Consider how, in the AI era, robots often learn from humans who demonstrate how to do a chore. Creating this data at scale is now leading to Black Mirror–esque scenarios. A worker in Shanghai, for example, recently spent a week wearing a virtual-reality headset and an exoskeleton while opening and closing the door of a microwave hundreds of times a day to train the robot next to him, Rest of World reported. In North America, the robotics company Figure appears to be planning something similar: It announced in September it would partner with the investment firm Brookfield, which manages 100,000 residential units, to capture “massive amounts” of real-world data “across a variety of household environments.” (Figure did not respond to questions about this effort.)

Just as our words became training data for large language models, our movements are now poised to follow the same path. Except this future might leave humans with an even worse deal, and it’s already beginning. The roboticist Aaron Prather told me about recent work with a delivery company that had its workers wear movement-tracking sensors as they moved boxes; the data collected will be used to train robots. The effort to build humanoids will likely require manual laborers to act as data collectors at massive scale. “It’s going to be weird,” Prather says. “No doubts about it.” 

Or consider tele-operation. Though the endgame in robotics is a machine that can complete a task on its own, robotics companies employ people to operate their robots remotely. Neo, a $20,000 humanoid robot from the startup 1X, is set to ship to homes this year, but the company’s founder, Bernt Øivind Børnich, told me recently that he’s not committed to any prescribed level of autonomy. If a robot gets stuck, or if the customer wants it to do a tricky task, a tele-operator from the company’s headquarters in Palo Alto, California, will pilot it, looking through its cameras to iron clothes or unload the dishwasher.

This isn’t inherently harmful—1X gets customer consent before switching into tele-operation mode—but privacy as we know it will not exist in a world where tele-operators are doing chores in your house through a robot. And if home humanoids are not genuinely autonomous, the arrangement is better understood as a form of wage arbitrage that re-creates the dynamics of gig work while, for the first time, allowing physical tasks to be performed wherever labor is cheapest.

We’ve been down similar roads before. Carrying out “AI-driven” content moderation on social media platforms or assembling training data for AI companies often requires workers in low-wage countries to view disturbing content. And despite claims that AI will soon enough train on its outputs and learn on its own, even the best models require an awful lot of human feedback to work as desired.

These human workforces do not mean that AI is just vaporware. But when they remain invisible, the public consistently overestimates the machines’ actual capabilities.

That’s great for investors and hype, but it has consequences for everyone. When Tesla marketed its driver-assistance software as “Autopilot,” for example, it inflated public expectations about what the system could safely do—a distortion a Miami jury recently found contributed to a crash that killed a 22-year-old woman (Tesla was ordered to pay $240 million in damages). 

The same will be true for humanoid robots. If Huang is right, and physical AI is coming for our workplaces, homes, and public spaces, then the way we describe and scrutinize such technology matters. Yet robotics companies remain as opaque about training and tele-operation as AI firms are about their training data. If that does not change, we risk mistaking concealed human labor for machine intelligence—and seeing far more autonomy than truly exists.

Peptides are everywhere. Here’s what you need to know.

MIT Technology Review Explains: Let our writers untangle the complex, messy world of technology to help you understand what’s coming next. You can read more from the series here.

Want to lose weight? Get shredded? Stay mentally sharp? A wellness influencer might tell you to take peptides, the latest cure-all in the alternative medicine arsenal. People inject them. They snort them. They combine them into concoctions with superhero names, like the Wolverine stack.  

Matt Kaeberlein, a longevity researcher, first started hearing about peptides a few years ago. “At that point it was mostly functional medicine doctors that were using peptides,” he says, referring to physicians who embrace alternative medicine and supplements. “In the last six months, it’s kind of gone crazy.”

Peptides have gone mainstream. At the health-technology startup Superpower in Los Angeles, employees can get free peptide shots on Fridays. At a health food store in Phoenix, a sidewalk sign reads, “We have peptides!” At a tae kwon do center in South Carolina, a peptide wholesaler hosts an informational evening. On social media, they’re everywhere. And that popularity seems poised to grow; Department of Health and Human Services secretary Robert F. Kennedy Jr. has promised to end the FDA’s “aggressive suppression” of peptides.

The benefits and risks of many of these compounds, however, are largely unknown. Some of the most popular peptides have never been tested in human trials. They are sold for research purposes, not human consumption. Some are illegal knockoffs of wildly successful weight-loss medicines. The vast majority come from China, a fact that has some legislators worried. Last week, Senator Tom Cotton urged the head of the FDA to crack down on illegal shipments of peptides from China. In the absence of regulatory oversight, some people are sending the compounds they purchase off for independent testing just to ensure that the product is legit. 

What is a peptide?

A peptide is simply a short string of amino acids, the building blocks of proteins. “Scientists generally think of peptides as very small protein fragments, but we don’t really have a precise cutoff between a peptide and a protein,” says Paul Knoepfler, a stem-cell researcher at the University of California, Davis. Insulin is a peptide, as is human growth hormone. So are some neurotransmitters, like oxytocin. 

But when wellness influencers talk about peptides, they’re often referring to particular compounds—formulated as injections, pills, or nasal sprays—that have become trendy lately. 

Some of these peptides are FDA-approved prescription medications. GLP-1 medicines, for example, are approved to treat diabetes and obesity but are also easily accessible online to almost anyone who wants to use them. Many sites sell microdoses of GLP-1s with claims that they can “support longevity,” reduce cognitive decline, or curb inflammation. 

Many more peptides are experimental. “The majority fall into the unapproved bucket,” says Kaeberlein, who is chief executive officer of Optispan, a Seattle-based health-care technology company focused on longevity. That bucket includes drugs that promote the release of growth hormones, like TB-500, CJC-1295, and ipamorelin, and compounds said to promote tissue repair and wound healing, like BPC-157 and GHK-Cu. It’s primarily these unapproved compounds that have raised concerns. “Anybody can set up an online shop selling research-grade peptides,” says Tenille Davis, a pharmacist and chief advocacy officer at the Alliance for Pharmacy Compounding, a trade organization representing more than 600 pharmacies. “And nobody knows what’s even in the vials.”  

It’s not just fitness gurus, biohackers, and longevity fanatics who are taking these experimental drugs. Kaeberlein recalls hearing about an acquaintance whose doctor prescribed her unapproved peptides. She was “just a typical upper-middle-class woman,” he says. “That’s when it really hit me that this has sort of gone relatively mainstream.”

What do peptides do?

All kinds of things, purportedly. GHK-Cu is supposed to help with wound healing and collagen production. BPC-157 is said to promote tissue repair and curb inflammation, TB-500 to foster blood vessel formation. Here’s the caveat: The evidence for these benefits comes largely from animal studies and online testimonials, not human trials. “There’s no human clinical evidence to show that they even do what people are claiming that they do,” says Stuart Phillips, a muscle physiologist at McMaster University in Hamilton, Ontario. “So it could be just a giant rip-off.”

Some experimental peptides probably do have beneficial wound healing properties or regenerative effects, Kaeberlein says. For BPC-157, for example, “the animal data is compelling,” he says. But there are still plenty of unknowns: What is the right dosage? How long should you take it? What’s the best way to administer it? Those are questions that can be answered only through rigorous clinical trials. In the absence of those studies, doctors “just make up their own protocols,” he says. Some consumers go the DIY route, reconstituting powdered peptides and injecting their own concoctions at home. 

So why am I seeing ads for these peptide therapies if they’re not approved? 

Federal law prohibits companies from marketing medications that haven’t been approved. That includes most peptides, which are regulated as small molecules, not dietary supplements. (Two notable exceptions are collagen peptides and creatine peptides, often sold as powders.) The law is designed to protect consumers from drugs that haven’t been proved safe and effective.

But it doesn’t stop labs from making peptides for research purposes. “Most of the peptides being consumed in the marketplace now are being sold by these online companies that are selling them labeled for research use only,” Davis says. The vials often bear disclaimers that clearly say as much: “For research use only” or “Not for human consumption.” It’s illegal to market these products for human use, but “the websites make it pretty clear that the buyers are intended to be using these products themselves,” she says.

The practice isn’t legal, but enforcement has been sporadic. “FDA sends warning letters, shuts down companies. But because it’s all online, they have a really hard time keeping up with these entities,” Davis says. And companies have plenty of incentive to keep illegally marketing the products. “They can make millions of dollars without having to spend money and time doing research,” Knoepfler says. “It’s a cash grab.”

Compounding pharmacies, which are legally allowed to create bespoke medications by mixing bulk active ingredients, often get requests to dispense peptides, but most peptides don’t meet the eligibility criteria for compounding. This has always been the case, but in 2023 the FDA explicitly added several common experimental peptides to the list of bulk substances that cannot be compounded because of safety concerns. “It put an exclamation point on policy that was already in place,” Davis says.  

Many GLP-1 medications are available from compounding pharmacies. That used to be accepted because the drugs were in short supply. Now, however, supplies of most of these medications are stable, and sellers are under increasing pressure from regulators to stop mass-marketing these drugs. 

What’s the harm in trying them? 

Peptides sold for research purposes come from labs with little regulatory oversight. “When you buy stuff online intended for research grade, you have no idea what’s in the vial that you’re getting. You have no idea the sterility practices that it was manufactured under, or what sort of impurities might be in the vial,” Davis says.

Phillips has heard some people say they send their peptides for third-party testing to ensure that they’re pure, “like it’s some kind of flex,” he says. “And I’m like, ‘Well, you just proved that this stuff lives in the shadows, for crying out loud.’”

Finnrick Analytics, a peptide-testing startup in Austin, Texas, has analyzed the purity and potency of more than 5,000 samples of 15 different peptides from 173 vendors. The results show that the quality varies substantially from vendor to vendor and even batch to batch. For example, the company tested nearly 450 samples of BPC-157 from 64 vendors. In some cases, the vials sold as BPC-157 didn’t contain the compound at all. In those that did, the purity varied from about 82% to 100%. 

Perhaps more worrying, 8% of all the peptide samples Finnrick tested had measurable levels of endotoxins, bacterial fragments that can cause fever and chills or, in larger doses, septic shock. 

The health risks aren’t just hypothetical. In 2025, two women had to be hospitalized and placed on ventilators after receiving peptide injections at a longevity conference in Las Vegas. Both recovered, and it’s still not clear whether they reacted to the peptides themselves or to some impurity in the vials. 

“The idea that all peptides are safe and all peptides are natural is just nonsense,” Kaeberlein says. “I tend to consider myself fairly libertarian when it comes to what people want to do for their health,” he adds. “If you want to take an experimental drug, that’s up to you.” But the problem with unregulated experimental therapies is that it’s exceedingly difficult to assess benefit and harm. “The relatively small percentage of people that are bad actors will be bad actors, and they will dishonestly market this stuff to people who aren’t equipped to really understand the true risks and rewards,” he says.

And, like any drug, peptides come with a risk of side effects. For approved medications, these are detailed right on the package insert. But for many experimental peptides, there hasn’t been enough research to understand what those side effects might be. Some researchers have warned that peptides that promote growth or blood vessel formation might also foster the growth of cancers.  

For competitive athletes who use peptides, meanwhile, the risks include not just possible health problems but suspension. Some peptides, like BPC-157, are banned by the World Anti-Doping Agency. 

The FDA has undergone a pretty substantial overhaul under the Trump administration. Are the regulations around peptides likely to change? 

I don’t have a crystal ball, but it seems likely. In May 2025, US health secretary Robert F. Kennedy Jr. joined the longevity enthusiast and biohacker Gary Brecka on his podcast The Ultimate Human and promised to “end the war at FDA against alternative medicine—the war on stem cells, the war on chelating drugs, the war on peptides.”

Knoepfler anticipates that Kennedy will force the FDA to allow compounding of some of the most popular peptides, like BPC-157 and GHK-Cu. “Such a step would put public health at great risk, while giving compounders and likely wellness influencers a lot more profit,” he says. 

The FDA seems intent on cracking down on GLP-1 copycats, however. In early February, commissioner Marty Makary posted on X that the agency would take “swift action against companies mass-marketing illegal copycat drugs, claiming they are similar to FDA-approved products.”

AI Prompts Come to Search Console

Google Search Console’s AI analysis went public last week after being available to select users.

The feature, in the Performance > Search results tab, enables insights on a site’s organic search performance by entering generative AI-like prompts.

I’ll provide examples in this post.

Screenshot of Search Console's new AI analysis

Search Console’s AI analysis responds to genAI-like prompts.

Query analysis

Instead of using prebuilt filters to discern various types of searches, users can enter prompts, such as:

  • “Show question-like queries.”
  • “Filter queries longer than [xx] characters.”
  • “Show informational queries.”

The feature initially responds to the prompts with regular expressions. For example, for the prompt, “Show queries with informational intent,” it created the following regex:

Queries matching regex: (what|how|why|guide|tutorial|explain|definition|example|best way to|steps to|tips for|reasons why|benefits of|difference between)

I could then click “Apply” to see the filtered results.

Similarly, I prompted “Show product research queries.” The feature responded:

Queries matching regex: (best|review|compare|vs|price|buy|alternatives|guide|recommendations|features|specs|cost|discount|deal|sale|where to buy|how to choose|top rated|pros and cons)

Again, I could “Apply” or “Dismiss” that regex.
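To make the mechanics concrete, here is a minimal sketch of what applying such a regex filter amounts to. The pattern is the informational-intent regex quoted above; the sample queries are invented for illustration, and the unanchored (partial-match) behavior is an assumption based on how Search Console’s regex filters typically work.

```python
import re

# Regex produced by the AI analysis for "informational intent" (from the
# example above). Search Console regex filters match anywhere in the
# query, so re.search is the closest stdlib equivalent.
INFORMATIONAL = re.compile(
    r"(what|how|why|guide|tutorial|explain|definition|example|"
    r"best way to|steps to|tips for|reasons why|benefits of|difference between)"
)

# Hypothetical queries for illustration only.
queries = [
    "how to tie a bowline",
    "running shoes sale",
    "difference between llms.txt and robots.txt",
    "acme widgets login",
]

informational = [q for q in queries if INFORMATIONAL.search(q)]
print(informational)
# → ['how to tie a bowline', 'difference between llms.txt and robots.txt']
```

Clicking “Apply” in the UI effectively runs this kind of partial match across your full query list.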

Brand search

The AI analysis performed surprisingly well on brand-name searches. I prompted, “Show branded queries.” The regex responses included my brand name and a one- and two-word pattern:

Queries matching regex: (brandname|brand name)

Traffic drop

Search Console’s AI analysis can assemble traffic change reports. For example, I prompted:

  • “Show pages that lost the most clicks over the past 30 days.”
  • “Compare clicks last month with the same month of the previous year.”

Country-specific

Users can also evaluate organic search visibility in countries:

  • “Show me clicks, Average CTR, and Average Position of my queries in Canada last month.”

Here’s a prompt for traffic changes:

  • “Show pages that lost the most clicks over the past 30 days in Canada.”

Limitations

The new AI feature, while helpful, is not a game-changer. Inexperienced users do not typically know what to ask, while seasoned pros can go directly to the prebuilt filters.

Moreover, the AI integration works only with top filters. It cannot process requests for filters unavailable in the Performance reports. For example, it cannot respond to prompts for queries with an average position greater than 2, a common way to spot queries with room for improvement.

Screenshot of a Performance report showing a filter only on position 1.

Default Performance reports allow filtering only on position 1.
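That position-based filtering is still easy to do outside the product. Here is a hedged sketch using pandas on exported Performance data; the column names mirror the report’s UI labels, and the rows are invented for illustration.

```python
import pandas as pd

# Hypothetical export of a Performance report (data is made up).
df = pd.DataFrame({
    "query": ["blue widgets", "widget sizes", "buy widgets", "widget repair"],
    "clicks": [120, 45, 210, 12],
    "position": [1.4, 5.2, 2.8, 9.1],
})

# The in-product AI analysis can't filter on average position > 2,
# but the same question is trivial against exported data.
opportunities = df[df["position"] > 2].sort_values("clicks", ascending=False)
print(opportunities["query"].tolist())
# → ['buy widgets', 'widget sizes', 'widget repair']
```

The same filter works on a CSV export or on data pulled via the Search Console API.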

Google Discover Update: Early Data Shows Fewer Domains In US via @sejournal, @MattGSouthern

NewzDash published an analysis comparing Discover visibility before and after Google’s February 2026 Discover core update, using panel data from millions of US users tracked through its DiscoverPulse tool.

It compared pre-update (Jan 25-31) and post-update (Feb 8-14) windows across the top 1,000 domains and top 1,000 articles in the US, California, and New York.

For transparency, NewzDash is a news SEO tracking platform that sells Discover monitoring tools.

What The Data Shows

Google said the update targeted more locally relevant content, less sensational and clickbait content, and more in-depth, timely content from sites with topic expertise. The NewzDash data has early readings on all three.

NewzDash compared Discover feeds in California, New York, and the US as a whole. The three feeds mostly overlapped, but each state got local stories the others didn’t. New York-local domains appeared roughly five times more often in the New York feed than in the California feed, and vice versa.

In California, local articles in the top 100 placements rose from 10 to 16 in the post-update window. The local layer included content from publishers like SFGate and LA Times that didn’t appear in the national top 100 during the same period.

Clickbait reduction was harder to confirm. NewzDash acknowledged that headline markers alone can’t prove clickbait decreased. It did find that what it called “templated curiosity-gap patterns” appeared to lose visibility. Yahoo’s presence in the US top 1,000 dropped from 11 to 6 articles, with zero items in the top 100 post-update.

Unique content categories grew across all three geographic views, but unique publishers shrank in the US (172 to 158 domains) and California (187 to 177). That combination suggests Discover is covering more topics while concentrating distribution among a narrower set of publishers.

This pattern aligns with what early December core update analysis showed about specialized sites gaining ground over generalists.

X.com’s Growing Discover Presence

X.com posts from institutional accounts climbed from 3 to 13 items in the US top 100 Discover placements and from 2 to 14 in New York’s top 100.

NewzDash noted it had tracked X.com’s Discover growth since November and said the update appeared to accelerate the trend. Most top-performing X items came from established media brands.

The analysis noted it couldn’t prove or disprove whether X posts are cannibalizing publisher traffic in Discover, calling the data a “directional sanity check.” The open question is whether routing through X adds friction that could reduce click-through to owned pages.

Why This Matters

As we continue to monitor the Discover core update, we now have early data on what it seems to favor. Regional publishers with locally relevant content showed up more often in NewzDash’s post-update top lists.

Discover covered more topics in the post-update window, but fewer sites were getting that traffic in the US and California. Publishers without a clear topic focus could be on the wrong side of that trend.

Looking Ahead

This analysis covers an early window while the rollout is still being completed. The post-update measurement period overlaps with the Super Bowl, Winter Olympics, and ICC Men’s T20 World Cup, any of which could independently inflate News and Sports category visibility.

Google said it plans to expand the Discover core update beyond English-language US users in the months ahead.


Featured Image: joingate/Shutterstock

SerpApi Challenges Google’s Right To Sue Over SERP Scraping via @sejournal, @MattGSouthern

SerpApi filed a motion to dismiss Google’s federal lawsuit, two months after Google sued the company under the DMCA for allegedly bypassing its SearchGuard anti-scraping system.

The filing goes beyond disputing the technical allegations. SerpApi is challenging whether Google has the legal right to bring the case at all.

The Standing Question

SerpApi’s core argument is that the DMCA protects copyright owners, not companies that display others’ content.

Google’s complaint cited licensed images in Knowledge Panels, merchant-supplied photos in Shopping results, and third-party content in Maps as examples of copyrighted material SerpApi allegedly scraped.

SerpApi CEO Julien Khaleghy wrote that the content in Google’s search results belongs to publishers, authors, and creators, not to Google.

Khaleghy writes:

“Google is a website operator. It is not the copyright holder of the information it surfaces.”

Khaleghy argued that only a copyright holder can authorize access controls under the DMCA. Google, he wrote, is trying to assert those rights without the knowledge or consent of the creators whose work is at issue.

In the 31-page motion, SerpApi invokes the Supreme Court’s 2014 ruling in Lexmark International, Inc. v. Static Control Components, Inc., which established that a plaintiff must show injuries within the “zone of interests” the law was designed to protect. SerpApi argues Google’s alleged injuries, including infrastructure costs and lost ad revenue from automated queries, don’t fall within what the DMCA was built to address.

The Circumvention Question

SerpApi also disputes whether bypassing SearchGuard counts as circumvention under the DMCA.

Google alleged in December that SerpApi solved JavaScript challenges, used rotating IP addresses, and mimicked human browser behavior to get past SearchGuard.

Khaleghy wrote that the DMCA defines “to circumvent a technological measure,” in part, as “to descramble a scrambled work, to decrypt an encrypted work, or otherwise to avoid, bypass, remove, deactivate, or impair a technological measure,” and argued SerpApi does none of those things.

Khaleghy writes:

“We access publicly visible web pages, the same ones accessible to any browser. We do not break encryption. We do not disable authentication systems.”

The motion states Google “does not allege unscrambling or decryption of any work, or the impairment, deactivation, or removal of any access system.” SerpApi calls SearchGuard a bot-management tool, not a copyright access control.

Why This Matters

The outcome could reach beyond SerpApi. Google’s DMCA theory, if accepted, would let any platform displaying licensed third-party content use the statute to block automated access to publicly visible pages.

When we covered Google’s original filing in December, I noted the central question was whether SearchGuard qualifies as a DMCA-protected access control. SerpApi’s motion now adds a layer underneath that. Even if SearchGuard qualifies, SerpApi argues Google isn’t the right party to enforce it.

In a separate case decided on December 15, 2025, U.S. District Judge Sidney Stein dismissed Ziff Davis’s DMCA Section 1201(a) anti-circumvention claim tied to robots.txt against OpenAI, holding Ziff Davis failed to plausibly allege that robots.txt is a technological measure that effectively controls access, or that OpenAI circumvented it.

Google’s SearchGuard is more technically complex than a robots.txt directive, but both cases test whether the DMCA can be used to restrict automated access to publicly available content.

Looking Ahead

The hearing on SerpApi’s motion is scheduled for May 19, 2026. Google will file its opposition before then.

SerpApi also filed a motion to dismiss in a separate lawsuit brought by Reddit in October, which named SerpApi alongside Perplexity, Oxylabs, and AWMProxy. Both cases raise questions about using DMCA anti-circumvention claims to challenge bot evasion and automated access to pages that are viewable in a normal browser.


Featured Image: CrizzyStudio/Shutterstock

4 Sites That Recovered From Google’s December 2025 Core Update – What They Changed via @sejournal, @marie_haynes

The December 2025 core update had a significant impact on a large number of sites. Each of the sites below that did well is either a long-term client, a past client, or a site I have done a site review for. While we can never say with certainty what changed as the result of a change to Google’s core algorithms and systems, I’ll share some observations on what I think helped these sites improve.

1. Trust Matters Immensely

This first client, a medical eCommerce site, reached out to me in mid-2024, and we started on a long-term engagement. A few days into our relationship, they were strongly negatively impacted by the August 2024 core update. It was devastating.

When you are impacted by a core update, in most cases, you remain suppressed until another core update happens. It usually takes several core updates. And given that these only happen a few times a year, this site remained suppressed for quite some time.

We worked on a lot of things:

  • Improving blog post quality so it was not “commodity content”.
  • Improving page load time.
  • Optimizing images.
  • Improving FAQ content on product pages to help answer customer questions.
  • Creating helpful guides.
  • Improving product descriptions to better answer questions their customers have.
  • Adding more information about the E-E-A-T of authors.
  • Adding more authors with medical E-E-A-T.
  • Getting more reviews from satisfied customers.

While I think all of the above helped contribute to a better assessment of quality for this site, what I believe helped most had very little to do with SEO. Rather, it was the result of the business working hard to genuinely improve its customer service.

Core updates are tightly connected to E-E-A-T. Google says that trust is the most important aspect of E-E-A-T. The quality rater guidelines, which guide the human raters whose feedback helps Google train and evaluate its AI-driven ranking systems, mention “trust” 191 times.

For online stores, the raters are told that reliable customer service is vitally important.

Image Credit: Marie Haynes

A few bad reviews aren’t likely to tank your rankings, but this business had previously had significant logistical problems with shipping. They had been working hard to rectify these. Yet, if I asked AI Mode to tell me about the reputation of this company compared to their competitors, it would always tell me that there were serious concerns.

Here’s an interesting prompt you can use in AI Mode:

Make a chart showing the perceived trust in [url or brand] over time.

You can see that finally in 2025 the overall trust in this brand improved.

Image Credit: Marie Haynes

My suspicion is that these trust issues were the main driver in their core update suppression. I can’t say whether it was the improvement in customer trust that made a difference, the improvements in quality we made, or perhaps both. But these results were so good to see.

Image Credit: Marie Haynes

They continue to improve. Google recommends them more often in Popular Products carousels, ranks them more highly for many important terms and more importantly, drives far more sales for them now.

2. Original Content Takes A Lot Of Work

The next site is another site that was impacted by a core update.

This site is an affiliate site that writes about a big-ticket product. They have a lot of competition from some big players in their industry. When I reviewed their site, one thing was obvious to me: While they had a lot of content, most of it offered essentially the same value as everyone else’s. This was frustrating, considering they actually did purchase and review these products. What they wrote was mostly a collection of known facts about these products rather than their personal experience. And what was experiential was buried in massive walls of text that were difficult for readers to navigate.

Google’s guidance on core updates recommends that if you were impacted, you should consider rewriting or restructuring your content to make it easier for your audience to read and navigate the page.

Image Credit: Marie Haynes

This site put an incredible amount of work into improving their content quality:

  • They purchased the products they reviewed and took detailed photos of everything they discussed. And videos. Really helpful videos.
  • The blog posts were written by an expert in their field. This already was the case, but we worked on making it more clear what their expertise was and why it was helpful.
  • We brainstormed with AI to come up with ideas for adding helpful, unique information born of their experience and not likely to be found on other sites.
  • We used Microsoft Clarity to identify aspects on pages that were frustrating users and worked to improve them.
  • We added interactive quizzes to help readers and drive engagement.
  • We worked on improving freshness for every important post, ensuring they were up to date with the latest information.
  • We worked to really get in the shoes of a searcher and understand what they wanted to see. We made sure that this information was easy to find even if a reader was skimming.
  • We broke up large walls of text into chunks with good headings that were easy to skim and navigate.
  • We noindexed pages covering YMYL topics for which they lacked expertise.
  • We worked on improving core web vitals. (Note: I don’t think this is a huge ranking factor, but in this case the largest contentful paint was taking forever and likely frustrated users.)

Once again, it took many months of tireless work before improvements were seen! Rankings improved to the first page for many important keywords and some moved from page 4 to position #1-3.

Image Credit: Marie Haynes

3. Work To Improve User Experience

This next site was not a long-term client, but rather a site review I did for an eCommerce site in a YMYL niche. The SEO working on this site applied many of my recommendations and made some other smart changes as well, including:

  • Improved site navigation and hierarchy.
  • Improved UX, with a nicer, more modern font. The site looks more professional.
  • Improved the customer checkout flow, which reduced checkout abandonment.
  • Improved their About Us page with more information demonstrating the brand’s experience and history. Note: I don’t think this matters immensely to Google’s algorithms, as most of their assessment of trust comes from off-site signals, but it may help users feel more comfortable engaging.
  • Produced content around topics that were gaining public attention, which helped to truly earn some new links and mentions from authoritative sources.

After making these changes, the site secured a knowledge panel for brand searches. And search traffic is climbing.

Image Credit: Marie Haynes

4. First Hand Experience Can Really Help

This next site is another one that I did a site review for. It is a city guide that monetizes through affiliate links and sponsors. For every page I looked at, I came to the same conclusion: There was nothing on the page that couldn’t be covered by an AI Overview. Almost every piece of information was essentially paraphrased from somewhere else on the web.

The most recent update to the rater guidelines increased the use of the word “paraphrased” from 3 mentions to 25. I think this applies to a lot of sites!

Image Credit: Marie Haynes

and

Image Credit: Marie Haynes

and also,

Image Credit: Marie Haynes

Yet, when I spoke with the site owner, she shared with me that they had on-site writers who were truly writing from their experience.

While I don’t know specifically what changes this site owner has made, I looked at several pages that had seen nice improvements in conjunction with the core update and noticed the following improvements:

  • They’ve added video to some posts – filmed by their team.
  • They use original photography from their team – not taken from elsewhere on the web. Not every photo is original, but quite a few are.
  • They’ve added information to help readers make their decision, like “This place is best for…” or “Must-try dishes include…”
  • They write about their actual experiences. Rather than just sharing what dishes were available at a restaurant, they share which ones they tried and how they felt they stood out compared to other restaurants.
  • They’ve worked to keep content updated and fresh.

This site saw some nice improvements. However, they still have ground to gain as they previously were doing much better in the days before the helpful content updates.

Image Credit: Marie Haynes

Some Thoughts For Sites That Have Not Done Well

The December 2025 core update had a devastating negative impact on many sites. If you were impacted, your answer is unlikely to lie in technical SEO fixes, disavowing links, or building new links. Google’s ranking systems are a collection of AI systems that work together with one goal in mind – to present searchers with pages they are likely to find helpful. Many components of these ranking systems are deep learning systems, which means their recommendations improve over time.

I’d recommend the following for you:

1. Consider Whether The Brand Has Trust Issues

You can try the AI Mode prompt I used above. A few bad reviews are not going to cause a core update suppression. But a prolonged history of repeated customer service frustrations, fraud, or anything else that significantly damages your reputation can seriously impact your ability to rank. This is especially true if you are writing on YMYL topics.

2. Look At How Your Content Is Structured

It is a helpful exercise to look at which pages Google’s algorithms are ranking for your queries. If they don’t seem to make sense to you, look at how quickly they get people to the answer they are trying to find. I have found that often sites that are impacted make their readers scroll through a lot of fluff or ads to get to the important bits. Improve your headings – not for search engines, but for readers who are skimming. Put the important parts at the top. Or, if that’s not feasible, make it really easy for people to find the “main content”.

Here’s a good exercise – Open up the rater guidelines. These are guidelines for human raters who help Google understand if the AI systems are producing good, helpful rankings. CTRL-F for “main content” and see what you can learn.

3. Really Ask Yourself Whether Your Content Is Mostly “Commodity Content”

Commodity content is information that is widely available in many places on the web. There was a time when a business could thrive by writing pages that aggregate known information on a topic. Now that Google has AI Overviews and AI Mode, this type of page is much less valuable. You will still see some pages cited in AI Overviews that essentially parrot what is already in the AIO. Usually these are authoritative sites which are helpful for readers who want to see information from an authority rather than an AI answer.

Liz Reid from Google said these interesting words in an interview with the WSJ:

“What people click on in AI Overviews is content that is richer and deeper. That surface level AI generated content, people don’t want that, because if they click on that they don’t actually learn that much more than they previously got. They don’t trust the result any more across the web. So what we see with AI Overviews is that we sort of surface these sites and get fewer, what we call bounced clicks. A bounced click is like, you click on this site and you’re like, “Ah, I didn’t want that” and you go back. And so AI Overviews give some content and then we get to surface sort of deeper, richer content, and we’ll look to continue to do that over time so that we really do get that creator content and not AI generated.”

Here is a good exercise to try on some of the pages that have declined with the core update. Give your url or copy your page’s content into your favourite LLM and use this prompt:

“What are 10 concepts that are discussed in this page? For each concept tell me whether this topic has been widely written about online. Does this content I am sharing with you add anything truly uniquely interesting and original to the body of knowledge that already exists? Your goal here is to be brutally honest and not just flatter me. I want to know if this page is likely to be considered commodity content or whether it truly is content that is richer and deeper than other pages available on the web.”

You can follow this up with this prompt:

“Give me 10 ideas that I can use to truly create content that goes deeper on these topics? How can I draw from my real world experience to produce this kind of content?”

Concluding Thoughts

I’ve been studying Google updates for a long time – since the early days of the Panda and Penguin updates. I built a business on helping sites recover from Google update hits. However, over the years I have found it increasingly difficult for a site impacted by a Google update to recover. This is why today, although I still love doing site reviews to give you ideas for improving, I generally decline work with sites that have been strongly impacted by Google updates. While recovery is possible, it generally takes a year or more of hard work, and even then, recovery is not guaranteed, as Google’s algorithms and people’s preferences are continually changing.

The sites that saw nice recovery with this Google update were sites that worked on things like:

  • Truly improving the world’s perception of their customer service.
  • Creating original and insightful content that was substantially better than other pages that exist.
  • Using their own imagery and videos in many cases.
  • Working hard to improve user experience.

If you missed it, I recently published this video about what we learned about the role of user satisfaction signals in Google’s algorithms. Traditional ranking factors create an initial pool of results. AI systems rerank them, working to predict what the searcher will find most helpful. And quality raters, as well as participants in live user tests, help fine-tune these systems.

And here are some more blog posts that you may find helpful:

Ultimately, Google’s systems work to reward content that users are likely to find satisfying. Your goal is to be the most helpful result there is!

More Resources:


Read Marie’s newsletter AI News You Can Use, subscribe now.


Featured Image: Jack_the_sparow/Shutterstock

Agentic Commerce Optimization: A Technical Guide To Prepare For Google’s UCP via @sejournal, @alexmoss

In January, I wrote about the birth of agentic commerce through both the Agentic Commerce Protocol (ACP) and Universal Commerce Protocol (UCP), and how this could impact us all as consumers, business owners, and SEOs. While we still sit on waitlists for both, that doesn’t mean we can’t prepare.

UCP fixes a real-life problem for many by minimizing the fragmented commerce journey. Instead of building separate integrations for every agent platform, as we have mostly been doing, you can [theoretically] integrate once and work seamlessly with other tools and platforms.

But note that, as opposed to ACP, which focuses more on the checkout → fulfillment → payment journey, UCP goes beyond this with six capabilities covering the entire commerce lifecycle.

This, of course, will impact an SEO’s ambit. As we shift from optimizing for clicks to optimizing for selection, we also need to ensure that it’s you/your client that is selected through data integrity, product signals, and AI-readable commerce capabilities. Structured data has always served an important role for the internet as a whole and will continue to be the driving force on how you can serve agents, crawlers, and humans in the best way possible.

I allude to a possible new acronym “ACO” – Agentic Commerce Optimization – and the following could be considered the closest we can get to guidelines on how we undertake it.

UCP Isn’t Coming, It’s Here

UCP was only announced in January, but there’s already confirmation that its capabilities are rolling out. On Feb. 11, 2026, Vidhya Srinivasan (VP/GM of Advertising & Commerce at Google) announced that Wayfair and Etsy now use UCP so that you can purchase directly within AI Mode – a change Brodie Clark observed in the wild the next day.

UCP’s Six Layered Capabilities

On the day UCP was released, Google explained its methodology.

From this, I defined six core capabilities:

  1. Product Discovery – how agents find and surface your inventory during research.
  2. Cart Management – multi-item baskets, dynamic pricing, complex basket rules.
  3. Identity Linking – OAuth 2.0 authorization for personalized experiences and loyalty.
  4. Checkout – session creation, tax calculation, payment handling.
  5. Order Management – webhook-based lifecycle and logistical updates.
  6. Vertical Capabilities – extensible modules for specialized use cases like travel booking windows or subscription schedules.

UCP’s schema authoring guide shows how capabilities are defined through versioned JSON schemas, which act as the foundation of the protocol. When it comes to considering this as an SEO, properties such as offers, aggregateRating, and shippingDetails aren’t just for surfacing rich snippets, etc., for product discovery, they’re now what agents query during the entire process.

Schema Is, And Will Continue To Be, Essential

UCP’s technical specification uses its own JSON schema-based vocabulary. While it doesn’t build on schema.org directly, schema.org remains critical in the broader ecosystem. As Pascal Fleury said at Google Search Central Live in December, “schema is the glue that binds all these ontologies together.” UCP handles the transaction; schema.org helps agents decide who to transact with.

Ensure you’re on top of product schema and populate it as fully as you can. It may seem like SEO 101; regardless, audit all of this now to ensure you’re not missing anything when UCP fully rolls out.

This includes checks on:

  • Product schema (with complete coverage): All core fields: name, description, SKU, GTIN, brand, related images, and offers.
  • Offers must include: Price, priceCurrency, availability, URL, seller. Add aggregateRating and review to provide a positive third-party perspective.
  • Ensure all product variants output correctly.
  • Include shippingDetails with delivery estimates.
  • Organization and Brand: Assists with “Merchant of Record” verification. If you’re not an Organization, then fall back to Person.
  • Designated FAQPage: Ensure you have an FAQPage, as these can be incorporated alongside product-level FAQs and used as part of an agent’s decision-making.
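To make the checklist concrete, here is a minimal Product JSON-LD sketch covering the fields above. All values are invented placeholders, not a real product; the property names follow schema.org:

```json
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Road Running Shoe",
  "description": "Lightweight, cushioned road running shoe.",
  "sku": "SHOE-001",
  "gtin13": "0012345678905",
  "brand": { "@type": "Brand", "name": "ExampleBrand" },
  "image": ["https://example.com/img/shoe-front.jpg"],
  "aggregateRating": { "@type": "AggregateRating", "ratingValue": 4.6, "reviewCount": 128 },
  "offers": {
    "@type": "Offer",
    "url": "https://example.com/products/shoe-001",
    "price": 149.99,
    "priceCurrency": "USD",
    "availability": "https://schema.org/InStock",
    "seller": { "@type": "Organization", "name": "Example Store" },
    "shippingDetails": {
      "@type": "OfferShippingDetails",
      "deliveryTime": {
        "@type": "ShippingDeliveryTime",
        "handlingTime": { "@type": "QuantitativeValue", "minValue": 0, "maxValue": 1, "unitCode": "DAY" },
        "transitTime": { "@type": "QuantitativeValue", "minValue": 1, "maxValue": 5, "unitCode": "DAY" }
      }
    }
  }
}
```

Validate your own markup with Google’s Rich Results Test or the Schema.org validator rather than copying this sketch verbatim.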

Prepare Your Merchant Center Feed

UCP will utilize your existing Merchant Center feed as the discovery layer. This means that beyond the normal on-site schema you provide, Merchant Center itself requires more details that you can populate within its platform.

  • Return policies (required to be a Merchant of Record): Complete all return costs, return windows, and policy links. These are used not just within checkout and transactional areas but are again a consideration for whether you’re selected at all. Advanced accounts need policies at each sub-account level.
  • Customer support information: Not only is initial information offered to the customer, but entry-level customer support queries may be managed entirely by the agent, increasing customer satisfaction while reducing demand on customer support staff.
  • Agentic checkout eligibility: Add the native_commerce attribute to your feed, as products are only eligible here if this is set up.
  • Product identifiers: Each product must have an ID that correlates to the product ID used with the checkout API.
  • Product consumer warnings: Any product warning should be declared via the consumer_notice attribute.

Google recommends that this be done through a supplemental data source in Merchant Center rather than modifying your primary feed, which would prevent incorrect formatting or other invalidation.
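As a purely illustrative sketch, a supplemental source could carry just the product ID plus the agentic attribute. The native_commerce attribute name comes from the guidance cited above, but the tab-separated layout and the “true” value format here are assumptions; verify the exact format against the current Merchant Center documentation:

```
id	native_commerce
SHOE-001	true
SHOE-002	true
```

Because the supplemental source only overlays these columns onto matching IDs, your primary feed stays untouched if the formatting turns out to need adjustment.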

Lastly, double-check that the products you’re selling aren’t included in its product restrictions list. If you do offer restricted products, consider how to manage them alongside UCP’s capabilities.

Optimizing Conversational Commerce Attributes

Within the UCP blog post announcement, Srinivasan introduced conversational commerce attributes to provide more clarity:

“…we’re announcing dozens of new data attributes in Merchant Center designed for easy discovery in the conversational commerce era, on surfaces like AI Mode, Gemini and Business Agent. These new attributes complement retailers’ existing data feeds and go beyond traditional keywords to include things like answers to common product questions, compatible accessories or substitutes.”

These provide further clarity (and therefore minimize hallucinations) during the discovery process in order to be selected or ruled out.

Not only would this incorporate product and brand-related FAQs, but take this a step further to also consider:

  • Compatibility: Potential up-sell opportunities.
  • Substitution: An opportunity for dealing with out-of-stock items.
  • Related products: Great for cross-sell opportunities.

Furthermore, this can be used to become even more specific, moving beyond basic attributes to agent-parseable details. Now, if a product is “purple” on a basic level, “dark purple” or even something unobvious, such as “Wolf” (real example below), may be more appropriate for finer detail while still falling under “purple.” The same can be considered for sizes, materials (or a mixture of materials), etc.

Multi-Modal Fan-Out Selection

When executed well, optimizing conversational commerce attributes will increase the possibility of selection within fan-out query results. When considering these attributes, it is worth looking at tools such as WordLift’s Visual Fan-Out simulator, which illustrates how a single image decomposes into multiple search intents, revealing which attributes agents may prioritize when performing query fan-out. But how would this look in practice?

As an example, I used one product image and drilled down three horizons, using On’s Cloudsurfer Max (used with permission):

Cloudsurfer Max in the colour “Wolf”
Image credit: On

Using just one product image, this is what is presented on the surface:

Screenshot from WordLift’s Visual Fan-Out simulator, February 2026

It immediately noticed that the product was On, and specifically from the Cloudsurfer range. Great start! Now let’s see what it sees over the horizon:

Screenshot from WordLift’s Visual Fan-Out simulator, February 2026
Screenshot from WordLift’s Visual Fan-Out simulator, February 2026
Screenshot from WordLift’s Visual Fan-Out simulator, February 2026

Here, you can draw inspiration or direction on how best to position yourself for likely fan-out queries. With this example, I found it interesting that Horizon 2 mentions performance running gear as a large category; fanning out on that then surfaced related products around gear in general. This shows how widely LLMs consider selection and how you can present attributes to attract it.

UCP’s Roadmap Is Expanding Into Multi-Verticals

UCP already plans to go beyond the single purchase, expanding beyond retail into travel, services, and other verticals. Its roadmap details several priorities over the coming year, including:

  • Multi‑item carts and complex baskets: Moving beyond single‑item checkout to native multi‑item carts, bundling, promotions, tax/shipping logic, and more realistic fulfillment handling.
  • Loyalty and account linking: Standardized loyalty program management and account linking so agents can apply points, member pricing, and benefits across merchants.
  • Post‑purchase support: Support for order tracking, returns, and customer‑service handoff so agents can manage customer support post-sale.
  • Personalization signals: Richer signals for cross‑sell/upsell, wishlists, history, and context‑based recommendations.
  • New verticals: Expansion beyond retail into travel, services, digital goods, and food/restaurant use cases via extensions to the protocol.

Each of the points above is worth further reading and consideration if it’s something your brand may offer. And given the planned expansion beyond retail into travel, services, digital goods, and hospitality, if you’re working within any of these verticals, you need to be even more prepared to ensure eligibility.

Social Proof And Third-Party Perspective

Regardless of how well you may optimize on-site to prepare for UCP, all this data integrity still needs to be validated by trusted third-party sources.

Third-party platforms, such as Trustpilot and G2, appear to be frequently cited and trusted among most of the LLMs, so I’d still advise that you continue to collect those positive brand and product reviews in order to satisfy consensus, resulting in more opportunities to be selected during product discovery.

TL;DR – Prepare Now

If you own or manage any form of ecommerce site, now is the time to ensure you’re preparing for UCP’s rollout as soon as possible. It’s only a matter of time, and with AI Mode spreading into default experiences, getting ahead of the rollout is essential.

  1. Join the UCP waitlist.
  2. Prepare Merchant Center: return policies, native_commerce attribute.
  3. Ensure your developers research and understand the UCP documentation.
  4. Populate conversational attributes: question-answers, compatibility, substitutes.
  5. Audit and improve any schema where applicable.

This is moving faster than most previous commerce shifts, and brands that wait for full rollout signals will already be behind. This isn’t a short-term LLM gimmick; it’s part of the largest change underway in the ecommerce space.

More Resources:


Featured Image: Roman Samborskyi/Shutterstock