Innovation on the move

The Massachusetts Bay Transportation Authority moves hundreds of thousands of people across Greater Boston each day—thanks to a vast system of buses, trains, and ferries that depends on coordination among thousands of employees.

In this storied transit system, history runs deep: The Green Line still passes through the country’s oldest subway tunnels, built beneath the Boston Common at the end of the 19th century. Yet the MBTA is remarkably willing to explore new approaches, too. That’s thanks in large part to a trio of MIT alumni: Katie Choe ’98, SM ’00; Melissa Dullea ’00; and Karti Subramanian, MBA ’17. Together, they’ve been helping redefine what innovation looks like in one of the nation’s longest-running transit systems.

Choe in particular has been at the center of this push as the agency’s chief of staff since 2023, a position in which she took the lead in revamping organizational culture. She wrapped up her tenure at the T to become CEO of Virginia Railway Express (VRE) in January, but before leaving, she spoke to MIT Alumni News extensively about her role. Describing it as “owning everything and nothing at the same time,” Choe explained: “I’m here to make things happen. I find places where we have a sticky organizational knot that needs to be untied.”

Dullea, the MBTA’s senior director of service planning, is in charge of the team responsible for planning and scheduling every bus route in the system as well as the Red, Orange, Green, and Blue Lines. Her group also determines where buses operate and adapts both train and bus service patterns as the region changes.

Subramanian, the MBTA’s senior director of rider tools, leads a team that manages the agency’s digital ecosystem: the website, real-time signage, and the MBTA Go app, which offers riders live transit information—including arrival times, vehicle tracking, and closure updates—for buses, trains, and ferries.

Innovation, in Choe’s view, is a practical requirement in a system whose infrastructure dates back to the opening of the Tremont Street subway in 1897. There are old assets to maintain and modern expectations to meet, all with public resources that never stretch far enough. For years, she says, the instinct was to plan endlessly in hopes of pleasing everyone, only to end up pleasing no one because little actually moved forward. Resources were consumed by process rather than progress. 

The way out of that cycle was to rethink how projects are delivered, structure contracts differently, and streamline operations by relying more on in-house expertise. The result, she says, is an increasingly “can-do” culture that focuses less on drafting plans and more on producing results, a change she sees as essential to maintaining service reliability and supporting the region’s economic mobility. Challenges remain: aging Red Line cars, which perform poorly in extreme cold, will continue to cause trouble until new cars replace them, and planned service disruptions for needed repairs are ongoing on all subway lines. But service is improving overall. Since spring 2024, the number of scheduled weekday trips on the Red, Orange, and Blue Lines has climbed steadily, thanks to extensive track repairs, new operating procedures, and the addition of more railcars.

The new innovation mindset—including the emphasis on faster, more efficient project delivery and cross-department collaboration—is likely to shape the MBTA for years to come.

Innovation grounded in public service

Choe has spent her career in the public sector, a choice she attributes partly to a sense of responsibility cultivated at MIT. “The big differentiator at MIT is that when you graduate, you graduate with an expectation that you are going to change the world,” she says. 

After more than six years as chief engineer and director of construction management at Boston’s Department of Public Works, Choe joined the MBTA in early 2020. In 2023, she launched the Innovation Hub, an initiative that spotlights and promotes internal improvements, as part of the quest to deliver the best possible service to riders on the constrained budget of a public agency. “We need to constantly be thinking about how we can do that better,” she says. “How do we do it more efficiently? How do we actually keep our costs low, find new ways of doing things so that we can provide that service better for all of our riders?”

She adds, “When people come to me with an idea, I try really hard to support them with moving it forward. That’s the innovative culture that we’re trying to instill.”

The Innovation Hub gives employees a place to raise problems or suggest ideas and connects them with the partners and support needed to turn concepts into real projects. It also celebrates workforce creativity, hosting an annual Innovation Expo—a showcase similar to a poster session (“It’s essentially a science fair,” Choe says) that highlights projects from throughout the agency.

“The energy that was in the room was just palpable,” she says of the first Innovation Expo, held in the summer of 2024. It showcased 34 completed projects, from maintenance upgrades and redesigned processes to data tools that streamlined field operations. The projects led to faster hiring, better safety practices, and more agile planning for disruptions—and many improved the employee experience as much as the rider experience. Choe sees the two as inseparable. “The better our employees can perform, the more we take care of them, the better the service to our riders is,” she says.

“We should consider it normal and necessary for a transit agency to provide really accurate, really accessible, real-time information to its riders.”

Karti Subramanian, MBA ’17

She also helped oversee a welcome improvement to the systemwide discount program that low-income passengers can use for all forms of transit, from the commuter rail to The Ride, the door-to-door paratransit service for people with disabilities. The MBTA built an efficient system that verifies riders’ eligibility through existing public benefit programs, allowing approvals in about 30 seconds. Other agencies have since asked to learn how it works.

Meanwhile, Choe devoted considerable energy to mentoring. She helped lead programs to support women in the agency, met with new employee cohorts, and advised early-career staff on navigating large institutions. 

“I look for people who are willing to take risks and to put themselves out there,” she says. When she looks back at the things that have advanced her most in her own career, she adds, it’s “those moments that I’ve taken those risks.” For example, in 2022 she was asked to build and lead a team to transform the MBTA in response to findings from a Federal Transit Administration safety management inspection—and given 24 hours to decide whether she would. “It thrust me into the public spotlight with no room for failure,” she says. “The exposure to parts of the organization that I had had little interaction with and the forced fast learning curve set me up for the success of both the chief of staff role and my new position at VRE.”

Rethinking the bus network

Route planning and scheduling are at the heart of the rider experience. And in Dullea’s telling, this work is a complicated puzzle with many pieces.  

First, the planners decide where bus routes run, how frequently buses and trains arrive, and where bus stops are located. Then the schedulers turn those plans into reality, constructing work assignments that keep service as dependable as possible within the constraints of collective bargaining agreements, rest rules, and bus availability. “The service planners are the architects of the schedules,” she says. “The schedulers are the builders.”

Melissa Dullea sitting at a bus stop near a 104 bus to Malden
The MBTA’s senior director of service planning, Melissa Dullea ’00, leads the team responsible for planning and scheduling every bus and subway route in the system.
KEN RICHARDSON

Dullea’s path to transit began at MIT, where she was introduced to the MBTA’s planning work, including efforts to relocate the Orange Line in the 1980s and projects like the Urban Ring, a rapid-bus system once proposed as a way of connecting the outer “spokes” of MBTA lines to reduce congestion downtown and link Greater Boston’s booming residential and commercial areas. That exposure sparked a growing interest in the field and ultimately led her to write her undergraduate thesis on the MBTA assessment formula, which determines how much each community in the service district contributes annually to the system’s operating budget. “I was like, ‘Wow, you can have a career in transit. This is amazing,’” she says.

She joined the MBTA as a junior planner soon after graduating and now co-leads one of the agency’s largest planning efforts: the Bus Network Redesign (BNR), part of the broader Better Bus Project.

“We’re not in an industry where you can move fast and break things. We want to have a focus on improving the customer experience.”

Melissa Dullea ’00

The redesign began with a fundamental question: How can the bus network reflect where people need to go today? To find out, her team used anonymized cell-phone data to map the patterns of people’s travel by all modes—including public transit, driving, walking, and biking—and then weighted the data to prioritize communities that rely more on transit. They combined algorithmic modeling with human judgment, narrowing an estimated 14 million computer-generated corridors—potential pathways where demand suggested a bus route could run—into a workable network that would better meet observed travel demand.

“We wanted to make sure that the bus network would be relevant for how people travel now, and not just how we’ve always done things,” she says.

And their methodology allowed them to improve upon their previous practice of checking for discrimination at the end of planning. “We were able to lead with equity,” she says. 

The final plan nearly doubled the number of routes where buses run every 15 minutes or less and expanded coverage in Chelsea, Everett, Malden, and Revere. The Commonwealth recently recognized the project with an equity award.

When the pandemic led to a shortage of bus drivers, implementation paused. But Dullea’s team and others in the agency used the setback to rethink hiring, training, and job quality. 

“We’ve been working to build back,” Dullea says. The ability to hire committed drivers—and keep them on the job—depends on providing a good work environment. “We’ve been doing a lot of work on just making the experience of being an operator better,” she says.

For example, Dullea’s team helped redesign schedules that often saddled operators with long unpaid breaks in the middle of the day. By hiring part-timers who work a single peak period without a break, the T has reduced the average unpaid break time by half.

Dullea’s MIT training prepared her for the challenge, teaching her to analyze complicated systems and follow her intellectual curiosity. 

“When I was an undergrad, I just realized I loved cities,” she says. “And I was like, ‘How can I turn that love for the urban environment into a career and solve real-world problems that can help people?’”

Building a better digital front door

Subramanian founded a software company serving nonprofits before arriving at MIT for graduate school. His transition to government work—and eventually to the MBTA—was driven by a belief in public service and in government as a force for good. 

“I really wanted to serve the public sector in some way,” he says.

Subramanian resists calling his work “innovation.” He sees it instead as delivering the basic information riders should expect from a modern transit system. 

“We should consider it normal and necessary for a transit agency to provide really accurate, really accessible, real-time information to its riders,” he says. “Doing it might be new and different and require new ways of working.”

At a large agency, achieving that goal is far from simple. To start, Subramanian embedded team members in the operations groups managing more than 170 bus routes and the four subway lines with an eye to building better dispatching tools. This work also created data feeds that his team made publicly available—and used to create the MBTA Go app. But before building it, they asked what value it could add in a world where riders already use Google Maps and third-party apps like Transit. The answer was operational insight. 

“We know more about MBTA operations than Google Maps does,” he says. “So we can publish insight into what’s happening that a third party like the Transit app that’s designing for 200 cities at a time, or Google Maps that’s designing for 200,000 cities at a time, will never think to show.”

Karti Subramanian walking with his phone
As senior director of rider tools, Karti Subramanian, MBA ’17, leads the team that manages the agency’s digital ecosystem.
KEN RICHARDSON

A key area where that kind of information pays off is accessibility—a defining focus for Subramanian, whose son has cerebral palsy. He’s partnered with the MBTA’s System-Wide Accessibility Department to create the Accessible Technology Program, which brings riders with disabilities into the design process. 

His team conducts extensive user research, interviewing and riding alongside people who use mobility devices, depend on elevators, or have low vision, to understand the barriers they encounter on trains and buses and in stations. Through this hands-on approach, Subramanian’s team gains direct insight into the everyday obstacles riders face and how small design decisions can create or remove them.

“For me, this twin personal/professional journey has been probably the most wonderful part of this job,” he says. “An amazing amount of work and leadership has gone into making the MBTA one of the—if not the—most accessible transit systems in the US.”

The work is grounded in long institutional history. A landmark 2006 settlement under the Americans with Disabilities Act created a dedicated accessibility office within the MBTA, which continues to drive systemwide improvements.

Subramanian attributes his approach in part to lessons from MIT about the public origins of much modern technology. “So much of the kind of now very tech-forward innovation … came from early government R&D,” he says. 

To him, that lesson underscores the value of public service. “To do foundational things right in government actually is very high leverage,” he says, adding that it’s currently dramatically undervalued and underappreciated. 

Improving within constraints

Change at the MBTA unfolds within a highly regulated, risk-averse setting.

“Innovation takes some acceptance of failure, and that’s hard in a public environment,” Choe says. “We’re aspirational but not reckless.”

Most ideas under consideration, whether they’re crowding indicators on the Orange Line or wayfinding tools for riders with low vision, get tested in limited, clearly labeled trials.

Dullea echoes the careful balance required in planning. “We’re not in an industry where you can move fast and break things,” she says. “We’re trying not to break things. We want to have a focus on improving the customer experience.”

For Subramanian, the most significant challenges are often internal. His team works closely with operations groups, embedding technologists in bus garages and rail divisions to understand daily barriers. This partnership led to a mobile dispatching tool that replaced clipboards and a single-channel radio for managing nearly a thousand buses.

It has also helped his group become deeply integrated across the agency, forming an increasingly connected, data-driven operation. “We’re really proud of the extent to which we have built trust within the organization to bring this product way of thinking to a different set of problems,” he says. 

Advancing the economic engine of Greater Boston 

Choe sees the transit agency as a public service and a key support for opportunity across the region. 

“Many of our riders rely on the MBTA to get to their jobs, to get to their health-care appointments, to get to critical areas of their life,” she says. “If we cannot provide those services, then we’ve really shut them off from that economic mobility.”

That responsibility directed her leadership. “Every single person is impacted on a daily basis by the work that I do,” she said in October. “Every improvement that I make is making someone’s life better, and that knowledge sits very deeply in my heart.”

Despite the challenges, she remains optimistic about the MBTA’s future. 

“We have so much buy-in right now from the governor and the legislature,” she said. “It’s allowing us to do things in a little bit bolder manner than what we have done in the past. So I think our future is really bright.”

A culture of collaboration and aspiration

The MBTA also benefited from a partnership that spanned more than a decade with MIT’s Transit Lab, which supported the agency’s work with data analysis and service evaluation. Researchers at the Transit Lab helped the T interpret CharlieCard data to understand travel patterns and contributed the analytical framework for the agency’s Service Delivery Policy, which defines how the MBTA measures its own performance. 

Following the productive collaboration with the MIT Transit Lab, Choe sees potential to deepen the agency’s connection with the Institute if the MBTA joins the MIT Transit Research Consortium. Run by the Transit Lab and the MIT Mobility Initiative, the consortium includes both US and non-US transit agencies, and it offers members workshops as well as insights into MIT’s ongoing transit research. “There’s an opportunity there to figure out how to bridge the gap between amazing research work that’s happening and the on-the-ground applications of that research,” she says.

At the moment, Choe says, the MBTA is investing in electrification and digital infrastructure and exploring AI-assisted maintenance—and sustaining a culture of openness to change will be key. The Innovation Hub is dividing into two branches, one supporting employee-driven ideas and another exploring emerging technologies like AI and autonomous systems.

“People are already interested in this,” she says. “So why are we not harnessing that excitement?”

Her work aimed to continue building a collaborative, curious workplace where new ideas translate into improved service. As she put it, “I want to work in an environment and a culture that is collaborative and aspirational all the time.”

Her colleagues share that goal: to keep the MBTA evolving, grounded in public service, and positioned to deliver a modern system for Greater Boston. 

“It’s not just that we have a plan on the shelf that says this is what we want to do,” she says. “It is what are we doing right now to build toward this best-in-class, amazing, modernized, incredible system that serves the Commonwealth of Massachusetts.” 

New Book on the Purpose of Business

The newest offering from management guru Joseph Pine takes his acclaimed 1999 bestseller “The Experience Economy” a step further. That book foresaw the rise of consumer experiences as drivers of brand loyalty, more than material goods alone. Certainly such experiences have mushroomed over the last 25 years.

In “The Transformation Economy,” Pine now argues that experiences are not enough. Consumers want products and services that improve their lives and businesses. According to Pine’s “progression of economic value,” the agrarian economy gave way to industrial manufacturing, which in turn led to services, and finally to experiences.

Cover of “The Transformation Economy”

Pine contends that people buy from companies to reach a goal. Businesses that understand those goals (what buyers hope to improve) can offer greater value. And that value can establish the price, more than the cost of materials or services.

The first chapter defines transformation in the context of what businesses sell. “You are what you charge for,” Pine asserts.

The second introduces the idea of human flourishing — “the true purpose of business” — in four spheres: health and well-being, knowledge and wisdom, wealth and prosperity, and purpose and meaning. The book cites Equinox Fitness Clubs and Fender Musical Instruments as offering transformation beyond selling gym memberships and guitars. Other examples include Eataly (food products), Burning Man (outdoor festival), the U.S. Army, Princess Cruises, and more, with insights from business leaders.

Pine explains how companies shift from offering experiences to transformations and the types and levels of each, all illustrated with helpful diagrams.

The final two chapters include questions for readers to apply in their own business. The book also includes copious notes and a comprehensive index.

What Customers Want

Most of Pine’s ideas fit high-end service businesses such as health and wellness, travel, and finance. Getting to know each customer personally would be a tall order for a company selling groceries or household consumables.

However, Pine’s “jobs to be done” framework — what customers want to accomplish — is useful for any company. He states that “customers often don’t know what they want, and even when they do, they can’t always articulate it. You need to draw it out of them.”

Pine’s writing is conversational and clear. His “Mass Customization” (1992) was named a best business book by both the Financial Times and Library Journal.

He co-authored “Infinite Possibility” in 2011, “Authenticity” in 2007, and “The Experience Economy” in 1999. The last was an instant classic, still in Amazon’s top 10 for the product marketing category after 25 years and two updated editions.

AI-SEO Is A Change Management Problem via @sejournal, @Kevin_Indig


AI-SEO transformation will fail at the alignment layer, not the tactics layer. 25 years of transformation research, spanning 10,800+ participants across industries, reveals that the gap between successful and failed initiatives isn’t technical skill. It’s organizational readiness.

What you’ll get:

  • Why AI SEO implementation challenges are people and process problems, not technical ones.
  • The specific alignment failures that kill AI-SEO initiatives before tactics ever get tested.
  • A sequenced approach that transforms you from channel executor to organizational translator.

The underlying infrastructure of AI SEO – retrieval-augmented generation, citation selection, answer synthesis – operates on different principles than the crawl-index-rank paradigm SEO teams previously mastered. And unlike past shifts, the old playbook doesn’t bend to fit the new reality.

AI SEO is different. It’s not just an algorithm update: This is a search product change and a user behavior shift.

Our classic instinct is to respond with tactics: prompt optimization, increased entity markup, LLM-specific structured data, citation acquisition strategies.

These aren’t wrong. But long-term, it’s likely AI SEO strategies will fail, and the reason isn’t tactical incompetence or a failure to stay current and flexible. It’s internal organizational misalignment.

Organizations with structured change management are 8× more likely to meet transformation objectives. The same principle applies to AI-SEO. (Image Credit: Kevin Indig)

Your marketing team – and your executive team – is being asked to transform their understanding of SEO during a period of unprecedented change fatigue. Those who have survived two decades of algorithm updates are expertly adaptable, but reeducation is required because LLMs are a new product, not just another layer of search.

And this, of course, is the alignment-layer failure.

Image Credit: Kevin Indig

In AI SEO, misalignment has specific symptoms:

  1. Conflicting definitions of success: One stakeholder wants “rankings in ChatGPT.” Another wants brand mentions. A third wants citation links. A fourth wants traffic recovery. Every experiment gets judged against a different standard, and no one has agreed which matters most or how they’ll be measured. (Although our AI Overview and AI Mode studies confirm brand mentions are more valuable than citations.)
  2. Metrics mismatch with leadership expectations: Executives ask for increased traffic in a growing zero-click environment. Classic SEO reports on influence metrics; leadership sees declining sessions and questions the investment. In our December 2025 Growth Memo reader survey, 84% of respondents said they feel their current LLM visibility measurement approach is inaccurate. Teams can’t prove value because no one has agreed on how value would be proven.
  3. Turf fragmentation: AI SEO touches SEO, content, brand, product, PR, and (at times) legal. Without explicit ownership and a baseline, agreed-upon understanding of your brand’s AI SEO approach, each team runs experiments in its silo. No one synthesizes learning. Conflicting tactics cancel each other out.
  4. Premature tactics without a shared foundation: This looks like “Let’s test prompts” without agreeing on what success means; “Let’s scale AI content to mitigate click loss” without understanding AI-assisted versus AI-generated content limits; “Let SEO handle AI” while product, PR, and legal stay uninvolved.
  5. Panic-testing instead of strategic reorientation: Teams deploy short-term tactics reactively rather than reorienting the whole ship for better long-term outcomes.

This is classic change management failure: unclear mandate, fragmented ownership, mismatched incentives. No amount of tactical excellence or smart strategy pivots can fix it.

Layering AI SEO tactics + tools on top without structured change management compounds fatigue and accelerates burnout. The “scrappy resilience” that has carried the industry in the past can’t be assumed to instantly apply to this new channel without a strategic transition.

A baseline understanding of organizational change management matters in the AI SEO era … because most organizational transformations fail or underperform.

Your AI-SEO initiative is no different, even if changes in SEO seem contained to your marketing and product teams and stakeholders, rather than the larger organization or brand as a whole.

I’d argue that AI SEO falls into the category of industry transformation that affects your brand and org. And from decades of research, failure and underperformance are the statistical norm for these big transitions – seasoned leaders know this already. No wonder they’re skeptical of your AI SEO plans.

One McKinsey survey found fewer than one-third of teams succeed at both improving performance and sustaining improvements during significant shifts. BCG’s forensic analysis of 825 executives across 70 companies found transformation success at 30%.

Multiple major consulting firms’ independent research shows that most change transformations underperform.

Assuming that tactical excellence alone will carry you – without strategic reeducation and thoughtful change management as our industry shifts – is assuming you’re the exception to the rule.

The correlation between the quality of managing a big shift and your project’s success is dramatic:

Image Credit: Kevin Indig

The gap between excellent and poor represents a nearly 8× improvement. Even the jump from poor to fair quadruples success rates.

BCG’s 2020 analysis reinforces this from a different angle, noting six critical factors that increase successful transformation odds from 30% to 80%:

  • Integrated strategy with clear goals: This is where a carefully crafted AI SEO strategy comes in, one that not only outlines growth goals, but also clear testing and what successful outcomes look like.
  • Leadership commitment from the CEO through middle management: If you’re a consultant or agency, this step can’t be skipped, especially if they have an in-house team assisting in executing the strategy.
  • High-caliber talent deployment: Or I would argue, high-quality reeducation of existing talent – make sure all operators have a baseline shared understanding of what has changed about SEO, how LLM outputs work, what the brand’s goals are, and how it will be executed.
  • Flexible, agile governance: Teams should have the ability to deal with individual challenges without losing sight of the broader goals, including removing barriers quickly.
  • Effective monitoring: Establish core, agreed-upon KPIs to measure what winning would look like, and note what actions were taken when.
  • Modern/updated technology: Your SEO team needs the right tools to succeed, but they also need to know how to use them effectively. Don’t skip allotting time for integration of new workflows and AI monitoring systems.

Marketing teams that treat AI-SEO simply as a technical project to execute or tactics to update are leaving an 8× multiplier on the table.

  • BCG’s 2024 AI implementation study found that roughly 70% of change implementation hurdles relate to people and processes. Only about 10% of challenges were purely technical.
  • A 2024 Kyndryl survey found that while 95% of senior executives reported investing in AI, only 14% felt they had successfully aligned workforce strategies.

Your brand’s ability to test, update tactics, learn AI workflows, implement structured data, and optimize for LLM retrieval is not the bottleneck you need to be concerned about.

The real concern is whether your team – leadership, cross-functional team partners, and frontline executors/operators – is aligned on what AI SEO means, why and how you’re making changes from your classic SEO approach, what success looks like, and who owns outcomes.

Active and visible executive sponsorship is the No. 1 contributor to change success, cited 3-to-1 more frequently than any other factor, according to 25 years of benchmarking research by Prosci. Your first step as the person leading the AI SEO charge for your brand (or across your clients) is to earn executive buy-in.

But the head of SEO cannot transform a brand’s understanding and approach to AI SEO alone. Bain’s 2024 research emphasized that successful transformations “drive change from the middle of the organization out.”

Keep in mind, financial benefits can compound quickly: One research analysis of 600 organizations found “change accelerators” experience greater revenue growth than companies with below-average change effectiveness.

Image Credit: Kevin Indig

Alignment isn’t just a feeling; it’s observable. You’ll know when you get there:

  • Stakeholders can talk through AI SEO without hyperfocusing on tools.
  • Teams agree on what to stop prioritizing (not just what to start).
  • Cross-functional partners have explicit ownership stakes.

Alignment isn’t happening when:

  • Everyone is good with “experimenting with” or “investing in” LLM visibility, but no one owns outcomes.
  • Success gets retroactively defined.
  • Leadership asks, “What happened to traffic?” when you report influence metrics.

Noah Greenberg, CEO at Stacker, outlined this pretty clearly in a recent LinkedIn post: Step 0 in your AI SEO transformation is to become the expert.

Screenshot from LinkedIn by Kevin Indig, February 2026

New responsibilities:

  • Translating new, confusing AI-based search concepts into plain language (see this clever LinkedIn post by Lily Ray as a perfect illustration).
  • Educating stakeholders on the structural differences between classic search engines and LLM retrieval – guiding teams to explain why your CEO doesn’t see the same LLM output when they look up the brand vs. what you’re reporting.
  • Explaining the tradeoffs, not just opportunities.
  • Setting expectations executives won’t like at first, but need to hear (traffic loss or slower growth than in years prior).

This is uncomfortable. Less direct control. More indirect influence. Higher stakes.

Your mindset – as the change agent for your clients or organization – centers on three principles:

  1. Honesty over confidence. What we don’t know: the precise value of an AI mention. What we do know: your brand not appearing for related topics is a measurable miss.
  2. Progress over perfection. Alignment doesn’t require certainty. It requires shared uncertainty, agreeing on what you’re testing and how you’ll learn.
  3. Translation over broadcasting. The same strategic message needs adaptation for ICs (how their work changes), managers (how they report success), and executives (how budgets should shift). Uniform communication fails; translated communication scales.

Do this in order:

  1. Write the one-sentence AI SEO mandate for your organization. If you can’t explain AI SEO in one sentence to leadership, you’re not ready to execute.
  2. Complete a high-level SWOT. Identify where your organization has existing strengths and gaps. The Brand SEO scorecard from The Great Decoupling will walk you through it.
  3. Replace or supplement legacy KPIs. Add LLM visibility estimates alongside classic KPIs (rankings, sessions) to start the transition. Reporting both builds the case for the shift without abandoning the old model cold.
  4. Name cross-functional owners explicitly. Who owns brand mentions in LLM outputs: SEO, PR, or brand? Who owns citation link acquisition: SEO or content? Ambiguity is the enemy.
  5. Provide baseline education at every level. ICs need to understand how LLM retrieval differs from crawl-index-rank. Executives need to understand why slowed organic traffic or zero-click growth doesn’t mean zero impact.
  6. Kill one SEO practice without a fight. Success means everyone understands why, and you don’t receive pushback. If you can’t retire one outdated tactic without internal conflict, you haven’t achieved alignment.
  7. Only then change workflows and tactics. Tactics deployed on an unaligned organization waste resources and burn credibility. Tactics deployed on an aligned organization compound advantage.

Featured Image: Paulo Bobita/Search Engine Journal

Web Almanac Data Reveals CMS Plugins Are Setting Technical SEO Standards (Not SEOs) via @sejournal, @chrisgreenseo

If more than half the web runs on a content management system, then the majority of technical SEO standards are effectively set before an SEO even starts work on a site. That’s the lens I took into the 2025 Web Almanac SEO chapter (for clarity, I co-authored the chapter referenced in this article).

Rather than asking how individual optimization decisions influence performance, I wanted to understand something more fundamental: How much of the web’s technical SEO baseline is determined by CMS defaults and the ecosystems around them?

SEO often feels intensely hands-on – perhaps too much so. We debate canonical logic, structured data implementation, crawl control, and metadata configuration as if each site were a bespoke engineering project. But when 50%+ of pages in the HTTP Archive dataset sit on CMS platforms, those platforms become the invisible standard-setters. Their defaults, constraints, and feature rollouts quietly define what “normal” looks like at scale.

This piece explores that influence using 2025 Web Almanac and HTTP Archive data, specifically:

  • How CMS adoption trends track with core technical SEO signals.
  • Where plugin ecosystems appear to shape implementation patterns.
  • And how emerging standards like llms.txt are spreading as a result.

The question is not whether SEOs matter. It’s whether we’ve been underestimating who sets the baseline for the modern web.

The Backbone Of Web Design

The 2025 CMS chapter of the Web Almanac marked a milestone in CMS adoption: over 50% of pages are now on a CMS. If you were unsold on how much of the web CMSs carry, more than half of 16 million websites is a significant amount.

Screenshot from Web Almanac, February 2026

As for which CMSs are the most popular, the answer may not be surprising, but it is worth reflecting on which has the most impact.

Image by author, February 2026

WordPress is still the most used CMS by a long way, even if it has dropped marginally in the 2024 data. Shopify, Wix, Squarespace, and Joomla trail far behind, but they still have a significant impact – Shopify especially, on ecommerce.

SEO Functions That Ship As Defaults In CMS Platforms

CMS platform defaults are important because – I believe – a lot of basic technical SEO comes either from those default setups or from the relatively small number of websites with dedicated SEOs, or at least people who build to and work with SEO best practice.

When we talk about “best practice,” we’re on slightly shaky ground, as there isn’t a universal, prescriptive view on this, but I would consider the following (a minimal audit sketch for these checks follows the list):

  • Descriptive “SEO-friendly” URLs.
  • Editable title and meta description.
  • XML sitemaps.
  • Canonical tags.
  • Meta robots directive changing.
  • Structured data – at least a basic level.
  • Robots.txt editing.
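
To make those items concrete, here is a minimal sketch of how you might audit a single page for them in Python. It is illustrative only – the Web Almanac derives its numbers from HTTP Archive crawls, not a script like this – and it assumes the `requests` and `beautifulsoup4` packages plus the conventional root locations for robots.txt and the XML sitemap.

```python
# Illustrative sketch only: check one page/site for the baseline SEO
# elements listed above. Assumes `requests` and `beautifulsoup4`; the
# Web Almanac's own data comes from HTTP Archive crawls, not this script.
import requests
from bs4 import BeautifulSoup

def audit_page(url: str) -> dict:
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    return {
        "title": soup.find("title") is not None,
        "meta_description": soup.find("meta", attrs={"name": "description"}) is not None,
        "canonical": soup.find("link", rel="canonical") is not None,
        "meta_robots": soup.find("meta", attrs={"name": "robots"}) is not None,
        "json_ld": soup.find("script", type="application/ld+json") is not None,
    }

def audit_root_files(origin: str) -> dict:
    # robots.txt and sitemap.xml conventionally live at the site root.
    return {
        "robots_txt": requests.get(origin + "/robots.txt", timeout=10).ok,
        "xml_sitemap": requests.get(origin + "/sitemap.xml", timeout=10).ok,
    }

print(audit_page("https://example.com/"))
print(audit_root_files("https://example.com"))
```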

Of the main CMS platforms, here is what they – self-reportedly – have as “default.” Note: Some platforms, like Shopify, would say they’re SEO-friendly (and to be honest, it’s “good enough”), but many SEOs would argue they’re not friendly enough to pass this test. I’m not weighing in on those nuances, but I’d say both Shopify and those SEOs make some good points.

| CMS | SEO-friendly URLs | Title & meta description UI | XML sitemap | Canonical tags | Robots meta support | Basic structured data | Robots.txt |
|---|---|---|---|---|---|---|---|
| WordPress | Yes | Partial (theme-dependent) | Yes | Yes | Yes | Limited (Article, BlogPosting) | No (plugin or server access required) |
| Shopify | Yes | Yes | Yes | Yes | Limited | Product-focused | Limited (editable via robots.txt.liquid, constrained) |
| Wix | Yes | Guided | Yes | Yes | Limited | Basic | Yes (editable in UI) |
| Squarespace | Yes | Yes | Yes | Yes | Limited | Basic | No (platform-managed, no direct file control) |
| Webflow | Yes | Yes | Yes | Yes | Yes | Manual JSON-LD | Yes (editable in settings) |
| Drupal | Yes | Partial (core) | Yes | Yes | Yes | Minimal (extensible) | Partial (module or server access) |
| Joomla | Yes | Partial | Yes | Yes | Yes | Minimal | Partial (server-level file edit) |
| Ghost | Yes | Yes | Yes | Yes | Yes | Article | No (server/config level only) |
| TYPO3 | Yes | Partial | Yes | Yes | Yes | Minimal | Partial (config or extension-based) |

Based on the above, I would say that most SEO basics are covered by most CMSs “out of the box.” Whether they work well for you, and whether you can achieve the exact configuration your specific circumstances require, are two other important questions – ones I am not taking on here. However, it often comes down to these points:

  1. It is possible for these platforms to be used badly.
  2. It is possible that the business logic you need will break/not work with the above.
  3. There are many more advanced SEO features that aren’t available out of the box but are just as important.

We are talking about foundations here, but when I reflect on what shipped as “default” 15+ years ago, progress has been made.

Fingerprints Of Defaults In The HTTP Archive Data

Given that a lot of CMSs ship with these standards, do these SEO defaults correlate with CMS adoption? In many ways, yes. Let’s explore this in the HTTP Archive data.

Canonical Tag Adoption Correlates With CMS

Combining canonical tag adoption data with (all) CMS adoption over the last four years, we can see that for both mobile and desktop, the trends seem to follow each other pretty closely.

Image by author, February 2026
Image by author, February 2026

Running a simple Pearson correlation over these series shows the relationship even more clearly, for both canonical tag implementation and the presence of self-canonical URLs.

Image by author, February 2026
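
If you want to reproduce this kind of check, the correlation itself is only a few lines of Python. The series below are hypothetical placeholders, not the HTTP Archive figures, and with only four yearly data points the result is indicative at best:

```python
# Sketch: Pearson correlation between two yearly adoption series.
# The numbers below are hypothetical placeholders, NOT HTTP Archive data.
from scipy.stats import pearsonr

cms_adoption       = [0.45, 0.47, 0.49, 0.51]  # share of pages on a CMS
canonical_adoption = [0.58, 0.60, 0.63, 0.65]  # share of pages with a canonical tag

r, p = pearsonr(cms_adoption, canonical_adoption)
print(f"Pearson r = {r:.3f} (p = {p:.3f})")  # with n=4, treat as indicative only
```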

What differs is the correlation for canonicalized URLs: it appears negative on mobile and lower (but still positive) on desktop. A drop in canonicalized pages largely drives the negative correlation, and the reasons behind it could be many (and are harder to be sure of).

Canonical tags are a crucial element of technical SEO; their continued adoption certainly seems to track the growth in CMS use, too.

Schema.org Data Types Correlate With CMS

Plotting Schema.org types against CMS adoption shows similar trends, though less definitive ones overall. There are many different Schema.org types, but if we plot CMS adoption against the ones most relevant to SEO concerns, we can observe a broadly rising picture.

Image by author, February 2026

With the exception of Schema.org WebSite, we can see CMS growth and structured data following similar trends.

But we must note that Schema.org adoption is considerably lower than CMS adoption overall. This could be because most CMS defaults are far less comprehensive with Schema.org. When we look at specific CMS examples (shortly), we’ll see far stronger links.

Schema.org implementation is still mostly intentional, specialist, and not as widespread as it could be. If I were a search engine or creating an AI Search tool, would I rely on universal adoption of these, seeing the data like this? Possibly not.

Robots.txt

Given that robots.txt is a single file that has some agreed standards behind it, its implementation is far simpler, so we could anticipate higher levels of adoption than Schema.org.

The presence of a robots.txt is pretty important, mostly to limit search engine crawling to specific areas of the site. We are starting to see an evolution – as we noted in the 2025 Web Almanac SEO chapter, robots.txt is increasingly used as a governance piece rather than just housekeeping. A key sign that we’re using our key tools differently in the AI search world.

But before we consider the more advanced implementations, how much of a part does a CMS play in ensuring a robots.txt is present? Over the last four years, CMS platforms appear to be driving a significantly higher share of robots.txt files serving a 200 response:

Image by author, February 2026

What is more curious, however, is the size of the robots.txt files. Non-CMS platforms have robots.txt files that are significantly larger.

Image by author, February 2026

Why could this be? Are robots.txt files on non-CMS platforms more advanced – longer files, more bespoke rules? Most probably in some cases, but we’re missing another impact of a CMS’s standards: compliant (valid) robots.txt files.

A lot of robots.txt files serve a valid 200 response, but often they’re not txt files, or they’re redirecting to 404 pages or similar. When we limit this list to only files that contain user-agent declarations (as a proxy), we see a different story.

Image by author, February 2026

Approaching 14% of robots.txt files served on non-CMS platforms are likely not even robots.txt files.
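
For anyone who wants to apply the same proxy to their own list of sites, a rough version of the check looks like this in Python (using `requests`; the exact validity criteria used for the Almanac analysis may differ):

```python
# Rough sketch of the proxy described above: a 200 response alone does not
# prove a real robots.txt, so we also require a "user-agent:" declaration.
# The Almanac analysis may use stricter criteria.
import requests

def robots_txt_looks_valid(origin: str) -> bool:
    resp = requests.get(origin + "/robots.txt", timeout=10)
    if resp.status_code != 200:
        return False
    body = resp.text.lower()
    if "<html" in body:           # an HTML page masquerading as robots.txt
        return False
    return "user-agent:" in body  # at least one rule group declared

print(robots_txt_looks_valid("https://example.com"))
```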

A robots.txt is easy to set up, but it is a conscious decision: if it’s forgotten or overlooked, it simply won’t exist. A CMS makes it more likely that a robots.txt exists, and, what’s more, once it is in place, easier to manage and maintain – which IS key.

WordPress Specific Defaults

CMS platforms, it seems, cover the basics, but more advanced options – which arguably should also be defaults – often require additional SEO tools to enable.

Interrogating WordPress-specific sites in the HTTP Archive data is easiest, as we get the largest sample, and the Wappalyzer data gives a reliable way to judge the impact of WordPress-specific SEO tools.

From the Web Almanac, we can see which SEO tools are the most installed on WordPress sites.

Screenshot from Web Almanac, February 2026

For anyone working within SEO, this is unlikely to be surprising. If you are an SEO and worked on WordPress, there is a high chance you have used either of the top three. What IS worth considering right now is that while Yoast SEO is by far the most prevalent within the data, it is seen on barely over 15% of sites. Even the most popular SEO plugin on the most popular CMS is still a relatively small share.

Of these top three plugins, let’s first consider how their “defaults” differ. These are similar to some of WordPress’s, but we can see many more advanced features that come as standard.

| SEO Capability | All-in-One SEO | Yoast SEO | Rank Math |
|---|---|---|---|
| Title tag control | Yes (global + per-post) | Yes | Yes |
| Meta description control | Yes | Yes | Yes |
| Meta robots UI | Yes (index/noindex etc.) | Yes | Yes |
| Default meta robots output | Explicit index,follow | Explicit index,follow | Explicit index,follow |
| Canonical tags | Auto self-canonical | Auto self-canonical | Auto self-canonical |
| Canonical override (per URL) | Yes | Yes | Yes |
| Pagination canonical handling | Limited | Historically opinionated | More configurable |
| XML sitemap generation | Yes | Yes | Yes |
| Sitemap URL filtering | Basic | Basic | More granular |
| Inclusion of noindex URLs in sitemap | Possible by default | Historically possible | Configurable |
| Robots.txt editor | Yes (plugin-managed) | Yes | Yes |
| Robots.txt comments/signatures | Yes | Yes | Yes |
| Redirect management | Yes | Limited (free) | Yes |
| Breadcrumb markup | Yes | Yes | Yes |
| Structured data (JSON-LD) | Yes (templated) | Yes (templated) | Yes (templated, broad) |
| Schema type selection UI | Yes | Limited | Extensive |
| Schema output style | Plugin-specific | Plugin-specific | Plugin-specific |
| Content analysis/scoring | Basic | Heavy (readability + SEO) | Heavy (SEO score) |
| Keyword optimization guidance | Yes | Yes | Yes |
| Multiple focus keywords | Paid | Paid | Free |
| Social metadata (OG/Twitter) | Yes | Yes | Yes |
| Llms.txt generation | Yes (enabled by default) | Yes (one-check enable) | Yes (one-check enable) |
| AI crawler controls | Via robots.txt | Via robots.txt | Via robots.txt |

Editable metadata, structured data, robots.txt, sitemaps, and, more recently, llms.txt are the most notable. It is worth noting that a lot of the functionality is more “back-end,” so not something we’d be as easily able to see in the HTTP Archive data.

Structured Data Impact From SEO Plugins

We can see (above) that structured data implementation and CMS adoption do correlate; what is more interesting here is understanding what the key drivers are.

Viewing the HTTP Archive data with a simple segment (SEO plugins vs. no SEO plugins), the most recent scoring paints a stark picture.

Image by author, February 2026

When we limit the Schema.org @types to those most associated with SEO, it is really clear that some structured data types are pushed hard by SEO plugins. They are not completely absent elsewhere – people may be using lesser-known plugins or coding their own solutions – but the ease of implementation is implicit in the data.
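
The segmentation itself is simple once the technology detections are in hand. Here is a toy version, with hypothetical records standing in for rows exported from the HTTP Archive (real plugin detection would come from the Wappalyzer signatures in the dataset):

```python
# Toy sketch: segment pages by SEO-plugin presence and count Schema.org
# @types per segment. The records are hypothetical; real detections come
# from Wappalyzer signatures in the HTTP Archive dataset.
from collections import Counter

pages = [
    {"tech": ["WordPress", "Yoast SEO"], "types": ["Article", "BreadcrumbList"]},
    {"tech": ["WordPress"],              "types": []},
    {"tech": ["WordPress", "Rank Math"], "types": ["Article", "FAQPage"]},
]

SEO_PLUGINS = {"Yoast SEO", "Rank Math", "All in One SEO"}

with_plugin, without_plugin = Counter(), Counter()
for page in pages:
    bucket = with_plugin if SEO_PLUGINS & set(page["tech"]) else without_plugin
    bucket.update(page["types"])

print("With SEO plugin:   ", dict(with_plugin))
print("Without SEO plugin:", dict(without_plugin))
```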

Robots Meta Support

Another finding from the SEO Web Almanac 2025 chapter was that “follow” and “index” directives were the most prevalent, even though they’re technically redundant, as having no meta robots directives is implicitly the same thing.

Screenshot from Web Almanac 2025, February 2026

Within the chapter’s number crunching itself, I didn’t dig much deeper, but knowing that all major WordPress SEO plugins have “index,follow” as the default, I was eager to see if I could make a stronger connection in the data.

Where SEO plugins were present on WordPress, “index, follow” was set on over 75% of root pages, vs. under 5% of WordPress sites without SEO plugins.

Image by author, February 2026

Given the ubiquity of WordPress and SEO plugins, this is likely a huge contributor to this particular configuration. While redundant, it isn’t wrong, but it is – again – a key example of how, when one or more of the main plugins establishes a de facto standard like this, it really shapes a significant portion of the web.
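
To spell out what “redundant” means here, a tiny sketch: a robots meta tag whose directives only restate the crawler defaults adds nothing, because with no meta robots tag at all, “index, follow” is already assumed.

```python
# Sketch: flag a robots meta tag that only restates crawler defaults.
# With no meta robots tag at all, "index, follow" is already assumed.
def is_redundant_robots_meta(content: str) -> bool:
    directives = {d.strip().lower() for d in content.split(",")}
    return directives <= {"index", "follow"}  # only default-equivalent values

assert is_redundant_robots_meta("index, follow")        # redundant
assert not is_redundant_robots_meta("noindex, follow")  # meaningful
```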

Diving Into LLMs.txt

Another key area of change in the 2025 Web Almanac was the introduction of the llms.txt file – not an explicit endorsement of the file, but rather a tacit acknowledgment that it is an important data point in the AI search age.
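
As a reminder of what the file actually is: per the llmstxt.org proposal, llms.txt is a markdown file at the site root, opening with an H1 title. A presence-and-shape check might look like the sketch below – hedged, because the Almanac’s definition of “valid” may be stricter than this heuristic:

```python
# Sketch: does a site serve something shaped like an llms.txt? Per the
# llmstxt.org proposal it is markdown at the site root, starting with an
# H1 ("# Site Name"); the Almanac's validity definition may be stricter.
import requests

def has_llms_txt(origin: str) -> bool:
    resp = requests.get(origin + "/llms.txt", timeout=10)
    if resp.status_code != 200 or not resp.text.strip():
        return False
    first_line = resp.text.lstrip().splitlines()[0]
    return first_line.startswith("# ")  # markdown H1, per the proposal

print(has_llms_txt("https://example.com"))
```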

From the 2025 data, just over 2% of sites had a valid llms.txt file and:

  • 39.6% of llms.txt files are related to All-in-One SEO.
  • 3.6% of llms.txt files are related to Yoast SEO.

This is not necessarily an intentional act by all those involved, especially as All-in-One SEO enables it by default (unlike Yoast and Rank Math, where it is an opt-in).

Image by author, February 2026

The first data was gathered on July 25, 2025; taking a month-by-month view, we can see further growth since then. It is hard not to see this as growing confidence in this markup – or at least a sign that, because it’s so easy to enable, more people are hedging their bets.

Conclusion

The Web Almanac data suggests that SEO, at a macro level, moves less because of individual SEOs and more because WordPress, Shopify, Wix, or a major plugin ships a default.

  • Canonical tags correlate with CMS growth.
  • Robots.txt validity improves with CMS governance.
  • Redundant “index,follow” directives proliferate because plugins make them explicit.
  • Even llms.txt is already spreading through plugin toggles before it has full consensus.

This doesn’t diminish the impact of SEO; it reframes it. Individual practitioners still create competitive advantage, especially in advanced configuration, architecture, content quality, and business logic. But the baseline state of the web, the technical floor on which everything else is built, is increasingly set by product teams shipping defaults to millions of sites.

Perhaps we should consider that if CMSs are the infrastructure layer of modern SEO, then plugin creators are de facto standards setters. They deploy “best practice” before it becomes doctrine.

This is how it should work, but I am also not entirely comfortable with it. Plugins normalize implementations and even create new conventions simply by making them zero-cost. Even redundant standards have the ability to endure, simply because they can.

So the question is less about whether CMS platforms impact SEO. They clearly do. The more interesting question is whether we, as SEOs, are paying enough attention to where those defaults originate, how they evolve, and how much of the web’s “best practice” is really just the path of least resistance shipped at scale.

An SEO’s value should not be measured by the number of hours they spend discussing canonical tags, meta robots, and rules of sitemap inclusion. This should be standard and default. If you want to have an outsized impact on SEO, lobby an existing tool, create your own plugin, or drive interest to influence change in one.



Featured Image: Prostock-studio/Shutterstock

Inside Chicago’s surveillance panopticon

Early on the morning of September 2, 2024, a Chicago Transit Authority Blue Line train was the scene of a random and horrific mass shooting. Four people were shot and killed on a westbound train as it approached the suburb of Forest Park. 

The police swiftly activated a digital dragnet—a surveillance network that connects thousands of cameras in the city. 

The process began with a quick review of the transit agency’s surveillance cameras, which captured the alleged gunman shooting the victims execution style. Law enforcement followed the suspect through real-time footage across the rapid-transit system. Police officials circulated the images to transit staff and to thousands of officers. An officer in the adjacent suburb of Riverdale recognized the suspect from a previous arrest. By the time he was captured at another train station, just 90 minutes after the shooting, authorities already had his name, address, and previous arrest history.

Little of this process would come as much surprise to Chicagoans. The city has tens of thousands of surveillance cameras—up to 45,000, by some estimates. That’s among the highest numbers per capita in the US. Chicago boasts one of the largest license plate reader systems in the country, and the ability to access audio and video surveillance from independent agencies such as the Chicago Public Schools, the Chicago Park District, and the public transportation system as well as many residential and commercial security systems such as Ring doorbell cameras. 

Law enforcement and security advocates say this vast monitoring system protects public safety and works well. But activists and many residents say it’s a surveillance panopticon that creates a chilling effect on behavior and violates guarantees of privacy and free speech. 

Black and Latino communities in Chicago have historically been targeted by excessive policing and surveillance, says Lance Williams, a scholar of urban violence at Northeastern Illinois University. That scrutiny has created new problems without delivering the promised safety, he suggests. In order to “solve the problem of crime or violence and make these communities safer,” he says, “you have to deal with structural problems,” such as the shortage of livable-wage jobs, affordable housing, and mental-health services across the city.

Recent years have seen some effective pushback against the surveillance. Until recently, for example, the city was the largest customer of ShotSpotter acoustic sensors, which are designed to detect gunfire and alert police. The system was introduced in a small area on the South Side in 2012. By 2018, an area of about 136 square miles—some 60% of the city—was covered by the acoustic surveillance network.

Critics questioned ShotSpotter’s effectiveness and objected that the sensors were installed largely in Black and Latino neighborhoods. Those critiques gained urgency with the fatal shooting in March 2021 of a 13-year-old, Adam Toledo, by police responding to a ShotSpotter alert. The tragedy became the touchstone of the #StopShotSpotter protest movement and one of the major issues in Brandon Johnson’s successful mayoral campaign in 2023. When he reached office, Johnson followed through, ending the city’s contract with SoundThinking, the San Francisco Bay Area company behind ShotSpotter. In total, it’s estimated, the city paid more than $53 million for the system. 

In response to a request for comment, SoundThinking said that ShotSpotter enables law enforcement “to reach the scene faster, render aid to victims, and locate evidence more effectively.” It said the company “plays no part in the selection of deployment areas” but added: “We believe communities experiencing the highest levels of gun violence deserve the same rapid emergency response as any other neighborhood.” 

While there has been successful resistance to police surveillance in the nation’s third-largest city, there are also countervailing forces: Governments and officials in Chicago and the surrounding suburbs are moving to expand the use of surveillance, also in response to public pressure. Even the victory against acoustic surveillance might be short-lived. Early last year, the city issued a request for proposals for gun violence detection technology. 

Many people in and around Chicago—digital privacy and surveillance activists, defense attorneys, law enforcement officials, and ordinary citizens—are part of this push and pull. Here are some of their stories. 


Alejandro Ruizesparza and Freddy Martinez
Cofounders, Lucy Parsons Labs

Oak Park, a quiet suburb at Chicago’s western border, is the birthplace of Ernest Hemingway. It includes the world’s largest collection of Frank Lloyd Wright–designed buildings and homes. 

Until recently, the village of Oak Park was also the center of a three-year-long campaign against an unwelcome addition to its manicured lawns and Prairie-style architecture: automated license plate readers from a company called Flock Safety. These are high-speed cameras that automatically scan license plates to look for stolen or wanted vehicles, or for drivers with outstanding warrants. 

Freddy Martinez (left) and Alejandro Ruizesparza (right) direct Lucy Parsons Labs, a charitable organization focused on digital rights.
AKILAH TOWNSEND

An Oak Park group called Freedom to Thrive—made up of parents, activists, lawyers, data scientists, and many others—suspected that this technology was not a good or equitable addition to their neighborhood. So the group engaged the Chicago-based nonprofit Lucy Parsons Labs to help navigate the often intimidating process of requesting license plate reader data under the Illinois Freedom of Information Act.

Lucy Parsons Labs, which is named for a turn-of-the-century Chicago labor organizer, investigates technologies such as license plate readers, gunshot detection systems, and police bodycams. 

LPL provides digital security and public records training to a variety of groups and is frequently called on to help community members audit and analyze surveillance systems that are targeting their neighborhoods. It’s led by two first-generation Mexican-Americans from the city’s Southwest Side. Alejandro Ruizesparza has a background in community organizing and data science. Freddy Martinez was also a community organizer and has a background in physics.

The group is now approaching its 10th year, but it was an all-volunteer effort until 2022. That’s when LPL received its first unrestricted, multi-year operational grant from a large foundation: the Chicago-based John D. and Catherine T. MacArthur Foundation, known worldwide for its so-called “genius grants.” A grant from the Ford Foundation followed the next year. 

The additional resources—a significant amount compared with the previous all-volunteer budget, acknowledges Ruizesparza—meant the two cofounders and two volunteers became full-time employees. But the group is determined not to become “too comfortable” and lose its edge. There is a tenacity to Lucy Parsons Labs’ work—a “sense of scrappiness,” they say—because “we did so much of this work with no money.” 

One of LPL’s primary strategies is filing extensive FOIA requests for raw data sets of police surveillance. The process can take a while, but it often reveals issues. 

In the case of Oak Park, the FOIA requests were just one tool that Freedom to Thrive and LPL used to sort out what was going on. The data revealed that in the first 10 months of operation, the eight Flock license plate readers the town had deployed scanned 3 million plates. But only 42 scans led to an alert—an infinitesimal hit rate of 0.0014%.
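
For readers who want to check the arithmetic, here’s a minimal Python sketch using the figures reported from the FOIA data:

    # Hit rate of Oak Park's Flock license plate readers,
    # using the figures reported from the FOIA data.
    total_scans = 3_000_000  # plates scanned in the first 10 months
    alerts = 42              # scans that triggered an alert

    hit_rate = alerts / total_scans
    print(f"{hit_rate:.4%}")  # 0.0014% -- about 14 alerts per million scans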

At the same time, the impact was disproportionate. While Oak Park’s population of about 53,000 is only 19% Black, Black drivers made up 85% of those flagged by the Flock cameras, seemingly amplifying what were already concerning racial disparities in the village’s traffic stops. Flock did not respond to a request for comment.


LPL brings a mix of radical politics and critical theory to its mission. Most surveillance technologies are “largely extensions of the plantation systems,” says Ruizesparza. 

The comparison makes sense: Many slaveholding communities required enslaved persons to carry signed documents to leave plantations and wear badges with numbers sewn to their clothing. The group says it aims to empower local communities to push back against biased policing technologies through technical assistance, training, and litigation—and to demystify algorithms and surveillance tools in the process.

“When we talk to people, they realize that you don’t need to know how to run a regression to understand that a technology has negative implications on your life,” says Ruizesparza. “You don’t need to understand how circuits work to understand that you probably shouldn’t have all of these cameras embedded in only Black and brown regions of a city.”

The group came by some of its techniques through experimentation. “When LPL was first getting started, we didn’t really feel like FOIA would have been a good way of getting information. We didn’t know anything about it,” says Martinez. “Along the way, we were very successful in uncovering a lot of surveillance practices.” 

One of the covert surveillance practices uncovered by those aggressive FOIA requests, for example, was the Chicago Police Department’s use of “Stingray” equipment, portable surveillance devices deployed to track and monitor mobile phones. 

The contentious issue of Oak Park’s license plate readers was finally put to a vote in late August. The village trustees voted 5–2 to terminate the contract with Flock Safety. 

Since then, community-based groups from across the country—as far away as California—have contacted LPL to say the Chicago collective’s work has inspired their own efforts, says Martinez: “We became almost de facto experts in navigating the process and the law. I think that sort of speaks to some of the DIY punk aesthetic.”


Brian Strockis
Chief, Oak Brook Police Department

If you drive about 20 miles west of Chicago, you’ll find Oakbrook Center, one of the nation’s leading luxury shopping destinations. The open-air mall includes Neiman Marcus, Louis Vuitton, and Gucci and attracts high-end shoppers from across the region. It’s also become a destination for retail theft crews that coordinate “smash and grabs” and often escape with thousands of dollars’ worth of inventory that can be quickly sold, such as sunglasses or luxury handbags.

In early December, police say, a Chicago man tried to lead officers on what could have been a dangerous high-speed chase from the mall. Patrol cars raced to the scene. So did a “first responder drone,” built by Flock Safety and deployed by the Oak Brook Police Department.  

The drone identified the suspect vehicle from the mall parking lot using its license plate reader and snapped high-definition photos that were texted to officers on the ground. The suspect was later tracked to Chicago, where he was arrested. 

Brian Strockis, chief of the Oak Brook Police Department, led the way in introducing drones as first responders in the state of Illinois.
AKILAH TOWNSEND

This was the type of outcome that Brian Strockis, chief of the Oak Brook Police Department, hoped for when he pioneered the “drone as first responder,” or DFR, program in Illinois. A longtime member of the force, he joined the department almost 25 years ago as a patrol officer, worked his way up through the ranks, and was awarded the top job in 2022.

Oak Brook was the first municipality in Illinois to deploy a drone as a first responder. One of the main reasons, says Strockis, was to reduce the number of high-speed chases, which are potentially dangerous to officers, suspects, and civilians. A drone, he adds, is also a more effective and cost-efficient way to deal with suspects in fleeing vehicles.


“It’s a force multiplier in that we’re able to do more with less,” says the chief, who spoke with me in his office at Oak Brook’s Village Hall. 

The department’s drone autonomously launches from the roof of the building and responds to about 10 to 12 service calls per day, at speeds up to 45 miles per hour. It arrives at crime scenes before patrol officers in nine out of every 10 cases.

Next door to Village Hall is the Oak Brook Police Department’s real-time crime center, a large room with two video walls that integrates livestreams from the first-responder drone, handheld drones, traffic cameras, license plate readers, and about a thousand private security cameras. When I visited, the two DFR operators demonstrated how the machine can fly itself or be directed to a destination entered on Google Maps. They sent it off to a nearby forest preserve and then directed it to return to the rooftop base, where it docks automatically, changes batteries, and charges. After the demo, one of the drone operators logged the flight, as required by state law.

Strockis says he is aware of the privacy concerns around this technology but maintains that protections are in place.

For example, the drone cannot be used for random or mass surveillance, he says, because the camera is always pointed straight ahead during flight and does not angle down until it reaches its desired location. The drone’s payload does not include facial recognition technology, which is restricted by state law, he says. 

The drone video footage is invaluable, he adds, because “you are seeing the events as they’re transpiring from an angle that you wouldn’t otherwise be privy to.” 

It’s an extra layer of protection for the public as well as for the officers, says the chief: “For every incident that an officer responds to now, you have squad car and bodycam video. You likely have cell-phone video from the public, officers, complainants, from offenders. So adding this element is probably the best video source on a scene that the police are going to anyway.”


Mark Wallace
Executive director, Citizens to Abolish Red Light Cameras

Mark Wallace wears several hats. By day he is a real estate investor and mortgage lender. But he is probably best known to many Chicagoans—especially across the city’s largely African-American communities on the South and West Sides—as a talk radio host for the station WVON and one of the leading voices against the city’s extensive network of red-light and speed cameras. 

For the past two decades, city officials have maintained that the cameras—which are officially known as “automated enforcement”—are a crucial safety measure. They are also a substantial revenue stream, generating around $150 million a year and a total of some $2.5 billion since they were installed.

Urged on by a radio listener, Mark Wallace started organizing against Chicago’s red-light and speed cameras, a substantial revenue stream for the city that has been found to disproportionately burden majority Black and Latino areas.
AKILAH TOWNSEND

“The one thing that the cameras have the ability to do is generate a lot of money,” Wallace says. He describes the tickets as a “cash grab” that disproportionately affects Black and Latino communities.

A groundbreaking 2022 analysis by ProPublica found, in fact, that households in majority Black and Latino zip codes were ticketed at much higher rates than others, in part because the cameras in those areas were more likely to be installed near expressway ramps and on wider streets, which encouraged faster speeds. The tickets, which can quickly rack up late fees, also impose a heavier financial burden on those communities, the report found.

These were some of the same concerns that many people expressed on the radio and in meetings, Wallace says. 

Chicago’s automated traffic enforcement began in 2003 and became the most extensive—and most lucrative—such program in the country, with about 300 red-light cameras at intersections and some 200 speed cameras set up near schools and parks. The cost of a ticket can quickly double if it is not paid or contested—providing a windfall for the city.

Wallace began his advocacy against the cameras soon after arriving at the radio station in the early 2010s. A younger listener called in, he recalls, to say that he enjoyed the information that came from WVON “but that we didn’t do anything.” The comment stuck with him, especially in light of WVON’s storied history. The station was closely involved in the civil rights movement of the 1960s and broadcast Martin Luther King Jr.’s speeches during his Chicago campaign.

Wallace hoped to change the caller’s perception about the station. He had firsthand experience with red-light cameras, having been ticketed himself, and decided to take them on as a cause. He scheduled a meeting at his church for a Friday night, promoting it on his show. “More than 300 people showed up,” he remembers, chatting with me in the spacious project studio and office in the basement of his townhouse on the city’s South Side. “That said to me there are a lot of people who see this inequity and injustice.”

Wallace began using his platform on WVON—The People’s Show—to mobilize communities around social and economic justice, and many discussions revolved around the automated enforcement program. The cause gained traction after city and state officials were found to have taken thousands of dollars from technology and surveillance companies to make sure their cameras remained on the streets.

Wallace and his group, Citizens to Abolish Red Light Cameras, want to repeal the ordinances authorizing the city’s camera programs. That hasn’t happened so far, but political pressure from the group paved the way for a Chicago City Council ordinance that required public meetings before any red-light cameras are installed, removed, or relocated. The group hopes for more restrictions on speed cameras, too.

“It was never about me personally. It was about ensuring that we could demonstrate to people that you have power,” says Wallace. “If you don’t like something, as Barack Obama would say, get a pen and clipboard and go to work to fight to make these changes.” 


Jonathan Manes
Senior counsel, MacArthur Justice Center

Derick Scruggs, a 30-year-old father and licensed armed security guard, was working in the parking lot of an AutoZone on Chicago’s Southwest Side on April 19, 2021. That’s when he was detained, interrogated, and subjected to a “humiliating body search” by two Chicago police officers, Scruggs later attested. “I was just doing my job when police officers came at me, handcuffed me, and treated me like a criminal—just because I was near a ShotSpotter alert,” he says.

The officers found no evidence of a shooting and released Scruggs. But the next day, the police returned and arrested him for an alleged violation related to his security guard paperwork. Prosecutors later dismissed the charges, but he was held in custody overnight and was then fired from his job. “Because of what they did,” he says, “I lost my job, couldn’t work for months, and got evicted from my apartment.”

Jonathan Manes litigated cases related to detentions at Guantanamo Bay and the legality of drone strikes before turning his attention to Chicago’s implementation of gunshot detection technology.
AKILAH TOWNSEND

Scruggs is believed to be among thousands of Chicagoans who’ve been questioned, detained, or arrested by police because they were near the location of a ShotSpotter alert, according to an analysis by the City of Chicago Office of Inspector General. The case caught the attention of Jonathan Manes, a law professor at Northwestern and senior counsel at the MacArthur Justice Center, a public interest law firm. 

Manes previously worked in national security law, but when he joined the justice center about six years ago, he chose to focus squarely on the intersection of civil rights with police surveillance and technology. “My goal was to identify areas that weren’t well covered by other civil rights organizations but were a concern for people here in Chicago,” he says. 


And when he and his colleagues looked into ShotSpotter, they revealed a disturbing problem: The system generated alerts that yielded no evidence of gun-related crimes but were used by police as a pretext for other actions. There seemed to be “a pattern of people being stopped, detained, questioned, sometimes arrested, in response to a ShotSpotter alert—often resulting in charges that have nothing to do with guns,” Manes says.

The system also directed a “massive number of police deployments onto the South and West Sides of the city,” Manes says. Those regions are home to most of Chicago’s Black and Latino residents. The research showed that 80% of the city’s Black population but only 30% of its white population lived in districts covered by the system. 

Manes brought Scruggs’s case into a lawsuit that he was already developing against the city’s use of ShotSpotter. In late 2025, he and his colleagues reached a settlement that prohibits police officers from doing what they did in Scruggs’s case—stopping or searching people simply because they are near the location of a gunshot detection alert. 

Chicago had already decommissioned ShotSpotter in 2024, but the agreement will cover any future gunshot detection systems. Manes is carefully watching to see what happens next.

Though Manes is pleased with the settlement, he points out that it narrowly focused on how police resources were used after the gunshot detection system was operational. “There is a need for much broader structural change to how the city chooses to use surveillance technology and then deploys it,” he adds. He supports laws that require disclosure from local officials and law enforcement about what technologies are being proposed and how civil rights could be affected.  

More than two dozen jurisdictions nationwide have adopted surveillance transparency laws, including San Francisco, Seattle, Boston, and New York City. But so far Chicago is not on that list. 

Rod McCullom is a Chicago-based science and technology writer whose focus areas include AI, biometrics, cognition, and the science of crime and violence.  

The Download: Chicago’s surveillance network, and building better bras

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Inside Chicago’s surveillance panopticon

Chicago has tens of thousands of surveillance cameras—up to 45,000, by some estimates. 

That’s among the highest numbers per capita in the US. Chicago boasts one of the largest license plate reader systems in the country, along with the ability to access audio and video surveillance from independent agencies such as the Chicago Public Schools, the Chicago Park District, and the public transportation system, as well as from many residential and commercial security systems such as Ring doorbell cameras.

Law enforcement and security advocates say this vast monitoring system protects public safety and works well. 

But activists and many residents say it’s a surveillance panopticon that creates a chilling effect on behavior and violates guarantees of privacy and free speech. Read the full story.

—Rod McCullom

Job titles of the future: Breast biomechanic

Twenty years ago, Joanna Wakefield-Scurr was having persistent pain in her breasts. Her doctor couldn’t diagnose the cause but said a good, supportive bra could help. A professor of biomechanics, Wakefield-Scurr thought she could do a little research and find a science-backed option. Two decades later, she’s still looking.

Wakefield-Scurr now leads an 18-person team at the Research Group in Breast Health at the University of Portsmouth in the UK. And as more women take up high-impact sports, the need to understand what makes a good bra grows; she says her lab can’t keep up with demand. Read the full story.

—Sara Harrison

These stories are both from the next print issue of MIT Technology Review magazine, which is all about crime. If you haven’t already, subscribe now to receive future issues once they land. 

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Inside ICE’s plans to build huge detention centers across the US
The identities of the personnel who authorized them have been revealed in metadata. (Wired $)
+ A UK tourist with a valid visa was detained by ICE for six weeks. (The Guardian)

2 The UAE says it was targeted by a wave of AI-backed cyberattacks
Authorities said the attacks marked a major shift in methods, but didn’t elaborate. (Bloomberg $)
+ New cybersecurity rules are hobbling small defense suppliers. (Reuters)
+ AI is already making online crimes easier. It could get much worse. (MIT Technology Review)

3 What does the public really think about AI?
Tech leaders worry the public might not be fully on board with their missions. (NYT $)
+ How social media encourages the worst of AI boosterism. (MIT Technology Review)

4 It looks like X really is pushing its users further to the right
As well as attracting more conservative thinkers in the first place. (NY Mag $)
+ The platform is currently disputing a major European fine. (Politico $)

5 Meet the farmers standing up to data center builders
They’re turning down deals worth millions for the land they’ve worked for decades. (The Guardian)
+ A data center venture launched at the White House isn’t delivering on its promises. (The Information $)
+ Data centers are amazing. Everyone hates them. (MIT Technology Review)

6 America has a plan to fight back against China’s AI
It hopes to send Tech Corps volunteers around the world to promote its own national efforts. (Rest of World)
+ China’s plan to lure in new AI customers? Bubble tea. (FT $)
+ The State of AI: Is China about to win the race? (MIT Technology Review)

7 Clouds are a major climate problem ☁
They’re making it harder for scientists to model the weather accurately. (Quanta Magazine)
+ The building legal case for global climate justice. (MIT Technology Review)

8 AI is still hopeless at reading PDFs
But companies keep deploying it across work systems anyway. (The Verge)

9 A “Fitbit for farts” could help analyze your gastrointestinal health
If you don’t mind wearing a sensor tucked into your underwear, that is. (WSJ $)

10 Gen Z is fascinated by corporate culture 💼
TikTok’s “WorkTok” videos are very effective at romanticizing the daily grind. (FT $) 

Quote of the day

“It also takes a lot of energy to train a human. It takes like 20 years of life and all of the food you eat during that time before you get smart.”

—Sam Altman, CEO of OpenAI, speaking at an event in India, compares the environmental impact of training vast AI models to the effort required to train a human, TechCrunch reports.

One more thing

How one mine could unlock billions in EV subsidies

On a pine farm north of the tiny town of Tamarack, Minnesota, Talon Metals has uncovered one of America’s densest nickel deposits—and now it wants to begin extracting it.

If regulators approve the mine, it could mark the starting point in what the company claims would become the country’s first complete domestic nickel supply chain, running from the bedrock beneath the Minnesota earth to the batteries in electric vehicles across the nation.

MIT Technology Review wanted to provide a clearer sense of the law’s on-the-ground impact by zeroing in on a single project and examining how these rich subsidies could be unlocked at each point along the supply chain. Take a look at what we found out.

—James Temple

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Alysa Liu’s gold-medal-winning Winter Olympics figure skating routine is truly amazing.
+ Mmm, delicious ancient Roman pizza.
+ It’s not every day you find 2,000-year-old footprints while walking your dog 👣
+ Nature is full of surprises, and so are the winners of this year’s Sony World Photography Awards.

The human work behind humanoid robots is being hidden

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

In January, Nvidia’s Jensen Huang, the head of the world’s most valuable company, proclaimed that we are entering the era of physical AI, when artificial intelligence will move beyond language and chatbots into physically capable machines. (He also said the same thing the year before, by the way.)

The implication—fueled by new demonstrations of humanoid robots putting away dishes or assembling cars—is that mimicking human limbs with single-purpose robot arms is the old way of automation. The new way is to replicate the way humans think, learn, and adapt while they work. The problem is that the lack of transparency about the human labor involved in training and operating such robots leaves the public both misunderstanding what robots can actually do and failing to see the strange new forms of work taking shape around them.

Consider how, in the AI era, robots often learn from humans who demonstrate how to do a chore. Creating this data at scale is now leading to Black Mirror–esque scenarios. A worker in Shanghai, for example, recently spent a week wearing a virtual-reality headset and an exoskeleton while opening and closing the door of a microwave hundreds of times a day to train the robot next to him, Rest of World reported. In North America, the robotics company Figure appears to be planning something similar: It announced in September it would partner with the investment firm Brookfield, which manages 100,000 residential units, to capture “massive amounts” of real-world data “across a variety of household environments.” (Figure did not respond to questions about this effort.)

Just as our words became training data for large language models, our movements are now poised to follow the same path. Except this future might leave humans with an even worse deal, and it’s already beginning. The roboticist Aaron Prather told me about recent work with a delivery company that had its workers wear movement-tracking sensors as they moved boxes; the data collected will be used to train robots. The effort to build humanoids will likely require manual laborers to act as data collectors at massive scale. “It’s going to be weird,” Prather says. “No doubts about it.” 

Or consider tele-operation. Though the endgame in robotics is a machine that can complete a task on its own, robotics companies employ people to operate their robots remotely. Neo, a $20,000 humanoid robot from the startup 1X, is set to ship to homes this year, but the company’s founder, Bernt Øivind Børnich, told me recently that he’s not committed to any prescribed level of autonomy. If a robot gets stuck, or if the customer wants it to do a tricky task, a tele-operator from the company’s headquarters in Palo Alto, California, will pilot it, looking through its cameras to iron clothes or unload the dishwasher.

This isn’t inherently harmful—1X gets customer consent before switching into tele-operation mode—but privacy as we know it will not exist in a world where tele-operators are doing chores in your house through a robot. And if home humanoids are not genuinely autonomous, the arrangement is better understood as a form of wage arbitrage that re-creates the dynamics of gig work while, for the first time, allowing physical tasks to be performed wherever labor is cheapest.

We’ve been down similar roads before. Carrying out “AI-driven” content moderation on social media platforms or assembling training data for AI companies often requires workers in low-wage countries to view disturbing content. And despite claims that AI will soon train on its own outputs and learn on its own, even the best models require an awful lot of human feedback to work as desired.

These human workforces do not mean that AI is just vaporware. But when they remain invisible, the public consistently overestimates the machines’ actual capabilities.

That’s great for investors and hype, but it has consequences for everyone. When Tesla marketed its driver-assistance software as “Autopilot,” for example, it inflated public expectations about what the system could safely do—a distortion a Miami jury recently found contributed to a crash that killed a 22-year-old woman (Tesla was ordered to pay $240 million in damages). 

The same will be true for humanoid robots. If Huang is right, and physical AI is coming for our workplaces, homes, and public spaces, then the way we describe and scrutinize such technology matters. Yet robotics companies remain as opaque about training and tele-operation as AI firms are about their training data. If that does not change, we risk mistaking concealed human labor for machine intelligence—and seeing far more autonomy than truly exists.

Peptides are everywhere. Here’s what you need to know.

MIT Technology Review Explains: Let our writers untangle the complex, messy world of technology to help you understand what’s coming next. You can read more from the series here.

Want to lose weight? Get shredded? Stay mentally sharp? A wellness influencer might tell you to take peptides, the latest cure-all in the alternative medicine arsenal. People inject them. They snort them. They combine them into concoctions with superhero names, like the Wolverine stack.  

Matt Kaeberlein, a longevity researcher, first started hearing about peptides a few years ago. “At that point it was mostly functional medicine doctors that were using peptides,” he says, referring to physicians who embrace alternative medicine and supplements. “In the last six months, it’s kind of gone crazy.”

Peptides have gone mainstream. At the health-technology startup Superpower in Los Angeles, employees can get free peptide shots on Fridays. At a health food store in Phoenix, a sidewalk sign reads, “We have peptides!” At a tae kwon do center in South Carolina, a peptide wholesaler hosts an informational evening. On social media, they’re everywhere. And that popularity seems poised to grow; Department of Health and Human Services secretary Robert F. Kennedy Jr. has promised to end the FDA’s “aggressive suppression” of peptides.

The benefits and risks of many of these compounds, however, are largely unknown. Some of the most popular peptides have never been tested in human trials. They are sold for research purposes, not human consumption. Some are illegal knockoffs of wildly successful weight-loss medicines. The vast majority come from China, a fact that has some legislators worried. Last week, Senator Tom Cotton urged the head of the FDA to crack down on illegal shipments of peptides from China. In the absence of regulatory oversight, some people are sending the compounds they purchase off for independent testing just to ensure that the product is legit. 

What is a peptide?

A peptide is simply a short string of amino acids, the building blocks of proteins. “Scientists generally think of peptides as very small protein fragments, but we don’t really have a precise cutoff between a peptide and a protein,” says Paul Knoepfler, a stem-cell researcher at the University of California, Davis. Insulin is a peptide, as is human growth hormone. So are some neurotransmitters, like oxytocin. 

But when wellness influencers talk about peptides, they’re often referring to particular compounds—formulated as injections, pills, or nasal sprays—that have become trendy lately. 

Some of these peptides are FDA-approved prescription medications. GLP-1 medicines, for example, are approved to treat diabetes and obesity but are also easily accessible online to almost anyone who wants to use them. Many sites sell microdoses of GLP-1s with claims that they can “support longevity,” reduce cognitive decline, or curb inflammation. 

Many more peptides are experimental. “The majority fall into the unapproved bucket,” says Kaeberlein, who is chief executive officer of Optispan, a Seattle-based health-care technology company focused on longevity. That bucket includes drugs that promote the release of growth hormones, like TB-500, CJC-1295, and ipamorelin, and compounds said to promote tissue repair and wound healing, like BPC-157 and GHK-Cu. It’s primarily these unapproved compounds that have raised concerns. “Anybody can set up an online shop selling research-grade peptides,” says Tenille Davis, a pharmacist and chief advocacy officer at the Alliance for Pharmacy Compounding, a trade organization representing more than 600 pharmacies. “And nobody knows what’s even in the vials.”  

It’s not just fitness gurus, biohackers, and longevity fanatics who are taking these experimental drugs. Kaeberlein recalls hearing about an acquaintance whose doctor prescribed her unapproved peptides. She was “just a typical upper-middle-class woman,” he says. “That’s when it really hit me that this has sort of gone relatively mainstream.”

What do peptides do?

All kinds of things, purportedly. GHK-Cu is supposed to help with wound healing and collagen production. BPC-157 is said to promote tissue repair and curb inflammation, TB-500 to foster blood vessel formation. Here’s the caveat: The evidence for these benefits comes largely from animal studies and online testimonials, not human trials. “There’s no human clinical evidence to show that they even do what people are claiming that they do,” says Stuart Phillips, a muscle physiologist at McMaster University in Hamilton, Ontario. “So it could be just a giant rip-off.”

Some experimental peptides probably do have beneficial wound healing properties or regenerative effects, Kaeberlein says. For BPC-157, for example, “the animal data is compelling,” he says. But there are still plenty of unknowns: What is the right dosage? How long should you take it? What’s the best way to administer it? Those are questions that can be answered only through rigorous clinical trials. In the absence of those studies, doctors “just make up their own protocols,” he says. Some consumers go the DIY route, reconstituting powdered peptides and injecting their own concoctions at home. 

So why am I seeing ads for these peptide therapies if they’re not approved? 

Federal law prohibits companies from marketing medications that haven’t been approved. That includes most peptides, which are regulated as small molecules, not dietary supplements. (Two notable exceptions are collagen peptides and creatine peptides, often sold as powders.) The law is designed to protect consumers from drugs that haven’t been proved safe and effective.

But it doesn’t stop labs from making peptides for research purposes. “Most of the peptides being consumed in the marketplace now are being sold by these online companies that are selling them labeled for research use only,” Davis says. The vials often bear disclaimers that clearly say as much: “For research use only” or “Not for human consumption.” It’s illegal to market these products for human use, but “the websites make it pretty clear that the buyers are intended to be using these products themselves,” she says.

The practice isn’t legal, but enforcement has been sporadic. “FDA sends warning letters, shuts down companies. But because it’s all online, they have a really hard time keeping up with these entities,” Davis says. And companies have plenty of incentive to keep illegally marketing the products. “They can make millions of dollars without having to spend money and time doing research,” Knoepfler says. “It’s a cash grab.”

Compounding pharmacies, which are legally allowed to create bespoke medications by mixing bulk active ingredients, often get requests to dispense peptides, but most peptides don’t meet the eligibility criteria for compounding. This has always been the case, but in 2023 the FDA explicitly added several common experimental peptides to the list of bulk substances that cannot be compounded because of safety concerns. “It put an exclamation point on policy that was already in place,” Davis says.  

Many GLP-1 medications are available from compounding pharmacies. That used to be accepted because the drugs were in short supply. Now, however, supplies of most of these medications are stable, and sellers are under increasing pressure from regulators to stop mass-marketing these drugs. 

What’s the harm in trying them? 

Peptides sold for research purposes come from labs with little regulatory oversight. “When you buy stuff online intended for research grade, you have no idea what’s in the vial that you’re getting. You have no idea the sterility practices that it was manufactured under, or what sort of impurities might be in the vial,” Davis says.

Phillips has heard some people say they send their peptides for third-party testing to ensure that they’re pure, “like it’s some kind of flex,” he says. “And I’m like, ‘Well, you just proved that this stuff lives in the shadows, for crying out loud.’”

Finnrick Analytics, a peptide-testing startup in Austin, Texas, has analyzed the purity and potency of more than 5,000 samples of 15 different peptides from 173 vendors. The results show that the quality varies substantially from vendor to vendor and even batch to batch. For example, the company tested nearly 450 samples of BPC-157 from 64 vendors. In some cases, the vials sold as BPC-157 didn’t contain the compound at all. In those that did, the purity varied from about 82% to 100%. 

Perhaps more worrying, 8% of all the peptide samples Finnrick tested had measurable levels of endotoxins, bacterial fragments that can cause fever and chills or, in larger doses, septic shock. 

The health risks aren’t just hypothetical. In 2025, two women had to be hospitalized and placed on ventilators after receiving peptide injections at a longevity conference in Las Vegas. Both recovered, and it’s still not clear whether they reacted to the peptides themselves or to some impurity in the vials. 

“The idea that all peptides are safe and all peptides are natural is just nonsense,” Kaeberlein says. “I tend to consider myself fairly libertarian when it comes to what people want to do for their health,” he adds. “If you want to take an experimental drug, that’s up to you.” But the problem with unregulated experimental therapies is that it’s exceedingly difficult to assess benefit and harm. “The relatively small percentage of people that are bad actors will be bad actors, and they will dishonestly market this stuff to people who aren’t equipped to really understand the true risks and rewards,” he says.

And, like any drug, peptides come with a risk of side effects. For approved medications, these are detailed right on the package insert. But for many experimental peptides, there hasn’t been enough research to understand what those side effects might be. Some researchers have warned that peptides that promote growth or blood vessel formation might also foster the growth of cancers.  

For competitive athletes who use peptides, meanwhile, the risks include not just possible health problems but suspension. Some peptides, like BPC-157, are banned by the World Anti-Doping Agency. 

The FDA has undergone a pretty substantial overhaul under the Trump administration. Are the regulations around peptides likely to change? 

I don’t have a crystal ball, but it seems likely. In May 2025, US health secretary Robert F. Kennedy Jr. joined the longevity enthusiast and biohacker Gary Brecka on his podcast The Ultimate Human and promised to “end the war at FDA against alternative medicine—the war on stem cells, the war on chelating drugs, the war on peptides.”

Knoepfler anticipates that Kennedy will force the FDA to allow compounding of some of the most popular peptides, like BPC-157 and GHK-Cu. “Such a step would put public health at great risk, while giving compounders and likely wellness influencers a lot more profit,” he says. 

The FDA seems intent on cracking down on GLP-1 copycats, however. In early February, commissioner Marty Makary posted on X that the agency would take “swift action against companies mass-marketing illegal copycat drugs, claiming they are similar to FDA-approved products.”

AI Prompts Come to Search Console

Google Search Console’s AI analysis went public last week after previously being available only to select users.

The feature, in the Performance > Search results tab, surfaces insights on a site’s organic search performance in response to generative-AI-style prompts.

I’ll provide examples in this post.

Search Console’s AI analysis responds to genAI-like prompts.

Query analysis

Instead of using prebuilt filters to discern various types of searches, users can enter prompts, such as:

  • “Show question-like queries.”
  • “Filter queries longer than [xx] characters.”
  • “Show informational queries.”

The feature initially responds to the prompts with regular expressions. For example, for the prompt, “Show queries with informational intent,” it created the following regex:

Queries matching regex: (what|how|why|guide|tutorial|explain|definition|example|best way to|steps to|tips for|reasons why|benefits of|difference between)

I could then click “Apply” to see the filtered results.

Similarly, I prompted “Show product research queries.” The feature responded:

Queries matching regex: (best|review|compare|vs|price|buy|alternatives|guide|recommendations|features|specs|cost|discount|deal|sale|where to buy|how to choose|top rated|pros and cons)

Again, I could “Apply” or “Dismiss” that regex.
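
Outside Search Console, it’s easy to sanity-check what one of these patterns will and won’t match. Here’s a minimal Python sketch applying the product-research regex above to a few hypothetical queries (Search Console applies regex filters as partial matches, which re.search mimics):

    import re

    # The product-research pattern generated by Search Console's AI analysis.
    PATTERN = re.compile(
        r"(best|review|compare|vs|price|buy|alternatives|guide|recommendations"
        r"|features|specs|cost|discount|deal|sale|where to buy|how to choose"
        r"|top rated|pros and cons)"
    )

    # Hypothetical queries, standing in for a real query report.
    queries = [
        "best trail running shoes 2026",
        "nike pegasus vs brooks ghost",
        "how do marathons qualify runners",
        "trail shoe discount code",
    ]

    for q in queries:
        label = "match" if PATTERN.search(q) else "no match"
        print(f"{label:9} {q}")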

Brand search

The AI analysis performed surprisingly well on brand-name searches. I prompted, “Show branded queries.” The regex responses included my brand name and a one- and two-word pattern:

Queries matching regex: (brandname|brand name)

Traffic drop

Search Console’s AI analysis can assemble traffic change reports. For example, I prompted:

  • “Show pages that lost the most clicks over the past 30 days.”
  • “Compare clicks last month with the same month of the previous year.”

Country-specific

Users can also evaluate organic search visibility by country:

  • “Show me clicks, Average CTR, and Average Position of my queries in Canada last month.”

Here’s a prompt for traffic changes:

  • “Show pages that lost the most clicks over the past 30 days in Canada.”
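
For anyone who wants the same country cut programmatically, the Search Analytics API exposes equivalent filters. Here’s a minimal sketch using Google’s google-api-python-client library; it assumes you’ve already built authorized OAuth credentials in creds, and the property URL and dates are placeholders:

    from googleapiclient.discovery import build

    # Assumes `creds` holds authorized OAuth credentials for this property.
    service = build("searchconsole", "v1", credentials=creds)

    response = service.searchanalytics().query(
        siteUrl="https://example.com/",  # placeholder property
        body={
            "startDate": "2026-01-01",
            "endDate": "2026-01-31",
            "dimensions": ["query"],
            # Country filters take ISO 3166-1 alpha-3 codes; "can" is Canada.
            "dimensionFilterGroups": [{
                "filters": [{
                    "dimension": "country",
                    "operator": "equals",
                    "expression": "can",
                }],
            }],
            "rowLimit": 100,
        },
    ).execute()

    for row in response.get("rows", []):
        print(row["keys"][0], row["clicks"], f"{row['ctr']:.2%}", round(row["position"], 1))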

Limitations

The new AI feature, while helpful, is not a game-changer. Inexperienced users do not typically know what to ask, while seasoned pros can go directly to the prebuilt filters.

Moreover, the AI integration works only with top filters. It cannot process requests for filters unavailable in the Performance reports. For example, it cannot respond to prompts for queries with an average position greater than 2, indicating room for improvement.

Default Performance reports allow filtering only on position 1.
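
One workaround is to pull the raw rows—via an export or the API call above—and apply the position filter yourself. A minimal sketch with hypothetical rows flattened from an export:

    # Hypothetical rows flattened from a Search Console export.
    rows = [
        {"query": "running shoes", "clicks": 320, "impressions": 5100, "position": 1.4},
        {"query": "trail shoe sizing", "clicks": 12, "impressions": 2400, "position": 6.3},
        {"query": "waterproof trail shoes", "clicks": 8, "impressions": 1900, "position": 4.8},
    ]

    # Queries with an average position worse than 2 but meaningful impressions:
    # the "room for improvement" report the built-in filters can't produce.
    opportunities = sorted(
        (r for r in rows if r["position"] > 2 and r["impressions"] >= 1000),
        key=lambda r: r["impressions"],
        reverse=True,
    )

    for r in opportunities:
        print(f"{r['query']:24} pos {r['position']:>4}  {r['impressions']} impressions")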

Google Discover Update: Early Data Shows Fewer Domains In US

NewzDash published an analysis comparing Discover visibility before and after Google’s February 2026 Discover core update, using panel data from millions of US users tracked through its DiscoverPulse tool.

It compared pre-update (Jan 25-31) and post-update (Feb 8-14) windows across the top 1,000 domains and top 1,000 articles in the US, California, and New York.

For transparency, NewzDash is a news SEO tracking platform that sells Discover monitoring tools.

What The Data Shows

Google said the update targeted more locally relevant content, less sensational and clickbait content, and more in-depth, timely content from sites with topic expertise. The NewzDash data has early readings on all three.

NewzDash compared Discover feeds in California, New York, and the US as a whole. The three feeds mostly overlapped, but each state got local stories the others didn’t. New York-local domains appeared roughly five times more often in the New York feed than in the California feed, and vice versa.

In California, local articles in the top 100 placements rose from 10 to 16 in the post-update window. The local layer included content from publishers like SFGate and LA Times that didn’t appear in the national top 100 during the same period.

Clickbait reduction was harder to confirm. NewzDash acknowledged that headline markers alone can’t prove clickbait decreased. It did find that what it called “templated curiosity-gap patterns” appeared to lose visibility. Yahoo’s presence in the US top 1,000 dropped from 11 to 6 articles, with zero items in the top 100 post-update.

Unique content categories grew across all three geographic views, but unique publishers shrank in the US (172 to 158 domains) and California (187 to 177). That combination suggests Discover is covering more topics but sending that distribution to a narrower set of publishers.
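
NewzDash hasn’t published its methodology as code, but the unique-publisher comparison it describes is plain set arithmetic. A hypothetical sketch with made-up domain lists:

    # Hypothetical top-domain sets for the pre- and post-update windows.
    pre_update = {"example-news.com", "localpaper.com", "bigportal.com", "nichesite.com"}
    post_update = {"example-news.com", "localpaper.com", "nichesite.com", "specialist.com"}

    retained = pre_update & post_update  # kept Discover visibility
    gained = post_update - pre_update    # new entrants
    lost = pre_update - post_update      # dropped out

    print(f"unique publishers: {len(pre_update)} -> {len(post_update)}")
    print(f"gained: {sorted(gained)}")
    print(f"lost:   {sorted(lost)}")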

This pattern aligns with what early December core update analysis showed about specialized sites gaining ground over generalists.

X.com’s Growing Discover Presence

X.com posts from institutional accounts climbed from 3 to 13 items in the US top 100 Discover placements and from 2 to 14 in New York’s top 100.

NewzDash noted it had tracked X.com’s Discover growth since November and said the update appeared to accelerate the trend. Most top-performing X items came from established media brands.

The analysis noted it couldn’t prove or disprove whether X posts are cannibalizing publisher traffic in Discover, calling the data a “directional sanity check.” The open question is whether routing through X adds friction that could reduce click-through to owned pages.

Why This Matters

As we continue to monitor the Discover core update, we now have early data on what it seems to favor. Regional publishers with locally relevant content showed up more often in NewzDash’s post-update top lists.

Discover covered more topics in the post-update window, but fewer sites were getting that traffic in the US and California. Publishers without a clear topic focus could be on the wrong side of that trend.

Looking Ahead

This analysis covers an early window while the rollout is still being completed. The post-update measurement period overlaps with the Super Bowl, Winter Olympics, and ICC Men’s T20 World Cup, any of which could independently inflate News and Sports category visibility.

Google said it plans to expand the Discover core update beyond English-language US users in the months ahead.

