The era of AI persuasion in elections is about to begin

In January 2024, the phone rang in homes all around New Hampshire. On the other end was Joe Biden’s voice, urging Democrats to “save your vote” by skipping the primary. It sounded authentic, but it wasn’t. The call was a fake, generated by artificial intelligence.

Today, the technology behind that hoax looks quaint. Tools like OpenAI’s Sora now make it possible to create convincing synthetic videos with astonishing ease. AI can be used to fabricate messages from politicians and celebrities—even entire news clips—in minutes. The fear that elections could be overwhelmed by realistic fake media has gone mainstream—and for good reason.

But that’s only half the story. The deeper threat isn’t that AI can just imitate people—it’s that it can actively persuade people. And new research published this week shows just how powerful that persuasion can be. In two large peer-reviewed studies, AI chatbots shifted voters’ views by a substantial margin, far more than traditional political advertising tends to do.

In the coming years, we will see the rise of AI that can personalize arguments, test what works, and quietly reshape political views at scale. That shift—from imitation to active persuasion—should worry us deeply.  

The challenge is that modern AI doesn’t just copy voices or faces; it holds conversations, reads emotions, and tailors its tone to persuade. And it can now command other AIs—directing image, video, and voice models to generate the most convincing content for each target. Putting these pieces together, it’s not hard to imagine how one could build a coordinated persuasion machine. One AI might write the message, another could create the visuals, another could distribute it across platforms and watch what works. No humans required.

A decade ago, mounting an effective online influence campaign typically meant deploying armies of people running fake accounts and meme farms. Now that kind of work can be automated—cheaply and invisibly. The same technology that powers customer service bots and tutoring apps can be repurposed to nudge political opinions or amplify a government’s preferred narrative. And the persuasion doesn’t have to be confined to ads or robocalls. It can be woven into the tools people already use every day—social media feeds, language learning apps, dating platforms, or even voice assistants built and sold by parties trying to influence the American public. That kind of influence could come from malicious actors using the APIs of popular AI tools people already rely on, or from entirely new apps built with the persuasion baked in from the start.

And it’s affordable. For less than a million dollars, anyone can generate personalized, conversational messages for every registered voter in America. The math isn’t complicated. Assume 10 brief exchanges per person—around 2,700 tokens of text—and price them at current rates for ChatGPT’s API. Even with a population of 174 million registered voters, the total still comes in under $1 million. The 80,000 swing voters who decided the 2016 election could be targeted for less than $3,000. 
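To make that arithmetic concrete, here is a rough back-of-the-envelope sketch. The per-token price is an assumed blended rate for illustration, not a quoted figure; actual API pricing varies by model and changes often.

# Back-of-the-envelope cost estimate for conversational outreach at scale.
# The price below is an assumption for illustration, not a quoted API rate.
REGISTERED_VOTERS = 174_000_000          # figure cited above
TOKENS_PER_PERSON = 2_700                # ~10 brief exchanges, as cited above
ASSUMED_USD_PER_MILLION_TOKENS = 2.00    # assumed blended input/output rate

def campaign_cost(people: int) -> float:
    total_tokens = people * TOKENS_PER_PERSON
    return total_tokens / 1_000_000 * ASSUMED_USD_PER_MILLION_TOKENS

print(f"All registered voters: ${campaign_cost(REGISTERED_VOTERS):,.0f}")  # roughly $940,000
print(f"80,000 swing voters:   ${campaign_cost(80_000):,.0f}")             # a few hundred dollars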

Although this is a challenge in elections across the world, the stakes for the United States are especially high, given the scale of its elections and the attention they attract from foreign actors. If the US doesn’t move fast, the next presidential election in 2028, or even the midterms in 2026, could be won by whoever automates persuasion first. 

The 2028 threat 

While there have been indications that the threat AI poses to elections is overblown, a growing body of research suggests the situation could be changing. Recent studies have shown that GPT-4 can exceed the persuasive capabilities of communications experts when generating statements on polarizing US political topics, and it is more persuasive than non-expert humans two-thirds of the time when debating real voters. 

Two major studies published yesterday extend those findings to real election contexts in the United States, Canada, Poland, and the United Kingdom, showing that brief chatbot conversations can move voters’ attitudes by up to 10 percentage points, with US participants’ opinions shifting nearly four times more than they did in response to tested 2016 and 2020 political ads. And when models were explicitly optimized for persuasion, the shift soared to 25 percentage points—an almost unfathomable difference.

While previously confined to well-resourced companies, modern large language models are becoming increasingly easy to use. Major AI providers like OpenAI, Anthropic, and Google wrap their frontier models in usage policies, automated safety filters, and account-level monitoring, and they do sometimes suspend users who violate those rules. But those restrictions apply only to traffic that goes through their platforms; they don’t extend to the rapidly growing ecosystem of open-source and open-weight models, which can be downloaded by anyone with an internet connection. Though they’re usually smaller and less capable than their commercial counterparts, research has shown that with careful prompting and fine-tuning, these models can now match the performance of leading commercial systems.

All this means that actors, whether well-resourced organizations or grassroots collectives, have a clear path to deploying politically persuasive AI at scale. Early demonstrations have already occurred elsewhere in the world. In India’s 2024 general election, tens of millions of dollars were reportedly spent on AI to segment voters, identify swing voters, deliver personalized messaging through robocalls and chatbots, and more. In Taiwan, officials and researchers have documented China-linked operations using generative AI to produce more subtle disinformation, ranging from deepfakes to language model outputs that are biased toward messaging approved by the Chinese Communist Party.

It’s only a matter of time before this technology comes to US elections—if it hasn’t already. Foreign adversaries are well positioned to move first. China, Russia, Iran, and others already maintain networks of troll farms, bot accounts, and covert influence operators. Paired with open-source language models that generate fluent and localized political content, those operations can be supercharged. In fact, there is no longer a need for human operators who understand the language or the context. With light tuning, a model can impersonate a neighborhood organizer, a union rep, or a disaffected parent without a person ever setting foot in the country. Political campaigns themselves will likely be close behind. Every major operation already segments voters, tests messages, and optimizes delivery. AI lowers the cost of doing all that. Instead of poll-testing a slogan, a campaign can generate hundreds of arguments, deliver them one on one, and watch in real time which ones shift opinions.

The underlying fact is simple: Persuasion has become effective and cheap. Campaigns, PACs, foreign actors, advocacy groups, and opportunists are all playing on the same field—and there are very few rules.

The policy vacuum

Most policymakers have not caught up. Over the past several years, legislators in the US have focused on deepfakes but have ignored the wider persuasive threat.

Foreign governments have begun to take the problem more seriously. The European Union’s 2024 AI Act classifies election-related persuasion as a “high-risk” use case. Any system designed to influence voting behavior is now subject to strict requirements. Administrative tools, like AI systems used to plan campaign events or optimize logistics, are exempt. However, tools that aim to shape political beliefs or voting decisions are not.

By contrast, the United States has so far refused to draw any meaningful lines. There are no binding rules about what constitutes a political influence operation, no external standards to guide enforcement, and no shared infrastructure for tracking AI-generated persuasion across platforms. The federal and state governments have gestured toward regulation—the Federal Election Commission is applying old fraud provisions, the Federal Communications Commission has proposed narrow disclosure rules for broadcast ads, and a handful of states have passed deepfake laws—but these efforts are piecemeal and leave most digital campaigning untouched. 

In practice, the responsibility for detecting and dismantling covert campaigns has been left almost entirely to private companies, each with its own rules, incentives, and blind spots. Google and Meta have adopted policies requiring disclosure when political ads are generated using AI. X has remained largely silent on this, while TikTok bans all paid political advertising. However, these rules, modest as they are, cover only the sliver of content that is bought and publicly displayed. They say almost nothing about the unpaid, private persuasion campaigns that may matter most.

To their credit, some firms have begun publishing periodic threat reports identifying covert influence campaigns. Anthropic, OpenAI, Meta, and Google have all disclosed takedowns of inauthentic accounts. However, these efforts are voluntary and not subject to independent auditing. Most important, none of this prevents determined actors from bypassing platform restrictions altogether with open-source models and off-platform infrastructure.

What a real strategy would look like

The United States does not need to ban AI from political life. Some applications may even strengthen democracy. A well-designed candidate chatbot could help voters understand where the candidate stands on key issues, answer questions directly, or translate complex policy into plain language. Research has even shown that AI can reduce belief in conspiracy theories. 

Still, there are a few things the United States should do to protect against the threat of AI persuasion. First, it must guard against foreign-made political technology with built-in persuasion capabilities. Adversarial political technology could take the form of a foreign-produced video game where in-game characters echo political talking points, a social media platform whose recommendation algorithm tilts toward certain narratives, or a language learning app that slips subtle messages into daily lessons. Evaluations, such as the Center for AI Standards and Innovation’s recent analysis of DeepSeek, should focus on identifying and assessing AI products—particularly from countries like China, Russia, or Iran—before they are widely deployed. This effort would require coordination among intelligence agencies, regulators, and platforms to spot and address risks.

Second, the United States should lead in shaping the rules around AI-driven persuasion. That includes tightening access to computing power for large-scale foreign persuasion efforts, since many actors will either rent existing models or lease the GPU capacity to train their own. It also means establishing clear technical standards—through governments, standards bodies, and voluntary industry commitments—for how AI systems capable of generating political content should operate, especially during sensitive election periods. And domestically, the United States needs to determine what kinds of disclosures should apply to AI-generated political messaging while navigating First Amendment concerns.

Finally, foreign adversaries will try to evade these safeguards—using offshore servers, open-source models, or intermediaries in third countries. That is why the United States also needs a foreign policy response. Multilateral election integrity agreements should codify a basic norm: States that deploy AI systems to manipulate another country’s electorate risk coordinated sanctions and public exposure. 

Doing so will likely involve building shared monitoring infrastructure, aligning disclosure and provenance standards, and being prepared to conduct coordinated takedowns of cross-border persuasion campaigns—because many of these operations are already moving into opaque spaces where our current detection tools are weak. The US should also push to make election manipulation part of the broader agenda at forums like the G7 and OECD, ensuring that threats related to AI persuasion are treated not as isolated tech problems but as collective security challenges.

Indeed, the task of securing elections cannot fall to the United States alone. A functioning radar system for AI persuasion will require partnerships with allies. Influence campaigns are rarely confined by borders, and open-source models and offshore servers will always exist. The goal is not to eliminate them but to raise the cost of misuse and shrink the window in which they can operate undetected across jurisdictions.

The era of AI persuasion is just around the corner, and America’s adversaries are prepared. In the US, on the other hand, the laws are out of date, the guardrails too narrow, and the oversight largely voluntary. If the last decade was shaped by viral lies and doctored videos, the next will be shaped by a subtler force: messages that sound reasonable, familiar, and just persuasive enough to change hearts and minds.

For China, Russia, Iran, and others, exploiting America’s open information ecosystem is a strategic opportunity. We need a strategy that treats AI persuasion not as a distant threat but as a present fact. That means soberly assessing the risks to democratic discourse, putting real standards in place, and building a technical and legal infrastructure around them. Because if we wait until we can see it happening, it will already be too late.

Tal Feldman is a JD candidate at Yale Law School who focuses on technology and national security. Before law school, he built AI models across the federal government and was a Schwarzman and Truman scholar. Aneesh Pappu is a PhD student and Knight-Hennessy scholar at Stanford University who focuses on agentic AI and technology policy. Before Stanford, he was a privacy and security researcher at Google DeepMind and a Marshall scholar.

The ads that sell the sizzle of genetic trait discrimination

One day this fall, I watched an electronic sign outside the Broadway-Lafayette subway station in Manhattan switch seamlessly between an ad for makeup and one promoting the website Pickyourbaby.com, which promises a way for potential parents to use genetic tests to influence their baby’s traits, including eye color, hair color, and IQ.

Inside the station, every surface was wrapped with more ads—babies on turnstiles, on staircases, on banners overhead. “Think about it. Makeup and then genetic optimization,” exulted Kian Sadeghi, the 26-year-old founder of Nucleus Genomics, the startup running the ads. To his mind, one should be as accessible as the other. 

Nucleus is a young, attention-seeking genetic software company that says it can analyze genetic tests on IVF embryos to score them for 2,000 traits and disease risks, letting parents pick some and reject others. This is possible because of how our DNA shapes us, sometimes powerfully. As one of the subway banners reminded the New York riders: “Height is 80% genetic.”

The day after the campaign launched, Sadeghi and I had briefly sparred online. He’d been on X showing off a phone app where parents can click through traits like eye color and hair color. I snapped back that all this sounded a lot like Uber Eats—another crappy, frictionless future invented by entrepreneurs, but this time you’d click for a baby.

I agreed to meet Sadeghi that night in the station under a banner that read, “IQ is 50% genetic.” He appeared in a puffer jacket and told me the campaign would soon spread to 1,000 train cars. Not long ago, this was a secretive technology to whisper about at Silicon Valley dinner parties. But now? “Look at the stairs. The entire subway is genetic optimization. We’re bringing it mainstream,” he said. “I mean, like, we are normalizing it, right?”

Normalizing what, exactly? The ability to choose embryos on the basis of predicted traits could lead to healthier people. But the traits mentioned in the subway—height and IQ—focus the public’s mind toward cosmetic choices and even naked discrimination. “I think people are going to read this and start realizing: Wow, it is now an option that I can pick. I can have a taller, smarter, healthier baby,” says Sadeghi.

Entrepreneur Kian Sadeghi stands under an advertising banner in the Broadway-Lafayette subway station in Manhattan, part of a campaign called “Have Your Best Baby.”
COURTESY OF THE AUTHOR

Nucleus got its seed funding from Founders Fund, an investment firm known for its love of contrarian bets. And embryo scoring fits right in—it’s an unpopular concept, and professional groups say the genetic predictions aren’t reliable. So far, leading IVF clinics still refuse to offer these tests. Doctors worry, among other things, that they’ll create unrealistic parental expectations. What if little Johnny doesn’t do as well on the SAT as his embryo score predicted?

The ad blitz is a way to end-run such gatekeepers: If a clinic won’t agree to order the test, would-be parents can take their business elsewhere. Another embryo testing company, Orchid, notes that high consumer demand emboldened Uber’s early incursions into regulated taxi markets. “Doctors are essentially being shoved in the direction of using it, not because they want to, but because they will lose patients if they don’t,” Orchid founder Noor Siddiqui said during an online event this past August.

Sadeghi prefers to compare his startup to Airbnb. He hopes it can link customers to clinics, becoming a digital “funnel” offering a “better experience” for everyone. He notes that Nucleus ads don’t mention DNA or any details of how the scoring technique works. That’s not the point. In advertising, you sell the sizzle, not the steak. And in Nucleus’s ad copy, what sizzles is height, smarts, and light-colored eyes.

It makes you wonder if the ads should be permitted. Indeed, I learned from Sadeghi that the Metropolitan Transportation Authority had objected to parts of the campaign. The metro agency, for instance, did not let Nucleus run ads saying “Have a girl” and “Have a boy,” even though it’s very easy to identify the sex of an embryo using a genetic test. The reason was an MTA policy that forbids using government-owned infrastructure to promote “invidious discrimination” against protected classes, which include race, religion, and biological sex.

Since 2023, New York City has also included height and weight in its anti-discrimination law, the idea being to “root out bias” related to body size in housing and in public spaces. So I’m not sure why the MTA let Nucleus declare that height is 80% genetic. (The MTA advertising department didn’t respond to questions.) Perhaps it’s because the statement is a factual claim, not an explicit call to action. But we all know what to do: Pick the tall one and leave shorty in the IVF freezer, never to be born.

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

The Download: political chatbot persuasion, and gene editing adverts

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

AI chatbots can sway voters better than political advertisements

The news: Chatting with a politically biased AI model is more effective than political ads at nudging both Democrats and Republicans to support presidential candidates of the opposing party, new research shows.

The catch: The chatbots swayed opinions by citing facts and evidence, but they were not always accurate—in fact, the researchers found, the most persuasive models said the most untrue things. The findings are the latest in an emerging body of research demonstrating the persuasive power of LLMs. They raise profound questions about how generative AI could reshape elections.  Read the full story.

—Michelle Kim 

The era of AI persuasion in elections is about to begin 

—Tal Feldman is a JD candidate at Yale Law School who focuses on technology and national security. Aneesh Pappu is a PhD student and Knight-Hennessy scholar at Stanford University who focuses on agentic AI and technology policy. 

The fear that elections could be overwhelmed by AI-generated realistic fake media has gone mainstream—and for good reason.

But that’s only half the story. The deeper threat isn’t that AI can just imitate people—it’s that it can actively persuade people. And new research published this week shows just how powerful that persuasion can be. AI chatbots can shift voters’ views by a substantial margin, far more than traditional political advertising tends to do.

In the coming years, we will see the rise of AI that can personalize arguments, test what works, and quietly reshape political views at scale. That shift—from imitation to active persuasion—should worry us deeply. Read the full story. 

The ads that sell the sizzle of genetic trait discrimination

—Antonio Regalado, senior editor for biomedicine

One day this fall, I watched an electronic sign outside the Broadway-Lafayette subway station in Manhattan switch seamlessly between an ad for makeup and one promoting the website Pickyourbaby.com, which promises a way for potential parents to use genetic tests to influence their baby’s traits, including eye color, hair color, and IQ.

Inside the station, every surface was wrapped with more of its ads—babies on turnstiles, on staircases, on banners overhead. “Think about it. Makeup and then genetic optimization,” exulted Kian Sadeghi, the 26-year-old founder of Nucleus Genomics, the startup running the ads. 

The day after the campaign launched, Sadeghi and I had briefly sparred online. He’d been on X showing off a phone app where parents can click through traits like eye color and hair color. I snapped back that all this sounded a lot like Uber Eats—another crappy, frictionless future invented by entrepreneurs, but this time you’d click for a baby.

That night, I agreed to meet Sadeghi in the station under a banner that read, “IQ is 50% genetic.” Read on to see how Antonio’s conversation with Sadeghi went.

This story first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The metaverse’s future looks murkier than ever
OG believer Mark Zuckerberg is planning deep cuts to the division’s budget. (Bloomberg $)
+ However, some of that money will be diverted toward smart glasses and wearables. (NYT $)
+ Meta just managed to poach one of Apple’s top design chiefs. (Bloomberg $)

2 Kids are effectively AI’s guinea pigs
And regulators are slowly starting to take note of the risks. (The Economist $)
+ You need to talk to your kid about AI. Here are 6 things you should say. (MIT Technology Review)

3 How a group of women changed UK law on non-consensual deepfakes
It’s a big victory, and they managed to secure it with stunning speed. (The Guardian)
+ But bans on deepfakes take us only so far—here’s what else we need. (MIT Technology Review)
+ An AI image generator startup just leaked a huge trove of nude images. (Wired $)

4 OpenAI is acquiring an AI model training startup
Its researchers have been impressed by the monitoring and debugging tools built by Neptune. (NBC)
+ It’s not just you: the speed of AI deal-making really is accelerating. (NYT $)

5 Russia has blocked Apple’s FaceTime video calling feature
It seems the Kremlin views any platform it doesn’t control as dangerous. (Reuters $)
+ How Russia killed its tech industry. (MIT Technology Review)

6 The trouble with AI browsers
This reviewer tested five of them and found them to be far more effort than they’re worth. (The Verge $)
+ AI means the end of internet search as we’ve known it. (MIT Technology Review)

7 An anti-AI activist has disappeared 
Sam Kirchner went AWOL after failing to show up at a scheduled court hearing, and friends are worried. (The Atlantic $)

8 Taiwanese chip workers are creating a community in the Arizona desert
A TSMC project to build chip factories is rapidly transforming this corner of the US. (NYT $)

9 This hearing aid has become a status symbol 
Rich people with hearing issues swear by a product made by startup Fortell. (Wired $)
+ Apple AirPods can be a gateway hearing aid. (MIT Technology Review)

10 A plane crashed after one of its 3D-printed parts melted 🛩🫠
Just because you can do something, that doesn’t mean you should. (BBC)

Quote of the day

“Some people claim we can scale up current technology and get to general intelligence…I think that’s bullshit, if you’ll pardon my French.”

—AI researcher Yann LeCun explains why he’s leaving Meta to set up a world-model startup, Sifted reports. 

One more thing


What to expect when you’re expecting an extra X or Y chromosome

Sex chromosome variations, in which people have a surplus or missing X or Y, occur in as many as one in 400 births. Yet the majority of people affected don’t even know they have them, because these conditions can fly under the radar.

As more expectant parents opt for noninvasive prenatal testing in hopes of ruling out serious conditions, many of them are surprised to discover instead that their fetus has a far less severe—but far less well-known—condition.

And because so many sex chromosome variations have historically gone undiagnosed, many ob-gyns are not familiar with these conditions, leaving families to navigate the unexpected news on their own. Read the full story.

—Bonnie Rochman

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ It’s never too early to start practicing your bûche de Noël skills for the holidays.
+ Brandi Carlile, you will always be famous.
+ What do bartenders get up to after finishing their Thanksgiving shift? It’s time to find out.
+ Pitchfork’s controversial list of the best albums of the year is here!

Harnessing human-AI collaboration for an AI roadmap that moves beyond pilots

The past year has marked a turning point in the corporate AI conversation. After a period of eager experimentation, organizations are now confronting a more complex reality: While investment in AI has never been higher, the path from pilot to production remains elusive. Three-quarters of enterprises remain stuck in experimentation mode, despite mounting pressure to convert early tests into operational gains.

“Most organizations can suffer from what we like to call PTSD, or process technology skills and data challenges,” says Shirley Hung, partner at Everest Group. “They have rigid, fragmented workflows that don’t adapt well to change, technology systems that don’t speak to each other, talent that is really immersed in low-value tasks rather than creating high impact. And they are buried in endless streams of information, but no unified fabric to tie it all together.”

The central challenge, then, lies in rethinking how people, processes, and technology work together.

Across industries as different as customer experience and agricultural equipment, the same pattern is emerging: Traditional organizational structures—centralized decision-making, fragmented workflows, data spread across incompatible systems—are proving too rigid to support agentic AI. To unlock value, leaders must rethink how decisions are made, how work is executed, and what humans should uniquely contribute.

“It is very important that humans continue to verify the content. And that is where you’re going to see more energy being put into,” says Ryan Peterson, EVP and chief product officer at Concentrix.

Much of the conversation centered on what can be described as the next major unlock: operationalizing human-AI collaboration. Rather than positioning AI as a standalone tool or a “virtual worker,” this approach reframes AI as a system-level capability that augments human judgment, accelerates execution, and reimagines work from end to end. That shift requires organizations to map the value they want to create; design workflows that blend human oversight with AI-driven automation; and build the data, governance, and security foundations that make these systems trustworthy.

“My advice would be to expect some delays because you need to make sure you secure the data,” says Heidi Hough, VP for North America aftermarket at Valmont. “As you think about commercializing or operationalizing any piece of using AI, if you start from ground zero and have governance at the forefront, I think that will help with outcomes.”

Early adopters are already showing what this looks like in practice: starting with low-risk operational use cases, shaping data into tightly scoped enclaves, embedding governance into everyday decision-making, and empowering business leaders, not just technologists, to identify where AI can create measurable impact. The result is a new blueprint for AI maturity grounded in reengineering how modern enterprises operate.

“Optimization is really about doing existing things better, but reimagination is about discovering entirely new things that are worth doing,” says Hung.

Watch the webcast.

This webcast is produced in partnership with Concentrix.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

CFO: Brands Rarely Max Out Meta Ads

Abir Syed is an accountant turned marketer turned chief financial officer. He says ecommerce marketing success largely depends on creative volume, and few merchants have exhausted any channel, much less Meta.

Abir is co-founder of UpCounting, an accounting and fractional CFO firm in Montréal, Canada. In our recent conversation, he shared common financial mistakes of merchants, key metrics to monitor, and, yes, how to grow ad revenue on Meta.

The entire audio of our conversation is embedded below. The transcript is edited for clarity and length.

Eric Bandholz: Who are you and what do you do?

Abir Syed: I am the co-founder of UpCounting, an accounting and fractional chief financial officer firm focused on ecommerce. We handle everything from basic bookkeeping and transactional work to high-level needs, such as due diligence, back-office implementations, cash flow forecasting, and financial modeling.

I am also a certified public accountant and previously ran both an ecommerce brand and a marketing agency. Most finance professionals lack hands-on experience in advertising or customer acquisition, but I have lived those challenges, and that background significantly shapes how I advise founders.

Marketing is usually an ecommerce brand’s most significant expense; understanding it is essential for providing meaningful financial guidance. So we structure our reporting, dashboards, and forecasting around the realities of ecommerce operations — not just accounting accuracy but actionable insights tied to contribution margin, customer behavior, marketing performance, and growth strategy.

Bandholz: What is the most common financial mistake founders make?

Syed: I see three major issues repeatedly. First, many founders track the wrong numbers. They monitor revenue or look at profit once a month, but rarely examine contribution margin or cash flow. Contribution margin is often ignored entirely, leading to major blind spots. Top-line revenue means little without understanding the economics underneath.

Second, operators often misunderstand what is required to enable growth. I am frequently asked to review struggling ad accounts. A recurring issue is underinvesting in creative. Founders try to force growth by pushing return on ad spend harder, rather than improving the creative foundation required to scale spend while maintaining healthy acquisition costs.

Third, omnichannel brands frequently fail to separate channel performance. I see profit and loss statements with a single cost of goods sold line combining, say, Shopify, Amazon, and wholesale. Blending everything prevents founders from seeing how each channel is truly performing. Wholesale, for instance, operates on a very different cash cycle.

Bandholz: How often should operators review their financials?

Syed: It depends on the business’s size, complexity, and growth goals.

Most operators should review key historical metrics weekly — cash flow, expenses, and anything unusual moving through the business. A weekly cadence helps identify problems early.

More detailed reporting, such as margin and channel breakdowns, is usually best reviewed monthly. That interval provides cleaner data and enough distance to spot trends rather than reacting to noise.

The most overlooked piece is forecasting. Few brands build forward-looking financial models because it is difficult, yet essential for aggressive growth. Forecasting helps you understand the implications of scaling. Conservative operators can get away without it, but brands pushing hard need projections. Too many founders grow quickly with no plan, no modeling, and no clarity on future cash needs.

Bandholz: How do you decide if a marketing channel is maxed out?

Syed: It is difficult to know with total certainty, but in most cases, brands have not truly saturated a channel, especially Meta. There is usually far more room available than teams realize.

I often compare similar brands in the same category. One might spend $200,000 a month on Meta while also allocating resources to podcasts, TikTok, affiliates, and other channels. Another in the same space might spend $200,000 a day on Meta. They often have similar products, audiences, and brand quality. The difference is creative volume. The larger spender produces an enormous amount of fresh creative, while the other is effectively using a strategy from years ago.

Most brands have not come close to saturating Meta. They are simply underfunding creative strategy.

Increasing creative volume opens new audience pockets and helps find additional winning ads. If the creative that got you to $200,000 in monthly sales has plateaued, you must increase output to climb further. The more the creative volume, the higher the revenue. The pace depends on profitability, reinvestment capacity, creative quality, and a bit of luck.

Working with a media-buying agency that also produces creative can cost upwards of $7,000 per month, ideally under 10% of ad spend. Smaller brands may temporarily spend as much as 30%.

Bandholz: How should brands budget for bookkeeping?

Syed: Smaller brands face a minimum cost for competent bookkeeping. Hiring in-house rarely makes sense until the company is very large. A Shopify-only brand doing $1–5 million annually should expect to spend $2,000-$3,000 per month. Cheaper options exist, but the trade-off is often lower accuracy and weaker communication.

The challenge is that many founders cannot discern whether financial data is clean. It is similar to hiring an internet security expert when you lack technical knowledge — you might overlook major issues until something breaks. We have onboarded many clients who tried cheaper options, only to find their data was consistently incorrect.

To scale aggressively or make data-driven decisions, you need accurate, timely financials and guidance on interpreting them.

Once a brand surpasses roughly $5 million in annual sales, bookkeeping for multiple sales channels typically costs $5,000 to $8,000 per month.

Bandholz: Where can people support you, hire you, follow you?

Abir: Our site is UpCounting.com. I’m on LinkedIn, Instagram, and X.

YouTube Shorts Algorithm May Now Favor Fresh Over Evergreen via @sejournal, @MattGSouthern

YouTube appears to have changed how it recommends Shorts, according to analysts who work with some of the platform’s largest channels. The shift reportedly began in mid-September and deprioritizes videos older than roughly 30 days, favoring more recent uploads.

Mario Joos, a retention strategist who works with MrBeast, Stokes Twins, and Alan’s Universe, first identified the pattern after weeks of trying to explain a broad dip in performance across his clients. Dot Esports reports that Joos analyzed data across channels with 100 million to one billion monthly views and found a consistent drop in impressions for older Shorts.

What The Data Shows

Joos says YouTube has “changed the short-form content algorithm for the worse.” His analysis identified a threshold around 28-30 days. Shorts older than that window now receive far fewer impressions than they did before mid-September.

The pattern wasn’t immediately obvious in channel-wide analytics because newer content masked the decline. Only after filtering specifically for Shorts posted before the 30-day cutoff did the picture become clear.

Joos posted a graph detailing the drop-off for seven major Shorts channels, though he withheld their names for client sensitivity. Every chart showed the same moment: around September, older Shorts’ view counts dropped sharply and stayed far lower than before.

He described the change as “the flattening.” In his view, YouTube is pushing creators toward high-volume uploads at the expense of quality. Joos says he understands this approach from a corporate standpoint as a competitive response to TikTok, but warns it disproportionately affects creators who depend on their Shorts income.

Joos is explicit about his uncertainty. He calls this “a carefully constructed working theory and not a confirmed fact.” Some commenters on his analysis note they have not experienced similar drops on their channels. Others corroborate his findings.

Creators Confirm The Pattern

Tim Chesney, a creator with two billion lifetime views across his channels, confirmed the pattern on X. He wrote:

“Can confirm this is true. 2B views on this chart, and in September all of the evergreen videos simply tanked. I think pushing fresh content makes sense, but when you think about it, it makes investing into your content and spending time improving it, irrelevant.”

Chesney argues that the shift pushes creators to “produce more instead of better.” He warned that if the trend continues, YouTube will become a “trash bin” of low-effort content similar to what he sees on TikTok.

This echoes concerns from earlier in the year. In August, multiple creators documented synchronized view drops that appeared related to separate platform modifications. Gaming channel Bellular News documented precipitous declines in desktop viewership starting August 13, though that change appeared related to how YouTube counted views from browsers with ad-blocking software.

The September Shorts shift appears to be a distinct change affecting the recommendation algorithm rather than view counting methodology.

The Evergreen Value Proposition

For years, the case for video content has rested on compounding value. Unlike trend-dependent posts that fade quickly, evergreen videos continue generating views and revenue long after publication. One production investment pays off across months or years.

This model has been central to how creators and businesses justify video investment. A tutorial published today should still attract viewers next year. A how-to guide should compound views as search demand persists.

A recency-focused algorithm undermines that math. If older Shorts stop generating impressions after 30 days, the value equation changes. Creators would need to publish continuously to maintain visibility, shifting resources from quality to quantity.

The economics become punishing. Instead of building a library that works while you sleep, creators face a treadmill where last month’s content stops contributing. Revenue becomes dependent on constant production rather than accumulated assets.

The Broader Context

The reported Shorts change follows a familiar pattern for anyone who has watched Google Search evolve. Freshness signals have long played a role in ranking, sometimes appearing to override comprehensive, well-researched content.

For SEO professionals, this matters beyond YouTube. Video strategy has often been pitched as a hedge against organic search volatility. As AI Overviews and zero-click results reduce traffic from traditional search, YouTube has represented an alternative channel with different dynamics.

If YouTube is applying similar freshness-over-quality logic, that changes the risk calculus. Practitioners evaluating where to invest their content resources may find the same frustrations emerging across both platforms.

This also reflects a broader pattern in how Google communicates with creators. YouTube’s Creator Liaison position exists to bridge the gap between platform and creators, but analysts and creators consistently report limited transparency about algorithm changes. The company rarely confirms or explains modifications until long after creators have identified them through their own data analysis.

Why This Matters

The value proposition of evergreen Shorts depends on long-tail performance. A shift toward recency-based ranking would require higher publishing frequency to maintain the same visibility.

Practitioners frustrated with Google Search volatility may find similar dynamics emerging on YouTube. The promise of a stable alternative channel looks less reliable if algorithm changes can abruptly devalue your content library.

This also affects how you advise clients considering video investment. The traditional pitch of “build once, earn forever” requires qualification if evergreen content has an effective shelf life of 30 days.

What To Do Now

If you publish Shorts, check your analytics for view declines on content older than 30 days. Compare September 2025 performance against earlier months. Look specifically at videos that previously showed steady long-tail performance.
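If you can export that data to a CSV, a rough pass might look like the sketch below. The file name and the video_id, publish_date, date, and views columns are assumptions about your export, not an official YouTube report format.

# Rough check for a drop in long-tail Shorts views, assuming a hypothetical CSV
# export with one row per video per day: video_id, publish_date, date, views.
import csv
from datetime import date, datetime

CUTOFF = date(2025, 9, 15)  # approximate date the reported change began

def parse(value: str) -> date:
    return datetime.strptime(value, "%Y-%m-%d").date()

views = {"before": 0, "after": 0}
days = {"before": set(), "after": set()}

with open("shorts_daily_views.csv", newline="") as f:
    for row in csv.DictReader(f):
        day = parse(row["date"])
        age_days = (day - parse(row["publish_date"])).days
        if age_days <= 30:
            continue  # only count videos past the ~30-day window
        bucket = "before" if day < CUTOFF else "after"
        views[bucket] += int(row["views"])
        days[bucket].add(day)

for bucket in ("before", "after"):
    if days[bucket]:
        avg = views[bucket] / len(days[bucket])
        print(f"Avg daily views on 30+ day-old Shorts ({bucket} cutoff): {avg:,.0f}")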

The pattern Joos identified spans channels of very different sizes and categories. That breadth suggests a platform-level change rather than isolated performance issues. Whether YouTube acknowledges it or not, the data these analysts are reporting points to a shift worth monitoring closely.

Looking Ahead

YouTube hasn’t confirmed any changes to Shorts ranking. Without official documentation, these remain analyst observations and creator reports.

During Google’s Q3 earnings call, Philipp Schindler noted that recommendation systems are “driving robust watch time growth” and that Gemini models are enabling “further discovery improvement.” The company didn’t specify how these improvements affect content distribution or whether recency now plays a larger role in recommendations.



PPC Pulse: AI Max Insights, Cyber Monday Trends & A New Google Asset via @sejournal, @brookeosmundson

The conversations shaping PPC this week focused on how AI interprets intent, how holiday demand played out across Shopping and Performance Max, and how Google is adding more automated language directly into ads.

Google shared more clarity around AI Max, while Adalysis shared AI Max match type behavior, retail analysts broke down early Cyber Monday performance trends, and a potential new Google automated ad asset surfaced that raises questions about brand control.

Here is what stands out for advertisers this week and where you should pay attention.

AI Max Clarifications & New Insights On Match Types

The conversation around AI Max is not slowing down.

A YouTube short circulating this week highlighted Google reaffirming a key message. Match types still serve a purpose, even as AI takes on more interpretation of intent.

This also aligns with a LinkedIn post from two weeks ago where Google Ads Liaison, Ginny Marvin, clarified some misconceptions around the use and functionality of AI Max. Specifically, around:

  • What AI Max is designed to do.
  • If AI Max repackages existing features.
  • What users should expect based on their current keyword match type setup.
  • How to measure incremental lift.
Screenshot taken by author from LinkedIn, December 2025

The post got a lot of chatter in the comments, most notably a rebuttal from Brad Geddes, who stated:

We’re seeing many instances of AI max matching to exact match keywords or exact match variants. So when you look at your totals, the AI max column is a mixture of the AI max matches along with search terms your exact match keywords would have matched to if AI max didn’t exist.

This led Adalysis to publish a thoughtful breakdown of search term behavior within AI Max. The post shows clear examples where the model expands into adjacent intent that still feels relevant, but not necessarily tied to the exact keyword chosen.

This mirrors what many practitioners are already seeing. Search terms look broader. Relevance varies. The model relies on intention, not precision, which shifts how advertisers think about coverage.

Why This Matters For Advertisers

The bigger takeaway here is that your structure still steers the model. AI Max may evaluate intent more flexibly, but it is not inventing direction on its own.

It relies on the signals you set through match types, keyword groupings, and the guardrails you place around your campaigns. When advertisers downplay match types or assume AI will sort everything out, query quality usually becomes harder to manage.

A thoughtful keyword strategy gives the model clearer boundaries to work within. It also helps you understand why certain queries show up and how the system interpreted them.

The more intentional your structure, the more predictable your outcomes. This is the difference between AI supporting your strategy and AI creating a strategy for you.

Cyber Monday PPC Trends Across Shopping And PMax

Cyber Monday data and insights came in quickly this year. Optmyzr shared performance highlights from accounts it manages, showing steady results and more predictable cost patterns than many expected.

Some of its main findings included:

  • Brands spent more YoY to stay visible, even though impressions declined.
  • Clicks and CTR increased YoY.
  • Early conversion data showed decreased ROAS and increased CPA, though Optmyzr noted these figures aren’t final.

Optmyzr reiterated that they would share more final details around conversions and ROAS at a later time due to conversion lag.

Mike Ryan also reviewed more than 2.5 million euros in Black Friday spend across PMax and Shopping for retailers and reported noticeable differences from previous years. Some of his findings were similar to Optmyzr’s, including that advertisers spent 31% more while average order value (AOV) decreased 6%.

Essentially, advertiser spend efficiency decreased significantly YoY.

As he observed hourly trend data, he noted that revenue peaked during the early evening hours, and he advocated keeping budgets healthy throughout the day to capitalize on that intent.

Lastly, he found unique competition up 12%, and confirmed that Amazon still runs Shopping ads in Europe (it stopped running them in the United States earlier this year).

Why This Matters For Advertisers

The data tells a consistent story. Attention is still there, but it is more expensive to earn. Optmyzr’s numbers show higher spend year over year, even as impressions dipped, which reinforces that visibility continues to cost more. Clicks and CTR were up across both ecommerce and lead gen, which signals that people were still shopping and comparing options. The interest is not gone. The price of reaching that interest simply climbed.

The bigger takeaway for advertisers is that strong engagement does not solve the efficiency problem. Costs rose across the board, which puts even more pressure on the post-click experience. When attention is not the constraint anymore, landing page clarity, offer strength, and conversion flow become the real differentiators. The accounts that invested in those areas will feel less of the margin squeeze that defined this year’s shopping window.

New Automated Ad Asset Appears In Google Ads

A new automated asset gained attention this week when Anthony Higman shared a screenshot showing Google testing a “What People Are Saying” asset.

Screenshot taken by author from LinkedIn, December 2025

The asset included AI-generated summary text that looked more like a sentiment recap than a traditional review snippet. What stood out is that the text did not appear to be pulled from the advertiser’s site or from structured reviews. It appeared to be generated by Google, possibly based on store ratings and reviews.

This is another example of Google introducing language directly into ads, even before advertisers get official documentation or a clear explanation of how the text is produced. The extension reads confidently, but the source of the claims is not obvious.

That has already sparked discussion about accuracy, oversight, and how much creative control advertisers may lose as automated assets continue to expand.

Why This Matters For Advertisers

This asset signals that Google is continuing to explore new ways to surface AI-generated supporting text in ads. That makes oversight more important, simply because advertisers may see language that does not come directly from their own assets.

While the goal is to enhance relevance and provide helpful context to users, it also means brands should keep an eye on auto-applied assets to ensure the messaging aligns with how they want to show up in search. A quick review process can go a long way in avoiding surprises and keeping ad copy consistent with your broader strategy.

Theme Of The Week: Context Shapes Performance

Across all three updates, the common thread is how context influences outcomes.

AI Max decisions depend heavily on the structure you set. Cyber Monday performance reflected a market where attention remained strong but came at a higher cost, putting more weight on what happens after the click. The new automated extension shows Google continuing to experiment with ways to add context inside ads.

Together, these updates point to a simple reality. The more intentional you are with structure, creative, and user experience, the more predictable your results become, even as automation takes on a larger role.


Complete Crawler List For AI User-Agents [Dec 2025] via @sejournal, @vahandev

AI visibility plays a crucial role for SEOs, and this starts with controlling AI crawlers. If AI crawlers can’t access your pages, you’re invisible to AI discovery engines.

On the flip side, unmonitored AI crawlers can overwhelm servers with excessive requests, causing crashes and unexpected hosting bills.

User-agent strings are essential for controlling which AI crawlers can access your website, but official documentation is often outdated, incomplete, or missing entirely. So, we curated a verified list of AI crawlers from our actual server logs as a useful reference.

Every user-agent is validated against official IP lists when available, ensuring accuracy. We will maintain and update this list to catch new crawlers and changes to existing ones.

The Complete Verified AI Crawler List (December 2025)

For each crawler below: its purpose, its crawl rate of SEJ (pages/hour), whether an official verified IP list is available, an example robots.txt block, and the complete user agent string.

GPTBot
Purpose: AI training data collection for GPT models (ChatGPT, GPT-4o).
Crawl rate of SEJ (pages/hour): 100.
Verified IP list: Official IP list.
Robots.txt example:
User-agent: GPTBot
Allow: /
Disallow: /private-folder
Complete user agent: Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; GPTBot/1.3; +https://openai.com/gptbot)

ChatGPT-User
Purpose: AI agent for real-time web browsing when users interact with ChatGPT.
Crawl rate of SEJ (pages/hour): 2400.
Verified IP list: Official IP list.
Robots.txt example:
User-agent: ChatGPT-User
Allow: /
Disallow: /private-folder
Complete user agent: Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko); compatible; ChatGPT-User/1.0; +https://openai.com/bot

OAI-SearchBot
Purpose: AI search indexing for ChatGPT search features (not for training).
Crawl rate of SEJ (pages/hour): 150.
Verified IP list: Official IP list.
Robots.txt example:
User-agent: OAI-SearchBot
Allow: /
Disallow: /private-folder
Complete user agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36; compatible; OAI-SearchBot/1.3; +https://openai.com/searchbot

ClaudeBot
Purpose: AI training data collection for Claude models.
Crawl rate of SEJ (pages/hour): 500.
Verified IP list: Official IP list.
Robots.txt example:
User-agent: ClaudeBot
Allow: /
Disallow: /private-folder
Complete user agent: Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; ClaudeBot/1.0; +claudebot@anthropic.com)

Claude-User
Purpose: AI agent for real-time web access when Claude users browse.
Crawl rate of SEJ (pages/hour): under 10.
Verified IP list: Not available.
Robots.txt example:
User-agent: Claude-User
Disallow: /sample-folder
Complete user agent: Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; Claude-User/1.0; +Claude-User@anthropic.com)

Claude-SearchBot
Purpose: AI search indexing for Claude search capabilities.
Crawl rate of SEJ (pages/hour): under 10.
Verified IP list: Not available.
Robots.txt example:
User-agent: Claude-SearchBot
Allow: /
Disallow: /private-folder
Complete user agent: Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; Claude-SearchBot/1.0; +https://www.anthropic.com)

Google-CloudVertexBot
Purpose: AI agent for Vertex AI Agent Builder (site owners’ request only).
Crawl rate of SEJ (pages/hour): under 10.
Verified IP list: Official IP list.
Robots.txt example:
User-agent: Google-CloudVertexBot
Allow: /
Disallow: /private-folder
Complete user agent: Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/141.0.7390.122 Mobile Safari/537.36 (compatible; Google-CloudVertexBot; +https://cloud.google.com/enterprise-search)

Google-Extended
Purpose: Token controlling AI training usage of Googlebot-crawled content (not a separate crawler, so no crawl rate or user agent applies).
Robots.txt example:
User-agent: Google-Extended
Allow: /
Disallow: /private-folder

Gemini-Deep-Research
Purpose: AI research agent for Google Gemini’s Deep Research feature.
Crawl rate of SEJ (pages/hour): under 10.
Verified IP list: Official IP list.
Robots.txt example:
User-agent: Gemini-Deep-Research
Allow: /
Disallow: /private-folder
Complete user agent: Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; Gemini-Deep-Research; +https://gemini.google/overview/deep-research/) Chrome/135.0.0.0 Safari/537.36

Google
Purpose: Gemini’s chat when a user asks to open a webpage.
Crawl rate of SEJ (pages/hour): under 10.
Complete user agent: Google

Bingbot
Purpose: Powers Bing Search and Bing Chat (Copilot) AI answers.
Crawl rate of SEJ (pages/hour): 1300.
Verified IP list: Official IP list.
Robots.txt example:
User-agent: BingBot
Allow: /
Disallow: /private-folder
Complete user agent: Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm) Chrome/116.0.1938.76 Safari/537.36

Applebot-Extended
Purpose: Doesn’t crawl but controls how Apple uses Applebot data.
Crawl rate of SEJ (pages/hour): under 10.
Verified IP list: Official IP list.
Robots.txt example:
User-agent: Applebot-Extended
Allow: /
Disallow: /private-folder
Complete user agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.4 Safari/605.1.15 (Applebot/0.1; +http://www.apple.com/go/applebot)

PerplexityBot
Purpose: AI search indexing for Perplexity’s answer engine.
Crawl rate of SEJ (pages/hour): 150.
Verified IP list: Official IP list.
Robots.txt example:
User-agent: PerplexityBot
Allow: /
Disallow: /private-folder
Complete user agent: Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; PerplexityBot/1.0; +https://perplexity.ai/perplexitybot)

Perplexity-User
Purpose: AI agent for real-time browsing when Perplexity users request information.
Crawl rate of SEJ (pages/hour): under 10.
Verified IP list: Official IP list.
Robots.txt example:
User-agent: Perplexity-User
Allow: /
Disallow: /private-folder
Complete user agent: Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; Perplexity-User/1.0; +https://perplexity.ai/perplexity-user)

Meta-ExternalAgent
Purpose: AI training data collection for Meta’s LLMs (Llama, etc.).
Crawl rate of SEJ (pages/hour): 1100.
Verified IP list: Not available.
Robots.txt example:
User-agent: meta-externalagent
Allow: /
Disallow: /private-folder
Complete user agent: meta-externalagent/1.1 (+https://developers.facebook.com/docs/sharing/webmasters/crawler)

Meta-WebIndexer
Purpose: Used to improve Meta AI search.
Crawl rate of SEJ (pages/hour): under 10.
Verified IP list: Not available.
Robots.txt example:
User-agent: Meta-WebIndexer
Allow: /
Disallow: /private-folder
Complete user agent: meta-webindexer/1.1 (+https://developers.facebook.com/docs/sharing/webmasters/crawler)

Bytespider
Purpose: AI training data for ByteDance’s LLMs for products like TikTok.
Crawl rate of SEJ (pages/hour): under 10.
Verified IP list: Not available.
Robots.txt example:
User-agent: Bytespider
Allow: /
Disallow: /private-folder
Complete user agent: Mozilla/5.0 (Linux; Android 5.0) AppleWebKit/537.36 (KHTML, like Gecko) Mobile Safari/537.36 (compatible; Bytespider; https://zhanzhang.toutiao.com/)

Amazonbot
Purpose: AI training for Alexa and other Amazon AI services.
Crawl rate of SEJ (pages/hour): 1050.
Verified IP list: Not available.
Robots.txt example:
User-agent: Amazonbot
Allow: /
Disallow: /private-folder
Complete user agent: Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; Amazonbot/0.1; +https://developer.amazon.com/support/amazonbot) Chrome/119.0.6045.214 Safari/537.36

DuckAssistBot
Purpose: AI search indexing for DuckDuckGo search engine.
Crawl rate of SEJ (pages/hour): 20.
Verified IP list: Official IP list.
Robots.txt example:
User-agent: DuckAssistBot
Allow: /
Disallow: /private-folder
Complete user agent: DuckAssistBot/1.2; (+http://duckduckgo.com/duckassistbot.html)

MistralAI-User
Purpose: Mistral’s real-time citation fetcher for “Le Chat” assistant.
Crawl rate of SEJ (pages/hour): under 10.
Verified IP list: Not available.
Robots.txt example:
User-agent: MistralAI-User
Allow: /
Disallow: /private-folder
Complete user agent: Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; MistralAI-User/1.0; +https://docs.mistral.ai/robots)

Webz.io
Purpose: Data extraction and web scraping used by other AI training companies. Formerly known as Omgili.
Crawl rate of SEJ (pages/hour): under 10.
Verified IP list: Not available.
Robots.txt example:
User-agent: webzio
Allow: /
Disallow: /private-folder
Complete user agent: webzio (+https://webz.io/bot.html)

Diffbot
Purpose: Data extraction and web scraping used by companies all over the world.
Crawl rate of SEJ (pages/hour): under 10.
Verified IP list: Not available.
Robots.txt example:
User-agent: Diffbot
Allow: /
Disallow: /private-folder
Complete user agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1.2) Gecko/20090729 Firefox/3.5.2 (.NET CLR 3.5.30729; Diffbot/0.1; +http://www.diffbot.com)

ICC-Crawler
Purpose: AI and machine learning data collection.
Crawl rate of SEJ (pages/hour): under 10.
Verified IP list: Not available.
Robots.txt example:
User-agent: ICC-Crawler
Allow: /
Disallow: /private-folder
Complete user agent: ICC-Crawler/3.0 (Mozilla-compatible; ; https://ucri.nict.go.jp/en/icccrawler.html)

CCBot
Purpose: Open-source web archive used as training data by multiple AI companies.
Crawl rate of SEJ (pages/hour): under 10.
Verified IP list: Official IP list.
Robots.txt example:
User-agent: CCBot
Allow: /
Disallow: /private-folder
Complete user agent: CCBot/2.0 (https://commoncrawl.org/faq/)

The user-agent strings above have all been verified against Search Engine Journal server logs.
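If you want to confirm how your own robots.txt treats any of these user-agent tokens, Python’s built-in urllib.robotparser can evaluate the live file for you. This is a minimal sketch, assuming your site lives at the hypothetical https://example.com; swap in your own domain and the tokens you care about.

# check_robots.py - confirm how your robots.txt treats given AI crawler tokens
from urllib.robotparser import RobotFileParser

SITE = "https://example.com"  # hypothetical site; use your own domain
TOKENS = ["ClaudeBot", "PerplexityBot", "CCBot"]  # user-agent tokens from the list above

parser = RobotFileParser()
parser.set_url(f"{SITE}/robots.txt")
parser.read()  # fetches and parses the live robots.txt

for token in TOKENS:
    allowed = parser.can_fetch(token, f"{SITE}/")
    print(f"{token}: {'allowed' if allowed else 'blocked'} for {SITE}/")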

Popular AI Agent Crawlers With Unidentifiable User Agent

We’ve found that the following didn’t identify themselves:

  • you.com.
  • ChatGPT’s agent Operator.
  • Bing’s Copilot chat.
  • Grok.
  • DeepSeek.

There is no way to track these crawlers’ visits, or block them from accessing your pages, other than by identifying their explicit IP addresses.

We set up a trap page (e.g., /specific-page-for-you-com/) and used the on-page chat to prompt you.com to visit it, allowing us to locate the corresponding visit record and IP address in our server logs. Below is the screenshot:

Screenshot by author, December 2025
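If you’d rather pull that visit out of the raw log than spot it by eye, a few lines of Python can filter it for you. This is a minimal sketch, assuming a standard Apache combined-format access.log and the same hypothetical trap path used in the example above; adjust both to match your setup.

# find_trap_hits.py - list every request to the trap page, with its IP and user agent
import re

LOG_FILE = "/var/log/apache2/access.log"   # common default location, as mentioned later in this article
TRAP_PATH = "/specific-page-for-you-com/"  # hypothetical trap page from the example above

# Apache "combined" format: IP ... "METHOD /path HTTP/x.x" status size "referer" "user agent"
line_re = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "(?:GET|POST|HEAD) ([^ "]+)[^"]*" \d+ \S+ "[^"]*" "([^"]*)"')

with open(LOG_FILE, encoding="utf-8", errors="replace") as log:
    for line in log:
        match = line_re.match(line)
        if match and TRAP_PATH in match.group(2):
            ip, path, user_agent = match.group(1), match.group(2), match.group(3)
            print(f"{ip}  {path}  {user_agent}")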

What About Agentic AI Browsers?

Unfortunately, AI browsers such as Comet or ChatGPT’s Atlas don’t differentiate themselves in the user agent string, so you can’t identify them in server logs; their visits blend in with normal users’ visits.

ChatGPT’s Atlas browser user agent string from server log records (Screenshot by author, December 2025)

This is disappointing for SEOs because, from a reporting point of view, tracking agentic browser visits to a website is important.

How To Check What’s Crawling Your Server

Depending on which hosting service you use, your hosting company may offer a user interface (UI) that makes it easy to access and look through server logs.

If your hosting doesn’t offer this, you can download the server log files (usually located at /var/log/apache2/access.log on Linux-based servers) via FTP, or ask your server support to send them to you.

Once you have the log file, you can view and analyze it in Google Sheets (if the file is in CSV format) or Screaming Frog’s log analyzer, or, if the file is less than 100 MB, you can try analyzing it with Gemini.
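If you prefer a scripted starting point, here is a minimal Python sketch that tallies how often some of the user agent tokens from the list above show up in an access.log. The log path is the default Apache location mentioned above, and the token list is only a sample you would extend.

# count_ai_crawlers.py - rough tally of AI crawler hits by user agent token
from collections import Counter

LOG_FILE = "/var/log/apache2/access.log"  # default location mentioned above; adjust for your server

# Sample of user agent tokens from the crawler list in this article; extend as needed
AI_TOKENS = ["ClaudeBot", "PerplexityBot", "bingbot", "Amazonbot", "meta-externalagent",
             "Bytespider", "DuckAssistBot", "MistralAI-User", "Diffbot", "CCBot"]

hits = Counter()
with open(LOG_FILE, encoding="utf-8", errors="replace") as log:
    for line in log:
        for token in AI_TOKENS:
            if token.lower() in line.lower():
                hits[token] += 1
                break  # count each request once, under the first token that matches

for token, count in hits.most_common():
    print(f"{token}: {count}")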

How To Verify Legitimate Vs. Fake Bots

Fake crawlers can spoof legitimate user agents to bypass restrictions and scrape content aggressively. For example, anyone can impersonate ClaudeBot from their laptop and initiate a crawl request from the terminal. In your server log, it will appear as though ClaudeBot is crawling your site:

curl -A 'Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; ClaudeBot/1.0; +claudebot@anthropic.com)' https://example.com

Verification helps save server bandwidth and prevents your content from being harvested illegally. The most reliable verification method you can apply is checking the request IP.

Check each requesting IP and see whether it matches one of the officially declared IPs listed above. If it does, you can allow the request; otherwise, block it.
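If you want to script that check, Python’s standard ipaddress module handles the range matching. This is a minimal sketch; the CIDR ranges below are documentation-only placeholders, so replace them with the ranges from the official IP lists linked above.

# verify_bot_ip.py - check whether a request IP falls inside officially published crawler ranges
import ipaddress

# Placeholder CIDR ranges - replace with the ranges from the official IP lists linked above
OFFICIAL_RANGES = [
    "203.0.113.0/24",   # documentation-only example range, not a real crawler range
    "198.51.100.0/24",  # documentation-only example range, not a real crawler range
]

def is_official_crawler_ip(request_ip: str) -> bool:
    """Return True if request_ip sits inside one of the published ranges."""
    ip = ipaddress.ip_address(request_ip)
    return any(ip in ipaddress.ip_network(cidr) for cidr in OFFICIAL_RANGES)

# Example: allow the request only when the IP checks out
if is_official_crawler_ip("203.0.113.42"):
    print("IP matches an official range - allow the request")
else:
    print("No match - treat the request as an impersonator and block it")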

Various types of firewalls can help you with this by allowlisting verified IPs (which lets legitimate bot requests pass through) while blocking all other requests that impersonate AI crawlers in their user agent strings.

For example, in WordPress, you can use the free Wordfence plugin to allowlist legitimate IPs from the official lists above and add custom blocking rules.

The allowlist rule is the stronger approach: it lets legitimate crawlers pass through and blocks any impersonating request that comes from a different IP.

However, please note that IP addresses can also be spoofed; when both the bot’s user agent and its IP are spoofed, you won’t be able to block it.

Conclusion: Stay In Control Of AI Crawlers For Reliable AI Visibility

AI crawlers are now part of our web ecosystem, and the bots listed here represent the major AI platforms currently indexing the web, although this list is likely to grow.

Check your server logs regularly to see what’s actually hitting your site, and make sure you don’t inadvertently block AI crawlers if visibility in AI search engines is important for your business. If you don’t want AI crawlers to access your content, block them via robots.txt using the user-agent name.

We’ll keep this list updated as new crawlers emerge and existing ones change, so we recommend bookmarking this URL or revisiting this article regularly to keep your AI crawler list up to date.


Featured Image: BestForBest/Shutterstock

SEO Pulse: Google Updates Console, Maps & AI Mode Flow

Google packed a lot into this week, with Search Console picking up AI-powered configuration, Maps loosening its real-name rule for reviews, and a new test nudging more people from AI Overviews into AI Mode.

Here’s what that means for you.

Google Search Console Tests AI-Powered Report Configuration

Google introduced an experimental AI feature in Search Console that lets you describe the report you want and have the tool build it for you.

The feature, announced in a Google blog post, lives inside the Search results Performance report. You can type something like “compare clicks from UK versus France,” and the system will set filters, comparisons, and metrics to match what it thinks you mean.

For now, the feature is limited to Search results data, while Discover, News, and video reports still work the way they always have. Google says it’s starting with “a limited set of websites” and will expand access based on feedback.

The update is about configuration, not new metrics. It can help you set up a table, but it will not change how you sort or export data, and it does not add separate reporting for AI Overviews or AI Mode.

Why SEOs Should Pay Attention

If you spend a lot of time rebuilding the same types of reports, this can save you some setup time. It’s easier to describe a comparison in one sentence than to remember which checkboxes and filters you used last month.

The tradeoff is that you still need to confirm what the AI actually did. When a view comes from a written request instead of a manual series of clicks, it’s easy for a small misinterpretation to slip through and show up in a deck or a client email.

This is not a replacement for understanding how your reports are put together. It also does nothing to answer a bigger question for SEO professionals about how much traffic is coming from Google’s AI surfaces.

What SEO Professionals Are Saying

On LinkedIn, independent SEO consultant Brodie Clark summed up the launch with:

“Whoa, Google Search Console just rolled out another gem: a new AI-powered configuration to analyse your search traffic. The new feature is designed to reduce the effort it takes for you to select, filter, and compare your data.”

He then walks through how it can apply filters, set comparisons, and pick metrics for common tasks.

Under the official Search Central post, one commenter joked about the gap between configuration and data:

“GSC: ‘Describe the dataview you want to see’ Me: ‘Show me how much traffic I receive from AI overviews and AI mode’ :’)”

The overall mood is that this is a genuine quality-of-life improvement, but many SEO professionals would still rather get first-class reporting for AI Overviews and AI Mode than another way to slice existing Search results data.

Read our full coverage: Google Adds AI-Powered Configuration To Search Console

Google Maps Reviews No Longer Require Real Names

Google Maps now lets people leave reviews under a custom display name and profile picture instead of their real Google Account name. The change rolled out globally and is documented in recent Google Maps updates.

You set this up in the Contributions section of your profile. Once you choose a display name and avatar, that identity appears on new reviews and can be applied to older ones if you edit them, while Google still ties everything back to a real account with a full activity history.

The change is more than cosmetic because review identity shapes how people interpret trust and intent when they scan a local business profile.

Why SEOs Should Pay Attention

Reviews remain one of the strongest local ranking signals, based on Whitespark’s Local Search Ranking Factors survey. When names turn into nicknames, it shifts how business owners and customers read that feedback.

For local businesses, it becomes harder to recognize reviewers at a glance, review audits feel more manual because names are less useful, and owners may feel they have less visibility into who is talking about them, even though Google still sees the underlying accounts.

If you manage local clients, you will likely spend time explaining that this doesn’t make reviews truly anonymous, and that review solicitation and response strategies still matter.

What Local SEO Professionals Are Saying

In a LinkedIn post, Darren Shaw, founder of Whitespark, tried to calm some of the panic:

“Hot take: Everyone is freaking out that anonymous Google reviews will cause a surge in fake review spam, but I don’t think so.”

He points out that anyone determined to leave fake reviews can already create throwaway accounts, and that:

“Anonymous display names ≠ anonymous accounts”

Google still sees device data, behavior patterns, and full contribution history. In his view, the bigger story is that this change lowers the barrier for honest feedback in “embarrassed consumer” categories like criminal defense, rehab, and therapy, where people do not want their real names in search results.

The comments add useful nuance. Curtis Boyd expects “an increase in both 5 star reviews for ‘embarrassed consumer industries’ and correspondingly – 1 star reviews, across all industries as google makes it easier to hide identity.”

Taken together, the thread suggests you should watch for changes in review volume and rating mix, especially in sensitive verticals, without assuming this update alone will cause a sudden spike in spam.

Read our full coverage: Google Maps Lets Users Post Reviews Using Nicknames

Google Tests Seamless AI Overviews To AI Mode Transition

Google is testing a new mobile flow that sends people straight from AI Overviews into AI Mode when they tap “Show more,” based on a post from Robby Stein, VP of Product for Google Search.

In the examples Google has shown, you see an AI Overview at the top of the results page. When you expand it, an “Ask anything” bar appears at the bottom, and typing into that bar opens AI Mode with your original query pulled into a chat thread.

The test is limited to mobile and to countries where AI Mode is already available, and Google hasn’t said how long it will run or when it might roll out more broadly.

Why SEOs Should Pay Attention

This test blurs the line between AI Overviews as a SERP feature and AI Mode as a separate product. If it sticks, someone who sees your content cited in an Overview has a clear path to keep asking follow-up questions inside AI Mode instead of scrolling down to organic results.

On mobile, where this is running first, the effect is stronger because screen space is tight. A prominent “Ask anything” bar at the bottom of the screen gives people an obvious option that doesn’t involve hunting for blue links underneath ads, shopping units, and other features.

If your pages show up in AI Overviews today, it’s worth watching mobile traffic and AI-related impressions so you have before-and-after data if this behavior expands.

What SEO Professionals Are Saying

In a widely shared LinkedIn post, Lily Ray, VP of SEO Strategy & Research at Amsive, wrote:

“Google announced today that they’ll be testing a new way for users to click directly into AI Mode via AI Overviews.”

She notes that many people will likely expect “Show more” to lead back to traditional results, not into a chat interface, and ties the test to the broader state of the results page, arguing that ads and new sponsored treatments are making it harder to find organic listings.

Ray’s most pointed line is:

“Compared to the current chaotic state of Google’s search results, AI Mode feels frictionless.”

Her view is that Google is making traditional search more cluttered while giving AI Mode a cleaner, easier experience.

Other SEO professionals in the comments give concrete examples. One notes that “the well hidden sponsored ads have gotten completely out of control lately,” describing a number one organic result that sits below “5–6 sponsored ads.” Another says they have “been working with SEO since 2007” and only recently had to pause before clicking on a result because they were not sure whether it was organic or an ad.

There’s also frustration with AI Mode’s limits. One commenter describes how the context window “just suddenly refreshes and forgets everything after about 10 prompts/turns,” which makes longer research sessions difficult even as the entry point gets smoother.

Overall, the thread reads as a warning that AI Mode may feel cleaner but also keeps people on Google, and that this test is one more step in nudging searchers toward that experience.

Read our full coverage: Google Connects AI Overviews To AI Mode On Mobile

Theme Of The Week: Google Tightens Its Grip On The Journey

All three updates are pulling in the same direction: More of the search journey happens inside Google’s own interfaces.

Search Console’s AI configuration keeps you in the Performance report longer by taking some of the work out of report setup. Maps nicknames make it easier for people to speak freely, but on a platform where Google defines how identity is presented. The AI Overviews to AI Mode test turns follow-up questions into a chat that runs on Google’s terms rather than yours.

There are real usability wins in all of this, but also fewer clear moments where a searcher is nudged off Google and onto your site.


Featured Image: Pixel-Shot/Shutterstock