AI Platform Founder Explains Why We Need To Focus On Human Behavior, Not LLMs via @sejournal, @theshelleywalsh

Google has been doing what it always does, which is to iterate constantly to keep its product the best it can be.

Large language models (LLMs) and generative AI chatbots are a new reality in SEO, and to keep up, Google is evolving its interface to bridge the divide between AI and search. What we should all remember, though, is that Google has already been integrating AI into its algorithms for years.

Continuing my IMHO series and speaking to experts to gain their valuable insights, I spoke with Ray Grieselhuber, CEO of Demand Sphere and organizer of Found Conference. We explored AI search vs. traditional search, grounding data, the influence of schema, and what it all means for SEO.

“There is not really any such thing anymore as traditional search versus AI search. It’s all AI search. Google pioneered AI search more than 10 years ago.”

Scroll to the end of this article if you want to watch the full interview.

Why Grounding Data Matters More Than The LLM Model

The conversation with Ray started with one of his recent posts on LinkedIn:

“It’s the grounding data that matters, far more than the model itself. The models will be trained to achieve certain results but, as always, the index/datasets are the prize.”

I asked him to expand on why grounding data is so important. Ray explained, “Unless something radically changes in how LLMs work, we’re not going to have infinite context windows. If you need up-to-date, grounded data, you need indexed data, and it has to come from somewhere.”

Earlier this year, Ray and his team analyzed ChatGPT’s citation patterns, comparing them to search results from both Google and Bing. Their research revealed that ChatGPT’s results overlap with Google search results about 50% of the time, compared to only 15-20% overlap with Bing.

“It’s been known that Bing has an historical relationship with OpenAI,” Ray expanded, “but they don’t have Google’s data, index size, or coverage. So eventually, you’re going to source Google data one way or another.”

He went on to say, “That’s what I mean by the index being the prize. Google still has a massive data and index advantage.”

Interestingly, when Ray first presented these findings at Brighton SEO in April, the response was mixed. “I had people who seemed appalled that OpenAI would be using Google results,” Ray recalled.

Maybe the anger stems from the wishful idea that AI would render Google irrelevant, but Google’s dataset remains central to search.

It’s All AI Search Now

Ray made another recent comment online about how people search:

“Humans are searchers, always have been, always will be. It’s just a question of the experience, behavior, and the tools they use. Focus on search as a primitive and being found and you can ignore pointless debates about what to call it.”

I asked him where he thinks SEOs go wrong in their approach to the introduction of GEO/LLM visibility, and Ray responded that the industry often has a dialectical tension.

“We have this weird tendency in our industry to talk about how something is either dead and dying. Or, this is the new thing and you have to just rush and forget everything that you learned up until now.”

Ray thinks what we should really be focusing on is human behavior:

“These things don’t make sense in the context of what’s happening overall because I always go back to what is the core instinctual human behavior? If you’re a marketer your job is to attract human attention through their search behavior and that’s really what matters.”

“The major question is what is the experience that’s going to mediate that human behavior and their attention mechanisms versus what you have to offer, you know, as a marketer.

“There is not really any such thing anymore as traditional search versus AI search. It’s all AI search. Google pioneered AI search more than 10 years ago. They’ve been doing it for the last 10 years and now for some reason everyone’s just figuring out that now it’s AI search.”

Ray concluded, “Human behavior is the constant; experiences evolve.”

Schema’s Role In LLM Visibility

I turned the conversation to schema to clarify just how useful it is for LLM visibility and if it has a direct impact on LLMs.

Ray’s analysis reveals the truth is nuanced. LLMs don’t directly process schema in their training data, but structured data can have some limited influence through retrieval layers when LLMs use search results as grounding data.

Ray explained that Google has essentially trained the entire internet to optimize its semantic understanding through schema markup. The reason they did this is not just for users.

“Google used Core Web Vitals to get the entire internet to optimize itself so that Google wouldn’t have to spend so much money crawling the internet, and they kind of did the same thing with building their semantic layer that enabled them to create an entire new level of richness in the results.”

Ray stressed that schema is only used as a hint, and that the question shouldn’t be “does this work or not – should we implement schema to influence results?” Instead, SEOs should focus on the impact on users and human behavior.
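For readers who haven’t worked with structured data, here is a minimal, hypothetical example of the kind of JSON-LD markup being discussed, expressed as a Python dict for illustration (the values are placeholders, not a recommendation for any particular implementation):

```python
import json

# Hypothetical schema.org Article markup; all values below are placeholders.
article_schema = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Example Article About AI Search",
    "author": {"@type": "Person", "name": "Jane Doe"},
    "datePublished": "2025-09-01",
    "publisher": {"@type": "Organization", "name": "Example Publisher"},
}

# This JSON string is what would sit inside a <script type="application/ld+json"> tag.
print(json.dumps(article_schema, indent=2))
```

Markup like this doesn’t guarantee anything; as Ray notes, it is one hint among many that feeds the semantic layer search engines build on top of the page.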

Attract Human Attention Through Search Behavior

Binary thinking – such as “SEO is dead” or “LLMs are the new SEO” – misses the reality that search behavior remains fundamentally unchanged. Humans are searchers who want to find information efficiently, and this underlying need remains constant.

Ray said that what really matters, and what underpins SEO, is attracting human attention through search behavior.

“I think people will be forced to become the marketers they should have been all along, instead of ignoring the user,” he predicted.

My prediction is that in a few years, we will look back on this time as a positive change. I think search will be better as a result of SEOs having to embrace marketing skills and become more creative.

Ray believes that we need to use our own data more, encourage a culture of experimenting with it, and learn from our users and customers. Broad studies are useful for direction, but not for execution.

“If you’re selling airline tickets, it doesn’t really matter how people are buying dog food,” he added.

An Industry Built For Change

Despite the disruption, Ray sees opportunity. SEOs are uniquely positioned to adapt.

“We’re researchers and builders by nature; that’s why this industry can embrace change faster than most,” he said.

Success in the age of AI-powered search isn’t about mastering new tools or chasing the latest optimization techniques. It’s about understanding how people search for information, what experiences they expect, and how to provide genuine value throughout their journey, principles that have always defined effective marketing.

He believes that some users will eventually experience AI exhaustion, returning to Google’s familiar search experience. But ultimately, people will navigate across both generative AI and traditional search. SEOs will have to meet them where they are.

It doesn’t matter what we call it. What matters is attracting attention through search behavior.

Watch the full video interview with Ray Grieselhuber below.

Thank you to Ray for offering his insights and being my guest on IMHO.


Featured Image: Shelley Walsh/Search Engine Journal

How to measure the returns on R&D spending

MIT Technology Review Explains: Let our writers untangle the complex, messy world of technology to help you understand what’s coming next. You can read more from the series here.

Given the draconian cuts to US federal funding for science, including the administration’s proposal to reduce the 2026 budgets of the National Institutes of Health by 40% and the National Science Foundation by 57%, it’s worth asking some hard-nosed money questions: How much should we be spending on R&D? How much value do we get out of such investments, anyway? To answer that, it’s important to look at both successful returns and investments that went nowhere.

Sure, it’s easy to argue for the importance of spending on science by pointing out that many of today’s most useful technologies had their origins in government-funded R&D. The internet, CRISPR, GPS—the list goes on and on. All true. But this argument ignores all the technologies that received millions in government funding and haven’t gone anywhere—at least not yet. We still don’t have DNA computers or molecular electronics. Never mind the favorite examples cited by contrarian politicians of seemingly silly or frivolous science projects (think shrimp on treadmills).

While cherry-picking success stories helps illustrate the glories of innovation and the role of science in creating technologies that have changed our lives, it provides little guidance for how much we should spend in the future—and where the money should go.

A far more useful approach to quantifying the value of R&D is to look at its return on investment (ROI). A favorite metric for stock pickers and PowerPoint-wielding venture capitalists, ROI weighs benefits versus costs. If applied broadly to the nation’s R&D funding, the same kind of thinking could help account for both the big wins and all the money spent on research that never got out of the lab.
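As a simple sketch (the studies discussed below use more elaborate definitions), the basic metric is:

$$\text{ROI} = \frac{\text{benefits} - \text{costs}}{\text{costs}}$$

Under this definition, $1 of spending that eventually produces $5 of benefit corresponds to an ROI of (5 − 1)/1 = 400%.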

The problem is that it’s notoriously difficult to calculate returns for science funding—the payoffs can take years to appear and often take a circuitous route, so the eventual rewards are distant from the original funding. (Who could have predicted Uber as an outcome of GPS? For that matter, who could have predicted that the invention of ultra-precise atomic clocks in the late 1940s and 1950s would eventually make GPS possible?) And forget trying to track the costs of countless failures or apparent dead ends.

But in several recent papers, economists have approached the problem in clever new ways, and though they ask slightly different questions, their conclusions share a bottom line: R&D is, in fact, one of the better long-term investments that the government can make.

This story is part of MIT Technology Review’s “America Undone” series, examining how the foundations of US success in science and innovation are currently under threat. You can read the rest here.

That might not seem very surprising. We’ve long thought that innovation and scientific advances are key to our prosperity. But the new studies provide much-needed details, supplying systematic and rigorous evidence for the impact that R&D funding, including public investment in basic science, has on overall economic growth.

And the magnitude of the benefits is surprising.

Bang for your buck

In “A Calculation of the Social Returns to Innovation,” Benjamin Jones, an economist at Northwestern University, and Lawrence Summers, a Harvard economist and former US Treasury secretary, calculate the effects of the nation’s total R&D spending on gross domestic product and our overall standard of living. They’re taking on the big picture, and it’s ambitious because there are so many variables. But they are able to come up with a convincing range of estimates for the returns, all of them impressive.

On the conservative end of their estimates, says Jones, investing $1 in R&D yields about $5 in returns—defined in this case as additional GDP per person (basically, how much richer we become). Change some of the assumptions—for example, by attempting to account for the value of better medicines and improved health care, which aren’t fully captured in GDP—and you get even larger payoffs.

While the $5 return is at the low end of their estimates, it’s still “a remarkably good investment,” Jones says. “There aren’t many where you put in $1 and get $5 back.”

That’s the return for the nation’s overall R&D funding. But what do we get for government-funded R&D in particular? Andrew Fieldhouse, an economist at Texas A&M, and Karel Mertens at the Federal Reserve Bank of Dallas looked specifically at how changes in public R&D spending affect the total factor productivity (TFP) of businesses. A favorite metric of economists, TFP is driven by new technologies and innovative business know-how—not by adding more workers or machines—and is the main driver of the nation’s prosperity over the long term.
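As a rough sketch of how economists typically back out TFP (a simplified growth-accounting identity, not the paper’s exact methodology):

$$\frac{\Delta A}{A} = \frac{\Delta Y}{Y} - \alpha\,\frac{\Delta K}{K} - (1-\alpha)\,\frac{\Delta L}{L}$$

Here $Y$ is output, $K$ is capital, $L$ is labor, $\alpha$ is capital’s share of income, and $A$ is TFP: the portion of output growth not explained by adding more machines or workers.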

The economists tracked changes in R&D spending at five major US science funding agencies over many decades to see how the shifts eventually affected private-sector productivity. They found that the government was getting a huge bang for its nondefense R&D buck.

The benefits begin kicking in after around five to 10 years and often have a long-lasting impact on the economy. Nondefense public R&D funding has been responsible for 20% to 25% of all private-sector productivity growth in the country since World War II, according to the economists. It’s an astonishing number, given that the government invests relatively little in nondefense R&D. For example, its spending on infrastructure, another contributor to productivity growth, has been far greater over those years.

The large impact of public R&D investments also provides insight into one of America’s most troubling economic mysteries: the slowdown in productivity growth that began in the 1970s, which has roiled the country’s politics as many people face stunted living standards and limited financial prospects. Their research, says Fieldhouse, suggests that as much as a quarter of that slowdown was caused by a decline in public R&D funding that happened roughly over the same time.

After reaching a high of 1.86% of GDP in 1964, federal R&D spending began dropping. Starting in the early 1970s, TFP growth also began to decline, from above 2% a year in the late 1960s to somewhere around 1% since the 1970s (with the exception of a rise during the late 1990s), roughly tracking the spending declines with a lag of a few years.

If in fact the productivity slowdown was at least partially caused by a drop in public R&D spending, it’s evidence that we would be far richer today if we had kept up a higher level of science investment. And it also flags the dangers of today’s proposed cuts. “Based on our research,” says Fieldhouse, “I think it’s unambiguously clear that if you actually slash the budget of the NIH by 40%, if you slash the NSF budget by 50%, there’s going to be a deceleration in US productivity growth over the next seven to 10 years that will be measurable.”

Out of whack

Though the Trump administration’s proposed 2026 budget would slash science budgets to an unusual degree, public funding of R&D has actually been in slow decline for decades. Federal funding of science is at its lowest rate in the last 70 years, accounting for only around 0.6% of GDP.

Even as public funding has dropped, business R&D investments have steadily risen. Today businesses spend far more than the government; in 2023, companies invested about $700 billion in R&D while the US government spent $172 billion, according to data from the NSF’s statistical agency. You might think, Good—let companies do research. It’s more efficient. It’s more focused. Keep the government out of it.

But there is a big problem with that argument. Publicly funded research, it turns out, tends to lead to relatively more productivity growth over time because it skews more toward fundamental science than the applied work typically done by companies.

In a new working paper called “Public R&D Spillovers and Productivity Growth,” Arnaud Dyèvre, an assistant professor of economics at HEC Paris, documents the broad and often large impacts of so-called knowledge spillovers—the benefits that flow to others from work done by the original research group. Dyèvre found that the spillovers of publicly funded R&D have three times more impact on productivity growth across businesses and industries than those from private R&D funding.

The findings are preliminary, and Dyèvre is still updating the research—much of which he did as a postdoc at MIT—but he says it does suggest that the US “is underinvesting in fundamental R&D,” which is heavily funded by the government. “I wouldn’t be able to tell you exactly which percentage of R&D in the US needs to be funded by the government or what percent needs to be funded by the private sector. We need both,” he says. But, he adds, “the empirical evidence” suggests that “we’re out of balance.”

The big question

Getting the balance of funding for fundamental science and applied research right is just one of the big questions that remain around R&D funding. In mid-July, Open Philanthropy and the Alfred P. Sloan Foundation, both nonprofit organizations, jointly announced that they planned to fund a five-year “pop-up journal” that would attempt to answer many of the questions still swirling around how to define and optimize the ROI of research funding.

“There is a lot of evidence consistent with a really high return to R&D, which suggests we should do more of it,” says Matt Clancy, a senior program officer at Open Philanthropy. “But when you ask me how much more, I don’t have a good answer. And when you ask me what types of R&D should get more funding, we don’t have a good answer.”

Pondering such questions should keep innovation economists busy for the next several years. But there is another mystifying piece of the puzzle, says Northwestern’s Jones. If the returns on R&D investments are so high—the kind that most venture capitalists or investors would gladly take—why isn’t the government spending more?

Jones, who served as a senior economic advisor in the Obama administration, says discussions over R&D budgets in Washington are often “a war of anecdotes.” Science advocates cite the great breakthroughs that resulted from earlier government funding, while budget hawks point to seemingly ludicrous projects or spectacular failures. Both have plenty of ammunition. “People go back and forth,” says Jones, “and it doesn’t really lead to anywhere.”

The policy gridlock is rooted in the very nature of fundamental research. Today’s science will lead to great advances. And there will be countless failures; a lot of money will be wasted on fruitless experiments. The problem, of course, is that when you’re deciding to fund new projects, it’s impossible to predict what the outcome will be, even in the case of odd, seemingly silly science. Guessing just what research will or will not lead to the next great breakthrough is a fool’s errand.

Take the cuts in the administration’s proposed fiscal 2026 budget for the NSF, a leading funder of basic science. The administration’s summary begins with the assertion that its NSF budget “is prioritizing investments that complement private-sector R&D and offer strong potential to drive economic growth and strengthen U.S. technological leadership.” So far, so good. It cites the government’s commitment to AI and quantum information science. But dig deeper and you will see the contradictions in the numbers.

Not only is NSF’s overall budget cut by 57%, but funding for physical sciences like chemistry and materials research—fields critical to advancing AI and quantum computers—has also been blown apart. Funding for the NSF’s mathematical and physical sciences program was reduced by 67%. The directorate for computer and information science and engineering fared little better; its research funding was cut by 66%.

There is a great deal of hope among many in the science community that Congress, when it passes the actual 2026 budget, will at least partially reverse these cuts. We’ll see. But even if it does, why attack R&D funding in the first place? It’s impossible to answer that without plunging into the messy depths of today’s chaotic politics. And it is equally hard to know whether the recent evidence gathered by academic economists on the strong returns to R&D investments will matter when it comes to partisan policymaking.

But at least those defending the value of public funding now have a far more productive way to make their argument, rather than simply touting past breakthroughs. Even for fiscal hawks and those pronouncing concerns about budget deficits, the recent work provides a compelling and simple conclusion: More public funding for basic science is a sound investment that makes us more prosperous.

The Download: measuring returns on R&D, and AI’s creative potential

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

How to measure the returns on R&D spending

Given the draconian cuts to US federal funding for science, it’s worth asking some hard-nosed money questions: How much should we be spending on R&D? How much value do we get out of such investments, anyway? 

To answer that, in several recent papers, economists have approached this issue in clever new ways. And though they ask slightly different questions, their conclusions share a bottom line: R&D is, in fact, one of the better long-term investments that the government can make. Read the full story.

—David Rotman

This article is part of MIT Technology Review Explains, our series untangling the complex, messy world of technology to help you understand what’s coming next. You can read more from the series here.

If you’re interested in reading more about America’s economic situation, check out:

+ Sweeping tariffs could threaten the US manufacturing rebound—and they could stunt its ability to make tomorrow’s breakthroughs. Read the full story.

+ The surprising barrier that keeps us from building the housing we need. Read the full story.

+ How to fine-tune AI for prosperity.

+ People are worried that AI will take everyone’s jobs. We’ve been here before.

MIT Technology Review Narrated: How AI can help supercharge creativity

Forget one-click creativity. Artists and musicians are finding new ways to make art using AI, by injecting friction, challenge, and serendipity into the process.

This is our latest story to be turned into an MIT Technology Review Narrated podcast, which we’re publishing each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 TikTok’s buyers may include Oracle, Silver Lake and Andreessen Horowitz 
They would control around 80% of the business, with Chinese shareholders holding the rest. (WSJ $)
+ We still have plenty of unanswered questions about the deal. (Bloomberg $)
+ It was brokered in Madrid. (The Guardian)

2 OpenAI is working on a version of ChatGPT for teenagers
And it’ll use age-prediction tech to bar them from the standard version. (Axios)
+ The move comes as the US Senate is hearing evidence about chatbot harms. (404 Media)
+ The looming crackdown on AI companionship. (MIT Technology Review)

3 China has banned tech firms from buying Nvidia’s chips
In an effort to boost its own companies. (FT $)
+ Alibaba and ByteDance have been instructed to terminate orders. (Bloomberg $)

4 Anthropic refuses to let US law enforcement use its models
Much to the White House’s chagrin. (Semafor)

5 Tesla’s doors may trap passengers inside its cars
Vehicle safety regulators are investigating after people reported being forced to break windows to retrieve children. (NYT $)  

6 How AI companies train their models to do white-collar jobs  
After hitting a wall, they’re throwing money at the problem. (The Information $)
+ New training ‘environments’ are a hot AI topic right now.  (TechCrunch)
+ How AI is shaking up corporate hierarchies. (WSJ $)

7 Inside Damascus’ bid to become a tech hub
The city’s tech industry has been embraced by its new government. (Rest of World)

8 A supply shipment to the ISS has been delayed
NASA is blaming engine trouble. (Ars Technica)
+ The great commercial takeover of low Earth orbit. (MIT Technology Review)

9 Our darkest nights are getting lighter
Artificial light is ruining our chances of seeing starry skies. (IEEE Spectrum)
+ Bright LEDs could spell the end of dark skies. (MIT Technology Review)

10 You can now book a safari through Uber 🦒🦓
Expedition into Nairobi National Park, anyone? (Bloomberg $)

Quote of the day

“What began as a homework helper gradually turned itself into a confidant and then a suicide coach.”

—Matthew Raine, whose 16-year-old son Adam died by suicide after repeatedly sharing his intentions with ChatGPT, gives evidence to a Senate Judiciary subcommittee investigating chatbot dangers, the Washington Post reports.

One more thing

AI is coming for music, too

While large language models that generate text have exploded in the last three years, a different type of AI, based on what are called diffusion models, is having an unprecedented impact on creative domains.

By transforming random noise into coherent patterns, diffusion models can generate new images, videos, or speech, guided by text prompts or other input data. The best ones can create outputs indistinguishable from the work of people.

Now these models are marching into a creative field that is arguably more vulnerable to disruption than any other: music. Read the full story.

—James O’Donnell

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Food, in all shapes and forms, is bigger than ever. So why aren’t we watching cooking shows any more?
+ Kate Bush’s Hounds of Love turns 40 this year, but still sounds as fresh as ever.
+ Here’s how to maximize your chances of booking a bargain flight.
+ Robert Redford, you were one of a kind.

AI-designed viruses are here and already killing bacteria

Artificial intelligence can draw cat pictures and write emails. Now the same technology can compose a working genome.

A research team in California says it used AI to propose new genetic codes for viruses—and managed to get several of these viruses to replicate and kill bacteria.

The scientists, based at Stanford University and the nonprofit Arc Institute, both in Palo Alto, say the germs with AI-written DNA represent “the first generative design of complete genomes.”

The work, described in a preprint paper, has the potential to create new treatments and accelerate research into artificially engineered cells. It is also an “impressive first step” toward AI-designed life forms, says Jef Boeke, a biologist at NYU Langone Health, who was provided an advance copy of the paper by MIT Technology Review.  

Boeke says the AI’s performance was surprisingly good and that its ideas were unexpected. “They saw viruses with new genes, with truncated genes, and even different gene orders and arrangements,” he says.

This is not yet AI-designed life, however. That’s because viruses are not alive. They’re more like renegade bits of genetic code with relatively puny, simple genomes. 

In the new work, researchers at the Arc Institute sought to develop variants of a bacteriophage—a virus that infects bacteria—called phiX174, which has only 11 genes and about 5,000 DNA letters.

To do so, they used two versions of an AI called Evo, which works on the same principles as large language models like ChatGPT. Instead of feeding them textbooks and blog posts to learn from, the scientists trained the models on the genomes of about 2 million other bacteriophage viruses.

But would the genomes proposed by the AI make any sense? To find out, the California researchers chemically printed 302 of the genome designs as DNA strands and then mixed those with E. coli bacteria.

That led to a profound “AI is here” moment when, one night, the scientists saw plaques of dead bacteria in their petri dishes. They later took microscope pictures of the tiny viral particles, which look like fuzzy dots.

“That was pretty striking, just actually seeing, like, this AI-generated sphere,” says Brian Hie, who leads the lab at the Arc Institute where the work was carried out.

Overall, 16 of the 302 designs ended up working—that is, the computer-designed phage started to replicate, eventually bursting through the bacteria and killing them.

J. Craig Venter, who created some of the first organisms with lab-made DNA nearly two decades ago, says the AI methods look to him like “just a faster version of trial-and-error experiments.”

For instance, when a team he led managed to create a bacterium with a lab-printed genome in 2008, it was after a long hit-or-miss process of testing out different genes. “We did the manual AI version—combing through the literature, taking what was known,” he says. 

But speed is exactly why people are betting AI will transform biology. The new methods already claimed a Nobel Prize in 2024 for predicting protein shapes. And investors are staking billions that AI can find new drugs. This week a Boston company, Lila, raised $235 million to build automated labs run by artificial intelligence.

Computer-designed viruses could also find commercial uses. For instance, doctors have sometimes tried “phage therapy” to treat patients with serious bacterial infections. Similar tests are underway to cure cabbage of black rot, also caused by bacteria.

“There is definitely a lot of potential for this technology,” says Samuel King, the student who spearheaded the project in Hie’s lab. He notes that most gene therapy uses viruses to shuttle genes into patients’ bodies, and AI might develop more effective ones.

The Stanford researchers say they purposely haven’t taught their AI about viruses that can infect people. But this type of technology does create the risk that other scientists—out of curiosity, good intentions, or malice—could turn the methods on human pathogens, exploring new dimensions of lethality.

“One area where I urge extreme caution is any viral enhancement research, especially when it’s random so you don’t know what you are getting,” says Venter. “If someone did this with smallpox or anthrax, I would have grave concerns.”

Whether an AI can generate a bona fide genome for a larger organism remains an open question. For instance, E. coli has about a thousand times more DNA code than phiX174 does. “The complexity would rocket from staggering to … way way more than the number of subatomic particles in the universe,” says Boeke.

Also, there’s still no easy way to test AI designs for larger genomes. While some viruses can “boot up” from just a DNA strand, that’s not the case with a bacterium, a mammoth, or a human. Scientists would instead have to gradually change an existing cell with genetic engineering—a still laborious process.

Despite that, Jason Kelly, the CEO of Ginkgo Bioworks, a cell-engineering company in Boston, says exactly such an effort is needed. He believes it could be carried out in “automated” laboratories where genomes get proposed and tested and the results are fed back to AI for further improvement.

 “This would be a nation-scale scientific milestone, as cells are the building blocks of all life,” says Kelly. “The US should make sure we get to it first.”

New Ecommerce Tools: September 17, 2025

Every week we handpick and publish a list of new products and services from vendors to ecommerce merchants. This installment includes updates on agentic commerce, product customizations, embedded checkout, ecommerce search, reverse logistics, fulfillment, warehouse automation, and AI-powered merchandising.

Got an ecommerce product release? Email releases@practicalecommerce.com.

New Tools for Merchants

Miva adds merchandising to its AI-powered product recommendations tool. Miva, a pioneering ecommerce platform, has launched dynamic merchandising for its new Vexture AI product search, discovery, and merchandising tool. The added feature utilizes natural language prompts and contextual AI intelligence to automate manual merchandising, providing businesses with more personalized product recommendations that drive sales, according to Miva. Merchants describe their strategy in plain English and instantly generate tailored, AI-powered recommendations across the entire catalog.

Home page of Miva


Rezolve AI launches visual search for conversational commerce. Rezolve AI, a developer of commerce solutions and a strategic partner of Microsoft and Google, has launched Visual Search, allowing consumers to upload a photo of an item and search across a retailer’s Rezolve-enabled catalog. The tool leverages multimodal AI, including image and text understanding, to interpret complex attributes and presents results with contextual prompts, suggestions, and natural-language dialogue.

Pinterest launches where-to-buy links. Pinterest‘s new where-to-buy links make standard image ads shoppable by surfacing multiple in-stock retailer options for a single product. Brands can add where-to-buy links through MikMak. Alternatively, they can use the free native option, powered by Pear Commerce, to build and launch where-to-buy ads within Pinterest Ads Manager.

Fortis and BigCommerce announce payments partnership to simplify checkout. Fortis, a provider of embedded payments and commerce technology, has partnered with BigCommerce. BigCommerce customers (including mid-market B2B sellers, distributors, service-based businesses, and developers) gain access to Fortis’ embedded payments technology. The solution enables real-time transactions, simplifies reconciliation, and provides next-day funding, while eliminating the need for third-party gateways and fragmented systems. This partnership enhances checkout conversion, optimizes operational efficiency, and fosters sustainable growth at scale, according to Fortis.

Home page of Fortis


AI-powered product options app SectionlyAI launches on Shopify. SectionlyAI has launched as an AI-powered app on Shopify to generate and manage product customization. The app handles configurations, from basic options to complex, as well as multi-dimensional pricing combinations, via advanced natural language processing technology. Merchants can describe their requirements in plain language, and the AI will intelligently generate configuration options, conditional logic, and flexible pricing rules.

Spara launches AI platform with $15 million for sales and marketing pipelines. Spara, an enterprise-grade platform for voice, email, and chat AI agents to engage, qualify, and convert leads to revenue, has launched with $15 million in seed funding. The round was led by Radical Ventures and Inspired Capital, with participation from XYZ Ventures, FJ Labs, and Remarkable Ventures. According to Spara, the funding will fuel team expansion, product development, and advancements in machine learning.

Firmly.ai and CJ partner on native commerce for publishers and merchants. Firmly.ai, a provider of agentic commerce for checkout infrastructure, and CJ, an affiliate marketing platform, have announced a collaboration. The integration creates a native checkout solution for CJ’s merchant partners, keeping the consumer within a platform, without requiring any technical implementation, while CJ publishers can transform content into shopping experiences.

Home page of Firmly


Unilog partners with HawkSearch on ecommerce search. Bridgeline Digital, a provider of AI-powered marketing technology, announced that its HawkSearch platform has joined Unilog’s tech partner ecosystem to power advanced search as an add-on option for the CX1 ecommerce platform. With the HawkSearch integration, Unilog customers can enhance their digital storefronts with AI-assisted search tailored to complex catalogs and buyer behaviors, improving product discovery and engagement.

Criteo and Google integrate for retail media. Criteo, an advertising network, has integrated with Google Search Ads 360 for retail media. Through the partnership, Criteo’s network of over 200 retailers can opt in to receiving ads from the Google Search Ads 360 platform, while advertisers receive a clear view into how their spend drives incremental impact.

Kibo Commerce launches enhancements in reverse logistics, B2B, and AI agents. Kibo Commerce, a composable commerce platform, has launched over 60 enhancements. Kibo’s new reverse logistics functionality empowers brands to control post-purchase operations, including intelligent return routing and custom return rules. Additionally, Kibo has strengthened its order management system and B2B capabilities with features such as order prioritization, purchase limits, estimated delivery dates, and rule-based inventory management.

Home page of Kibo


Ordoro and DropStream partner on 3PL fulfillment for ecommerce merchants. Ordoro, a provider of ecommerce logistics and inventory management, is partnering with DropStream, a fulfillment platform that helps merchants simplify back-end operations and connect with third-party logistics providers. This collaboration merges Ordoro’s centralized order and inventory tools with DropStream’s integration platform to create a fulfillment workflow for brands. Merchants can automate post-purchase operations and route orders by location, inventory, shipping speed, or custom business logic.

Exotec and E80 Group partner on end-to-end warehouse automation. Exotec, a warehouse robotics provider, and E80 Group, a provider of automated and integrated logistics tools, have partnered on a joint solution design that integrates Exotec’s Skypod AS/RS with E80 Group’s suite of pallet handling systems. According to the companies, the collaboration enables customers to unify case and pallet handling within a single, scalable warehouse automation ecosystem.

Quack raises $7 million for customer experience with AI agents. Quack, an agentic AI platform for customer support, has raised $7 million in seed funding led by Hanaco Ventures and Storytime Capital, with participation from Fusion VC, Savyon Ventures, Seed IL, and private investors. According to Quack, the new capital will accelerate its U.S. go-to-market strategy and product development to advance its operating system for training and managing proactive AI agents.

Home page of Quack


Google Discover Adds Social Media Posts & Follow Buttons via @sejournal, @MattGSouthern

Google is updating Discover with two changes that could affect how your content finds readers.

You can now follow publishers directly in Discover, and Google will start showing social posts from platforms like X, Instagram, and YouTube Shorts in the feed.

Rollout begins today, with social integrations coming in the weeks ahead.

Layla Amjadi, Senior Director of Product Management for Search, wrote:

“We’re updating Discover to make it even easier to find, follow and engage with the content and creators you care about most. From creators to news publishers, Discover will be a more helpful and personalized jumping-off point for exploring the web.”

What’s New Today

Signed-in users can follow publishers and creators right inside Discover.

When someone taps a name in Discover, they’ll see a dedicated space with that source’s content across formats. If they like what they see, they can follow to get more from that source over time.

Here’s an example of a dedicated publisher space in Google Discover:

Screenshot from blog.google/products/search/discover-updates-september-2025, September 2025.

Google’s announcement reads:

“Now, you can “follow” publishers or creators right on Discover to see more of their content. You can preview a publisher or creator’s content — including articles, YouTube videos and posts from social channels — before you follow. Just tap their name to find a new, dedicated space for their content.”

Social Posts In Discover

Google Discover will soon begin showing posts from X and Instagram, along with YouTube Shorts, with more platforms planned.

Google’s announcement reads:

“In the coming weeks, you’ll start to see more types of content in Discover from publishers and creators across the web, such as posts from X and Instagram and YouTube Shorts, with more platforms to come. In our research, people told us they enjoyed seeing a mix of content in Discover, including videos and social posts, in addition to articles.”

Here’s an example of an in-feed social media post in Google Discover:

Screenshot from blog.google/products/search/discover-updates-september-2025, September 2025.

Why This Matters

If you publish across web + social, your posts could reach Discover audiences without an extra tap into each platform.

The Follow button gives you a direct, opt-in signal that may help stabilize Discover traffic over time.

This follows Google’s preferred sources feature in Top Stories, which lets people pick news outlets they want to see more often for timely queries.

Looking Ahead

The follow button is available starting today for signed-in users. Social posts will appear in the coming weeks as the integration rolls out.

Together, these updates point to more user-directed personalization across Search surfaces.

How To Measure Brand Marketing Efforts (And Prove Their ROI) via @sejournal, @AlliBerry3

Brand marketing is often the silent driver behind successful digital campaigns.

People are far more likely to read, watch, click, and ultimately buy from a brand they already know and trust. That’s why doing the harder, slower work of building a strong brand pays dividends when it comes to performance marketing efforts like SEO and PPC. We know this intuitively.

But proving the impact of brand marketing is much harder. Unlike SEO rankings or PPC conversions, brand-building results are not always immediately visible, which is why these efforts often get under-credited – or worse, neglected altogether – in favor of easier-to-measure tactics. This is a mistake.

Why Brand Marketing Matters More Than Ever

The irony is that large-scale studies repeatedly show brand-related factors at the forefront of digital visibility.

Semrush’s 2025 ranking factor study found that authority, traffic, and backlink signals – closely tied to brand strength – are still among the most important correlating factors for high search rankings.

Similarly, as AI Overviews and large language model (LLM)-powered search expand, brand strength is proving to be the key to visibility. In its 2025 study, Ahrefs found that branded mentions, branded anchors, and branded search volume are the top three factors correlated with AI Overview presence.

All of these point to one conclusion: Brand marketing is increasingly the engine that drives both human trust and algorithmic preference.

The challenge, however, is demonstrating its impact in a way that stakeholders can understand and value. That’s why it’s critical to learn how to measure your brand marketing efforts using both qualitative and quantitative metrics, tied back to clear key performance indicators (KPIs).

The Situation For Digital Marketing Leaders

Consider the role of an in-house SEO director. Your KPIs might look like this:

  • Grow organic traffic by 25% year-over-year.
  • Increase lead generation downloads by 40%.
  • Drive 20% more sales from organic.

But with Google’s AI Overviews cutting click-through rates by more than 34% and users increasingly turning to LLMs for top-of-funnel research, traditional SEO tactics alone won’t get you there.

Instead, your future success depends on brand strength. Stronger brand signals lead to better visibility in AI-driven search results, higher trust with customers, and greater resilience in an evolving digital landscape. That means, even as an SEO professional, your path forward relies on executing and measuring brand marketing strategy effectively – and proving its business impact.

The good news is that as an SEO professional, you’ve likely already got quite a bit of the data you need. It may just require you to repackage some of your efforts. It may also require you to collaborate more with your fellow digital marketers, particularly those in PR, social media, and PPC, to show brand visibility growth more holistically.

Tying Metrics To The Sales Funnel

When it comes to your brand marketing, there are really four categories of efforts:

  • Awareness.
  • Consideration.
  • Conversion.
  • Loyalty & Advocacy.

Ultimately, you are looking to increase your brand strength in every area of the funnel.

You want more people to hear of your brand, which then drives them to search for it to learn more about it.

More brand familiarity and trust should then ultimately lead to more conversions.

And the more customers and followers of the brand you have, the more you would expect to see an increase in loyalty and advocacy.

All of your brand marketing tracking should tie back to one of those four categories. Therefore, the next sections of this article are broken down by stage of the funnel.

Brand Awareness Metrics

Brand awareness metrics help you measure whether your brand is becoming more recognizable in the right contexts. At the top level, awareness is measured by reach and visibility signals: metrics like impressions, social mentions, and share of voice across channels.

On the digital side, you can monitor branded search impressions and clicks in Google Search Console, track direct traffic growth in Google Analytics 4, and use SEO tools like Semrush or Ahrefs to compare your brand’s share of voice against competitors.

These metrics reveal whether people are actively seeking you out and whether brand exposure is translating into traffic.
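As a rough illustration of how this can be automated, here is a minimal Python sketch that pulls branded impressions and clicks from the Search Console API. The site URL, brand term (“acme”), date range, and service-account key file are placeholder assumptions; adjust them to your own setup.

```python
# Minimal sketch: branded impressions and clicks via the Search Console API.
# The site URL, brand term, dates, and credentials file below are placeholders.
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/webmasters.readonly"]
creds = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES  # hypothetical key file
)
service = build("searchconsole", "v1", credentials=creds)

request = {
    "startDate": "2025-08-01",
    "endDate": "2025-08-31",
    "dimensions": ["query"],
    "dimensionFilterGroups": [
        {
            "filters": [
                {
                    "dimension": "query",
                    "operator": "contains",
                    "expression": "acme",  # your brand term
                }
            ]
        }
    ],
    "rowLimit": 5000,
}

response = (
    service.searchanalytics()
    .query(siteUrl="https://www.example.com/", body=request)
    .execute()
)

rows = response.get("rows", [])
print("Branded clicks:", sum(r["clicks"] for r in rows))
print("Branded impressions:", sum(r["impressions"] for r in rows))
```

Run monthly, a report like this gives you a simple branded-demand trendline to sit alongside the survey-based measures described next.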

Equally important are perception-based metrics, which capture how audiences actually recall and recognize your brand.

Brand lift studies and recall surveys ask consumers whether your brand comes to mind within your category – both aided (i.e., Have you heard of [brand]?) and unaided (i.e., What brands come to mind for [category]?). These are especially powerful after large brand campaigns, such as a national TV spot or a major podcast sponsorship, to see if awareness efforts are resonating with the right audience.

Key Awareness Metrics

| Metric | Tool Examples | Frequency |
| --- | --- | --- |
| Branded search impressions & clicks | Google Search Console | Monthly |
| Branded search volume | Google Trends, Semrush, Ahrefs | Quarterly |
| Direct website traffic | Google Analytics 4, Adobe Analytics | Monthly |
| Media mentions/external links | Semrush, Ahrefs | Monthly |
| Social mentions/share of voice | Sprout, Semrush | Monthly |
| Brand recall survey | SurveyMonkey, Qualtrics | Per campaign |
| Brand lift study | Google Ads | Per campaign |

It is important that you’re measuring both the quantitative signals of awareness (search, traffic, mentions) and the qualitative signals (surveys, brand lift). Together, these provide a complete picture of how visible and memorable your brand really is.

Consideration Metrics

While awareness tells you whether people recognize your brand, consideration metrics show whether they are actively evaluating your brand as a viable option. This stage of the funnel is all about engagement and intent. We’re looking at signals that potential customers are digging deeper, comparing you against competitors, and gathering the information they need to make a decision.

On your website, key metrics include pages per session, time spent on product or service pages, and return visits to your site, which often indicate research and deeper evaluation. Growth in traffic to product-related pages and increases in branded product queries (i.e., “Brand X running shoes”) are also strong signals that awareness is moving into intent.
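For teams that want to pull these on-site engagement signals programmatically, here is a minimal sketch using the GA4 Data API. The property ID and the “/products/” path prefix are placeholder assumptions; your property and URL structure will differ.

```python
# Minimal sketch: product-page engagement from the GA4 Data API.
# The property ID and "/products/" path prefix are placeholders.
from google.analytics.data_v1beta import BetaAnalyticsDataClient
from google.analytics.data_v1beta.types import (
    DateRange, Dimension, Filter, FilterExpression, Metric, RunReportRequest,
)

client = BetaAnalyticsDataClient()  # reads GOOGLE_APPLICATION_CREDENTIALS

request = RunReportRequest(
    property="properties/123456789",  # hypothetical GA4 property ID
    date_ranges=[DateRange(start_date="2025-08-01", end_date="2025-08-31")],
    dimensions=[Dimension(name="pagePath")],
    metrics=[
        Metric(name="screenPageViews"),
        Metric(name="averageSessionDuration"),
    ],
    dimension_filter=FilterExpression(
        filter=Filter(
            field_name="pagePath",
            string_filter=Filter.StringFilter(
                match_type=Filter.StringFilter.MatchType.BEGINS_WITH,
                value="/products/",
            ),
        )
    ),
)

# Print views and average session duration for each product page.
for row in client.run_report(request).rows:
    path = row.dimension_values[0].value
    views = row.metric_values[0].value
    avg_duration = row.metric_values[1].value
    print(path, views, avg_duration)
```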

Beyond on-site behavior, content downloads such as case studies, whitepapers, or product comparison guides show that audiences are engaging with assets that help them evaluate their choices.

Similarly, a rise in third-party product reviews or mentions on industry forums and social media reflects growing consideration and social proof that others are weighing your brand seriously in the buying process.

Key Consideration Metrics

| Metric | Tool Examples | Recommended Frequency |
| --- | --- | --- |
| Pages per session & time on product pages | Google Analytics 4, Adobe Analytics | Monthly |
| Traffic growth on product/service pages | GA4, Adobe Analytics | Monthly |
| Branded product-related search volume, impressions, and clicks | Google Search Console, Semrush, Ahrefs | Monthly |
| Return visits/repeat sessions | GA4, Adobe Analytics | Monthly |
| Gated content downloads (case studies, whitepapers, comparisons) | GA4 or a third-party like HubSpot | Monthly |
| Product mentions on forums/social media | Sprout, Semrush | Monthly |

By tracking both behavioral signals on your owned channels (site engagement, return visits, content downloads) and external validation (third-party mentions), you build a clear picture of whether your brand is moving beyond recognition and into active consideration.

Conversion Metrics

Conversion metrics show how effectively brand strength translates into tangible business outcomes. At this stage, the focus shifts from evaluation to action.

We’re looking at whether people are requesting demos, signing up for free trials, or making purchases. Strong branding makes these conversions more likely by building the trust and credibility necessary to reduce friction at the decision point.

On your website, look for form fills, demo requests, trial sign-ups, and completed transactions as clear indicators of conversion. Tracking conversion rates from branded search campaigns in Google Ads or measuring pipeline influenced by brand-related traffic in your customer relationship management (CRM) also provides valuable insight.

Additionally, monitoring add-to-cart and checkout completions in GA4 can highlight how often brand equity is driving purchase intent to completion.

Key Conversion Metrics

| Metric | Tool Examples | Recommended Frequency |
| --- | --- | --- |
| Add-to-cart & completed transactions | GA4, Adobe Analytics | Monthly |
| Demo requests/trial sign-ups | CRM | Monthly |
| “Contact us” or lead generation form fills | GA4 or CRM | Monthly |
| Conversion rates from branded PPC | Google Ads, Microsoft Ads | Monthly |

Loyalty And Advocacy Metrics

Loyalty and advocacy metrics reveal whether brand strength translates into long-term customer relationships. At this stage, the goal is not just to retain customers but to turn them into advocates who actively promote your brand.

Strong loyalty reduces churn, increases lifetime value, and builds a customer base that supports sustainable growth.

Key metrics here include customer retention rates, repeat purchase behavior, and customer lifetime value (CLV), which quantify how effectively you’re keeping customers over time.

Net Promoter Score (NPS) and customer satisfaction surveys capture how likely customers are to recommend your brand. Monitoring referrals, user-generated content, and social sharing also provides qualitative proof of advocacy.
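For reference, simple versions of these two metrics look like the following (rough sketches; organizations often use more elaborate CLV models):

$$\text{NPS} = \%\ \text{promoters} - \%\ \text{detractors}$$

where promoters score 9-10 and detractors 0-6 on the “how likely are you to recommend us?” question, and

$$\text{CLV} \approx \text{average order value} \times \text{purchases per year} \times \text{years retained}$$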

Review platforms and communities can be another strong signal. Growth in positive product reviews or customers organically defending your brand in forums shows that loyalty has translated into advocacy.

Key Loyalty & Advocacy Metrics

| Metric | Tool Examples | Recommended Frequency |
| --- | --- | --- |
| Customer retention rate/churn | CRM | Quarterly |
| Customer lifetime value (CLV) | CRM | Quarterly |
| Net Promoter Score (NPS) | SurveyMonkey, Qualtrics | Bi-annually |
| Referrals & word-of-mouth | Referral programs, HubSpot, GA4 | Monthly |
| Positive review growth & advocacy | Google Business Profile, Yelp, Reddit | Monthly |
| User-generated content & social sharing | Sprout Social, Hootsuite, Brandwatch | Monthly |

Turning Metrics Into A Compelling Data Story For Stakeholders

The real value of measuring brand marketing comes not just from tracking the right metrics, but from connecting them into a story that stakeholders can understand.

By aligning awareness, consideration, conversion, and loyalty metrics to the sales funnel, you create a framework that shows how brand-building efforts impact the entire customer journey.

A brand dashboard is one of the most effective tools for communicating this story. Tools like Looker Studio or Power BI will allow you to consolidate signals from multiple sources to present a holistic view of brand health.

Rather than overwhelming leadership with granular reports from different platforms, you’re providing them with a clear line of sight from brand activity to revenue impact. It can look something like: Google Search Console for branded queries, GA4 for site engagement, CRM data for conversions, and social listening tools for sentiment and share of voice.

When sharing results, keep in mind that executives often care less about the technical details and more about the outcomes. Frame your reporting around KPIs tied to growth:

  • Did brand awareness lift lead to more traffic and higher-quality leads?
  • Did stronger consideration metrics translate into more demo requests or trial sign-ups?
  • Did higher loyalty scores reduce churn or drive referrals?

By mapping brand marketing metrics to outcomes stakeholders already value – pipeline growth, revenue impact, and customer retention – you position branding not as a “soft” investment, but as a measurable driver of business performance.


Featured Image: Master1305/Shutterstock

Google Is Hiring An Anti-Scraping Engineering Analyst via @sejournal, @martinibuster

Google is hiring a new anti-scraping czar, whose job will be to analyze search traffic to identify the patterns of search scrapers, assess their impact, and work with engineering teams to develop new models that improve anti-scraping defenses.

Search Results Scraping

SEOs rely on SERP tracking companies to provide search results data for understanding search ranking trends, enabling competitive intelligence, and other keyword-related research and analysis.

Many of these companies conduct massive amounts of automated crawling of Google’s search results to take a snapshot of ranking positions and data related to search features triggered by keyword phrases. This scraping is suspected of causing significant changes to what’s reported in Google Search Console.

In the early days of SEO, there was a free keyword data source via Yahoo’s Overture, its PPC service. Many SEOs searched on Yahoo so often that their own searches would unintentionally inflate keyword volumes. Smart SEOs knew better than to optimize for those keyword phrases.

I have suspected that some SEOs may also have intentionally scraped Yahoo’s search results using fake keyword phrases to generate keyword volumes for those queries, misleading competitors into optimizing for phantom search queries.

&num=100 Results Parameter

There is a growing suspicion, backed by Google Search Console data, that search result scraping may have inflated official keyword impression data, and that this may be why Search Console data appears to show that AI search results aren’t sending traffic while Google’s internal data shows the opposite.

This suspicion is based on falling keyword impressions that correlate with Google’s recent move to block the &num=100 parameter, which allowed 100 search results to be generated with one query and was used by various keyword tracking tools.

Google Anti-Scraping Engineering Analyst

Jamie Indigo posted that Google is looking to hire an Engineering Analyst focused on combatting search scraping.

The responsibilities for the job are:

  • “Investigate and analyze patterns of abuse on Google Search, utilizing data-motivated insights to develop countermeasures and enhance platform security.
  • Analyze datasets to identify trends, patterns, and anomalies that may indicate abuse within Google Search.
  • Develop and track metrics to measure scraper impact and the effectiveness of anti-scraping defenses. Collaborate with engineering teams to design, test, and launch new anti-scraper rules, models, and system enhancements.
  • Investigate proof-of-concept attacks and research reports that identify blind spots and guide the engineering team’s development priorities. Evaluate the effectiveness of existing and proposed detection mechanisms, understanding the impact on scrapers and real users.
  • Contribute to the development of signals and features for machine learning models to detect abusive behavior. Develop and maintain threat intelligence on scraper actors, motivations, tactics and the scraper ecosystem.”

What Does It Mean?

There hasn’t been an official statement from Google, but it’s fairly apparent that Google may be putting a stop to search results scrapers. This should result in more accurate Search Console data, so that’s a plus.

Featured Image by Shutterstock/DIMAS WINDU

How To Win Brand Visibility in AI Search [Webinar] via @sejournal, @lorenbaker

AIOs, LLMs & the New Rules of SEO

AI Overviews are changing everything.

Your impressions might be up, but the traffic isn’t following. Competitors are showing up in AI search while your brand remains invisible.

How do you measure success when ChatGPT or Gemini doesn’t show traditional rankings? How do you define “winning” in a world where every query can produce a different answer?

Learn the SEO & GEO strategies enterprise brands are using to secure visibility in AI Overviews and large language models.

AI Mode is growing fast. Millions of users are turning to AI engines for answers, and brand visibility is now the single most important metric. 

In this webinar, Tom Capper, Sr. Search Scientist at STAT Search Analytics, will guide you through how enterprise SEOs can adapt, measure, and thrive in this new environment.

You’ll Learn:

  • How verticals and user intents are shifting under AI Overviews and where SERP visibility and traffic opportunities still exist.
  • Practical ways to leverage traditional SEO while optimizing for generative engines.
  • How to bridge the gap between SEO and GEO with actionable strategies for enterprise brands.
  • How to measure success in AI search when impressions and rankings no longer tell the full story.

Register now to gain the latest, data-driven insights on maintaining visibility across AI Overviews, ChatGPT, Gemini, and more.

🛑 Can’t attend live? Sign up anyway, and we’ll send you the recording.

The looming crackdown on AI companionship

As long as there has been AI, there have been people sounding alarms about what it might do to us: rogue superintelligence, mass unemployment, or environmental ruin from data center sprawl. But this week showed that another threat entirely—that of kids forming unhealthy bonds with AI—is the one pulling AI safety out of the academic fringe and into regulators’ crosshairs.

This has been bubbling for a while. Two high-profile lawsuits filed in the last year, against Character.AI and OpenAI, allege that companion-like behavior in their models contributed to the suicides of two teenagers. A study by US nonprofit Common Sense Media, published in July, found that 72% of teenagers have used AI for companionship. Stories in reputable outlets about “AI psychosis” have highlighted how endless conversations with chatbots can lead people down delusional spirals.

It’s hard to overstate the impact of these stories. To the public, they are proof that AI is not merely imperfect, but a technology that’s more harmful than helpful. If you doubted that this outrage would be taken seriously by regulators and companies, three things happened this week that might change your mind.

A California law passes the legislature

On Thursday, the California state legislature passed a first-of-its-kind bill. It would require AI companies to remind users they know to be minors that responses are AI-generated. Companies would also need to have a protocol for addressing suicide and self-harm and provide annual reports on instances of suicidal ideation in users’ conversations with their chatbots. The bill was led by Democratic state senator Steve Padilla, passed with heavy bipartisan support, and now awaits Governor Gavin Newsom’s signature.

There are reasons to be skeptical of the bill’s impact. It doesn’t specify efforts companies should take to identify which users are minors, and lots of AI companies already include referrals to crisis providers when someone is talking about suicide. (In the case of Adam Raine, one of the teenagers whose survivors are suing, his conversations with ChatGPT before his death included this type of information, but the chatbot allegedly went on to give advice related to suicide anyway.)

Still, it is undoubtedly the most significant of the efforts to rein in companion-like behaviors in AI models, which are in the works in other states too. If the bill becomes law, it would strike a blow to the position OpenAI has taken, which is that “America leads best with clear, nationwide rules, not a patchwork of state or local regulations,” as the company’s chief global affairs officer, Chris Lehane, wrote on LinkedIn last week.

The Federal Trade Commission takes aim

The very same day, the Federal Trade Commission announced an inquiry into seven companies, seeking information about how they develop companion-like characters, monetize engagement, measure and test the impact of their chatbots, and more. The companies are Google, Instagram, Meta, OpenAI, Snap, X, and Character Technologies, the maker of Character.AI.

The White House now wields immense, and potentially illegal, political influence over the agency. In March, President Trump fired its lone Democratic commissioner, Rebecca Slaughter. In July, a federal judge ruled the firing illegal, but last week the US Supreme Court temporarily permitted it.

“Protecting kids online is a top priority for the Trump-Vance FTC, and so is fostering innovation in critical sectors of our economy,” said FTC chairman Andrew Ferguson in a press release about the inquiry. 

Right now, it’s just that—an inquiry—but the process might (depending on how public the FTC makes its findings) reveal the inner workings of how the companies build their AI companions to keep users coming back again and again. 

Sam Altman on suicide cases

Also on the same day (a busy day for AI news), Tucker Carlson published an hour-long interview with OpenAI’s CEO, Sam Altman. It covers a lot of ground—Altman’s battle with Elon Musk, OpenAI’s military customers, conspiracy theories about the death of a former employee—but it also includes the most candid comments Altman’s made so far about the cases of suicide following conversations with AI. 

Altman talked about “the tension between user freedom and privacy and protecting vulnerable users” in cases like these. But then he offered up something I hadn’t heard before.

“I think it’d be very reasonable for us to say that in cases of young people talking about suicide seriously, where we cannot get in touch with parents, we do call the authorities,” he said. “That would be a change.”

So where does all this go next? For now, it’s clear that—at least in the case of children harmed by AI companionship—companies’ familiar playbook won’t hold. They can no longer deflect responsibility by leaning on privacy, personalization, or “user choice.” Pressure to take a harder line is mounting from state laws, regulators, and an outraged public.

But what will that look like? Politically, the left and right are now paying attention to AI’s harm to children, but their solutions differ. On the right, the proposed solution aligns with the wave of internet age-verification laws that have now been passed in over 20 states. These are meant to shield kids from adult content while defending “family values.” On the left, it’s the revival of stalled ambitions to hold Big Tech accountable through antitrust and consumer-protection powers. 

Consensus on the problem is easier than agreement on the cure. As it stands, it looks likely we’ll end up with exactly the patchwork of state and local regulations that OpenAI (and plenty of others) have lobbied against. 

For now, it’s down to companies to decide where to draw the lines. They’re having to decide things like: Should chatbots cut off conversations when users spiral toward self-harm, or would that leave some people worse off? Should they be licensed and regulated like therapists, or treated as entertainment products with warnings? The uncertainty stems from a basic contradiction: Companies have built chatbots to act like caring humans, but they’ve postponed developing the standards and accountability we demand of real caregivers. The clock is now running out.

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.