LLMs.txt For AI SEO: Is It A Boost Or A Waste Of Time? via @sejournal, @martinibuster

Many popular WordPress SEO plugins and content management platforms can generate an LLMs.txt file, ostensibly to improve visibility in AI search platforms. With so many tools offering the functionality, one might come away with the impression that LLMs.txt is the new frontier of SEO. The fact, however, is that LLMs.txt is just a proposal, and no AI platform has signed on to use it.
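For readers unfamiliar with the proposal: llms.txt, as described at llmstxt.org, is a Markdown file served from a site's root (/llms.txt) that summarizes the site and links to Markdown versions of key pages. A minimal sketch of what such a file looks like (the site name, URLs, and descriptions below are hypothetical):

```markdown
# Example Store

> Example Store sells handmade ceramics and publishes care guides for its products.

## Guides

- [Caring for ceramic cookware](https://example.com/guides/ceramic-care.md): how to clean and store ceramics
- [Glaze safety FAQ](https://example.com/guides/glaze-safety.md): food-safety questions about glazes

## Optional

- [About the studio](https://example.com/about.md)
```

Whether any AI crawler ever fetches this file is, of course, a separate question.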

So why are so many companies rushing to support a standard that no one actually uses? Some SEO tools offer it because their users are asking for it, while many users feel they need to adopt LLMs.txt simply because their favorite tools provide it. A recent Reddit discussion on this very topic is a good place to look for answers.

Third-Party SEO Tools And LLMs.txt

Google’s John Mueller addressed the LLMs.txt confusion in a recent Reddit discussion. The person asking the question was concerned because an SEO tool flagged their missing LLMs.txt file as a 404 error, leaving them with the impression that the file was needed.

Their question was:

“Why is SEMRush showing that the /llm.txt is a 404? Yes, I. know I don’t have one for the website, but, I’ve heard it’s useless and not needed. Is that true?

If i need it, how do i build it?

Thanks”

The Redditor seems to have been confused by a Semrush audit that appears to imply they need an LLMs.txt. I don’t know what they saw in the audit, but this is what the official Semrush audit documentation says about the usefulness of LLMs.txt:

“If your site lacks a clear llms.txt file it risks being misrepresented by AI systems.

…This new check makes it easy to quickly identify any issues that may limit your exposure in AI search results.”

Their documentation says that it’s a “risk” to not have an LLMs.txt, but the fact is that there is no risk at all because no AI platform uses it. That may be why the Redditor asked, “If i need it, how do i build it?”

LLMs.txt Is Unnecessary

Google’s John Mueller confirmed that LLMs.txt is unnecessary.

He explained:

“Good catch! Especially in SEO, it’s important to catch misleading & bad information early, before you invest time into doing something unnecessary. Question everything.”

Why AI Platforms May Choose Not To Use LLMs.txt

Aside from John Mueller’s many informal statements about the uselessness of LLMs.txt, I don’t think any AI platform has formally explained why it doesn’t use LLMs.txt and the associated .md markdown files. There are, however, many good reasons why an AI platform would choose not to.

The biggest reason not to use LLMs.txt is that it is inherently untrustworthy. On-page content is relatively trustworthy because it is the same for users as it is for an AI bot.

A sneaky SEO could add things to structured data and markdown files that don’t exist in the regular HTML content in order to get their content to rank better. It is naive to think that SEOs and publishers would not use .md files to try to trick AI platforms.

For example, unscrupulous SEOs add hidden text and AI prompts within HTML content. A research paper from 2024 (Adversarial Search Engine Optimization for Large Language Models) showed that manipulation of LLMs was possible using a technique its authors called Preference Manipulation Attacks.

Here’s a quote from that research paper (PDF):

“…an attacker can trick an LLM into promoting their content over competitors. Preference Manipulation Attacks are a new threat that combines elements from prompt injection attacks… Search Engine Optimization (SEO)… and LLM ‘persuasion.’

We demonstrate the effectiveness of Preference Manipulation Attacks on production LLM search engines (Bing and Perplexity) and plugin APIs (for GPT-4 and Claude). Our attacks are black-box, stealthy, and reliably manipulate the LLM to promote the attacker’s content. For example, when asking Bing to search for a camera to recommend, a Preference Manipulation Attack makes the targeted camera 2.5× more likely to be recommended by the LLM.”

The point is that if there’s a loophole to be exploited, someone will think it’s a good idea to take advantage of it, and that’s the problem with creating a separate file for AI chatbots: people will see it as the ideal place to spam LLMs.

It’s safer to rely on on-page content than on a markdown file that can be altered exclusively for AI. This is why I say that LLMs.txt is inherently untrustworthy.

What SEO Plugins Say About LLMs.txt

The makers of the Squirrly WordPress SEO plugin acknowledge that they provided the feature only because their users asked for it, and they assert that it has no influence on AI search visibility.

They write:

“I know that many of you love using Squirrly SEO and want to keep using it. Which is why you’ve asked us to bring this feature.

So we brought it.

But, because I care about you:

– know that LLMs txt will not help you magically appear in AI search. There is currently zero proof that it helps with being promoted by AI search engines.”

They strike a good balance between giving users what they want while also letting them know it’s not actually needed.

While Squirrly is at one end, saying (correctly) that LLMs.txt doesn’t boost AI search visibility, Rank Math is at the opposite end, claiming that AI chatbots actually use the curated version of the content presented in the markdown files.

Rank Math is generally correct in its description of what an LLMs.txt is and how it works, but it overstates the usefulness by suggesting that AI chatbots use the curated LLMs.txt and the associated markdown files.

They write:

“So when an AI chatbot tries to summarize or answer questions based on your site, it doesn’t guess—it refers to the curated version you’ve given it. This increases your chances of being cited properly, represented accurately, and discovered by users in AI-powered results.”

We know for a fact that AI chatbots do not use a curated version of the content. They don’t even use structured data; they just use the regular HTML content.

Yoast SEO is a little more conservative, occupying the middle ground between Squirrly and Rank Math: it explains the purpose of LLMs.txt and avoids overstating the benefits by hedging with words like “can” and “could.” That is a fair way to describe LLMs.txt, although I prefer Squirrly’s approach: you asked for it, here it is, but don’t expect a boost in search performance.

The LLMs.txt Misinformation Loop

The conversation around LLMs.txt has become a self-reinforcing loop: business owners and SEOs feel anxiety over AI visibility, feel they must do something, and see LLMs.txt as the something they can do.

SEO tool providers feel compelled to offer an LLMs.txt option, which reinforces the belief that it’s a necessity and unintentionally perpetuates the cycle of misunderstanding.

Concern over AI visibility has driven the adoption of LLMs.txt, which at this stage is only a proposed standard that no AI platform currently uses.

Featured Image by Shutterstock/James Delia

SERP Visibility Decline: How To Grow Brand Awareness When Organic Traffic Stalls

This post was sponsored by AdRoll. The opinions expressed in this article are the sponsor’s own.

Text‑heavy AI Overviews blocking your brand?

A “People Also Ask” box you can scroll through forever, effectively hiding position 1?

Knowledge panels and rich snippets hogging the view?

The majority of people who entered a search query never made it past the top of the search result page (SERP) in 2024.

For users, these updates to Google’s SERPs are technically efficient.

For you, changes like AI Overviews are another strategy to master, and at worst, a direct competitor for attention.

So how do you increase brand awareness and search presence when Google is taking away your shot at top-of-funnel (TOFU) visibility?

The Rise of Zero-Click: Why Rankings Don’t Equal Traffic Anymore

As search evolves, AI-powered summaries now appear in more than 13% of queries.

This resulted in nearly 60% of Google searches ending without a click last year, dramatically shrinking the traditional flow of search traffic to a website.

Not only are you fighting for space against the usual blue links; you’re also competing with AI-generated answers that package everything up before a user even considers a click.

Which means that “we made it to the top” moment doesn’t guarantee anyone actually sees your brand.

So, even if your brand earns a top ranking, it may never translate into visibility. That’s the reality of today’s zero-click environment, and it is what creates the awareness gap — a challenge that every marketer now has to solve.

What Is Zero Click?

A “zero-click” search happens when a user gets their answer directly on the search results page through featured snippets, knowledge panels, or AI-generated overviews without ever clicking through to a website.

For users, it’s fast and convenient. For brands, it means fewer chances for visitors to actually land on your site, even when you’ve earned a top ranking. Think of it as Google (and increasingly, AI) keeping people inside its own ecosystem rather than sending them out to explore yours.

This is where the awareness gap comes in.

What Is The Awareness Gap?

The awareness gap is the space in which your content is seen but not tied to your brand.

Even if your brand appears in these results, you may never see the traditional signals like traffic or time on site that prove your influence. People might recognize your name or absorb part of your story, but that exposure is not reflected in your metrics.

The gap is the difference between being seen and being measured, and closing it requires a new playbook for visibility and recall.

How Zero-Click Reshapes Discovery

The zero-click trend is most disruptive at the very start of the customer journey. Your website used to be Rome: all roads eventually led there. Now? Fewer and fewer organic roads exist. That means the earliest brand touchpoints are disappearing.

Here’s what that means for marketers today:

  • Fewer chances for discovery. If users never click, they never see your story. Everything that shapes early perception, such as your messaging, your visuals, and your value props, gets skipped.
  • SEO loses some steam. While organic optimization still matters for long-term discoverability (hello, LLMs absorbing and citing content), its ability to drive top-of-funnel awareness isn’t what it used to be. In a zero-click world, amazing content may rank but still never get seen.
  • Competition gets fiercer. If you’ve relied heavily on organic strategies alone, competitors who invest in paid ads are now likely to edge you out. Ads still sit above AI overviews in many results, and that’s prime real estate that’s hard to ignore.
  • Research shifts elsewhere. With crowded SERPs and often confusing AI answers, users are taking their research off traditional search platforms to other places. Social media, communities, and unowned channels are becoming important sources for educational content that feels clearer and more trustworthy.

Bottom line: the early doors to discovering your brand are closing faster than they’re opening. It takes a new mix of channels to ensure you’re still part of the conversation.

3 Steps to Reclaim Top-of-Funnel Presence

So what’s a marketer to do? Is all hope lost?

Show up where they are still landing: relevant active sites that deliver clear ad space to your target audience.

Advertising offers a direct and reliable solution to the awareness gap.

Unlike organic results, paid campaigns guarantee an immediate and prominent presence on SERPs and other digital platforms. That means eyeballs on your ads, even if a user doesn’t click on them.

Consider paid campaigns as a type of insurance policy against brand invisibility on the SERP.

Remember: early impressions = stronger recall later in the funnel. The power of showing up first cannot be overstated. Even if a user doesn’t click on your ad, the exposure to your name, logo, or key message fosters familiarity. Early recognition makes your brand more memorable when it comes time to convert.

Step 1: Implement An Awareness-Focused Advertising Strategy

If you’ve made it this far, you’re likely nodding along: zero-click is here, and advertising has to play a bigger role. But where do you start? The good news is you don’t need to overhaul everything overnight. Instead, think of paid as a strategic layer that enhances the visibility you’ve already worked hard to build organically.

Here’s the first step in making that shift in a way that feels purposeful, not scattered:

Leverage common queries

Run search and display ads tied to common zero-click queries. Many of the searches most impacted by zero-click are informational: “what is,” “how to,” and “why does” questions that rarely result in clicks. Instead of letting that traffic disappear into AI overviews, run search and display campaigns against these queries. Your brand may not get the click, but it will get the visibility, ensuring you stay part of the conversation even when Google is trying to keep people on the page.

Connect with tomorrow’s customers today. AdRoll makes brand awareness ads work for you. Get started with a demo.

Use what you already know

Build awareness campaigns in categories where your brand already shows up. If you’ve earned a featured snippet or knowledge panel, don’t leave it unsupported. Pair that organic placement with a targeted ad so your brand appears twice on the same page. This kind of overlap creates a halo effect: users perceive your brand as both authoritative and unavoidable. It’s one of the fastest ways to reinforce recall.

Enhance, don’t replace, SEO

Paid advertising isn’t a substitute for a strong organic presence; it’s an amplifier. Use ads to reinforce your authority and extend the reach of your organic work, not cover for it. Think of the two channels as partners: SEO earns you credibility, while ads guarantee visibility. Together, they create a more holistic visibility strategy that keeps you top of mind across formats and touchpoints. And don’t forget: LLMs and AI overviews are still learning from organic signals. If your content isn’t strong, your ads won’t carry the same weight.

At the end of the day, this isn’t about abandoning what has always worked. It’s about making sure your brand shows up where discovery is actually happening, whether that’s in a blue link, a snippet, or a sponsored placement.

Step 2: Measure Zero-Click Strategies The Right Way

Here’s the tricky part: in a zero-click world, traditional metrics don’t always tell the whole story. If you’re only watching organic traffic, it may look like your efforts are failing. But the reality is that influence is happening upstream, before a user ever lands on your site.

Here’s what to measure instead:

  • Branded search volume. If more people are searching for your brand name specifically, you know your awareness strategy is working. This is often the clearest leading indicator of recall.
  • Visibility share. Track how often your brand appears in SERPs, featured snippets, AI overviews, and paid placements, even if it doesn’t result in a click.
  • Impression lift. Ads may not drive immediate conversions, but consistent exposure increases recognition. Measuring impressions alongside recall surveys can help connect the dots.
  • Engagement on unowned channels. As research moves to social and communities, track where your educational content sparks conversations and shares outside of your own site.

The key is to shift from measuring traffic to measuring presence. Visibility in high-authority spaces, whether through organic or paid efforts, is the new top-of-funnel KPI.

Step 3: Connect The C-Suite To Zero-Click Strategies

Of course, metrics only matter if your leadership team understands them. However, many executives are still trained to see organic traffic as the gold standard. So when traffic dips, even for reasons outside your control, it can look like a problem.

This is where your role as translator becomes critical. You need to reframe the conversation from clicks to visibility, from pageviews to presence. The message to the C-suite should sound less like an apology and more like a strategic shift:

  • A decline in organic traffic doesn’t equal a decline in influence. Zero-click means users may never land on your site, but they’re still seeing your brand. Visibility is impact.
  • Your brand may actually be showing up more often. The problem is measurement, not presence. Snippets, AI overviews, and social conversations don’t show up in traffic charts, but they absolutely shape perception.
  • Advertising fills the gap. Paid campaigns guarantee your brand isn’t invisible at the exact moment prospects are forming their first impressions, making it the perfect complement to organic efforts.

The way to make this stick with leaders is through narrative. Show them that early impressions are building brand memory. Connect branded search growth to that recall. Paint the picture that what looks like “less traffic” is often “more visibility in new places.”

Executives care about competitive positioning and long-term growth, not just line graphs. So remind them: being the brand people remember when it’s time to buy is the real win. Presence is what creates that memory, and memory is what drives future pipeline.

Zero-Click Isn’t the End. It’s Your Advantage If You Move First

Zero-click isn’t the end of marketing as we know it. It’s just the latest evolution in how people discover and remember brands. The marketers who win will be the ones who adapt their strategies, blending organic authority with paid presence, reframing their KPIs, and helping their companies understand what visibility really means today.

The awareness gap is real, but it’s also an opportunity. By rethinking how you measure, how you communicate results, and how you show up at the top of the funnel, you can set your brand up to thrive in an environment where discovery no longer depends on a click.

And this is only Part 1. In Part 2, we’ll dig into the real secret weapon in a clickless world: recall. Because the brands that stay top of mind are the ones that get chosen later. Advertising’s biggest power isn’t in driving a click, it’s in building the kind of recognition that lasts.

Check back soon on the AdRoll website for Part 2: How to Build Recall in a Clickless World.

Image Credits

Featured Image: Image by AdRoll. Used with permission.

In-Post Images: Image by AdRoll. Used with permission.

This medical startup uses LLMs to run appointments and make diagnoses

Imagine this: You’ve been feeling unwell, so you call up your doctor’s office to make an appointment. To your surprise, they schedule you in for the next day. At the appointment, you aren’t rushed through describing your health concerns; instead, you have a full half hour to share your symptoms and worries and the exhaustive details of your health history with someone who listens attentively and asks thoughtful follow-up questions. You leave with a diagnosis, a treatment plan, and the sense that, for once, you’ve been able to discuss your health with the care that it merits.

The catch? You might not have spoken to a doctor, or other licensed medical practitioner, at all.

This is the new reality for patients at a small number of clinics in Southern California that are run by the medical startup Akido Labs. These patients—some of whom are on Medicaid—can access specialist appointments on short notice, a privilege typically only afforded to the wealthy few who patronize concierge clinics.

The key difference is that Akido patients spend relatively little time, or even no time at all, with their doctors. Instead, they see a medical assistant, who can lend a sympathetic ear but has limited clinical training. The job of formulating diagnoses and concocting a treatment plan is done by a proprietary, LLM-based system called ScopeAI that transcribes and analyzes the dialogue between patient and assistant. A doctor then approves, or corrects, the AI system’s recommendations.

“Our focus is really on what we can do to pull the doctor out of the visit,” says Jared Goodner, Akido’s CTO. 

According to Prashant Samant, Akido’s CEO, this approach allows doctors to see four to five times as many patients as they could previously. There’s good reason to want doctors to be much more productive. Americans are getting older and sicker, and many struggle to access adequate health care. The pending 15% reduction in federal funding for Medicaid will only make the situation worse.

But experts aren’t convinced that displacing so much of the cognitive work of medicine onto AI is the right way to remedy the doctor shortage. There’s a big gap in expertise between doctors and AI-enhanced medical assistants, says Emma Pierson, a computer scientist at UC Berkeley.  Jumping such a gap may introduce risks. “I am broadly excited about the potential of AI to expand access to medical expertise,” she says. “It’s just not obvious to me that this particular way is the way to do it.”

AI is already everywhere in medicine. Computer vision tools identify cancers during preventive scans, automated research systems allow doctors to quickly sort through the medical literature, and LLM-powered medical scribes can take appointment notes on a clinician’s behalf. But these systems are designed to support doctors as they go about their typical medical routines.

What distinguishes ScopeAI, Goodner says, is its ability to independently complete the cognitive tasks that constitute a medical visit, from eliciting a patient’s medical history to coming up with a list of potential diagnoses to identifying the most likely diagnosis and proposing appropriate next steps.

Under the hood, ScopeAI is a set of large language models, each of which can perform a specific step in the visit, from generating appropriate follow-up questions based on what a patient has said to populating a list of likely conditions. For the most part, these LLMs are fine-tuned versions of Meta’s open-access Llama models, though Goodner says that the system also makes use of Anthropic’s Claude models.

During the appointment, assistants read off questions from the ScopeAI interface, and ScopeAI produces new questions as it analyzes what the patient says. For the doctors who will review its outputs later, ScopeAI produces a concise note that includes a summary of the patient’s visit, the most likely diagnosis, two or three alternative diagnoses, and recommended next steps, such as referrals or prescriptions. It also lists a justification for each diagnosis and recommendation.

ScopeAI is currently being used in cardiology, endocrinology, and primary care clinics and by Akido’s street medicine team, which serves the Los Angeles homeless population. That team—which is led by Steven Hochman, a doctor who specializes in addiction medicine—meets patients out in the community to help them access medical care, including treatment for substance use disorders. 

Previously, in order to prescribe a drug to treat an opioid addiction, Hochman would have to meet the patient in person; now, caseworkers armed with ScopeAI can interview patients on their own, and Hochman can approve or reject the system’s recommendations later. “It allows me to be in 10 places at once,” he says.

Since they started using ScopeAI, the team has been able to get patients access to medications to help treat their substance use within 24 hours—something that Hochman calls “unheard of.”

This arrangement is only possible because homeless patients typically get their health insurance from Medicaid, the public insurance system for low-income Americans. While Medicaid allows doctors to approve ScopeAI prescriptions and treatment plans asynchronously, both for street medicine and clinic visits, many other insurance providers require that doctors speak directly with patients before approving those recommendations. Pierson says that discrepancy raises concerns. “You worry about that exacerbating health disparities,” she says.

Samant is aware of the appearance of inequity, and he says the discrepancy isn’t intentional—it’s just a feature of how the insurance plans currently work. He also notes that being seen quickly by an AI-enhanced medical assistant may be better than dealing with long wait times and limited provider availability, which is the status quo for Medicaid patients. And all Akido patients can opt for traditional doctor’s appointments, if they are willing to wait for them, he says.

Part of the challenge of deploying a tool like ScopeAI is navigating a regulatory and insurance landscape that wasn’t designed for AI systems that can independently direct medical appointments. Glenn Cohen, a professor at Harvard Law School, says that any AI system that effectively acts as a “doctor in a box” would likely need to be approved by the FDA and could run afoul of medical licensure laws, which dictate that only doctors and other licensed professionals can practice medicine.

The California Medical Practice Act says that AI can’t replace a doctor’s responsibility to diagnose and treat a patient, but doctors are allowed to use AI in their work, and they don’t need to see patients in person or in real time before diagnosing them. Neither the FDA nor the Medical Board of California was able to say whether ScopeAI is on solid legal footing based only on a written description of the system.

But Samant is confident that Akido is in compliance, as ScopeAI was intentionally designed to fall short of being a “doctor in a box.” Because the system requires a human doctor to review and approve all of its diagnostic and treatment recommendations, he says, it doesn’t require FDA approval.

At the clinic, this delicate balance between AI and doctor decision making happens entirely behind the scenes. Patients don’t ever see the ScopeAI interface directly—instead, they speak with a medical assistant who asks questions in the way that a doctor might in a typical appointment. That arrangement might make patients feel more comfortable. But Zeke Emanuel, a professor of medical ethics and health policy at the University of Pennsylvania who served in the Obama and Biden administrations, worries that this comfort could be obscuring from patients the extent to which an algorithm is influencing their care.

Pierson agrees. “That certainly isn’t really what was traditionally meant by the human touch in medicine,” she says.

DeAndre Siringoringo, a medical assistant who works at Akido’s cardiology office in Rancho Cucamonga, says that while he tells the patients he works with that an AI system will be listening to the appointment in order to gather information for their doctor, he doesn’t inform them about the specifics of how ScopeAI works, including the fact that it makes diagnostic recommendations to doctors. 

Because all ScopeAI recommendations are reviewed by a doctor, that might not seem like such a big deal—it’s the doctor who makes the final diagnosis, not the AI. But it’s been widely documented that doctors using AI systems tend to go along with the system’s recommendations more often than they should, a phenomenon known as automation bias. 

At this point, it’s impossible to know whether automation bias is affecting doctors’ decisions at Akido clinics, though Pierson says it’s a risk—especially when doctors aren’t physically present for appointments. “I worry that it might predispose you to sort of nodding along in a way that you might not if you were actually in the room watching this happen,” she says.

An Akido spokesperson says that automation bias is a valid concern for any AI tool that assists a doctor’s decision-making and that the company has made efforts to mitigate that bias. “We designed ScopeAI specifically to reduce bias by proactively countering blind spots that can influence medical decisions, which historically lean heavily on physician intuition and personal experience,” she says. “We also train physicians explicitly on how to use ScopeAI thoughtfully, so they retain accountability and avoid over-reliance.”

Akido evaluates ScopeAI’s performance by testing it on historical data and monitoring how often doctors correct its recommendations; those corrections are also used to further train the underlying models. Before deploying ScopeAI in a given specialty, Akido ensures that when tested on historical data sets, the system includes the correct diagnosis in its top three recommendations at least 92% of the time.

But Akido hasn’t undertaken more rigorous testing, such as studies that compare ScopeAI appointments with traditional in-person or telehealth appointments, in order to determine whether the system improves—or at least maintains—patient outcomes. Such a study could help indicate whether automation bias is a meaningful concern.

“Making medical care cheaper and more accessible is a laudable goal,” Pierson says. “But I just think it’s important to conduct strong evaluations comparing to that baseline.”

An oil and gas giant signed a $1 billion deal with Commonwealth Fusion Systems

Eni, one of the world’s largest oil and gas companies, just agreed to buy $1 billion in electricity from a power plant being built by Commonwealth Fusion Systems. The deal is the latest to illustrate just how much investment Commonwealth and other fusion companies are courting as they attempt to take fusion power from the lab to the power grid. 

“This is showing in concrete terms that people that use large amounts of energy, that know the energy market—they want fusion power, and they’re willing to contract for it and to pay for it,” said Bob Mumgaard, cofounder and CEO of Commonwealth, on a press call about the deal.   

The agreement will see Eni purchase electricity from Commonwealth’s first commercial fusion power plant, in Virginia. The facility is still in the planning stages but is scheduled to come online in the early 2030s.

The news comes a few weeks after Commonwealth announced an $863 million funding round, bringing its total funding raised to date to nearly $3 billion. The fusion company also announced earlier this year that Google would be its first commercial power customer for the Virginia plant.

Commonwealth, a spinout from MIT’s Plasma Science and Fusion Center, is widely considered one of the leading companies in fusion power. Investment in the company represents nearly one-third of the total global investment in private fusion companies. (MIT Technology Review is owned by MIT but is editorially independent.)

Eni has invested in Commonwealth since 2018 and participated in the latest fundraising round. The vast majority of the company’s business is in oil and gas, but in recent years it’s made investments in technologies like biofuels and renewables.

“A company like us—we cannot stay and wait for things to happen,” says Lorenzo Fiorillo, Eni’s director of technology, research and development, and digital. 

One open question is what, exactly, Eni plans to do with this electricity. When asked about it on the press call, Fiorillo referenced wind and solar plants that Eni owns and said the plan “is not different from what we do in other areas in the US and the world.” (Eni sells electricity from power plants that it owns, including renewable and fossil-fuel plants.)

Commonwealth is building tokamak fusion reactors that use superconducting magnets to hold plasma in place. That plasma is where fusion reactions happen, forcing hydrogen atoms together to release large amounts of energy.

The company’s first demonstration reactor, which it calls Sparc, is over 65% complete, and the team is testing components and assembling them. The plan is for the reactor, which is located outside Boston, to make plasma within two years and then demonstrate that it can generate more energy than is required to run it.

While Sparc is still under construction, Commonwealth is working on plans for Arc, its first commercial power plant. That facility should begin construction in 2027 or 2028 and generate electricity for the grid in the early 2030s, says Bob Mumgaard, Commonwealth's cofounder and CEO.

Despite the billions of dollars Commonwealth has already raised, the company still needs more money to build its Arc power plant—that will be a multibillion-dollar project, Mumgaard said on a press call in August about the company’s latest fundraising round. 

The latest commitment from Eni could help Commonwealth secure the funding it needs to get Arc built. “These agreements are a really good way to create the right environment for building up more investment,” says Paul Wilson, chair of the department of nuclear engineering and engineering physics at the University of Wisconsin, Madison.

Even though commercial fusion energy is still years away at a minimum, investors and big tech companies have pumped money into the industry and signed agreements to buy power from plants once they’re operational. 

Helion, another leading fusion startup, has plans to produce electricity from its first reactor in 2028 (an aggressive timeline that has some experts expressing skepticism). That facility will have a full generating capacity of 50 megawatts, and in 2023 Microsoft signed an agreement to purchase energy from the facility in order to help power its data centers.

As billions of dollars pour into the fusion industry, there are still many milestones ahead. To date, only the National Ignition Facility at Lawrence Livermore National Laboratory has demonstrated that a fusion reactor can generate more energy than the amount put into the reaction. No commercial project has achieved that yet. 

“There’s a lot of capital going out now to these startup companies,” says Ed Morse, a professor of nuclear engineering at the University of California, Berkeley. “What I’m not seeing is a peer-reviewed scientific article that makes me feel like, boy, we really turned the corner with the physics.”

But others are taking major commercial deals from Commonwealth and others as reasons to be optimistic. “Fusion is moving from the lab to be a proper industry,” says Sehila Gonzalez de Vicente, global director of fusion energy at the nonprofit Clean Air Task Force. “This is very good for the whole sector to be perceived as a real source of energy.”

The Download: the LLM will see you now, and a new fusion power deal

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

This medical startup uses LLMs to run appointments and make diagnoses

Patients at a small number of clinics in Southern California run by the medical startup Akido Labs are spending relatively little time, or even no time at all, with their doctors. Instead, they see a medical assistant, who can lend a sympathetic ear but has limited clinical training.

The job of formulating diagnoses and concocting a treatment plan is done by an LLM-based system called ScopeAI that transcribes and analyzes the dialogue between patient and assistant. A doctor then approves, or corrects, the AI system’s recommendations.

According to Akido’s CEO, this approach allows doctors to see four to five times as many patients as they could previously. But experts aren’t convinced that displacing so much of the cognitive work of medicine onto AI is the right way to remedy the doctor shortage. Read the full story.

—Grace Huckins

An oil and gas giant signed a $1 billion deal with Commonwealth Fusion Systems

Eni, one of the world’s largest oil and gas companies, just agreed to buy $1 billion in electricity from a power plant being built by Commonwealth Fusion Systems. The deal is the latest to illustrate just how much investment Commonwealth and other fusion companies are courting as they attempt to take fusion power from the lab to the power grid.

The agreement will see Eni purchase electricity from Commonwealth’s first commercial fusion power plant, in Virginia. The facility is still in the planning stages but is scheduled to come online in the early 2030s. Read the full story.

—Casey Crownhart

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Trump officials are expected to link Tylenol to autism
They’re also likely to tout a lesser-known drug called leucovorin as a potential treatment. (WP $)
+ They’ll warn women in the early stages of pregnancy that they should only take Tylenol to treat high fevers. (Politico)
+ But a huge study found no connection last year. (Axios)

2 Trump wants to charge skilled foreign workers $100,000 for H-1B visas
The decision is highly likely to harm US growth, especially in its tech sector. (The Guardian)
+ The visa has been a lifeline for hundreds of thousands of tech workers. (BBC)
+ Indian outsourcing companies are struggling to pivot. (Bloomberg $)
+ Tech firms are sending memos to their workers on the visa. (Insider $)

3 The European Commission wants to ax cookie consent banners
A 2009 law triggered an influx of pesky pop-ups that the EU now wants to get rid of. (Politico)

4 The Murdochs and Michael Dell are among TikTok’s potential buyers
The media mogul family and Dell founder are interested in shares, Trump says. (CNN)

5 Inside China’s plan to put its data centers to work
A mega-cluster of centers is springing up in the city of Wuhu. (FT $)
+ China built hundreds of AI data centers to catch the AI boom. Now many stand unused. (MIT Technology Review)

6 Seattle’s tech scene is in trouble
When its biggest firms slash their workforces, where does that leave everyone else? (WSJ $)

7 Innocent people are being scammed into scamming
Chinese gangs are imprisoning trafficking victims in compounds on the Myanmar-Thai border. (Reuters)
+ Inside a romance scam compound—and how people get tricked into being there. (MIT Technology Review)

8 Europe’s reusable rocket dream isn’t entirely dead
But progress has been a lot slower than it should be. (Ars Technica)
+ Elon Musk’s utter dominance of space tech is hard to overestimate. (Wired $)
+ Europe is finally getting serious about commercial rockets. (MIT Technology Review)

9 How ChatGPT fares as a financial stock picker
Be prepared to roll the dice. (Fast Company $)

10 Silicon Valley is ditching dating apps
And turning to elite matchmakers instead. (The Information $)

Quote of the day

“I didn’t sleep all night. I kept thinking: What if I get stuck outside the US?”

—Akaash Hazarika, a Salesforce engineer, tells Insider he was forced to cut his vacation to Toronto short and rush back to America after the Trump administration announced changes to the H-1B skilled foreign worker visa.

One more thing

The quest to figure out farming on Mars

Once upon a time, water flowed across the surface of Mars. Waves lapped against shorelines, strong winds gusted and howled, and driving rain fell from thick, cloudy skies. It wasn’t really so different from our own planet 4 billion years ago, except for one crucial detail—its size. Mars is about half the diameter of Earth, and that’s where things went wrong.

The Martian core cooled quickly, soon leaving the planet without a magnetic field. This, in turn, left it vulnerable to the solar wind, which swept away much of its atmosphere. Without a critical shield from the sun’s ultraviolet rays, Mars could not retain its heat. Some of the oceans evaporated, and the subsurface absorbed the rest, with only a bit of water left behind and frozen at its poles. If ever a blade of grass grew on Mars, those days are over.

But could they begin again? And what would it take to grow plants to feed future astronauts on Mars? Read the full story.

—David W. Brown

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ These abandoned blogs are a relic of the bygone internet (bring them back!)
+ How to strengthen your bond with your reluctant cat 😾
+ How Metal Gear Solid inspired the video to one of the greatest hits of the late 90s.
+ If I had to explain British culture to someone, I’d just send them this video.

Recover ChatGPT 404 Traffic with GA4

ChatGPT often links to sources when answering prompts. In my testing, traffic from those clicks typically converts well. Unfortunately, ChatGPT frequently hallucinates URLs and sends visitors to nonexistent pages.

A study released this month by Ahrefs found ChatGPT 5 links to error pages nearly three times as often as Google Search does.

To be sure, ChatGPT accounts for less than 5% of traffic for most sites thus far. But it's still worth monitoring ChatGPT-generated 404 errors and addressing the affected pages. With Google Search traffic declining, "saving" those visits is paramount.

Address the problem in three steps:

  1. Track 404 “page not found” URLs in Google Analytics 4.
  2. Create helpful 404 pages for visitors from hallucinated URLs.
  3. Set up 301 redirects only for broken URLs that generate traffic.

I’ll explain the first step in this article.

Track in Google Analytics

Filter Google Analytics reports to URLs with traffic from ChatGPT:

  • Go to “Engagement” > “Pages and screens” to view all pages with traffic for the designated period.
  • Select “Page title and screen class” above the list of pages.
  • Click “Add filter” above the graph.
  • Select “Session source/medium” as the dimension.
  • Select “Contains” and type “ChatGPT.”
  • Click “Apply.”
Screenshot of the dimension filter interface in Google Analytics.

Now your list is filtered to pages with traffic from ChatGPT.
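The same filter can also be reproduced programmatically. As a rough sketch (not part of GA4 itself), if you export the report to CSV, a few lines of Python apply the "Session source/medium contains ChatGPT" logic; the column names are assumptions about your export, so adjust them to match yours:

```python
import csv
import io

def chatgpt_rows(csv_text):
    """Return GA4 report rows whose session source/medium mentions ChatGPT.

    Assumes a CSV export with a "Session source/medium" column (an
    assumption about the export format); mirrors the UI filter
    "Session source/medium contains ChatGPT."
    """
    return [
        row for row in csv.DictReader(io.StringIO(csv_text))
        if "chatgpt" in row.get("Session source/medium", "").lower()
    ]
```

This is handy if you want to re-run the check on a schedule rather than revisiting the report manually.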

Next, narrow the list to error pages:

  • Go to your site and open all the ChatGPT-filtered pages above.
  • Note the title of pages with 404 errors (pressing Ctrl+D on Windows or Command+D on Mac opens the bookmark dialog, which displays the page title). In my case, the title was “404 Response Error Page.”

Then return to Google Analytics:

  • Type the error page title in the search bar above the list of pages with traffic from ChatGPT. Add “Page path and screen class” as a secondary dimension to view the hallucinated URLs.
  • Bookmark the URL of this report and check it from time to time.

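With the hallucinated URLs identified, step 3 (301 redirects for broken URLs that receive traffic) can be approached programmatically. The sketch below is one possible tactic, not part of the workflow above: Python's standard-library difflib matches each broken path against your real URLs, and only a confident match becomes a redirect candidate. The page list here is hypothetical.

```python
from difflib import get_close_matches

# Hypothetical list of real URLs on your site.
REAL_PAGES = [
    "/blog/recover-chatgpt-404-traffic",
    "/blog/ga4-filters-guide",
    "/services/seo-audit",
]

def suggest_redirect(broken_path, cutoff=0.6):
    """Map a hallucinated URL path to the closest real page.

    Returns None when nothing is similar enough to redirect with
    confidence; those visitors should land on the helpful 404 page
    from step 2 instead.
    """
    matches = get_close_matches(broken_path, REAL_PAGES, n=1, cutoff=cutoff)
    return matches[0] if matches else None
```

A returned match could then be written out as a 301 rule in your server configuration; a None result means no sufficiently similar page exists, and the helpful 404 page does its job.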

Agentic AI In SEO: AI Agents & The Future Of Content Strategy (Part 3) via @sejournal, @VincentTerrasi

For years, the SEO equation appeared fixed: optimize for Googlebot on one side, create content for human users on the other. That binary vision is now a thing of the past.

A new generation of actors is reshaping the online visibility landscape. AI agents such as ChatGPT, Perplexity, Claude, and Gemini are no longer merely processing information; they are exploring, synthesizing, choosing which sources to cite, and significantly influencing traffic flows.

For those who are skeptical about the impact of AI agents, I would invite you to consider the concept of Zero Moment of Truth (ZMOT), which was developed by Google over 10 years ago. The principle is straightforward: Prior to any purchase, consumers undertake an extensive research phase. They consult customer reviews, compare across different sites, scrutinize social networks, accumulate information sources, and now use their favorite AIs for final validation.

A New Paradigm

We are currently experiencing a fundamental reconfiguration of the digital ecosystem. In the past, we identified two or three main engines; now a new paradigm is emerging.

Google continues to be a leading search engine, utilizing sophisticated algorithms to index and rank content. Humans act as a virality engine, sharing and amplifying information via their social networks and interactions.

It is becoming increasingly apparent that AI agents are assuming the role of an autonomous traffic engine. These intelligent systems are capable of navigating information independently, establishing their own selection criteria, and directing users to sources they deem relevant.

This transformation necessitates a wholly new approach to content creation. Below, I share concepts and case studies that have been successfully implemented with several major accounts.

Agentic SEO

Quick reminder following my two previous articles on the subject: “Agentic AI In SEO: AI Agents & Workflows For Ideation (Part 1)” and “Agentic AI In SEO: AI Agents & Workflows For Audit (Part 2).”

Agentic SEO involves the creation of structured and dynamic content that is designed to appeal not only to Google, but also to conversational AIs.

The approach to content generation is founded on three key pillars:

1. Data Enrichment: Schema.org markup, microformats, and semantic tags are becoming important because, as grounding data, they facilitate understanding and information extraction by language models.

2. Content Modularity: Concise and “chunkable” responses are perfectly suited to Retrieval-Augmented Generation (RAG) ingestion processes utilized by these agents. Content should be designed using autonomous and reusable blocks.

3. Polymorphism: Each page can offer variants adapted according to the type of agent consulting it. It is essential to recognize that the needs of a shopping agent differ from those of a medical agent, and content must adapt accordingly.

Image from author, September 2025
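To make the first pillar concrete, here is a minimal sketch of emitting Schema.org Product markup as JSON-LD, the kind of grounding data language models can extract reliably. The product values are invented for illustration:

```python
import json

def product_jsonld(name, price, currency="EUR"):
    """Build a minimal Schema.org Product block as a JSON-LD string,
    ready to embed in a page via <script type="application/ld+json">.

    The field values passed in are illustrative, not real products.
    """
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "Product",
        "name": name,
        "offers": {
            "@type": "Offer",
            "price": str(price),
            "priceCurrency": currency,
        },
    }, indent=2)
```

Richer blocks (reviews, availability, brand) follow the same pattern; the point is that structured fields give an agent unambiguous facts to ground on.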

If your content isn’t optimized for AI agents, you’re already experiencing considerable strategic lag.

However, if your site is optimized for SEO, you’ve already taken a significant step forward.

The Foundations: Generative SEO And Edge SEO

To understand this evolution, it is important to consider the concepts that have prepared the ground: generative SEO and Edge SEO.

Generative SEO

Generative SEO uses language models to create substantial and insightful content, automating production while ensuring relevance and quality.

Generative SEO has always existed in primitive forms, such as content spinning and all derived techniques. In today’s digital landscape, we are witnessing a paradigm shift towards unparalleled quality, as evidenced by the preponderance of AI-generated or co-written content across various social networks, including LinkedIn.

Edge SEO

Edge SEO leverages CDN or proxy-side deployment capabilities to reduce deployment latency and enable large-scale content testing from both content and performance perspectives.

These two approaches are naturally complementary, but they still represent a 1.0 vision of automated SEO: traditional A/B testing, and freezing content once generation is complete, limit the potential of the approach.

The true revolution lies in the adoption of dynamic and adaptive systems that surpass these limitations.

Agentic Edge SEO

Edge SEO had already revolutionized the very notion of static content. The system now has the capability to modify content in real-time according to the following three variables:

  • First, user intention is detected and used to guide content adaptation: the system analyzes behavioral signals to adjust the message in real time.
  • Next, SERP seasonality drives modifications: when Google prioritizes certain trends on a given query, content automatically adapts to capitalize on them.
  • Finally, instant technical optimizations triggered by Core Web Vitals signals ensure that performance is maintained.
Image from author, September 2025

Let us consider a product page as a case study. If Google highlights “sustainable” or “economical” trends for a particular search, this page automatically adapts its titles, metadata, and visuals to align with these market signals.

At Draft&Goal, we have developed connectors with the Fasterize tool to facilitate the deployment of AI workflows. These workflows are compatible with all the most recent proprietary or open-source LLMs.

We anticipate that in the future, the system will continuously test these variants with search engines and users, collecting performance data in near real-time.

The algorithm then selects the most effective version, in terms of click-through rate (CTR), positioning, and conversion, and results are continually optimized.

For example, imagine a “Running Shoes” landing page, existing in seven distinct versions, each oriented towards a specific angle: price, performance, comfort, ecology, style, durability, or innovation. The polymorphic system automatically highlights the most effective variant according to signals sent by Google and user behaviors.
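One way to reduce this selection logic to code is a simple epsilon-greedy loop, sketched below. This is my illustrative simplification, not an actual implementation from Draft&Goal: each variant accumulates an observed CTR, the best performer is usually served, and a small fraction of traffic keeps exploring the others.

```python
import random

# The seven angles from the "Running Shoes" example.
VARIANTS = ["price", "performance", "comfort", "ecology",
            "style", "durability", "innovation"]

def pick_variant(ctr_by_variant, epsilon=0.1, rng=random.random):
    """Epsilon-greedy selection: usually serve the variant with the
    best observed CTR, but explore a random variant epsilon of the
    time so new angles still get measured."""
    if rng() < epsilon:
        return random.choice(VARIANTS)
    # Variants with no data yet default to a CTR of 0.0.
    return max(VARIANTS, key=lambda v: ctr_by_variant.get(v, 0.0))
```

Real systems would weigh positioning and conversion alongside CTR, but the core loop of serving, measuring, and re-selecting is the same.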

Three Concrete Applications

These concepts are immediately applicable to several strategic sectors. Here are three examples currently under active testing.

In ecommerce, product pages are self-evolving. These systems adapt to search trends, available stock, and detected behavioral preferences.

1. To illustrate this point, consider a peer-to-peer car rental platform that manages 20,000 city pages.

Each page automatically adapts according to Google signals and local user patterns. During the summer months, the “Car rental Nice” page automatically prioritizes convertibles and highlights family testimonials. During the winter season, the fleet is transitioned to 4×4 vehicles, with a focus on optimizing the “mountain car rental” service.

2. Another example of technological innovation in the media industry is the ability of major news outlets to deploy “living” articles.

These articles are automatically updated to include the latest breaking news, ensuring that content remains fresh and relevant without the need for human editorial intervention. Content creation remains the work of human professionals, with AI playing a supportive role in keeping it current.

3. Finally, a promo-codes website successfully manages 3,000 merchant pages, which adapt in real time to commercial cycles and breaking deals.

When Amazon announces Prime Day, contextual banners and countdown timers are automatically added across all related pages. The system monitors partner APIs to detect new offers and instantly generates optimized content. Three weeks before Black Friday, “Zalando promo codes” pages automatically integrate dedicated sections and restructure their keywords.

Toward A New Era Of SEO

The future of SEO lies in publishing dynamic content that can adapt to the ever-changing algorithms of Google’s index. This transformation requires a fundamental paradigm shift, and many SEO agencies we support have already made the switch.

Marketing experts must abandon the “page” logic to adopt that of “adaptive systems.” This transition necessitates the acquisition of new tools and skills, as well as a re-evaluation of our strategic vision.

It is important to note that Agentic SEO is not merely a passing trend; it is the necessary response to an ecosystem undergoing profound mutation. Organizations that master these concepts will gain a significant competitive advantage in tomorrow’s attention economy.


How AI Mode Will Redefine Paid Search Advertising via @sejournal, @brookeosmundson

Search has always been a moving target.

From the days when keyword match types and manual cost-per-click (CPC) bidding gave advertisers a sense of control, to the rise of Shopping ads, automated bidding, and Performance Max, Google has never stopped reshaping how search works.

Every step has chipped away at some level of control for marketers while making it easier for Google to monetize intent.

But what we’re seeing now with AI Overviews and AI Mode is not just another product update. It is a structural rewrite of how search itself functions, which has some serious implications for paid ads.

Instead of sending people to a list of blue links, Google is using AI to generate answers and guide users through multi-step, conversational journeys. Ads are being pulled directly into these experiences, sometimes above or below AI summaries, other times embedded right inside them.

Google calls this a way to “shorten the path from discovery to decision.” For advertisers, it means budgets are being funneled into surfaces that look and act very different from the SERPs we’ve optimized around for years.

The stakes are clear: If fewer people click through to websites, advertisers face tighter competition for attention, rising CPCs, brand safety concerns, and limited transparency into where money is going.

Marketing leaders can’t afford to treat AI Mode as a side experiment. This is the future of Google search, and your ads will either adapt to it or be left behind.

Google’s AI Search Vision And Ad Strategy

Google has been explicit about where it wants to go. At Google Marketing Live 2025, executives described AI Overviews as “one of the most successful launches in Search in the past decade,” citing increases in commercial queries in markets like the U.S. and India.

AI Mode builds on that success by creating a conversational environment where users can refine, compare, and act without returning to the static list of links that defined Google for 20 years.

The company frames this as a win-win: Users get answers more efficiently, and advertisers get placements where intent is clearer and actions are closer at hand.

Google explains that ads are pulled seamlessly into these surfaces from Search, Shopping, Performance Max, and App campaigns.

For the user, the ad is “just part of the journey.” For the advertiser, there is no opting out, no special campaign type, and no reporting that shows which impressions or clicks came from AI Mode versus traditional search.

This approach is not new. Every major change to Google’s results has tilted the balance toward monetization.

Shopping ads once displaced text ads. Featured Snippets and the Knowledge Graph began answering questions directly, cutting down on organic clicks. Performance Max combined inventory into a single system, obscuring where impressions were served.

AI Mode is the culmination of these shifts: Ads are not just on the page; they are woven into the answers themselves.

Competition is another driver. Microsoft has already integrated ads into Copilot. OpenAI is experimenting with sponsored results in ChatGPT. Perplexity, the AI search upstart, has raised millions while building advertiser interest in native placements.

Google cannot afford to sit back while others monetize AI-first search. Ads inside AI Mode aren’t an experiment; they’re an existential business necessity.

Industry experts see this direction clearly. Cindy Krum of MobileMoxie has argued that Google is merging AI Overviews, Discover, and conversational flows into a single journey-first system. She believes ads will become highly targeted to users within that journey.

Krum further explained her opinion of Google’s intention for Ads in AI Mode:

You’ll have to be logged in to access AI Mode and when you’re logged in, they [Google] can collect all kinds of behavioral data and serve you incredibly personalized ads—ones you’re actually likely to click and convert on. That’s valuable to advertisers. Google can say, “We only show your ads to people who will convert.”

What I find concerning, though, is that advertisers are being asked to play along without the transparency they need to measure value. Seamless for users often means opaque for marketers, and this transition is no exception.

How AI Mode Changes User Behavior And Why It Matters For Ads

It’s easy to assume AI Mode is just another SERP redesign. But the data suggests it is changing how users behave, and those changes have direct implications for paid ads.

According to Pew Research, when an AI Overview appears:

  • Only 8% of visits result in clicks on traditional results, compared to 15% when no overview is present.
  • Only about 1% of visits include clicks on the links embedded inside the AI box.

Similarweb has tracked a sharp rise in zero-click searches, reaching nearly 70% of all queries by mid-2025, up from 56% the year before.

Authoritas found that in news-related queries, traffic to a top-ranking result dropped by almost 80% when an AI Overview appeared above it.

For advertisers, the math is simple.

  • If fewer people leave Google, the competition for the remaining clicks intensifies.
  • CPCs rise because ad real estate is scarcer.
  • Campaign budgets have to stretch further just to maintain the same level of visibility.
  • Organic traffic has always acted as a counterweight to paid spend.
  • If that counterweight shrinks, paid budgets take on more pressure.

The effects differ by vertical. Ecommerce and travel sometimes see AI summaries spark more exploration of products, which can benefit Shopping ads.

Finance and insurance face mixed outcomes. Simplified comparisons may increase clicks in some cases but reduce brand-specific exposure in others.

News, health, and publishers are hit hardest, with traffic losses so steep that paid ads often become the only reliable way to reach audiences at scale.

Industry experts have not been shy about voicing their concerns.

Lily Ray, SEO director at Amsive, expressed her view after click-through rate data came out on AI Overviews:

“It was only a matter of time before new data & studies started to contradict Google’s messaging around the impact of AIOs on traffic.”

Rand Fishkin of SparkToro has been even more blunt:

“Zero click is taking over everything. Google is trying to answer searches without clicks. Facebook is trying to keep people on Facebook. LinkedIn wants to keep people on LinkedIn.”

I share that unease. This is a classic supply-and-demand problem. As free clicks shrink, advertisers will be forced to compete harder and pay more. Google benefits from this compression; advertisers absorb the costs.

Marketing leaders should stop treating this as a temporary adjustment. CPC inflation is becoming a structural reality of AI-powered search.

Ads Inside AI Journeys: Auctions, Costs, And Creative Implications

Google’s marketing spin around AI Mode is that ads are “a logical and natural next action to consumers exploring any topic.” That might be true from a user perspective, but from an advertiser’s perspective, the auction mechanics have changed in ways that deserve scrutiny.

Ads in AI Mode are not a distinct product. They are pulled from Search, Shopping, Performance Max, and App campaigns.

That means the inventory is blended, and advertisers don’t know whether impressions came from a standard SERP or an AI-generated summary.

Larger brands with broad match strategies, comprehensive product feeds, and robust budgets will have the advantage. Smaller or more niche advertisers risk being squeezed out, not because of poor strategy, but because the system is designed to privilege scale.

This dynamic almost guarantees CPC pressure. We saw the same thing when Shopping ads rose to prominence a decade ago.

As more real estate was given to paid placements, the remaining inventory became more competitive, and CPCs rose for the survivors. AI Mode is likely to trigger a similar cycle: fewer outbound clicks, fiercer bidding, higher costs.

Google is also testing outcome-based formats that push this further. For example, in the retail vertical, early experiments allow users to use virtual try-on or track prices without ever leaving the AI journey.

By embedding ads as actions, Google can move from CPC toward cost-per-action pricing.

Fred Vallaeys of Optmyzr stated:

I have no doubt that Google and other ad platforms will find ways to appropriately monetize these advertising opportunities, even if there will be fewer impressions for each consumer journey.

He sees a potential upside for advertisers. I agree, but only if advertisers can prove that the actions driven inside AI Mode are incremental, not cannibalized from existing campaigns.

Creative expectations are also shifting. Conversational journeys demand conversational ads.

A blunt “Sign up today” may feel jarring inside a multi-step dialogue. Phrasing like “Find the right plan for your family” or “See how much you could save in minutes” fits better into the AI-driven flow.

I see opportunity here, but also risk. AI Mode could deliver more relevant matches between ad and intent. But without transparency into where ads appear and how they perform, advertisers are bidding blind. Google will extract more value from each interaction. Whether advertisers see the same value in return is far less certain.

The Transparency And Measurement Gap Of AI Mode

Perhaps the most glaring problem with AI Mode is measurement. Advertisers cannot see how their ads perform specifically in AI Overviews or AI Mode.

There is no column in Google Ads. Search Console offers no separate reporting. All performance is collapsed into existing campaigns.

This is more than a technical gap. For CMOs and CFOs, modeled attribution is not enough. Boards want to know where money is going and what it is producing.

If ad spend is being redirected into AI surfaces but not disclosed separately, how can leaders defend their budgets?

We’ve seen this before. Performance Max launched with almost no reporting visibility. Advertisers pushed back, and Google eventually provided more insights.

Transparency tends to lag product launches, but history suggests it comes only after sustained pressure from advertisers and agencies.

In the meantime, marketers have to fill the gap themselves. Some are building marketing mix models to estimate AI’s contribution. Others are connecting CRM systems more tightly to campaign spend.

Tracking mid-funnel events like demos or downloads is also becoming essential, since these signals often reveal whether AI-driven impressions are assisting conversion paths.

Modeled attribution can provide directional value, but it cannot replace true visibility.

Until Google surfaces AI-specific reporting, marketers should approach performance claims with skepticism and invest in their own measurement frameworks to avoid flying blind.

The Brand Safety And Trust Challenge With AI Overviews

AI Overviews have already produced embarrassing results, suggesting people put glue on pizza or eat rocks.

Google has since upgraded its models, grounding them in Gemini 2.5 and using “query fan-out” to cross-check responses. Accuracy has improved, but hallucinations still occur.

For advertisers, the risk goes beyond bad answers. It’s about adjacency. If your brand’s ad appears alongside a flawed or misleading AI-generated response, the reputational fallout could be significant.

This is a new kind of brand safety risk for search. In Display, adjacency concerns are expected. In search, ads have traditionally been “safe.” AI Mode changes that equation.

Regulators are also paying attention. The FTC and DOJ have already scrutinized Google’s dominance in search advertising.

If AI-driven ads blur the line between editorial and commercial, new antitrust challenges are possible. In Europe, the AI Act may impose stricter standards for how AI-generated content and ads are labeled.

Avoiding AI surfaces altogether isn’t realistic. The opportunity is too large. But brands must prepare frameworks to protect themselves.

That means actively monitoring where ads appear, setting internal thresholds for unacceptable contexts, and establishing escalation paths with Google when placements cross the line.

Trust cannot be outsourced. Advertisers must take responsibility for brand safety in AI environments, even if it means creating new workflows and raising difficult questions with their Google reps.

What Should Marketers Prioritize In The Face Of AI Mode And Overviews?

It’s tempting to wait until reporting improves and best practices become clearer. But hesitation is risky. The brands that adapt early will set the standards others follow.

The most important shift is reframing search around journeys, not keywords.

AI Mode thrives on follow-ups and refinements. Campaigns should be designed with multi-step customer paths in mind.

An insurance company, for example, shouldn’t stop at “compare rates.” It should also anticipate “how to switch providers” or “what coverage works best for families.”

Automation is another reality. Performance Max and broad match are the engines of eligibility for AI surfaces. But these tools need guardrails.

Negative keywords, audience signals, and clean product feeds help prevent waste and maintain some level of control.

Tinuiti has emphasized media accountability and measurement tools to ensure campaigns optimize what works and limit waste.

Agencies like Seer Interactive have published data showing paid click-through rates drop significantly when AI Overviews are present, and recommend careful monitoring and automation guardrails so advertisers don’t get caught by surprise.

Asset quality also matters more than ever. Structured data, schema markup, and entity-rich product feeds aren’t optional. They determine whether ads are eligible to show inside AI responses at all. Poor data means invisibility.
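To make "structured data" concrete: product schema markup is typically emitted as a JSON-LD block embedded in the page. The sketch below is illustrative only, not Google's eligibility criteria; the field names follow schema.org's Product and Offer types, and the example product name and price are invented.

```python
import json

def product_jsonld(name, description, price, currency="USD", availability="InStock"):
    """Build a schema.org Product snippet as JSON-LD.

    Illustrative sketch only: real markup should be validated against
    schema.org and Google's structured data documentation.
    """
    return {
        "@context": "https://schema.org",
        "@type": "Product",
        "name": name,
        "description": description,
        "offers": {
            "@type": "Offer",
            "price": f"{price:.2f}",
            "priceCurrency": currency,
            "availability": f"https://schema.org/{availability}",
        },
    }

# Hypothetical product for illustration.
snippet = product_jsonld(
    "Family Auto Policy",
    "Coverage options for multi-driver households",
    129.0,
)
# Embedded in a page as: <script type="application/ld+json">…</script>
print(json.dumps(snippet, indent=2))
```

Entity-rich markup like this is what gives an AI surface unambiguous fields (name, price, availability) to draw on, rather than forcing it to infer them from page copy.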

Measurement, too, must evolve. Last-click cost-per-acquisition (CPA) no longer tells the story. Marketing leaders need to evaluate mid-funnel signals like lead quality, sales cycle speed, and assisted revenue.

These key performance indicators (KPIs) reveal whether AI-driven impressions are helping move customers forward.

Creative strategy is another frontier. Ads inside AI journeys need to read like natural next steps, not jarring interruptions.

Early tests in Microsoft Copilot and Perplexity show conversational CTAs, such as “Estimate your monthly cost in seconds,” outperform blunt directives. Marketers should begin experimenting now to build a playbook before these surfaces scale further.

Adaptation is non-negotiable. This isn’t about abandoning SEM fundamentals. It’s about extending them into a search environment where AI defines the journey. CMOs who build strategies around these realities will not just survive the shift; they’ll gain a competitive edge.

The Future Of Paid Search In An AI World

AI search complicates the three pillars paid search has relied on for decades:

  • Transparency.
  • Predictable intent.
  • Measurable outcomes.

Ads are shifting from placements that sit beside results to actions that live inside AI-generated answers.

This isn’t unique to Google. Microsoft has integrated ads into Copilot. OpenAI is piloting sponsored answers in ChatGPT. Amazon and TikTok are testing AI-driven search monetization.

The entire industry is converging on the same model: AI-assisted journeys with ads embedded at critical decision points.

The outlook can be framed in scenarios.

In the best case, AI ads deliver more qualified clicks and higher efficiency, creating a win for advertisers.

In the middle case, some verticals see gains while frustrations over transparency persist.

In the worst case, CPCs inflate significantly, brand safety incidents mount, and ROI weakens, pushing advertisers to question their reliance on Google.

My conclusion is clear: This is not a passing experiment. It’s a structural shift. CMOs should treat AI search as a permanent change to the foundation of paid advertising.

That means reframing PPC as journey management, not keyword bidding. It means doubling down on first-party data and building attribution systems that don’t rely on Google’s word alone. And it means pressing Google for accountability at every step.

Because when ads become the answer, the brands that prepare early will be the ones that still get found.

Featured Image: Masha_art/Shutterstock

Google Answers SEO Question About Keyword Cannibalization via @sejournal, @martinibuster

Google’s John Mueller answered a question about a situation where multiple pages were ranking for the same search queries. Mueller affirmed the importance of reducing unnecessary duplication but also downplayed keyword cannibalization.

What Is Keyword/Content Cannibalization?

There is an idea that web pages will have trouble ranking if multiple pages are competing for the same keyword phrases. This is related to the SEO fear of duplicate content. Keyword cannibalization is just a catchall phrase that is applied to low-ranking pages that are on similar topics.

The problem with calling something keyword cannibalization is that the label does not identify anything specific that is wrong with the content. That is why people keep asking John Mueller about it: the concept is ill-defined and unhelpful.

SEO Confusion

The SEO was confused about the recent &num=100 change, in which Google blocked rank trackers from scraping the search results (SERPs) 100 results at a time. In response, some rank trackers are floating the idea of only showing ranking data for the top 20 search results. The change affects rank trackers' ability to scrape the SERPs; its only effect on Google Search Console is that impression data is now more accurate.
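The scale of that change can be sketched with simple arithmetic: blocking 100-results-per-request forces trackers to paginate, multiplying the requests needed to cover the same ranking depth. A rough sketch, assuming the common default of roughly 10 organic results per SERP page (the exact count varies by query):

```python
def requests_needed(tracking_depth: int, results_per_request: int) -> int:
    """Ceiling division: SERP requests needed to cover a ranking depth."""
    return -(-tracking_depth // results_per_request)

# Tracking the top 100 positions:
before = requests_needed(100, 100)  # one request with &num=100
after = requests_needed(100, 10)    # ten paginated requests without it
print(before, after)  # 1 10
```

That roughly tenfold increase in request volume is one reason some trackers are considering limiting their reporting to the top 20 positions.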

The SEO was under the wrong impression that Search Console was no longer showing impressions from results beyond the top twenty. This is false.

Mueller didn’t address that part of the question, which was simply a misunderstanding on the SEO’s part.

Here is the question that was asked:

“If now we are not seeing data from GSC from positions 20 and over, does that mean in fact there are no pages ranking above those places?

If I want to avoid cannibalization, how would I know which pages are being considered for a query, if I can only see URLs in the top 20 or so positions?”

Different Pages Ranking For Same Query

Mueller said that different pages ranking for the same search query is not a problem. I agree: multiple web pages ranking for the same keyword phrases is not a problem; it’s a good thing.

Mueller explained:

“Search Console shows data for when pages were actually shown, it’s not a theoretical measurement. Assuming you’re looking for pages ranking for the same query, you’d see that only if they were actually shown. (IMO it’s not really “cannibalization” if it’s theoretical.)

All that said, I don’t know if this is actually a good use of time. If you have 3 different pages appearing in the same search result, that doesn’t seem problematic to me just because it’s “more than 1”. You need to look at the details, you need to know your site, and your potential users.

Reduce unnecessary duplication and spend your energy on a fantastic page, sure. But pages aren’t duplicates just because they happen to appear in the same search results page. I like cheese, and many pages could appear without being duplicates: shops, recipes, suggestions, knives, pineapple, etc.”

Actual SEO Problems

Multiple pages ranking for the same keyword phrases is not a problem; it’s a good thing. Multiple pages failing to rank for their keywords is the actual problem.

Here are some real reasons why pages on the same topic may fail to rank:

  • The pages are too long and consequently are unfocused.
  • The pages contain off-topic passages.
  • The pages are insufficiently linked internally.
  • The pages are thin.
  • The pages are virtually duplicates of the other pages in the group.

The above are just a few real reasons why multiple pages on the same topic may not be ranking. Pointing at the pages and declaring that they are cannibalizing each other identifies nothing; “keyword cannibalization” is a catchall label that masks the actual, fixable reasons listed above.

Takeaway

The debate over keyword cannibalization says less about Google’s algorithm and more about how the SEO community is willing to accept ideas without really questioning whether the underlying basis makes sense. The question about keyword cannibalization is frequently discussed, and I think that’s because many SEOs have the intuition that it’s somehow not right.

Maybe the habit of diagnosing ranking issues with convenient labels mirrors the human tendency to prefer simple explanations over complex answers. But, as Mueller reminds us, the real story is not that two or three pages happen to surface for the same query. The real story is whether those pages are useful, well linked, and focused enough to meet a reader’s information needs.

What is diagnosed as “content cannibalization” is more likely something else. So, rather than chasing shadows, it may be better to look at the web pages with the eyes of a user and really dig into what’s wrong with the page or the interlinking patterns of the entire section that is proving problematic. Keyword cannibalization disappears the moment you look closer, and other real reasons become evident.

Featured Image by Shutterstock/Roman Samborskyi