Roundtables: Meet the 2025 Innovator of the Year

Every year, MIT Technology Review selects one individual whose work we admire to recognize as Innovator of the Year. For 2025, we chose Sneha Goenka, who designed the computations behind the world’s fastest whole-genome sequencing method. Thanks to her work, physicians can now sequence a patient’s genome and diagnose a genetic condition in less than eight hours—an achievement that could transform medical care.

Speakers: Sneha Goenka, Innovator of the Year; Leilani Battle, University of Washington; and Mat Honan, editor in chief

Recorded on September 23, 2025


Ask an Expert: How to Start with GEO?

“Ask an Expert” is an occasional feature where we pose questions to seasoned ecommerce pros. For this installment, we’ve turned to Louis Camassa, the director of product at Rithum, a marketplace orchestration platform. He’s also a serial entrepreneur and an occasional contributor to Practical Ecommerce.

He addresses the essentials of generative engine optimization for ecommerce.

Practical Ecommerce: How can merchants optimize product visibility across ChatGPT, Perplexity, Gemini, and other generative AI platforms?

Louis Camassa: There’s no universal guide at present for product integration, but retailers can take proactive steps to prepare.

Louis Camassa


Begin by evaluating current genAI visibility. Merchants should search for their brand names to understand how the platforms present them. Experiment with shopper-like queries to observe how the systems rank and mention offerings in comparison to competitors. Consider searches such as “Find me running shoes with maximum cushioning for marathon training” or “What are the top-rated coffee makers that brew a single cup in under 2 minutes?”

Next, thoroughly review product info. Standardized genAI product formats do not yet exist, but companies with existing product feeds have a solid foundation. Ensure your feed contains key details such as dimensions, color options, materials, weight specifications, and intended applications.

ChatGPT, Perplexity, and Gemini have not yet opened the gates to share product data directly, but small-to-midsize merchants can get ahead by preparing now.

Generative AI platforms thrive on structured, accurate, real-time data.

Here are optimization tips:

1. Keep product data clean and consistent
• Unique IDs that never change
• Plain text titles and descriptions

2. Write for people, not just machines
• Short, specific titles (brand + product + key attribute)
• Natural language benefits in descriptions

3. Use structured attributes
• Brand, price, size, color, and material
• Group product variants with a shared ID (e.g., parent-child relationship)

4. Optimize images
• Use a content delivery network
• Extra angles or lifestyle shots help

5. Update feeds often
• Refresh at least daily

6. Use custom fields
• Add category-specific details (battery life, fabric, eco-friendly)

7. Localize content
• Language and country codes
• Simple, clear native text

8. Be transparent on price
• Always list price in ISO format (e.g., 29.99 USD)
• Include sale price if available

9. Don’t skip the details
• Shipping, handling, promotions, and other data build trust and improve ranking
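Since no standardized genAI feed format exists yet, the sketch below only illustrates a single feed record that checks the boxes above; every field name and value here is hypothetical, not a platform requirement:

```python
# Illustrative product-feed record covering the nine tips above.
# Field names are hypothetical -- no standardized genAI feed format exists yet.
product = {
    "id": "ACME-RUN-001",                     # 1. unique ID that never changes
    "title": "Acme CloudStride Max Cushion Running Shoe",  # 2. brand + product + key attribute
    "description": "Maximum-cushion trainer built for marathon training.",  # 2. natural-language benefit
    "brand": "Acme",                          # 3. structured attributes
    "color": "Black/Volt",
    "size": "10",
    "material": "Engineered mesh",
    "item_group_id": "ACME-RUN",              # 3. shared parent ID grouping variants
    "image_link": "https://cdn.example.com/acme-run-001.jpg",        # 4. CDN-hosted image
    "additional_image_links": ["https://cdn.example.com/acme-run-001-side.jpg"],  # 4. extra angles
    "price": "129.99 USD",                    # 8. price with ISO currency code
    "sale_price": "99.99 USD",                # 8. sale price if available
    "custom_cushion_rating": "max",           # 6. category-specific custom field
    "language": "en",                         # 7. localization codes
    "country": "US",
    "shipping": "Free over 50 USD",           # 9. shipping/handling details
    "updated_at": "2025-09-23T00:00:00Z",     # 5. refreshed at least daily
}
```

A record shaped like this keeps the data clean, structured, and easy for a generative engine to extract, whatever final format the platforms settle on.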

Pew: Most Americans Want AI Labels, Few Trust Detection via @sejournal, @MattGSouthern

A new Pew Research Center survey reveals a gap between people’s desire to know when AI is used in content and their confidence in being able to identify it.

Seventy-six percent say it’s extremely or very important to know whether pictures, videos, or text were made by AI or by people. Only 12% feel confident they could tell the difference themselves.

Pew Research Center wrote:

“Americans feel strongly that it’s important to be able to tell if pictures, videos or text were made by AI or by humans. Yet many don’t trust their own ability to spot AI-generated content.”

This confidence gap reflects a rising unease with AI.

Half of Americans believe that the increased presence of AI in daily life raises more concerns than excitement, while just 10% are more excited than worried.

What Pew Research Found

People Want More Control

About 60% of Americans want more control over AI in their lives, an increase from 55% last year.

They’re open to AI helping with daily tasks, but still want clarity on where AI ends and human involvement begins.

When People Accept vs. Reject AI

Most support the use of AI in data-intensive tasks, such as weather prediction, financial crime detection, fraud investigation, and drug development.

About two-thirds oppose AI in personal areas such as religious guidance and matchmaking.

Younger Audiences Are More Aware

Awareness of AI is highest among adults under 30, with 62% claiming they’ve heard a lot about it, compared to only 32% of those 65 and older.

But this awareness doesn’t lead to optimism. Younger adults are more likely than seniors to believe that AI will negatively impact creative thinking and the development of meaningful relationships.

Creativity Concerns

More Americans believe AI will negatively impact essential human skills.

Fifty-three percent think it will reduce creative thinking, and 50% feel it will hinder the ability to connect with others, with only a few expecting improvements.

This suggests labeling alone isn’t sufficient. Human input must also be evident in the work.

Why This Matters

People are generally not against AI, but they do want to know when AI is involved. Being open about AI use can help build trust.

Brands that go the transparent route might find themselves at an advantage in creating connections with their audience.

For more insights, see the full report.


Featured Image: Roman Samborskyi/Shutterstock

Review Signals Gain Influence In Top Google Local Rankings via @sejournal, @MattGSouthern

A new analysis from Search Atlas quantifies the interaction between proximity and reviews in local rankings.

Proximity drives visibility overall, while review signals become stronger differentiators in the highest positions.

This study examines 3,269 businesses across the food, health, law, and beauty sectors.

It shows that for positions 1–21, proximity influences 55% of decisions, while review count accounts for 19%. In the top ten, proximity’s influence decreases to 36%, but review count increases to 26%, with review keyword relevance reaching 22%.

Search Atlas writes:

“Proximity is the top driver of local visibility.”

The study also notes:

“Proximity does not always dominate in elite positions.”

What It Means

You’ll have a better chance of achieving top results by focusing on earning more reviews and naturally incorporating service-specific terms into reviews, rather than relying on your pin’s location on the map.

The report suggests that Google understands review text semantically. Using service-specific language in reviews can help your rankings for high-value queries.

How To Apply This

Think of proximity as your default setting. It’s fixed, so focus your attention on the inputs you can control.

When crafting your review requests, aim for natural, service-specific language. For instance, “best dentist for whitening” tends to work better than “great service.”

Also, ensure that your GBP name and profile details are aligned. The research shows that matching your business name to the search intent, such as “Downtown Dental Clinic” for someone searching “dentist near me,” can make a positive difference.

Sector Behavior

While the overall pattern remains consistent, shoppers can exhibit different behaviors across categories.

Per the report:

  • For Law, proximity tends to be the most important factor, with reviews playing a secondary role.
  • In Beauty, reputation signals are more influential. While proximity is still key, review volume and keywords are also important.
  • When it comes to Food, review content and profile relevance become especially valuable, particularly in crowded markets.
  • Health balances proximity with strong reviews and service alignment in reviews.

Looking Ahead

This study quantifies something practitioners have long suspected: proximity earns you a look, but review content helps you secure the top spot in the close contest.

If you can’t change your location, shape the language around it.

For more data on GBP ranking factors, see the full report.

Methods & Limits

The authors applied XGBoost to grid visibility, GBP metadata, website content, and reviews, achieving a global model that explains approximately 92–93% of the variance.

They emphasize that feature importance indicates correlation, not causation. Additionally, they warn that proximity might be overstated due to fixed grid collection and note that their results represent a snapshot in time.

Use these insights as guidance, not a strict rulebook.


Featured Image: Roman Samborskyi/Shutterstock

Making SEO Personas Actionable Across Teams via @sejournal, @Kevin_Indig

Here’s what I’m covering this week: How to get the most out of personas in your day-to-day work across SEO, content, and the broader org.

Because in the AI-search era, personas built from organic queries and prompts have value for every touchpoint: ad copy, sales scripts, support docs, product messaging.

They carry the unfiltered language of your audience (their fears, hesitations, and demands) straight into the hands of the teams shaping your funnel.

If you’re not operationalizing search-data-based personas across departments, you’re missing one of the few forms of market intelligence that scale across SEO, marketing, sales, and product.

Personas shouldn’t live stagnantly in a slide deck. I’ll show you how to make them pull their weight across the org.

Image Credit: Kevin Indig

Boost your skills with Growth Memo’s weekly expert insights. Subscribe for free!

Last week, I showed you how to create search personas based on data you already have available, along with how to use an LLM-ready persona card to extract custom insights.

But the best persona in the world doesn’t help if it collects dust in your Google Drive.

This week, I’m digging into how to make these search persona insights actionable – not only across your SEO processes and production, but also across broader teams that SEO work touches.

However, before we dive in, I want to share a few notable perspectives on search personas that came up in conversation on this LinkedIn thread:

Malte Landwehr, CPO & CMO at Peec AI, gave this visual example in the thread (with additional context) that resonated strongly. From his own research and testing, he shared a visual detailing LLM visibility for various headphones based on prompts for personas and use cases.

The findings? LLMs recommended different brands/products based on different persona-based prompts.

Image Credit: Kevin Indig

And below, David Melamed brings up an interesting and important question.

Image Credit: Kevin Indig

I agree with David: The more personalized search results are, the less you can segment or generalize across a group.

And if you check out our conversation in the comments, you’ll see David absolutely gets it; his concerns are valid.

He shares that “more long tail content and citations across more unique niches, scenarios and comparisons should beat out persona driven content” and that “looking at questions, related searches in search console, and Google and Microsoft ads search term reports… [along with] experience and other voice of customer research (listening to calls, analyzing reviews, reddit threads, complaints, etc..)” would be a helpful approach.

And that’s what I tackled last week in Personas are critical for AI search (part 1 on the persona topic): To succeed with user personas for SEO – and make them valuable and usable – the goal is to build custom, unique search personas from your actual in-house data and long-tail Google Search Console queries.

So, David brought up a valid point, one that’s aligned with how we should be building useful search personas for today.

Lastly, Elisa Daniela Montanari sums up how a lot of us feel about the shift toward qualitative research (along with mentioning her goals to upskill as an SEO by diving into user research tactics):

Image Credit: Kevin Indig

And with these conversations in mind…

I’d argue that high-quality, customer-centered SEO research captures unfiltered questions, pain points, and intents at scale, across the entire journey – and that makes it one of the most versatile forms of market intelligence you can use across your brand as a whole.

So if organic query and prompt research is so valuable and versatile, how do you ensure it’s actually used?

Because every strategist has had that stupidly challenging moment: After doing all the labor-intensive data-gathering of building user personas for SEO, it’s time to get your team or clients to use those insights regularly across SEO production.

You need to prep your findings so they’re not left gathering cobwebs in the dark corners of the cloud.

1. Create An Internal Knowledge Hub For Core Search Personas

Not another slide deck or spreadsheet that gathers dust. A simple, easily-accessible hub that is a living, breathing document.

Translate data into the formats your team and stakeholders already use: dashboards, one-page briefs, funnel visualizations.

Think Notion, Airtable, Asana, Google Sheets, Slack Canvas – wherever your team is already working and discussing production.

Key contributors need to have access to fluidly comment and update as organic questions and pain points surface across your audience.

2. Build A Clear Narrative Around How And Why Using These Personas Is Valuable

Position SEO research/persona use as a “horizontal competency” that makes every department smarter.

Kick off persona use with a short session showing:

  • Real queries from your personas.
  • How those queries reveal pain points, objections, or jobs-to-be-done.
  • Where competitors are (or aren’t) meeting those needs.
  • How users are interacting with AI-based search results (see Trust Still Lives in Blue Links for details on the four AIO intent patterns).

A three-minute Loom video can do wonders.

Use the data you have (Google Search Console, Semrush, Ahrefs, LLM prompt monitoring tools) to back up the importance of use.

At the end of this memo, I have a slide deck template for premium subscribers that will help you build this narrative and guide effective persona implementation across teams.

3. Train Contributors On How Personas Will Be Used Across Production – And Follow Through

Train your SEO/content contributors that personas don’t just shape blog posts – they inform all communication touchpoints in the customer journey.

If you’re also using search personas to inform your sales and customer care team interactions (and you should – more on that below), create examples of how to use personas across all communication channels.

Highlight missed opportunities (e.g., ad copy vs. organic messaging mismatch, customer support docs hidden from search, sales scripts that could benefit).

And although this means extra work for leaders, managers, or editors, this part is crucial: Let your team know that briefs that don’t specify personas will be rejected or sent back for revision. That also goes for drafts that don’t speak directly to defined personas and their search behaviors/needs.

Yes, it’s an added step on an often-already-overloaded plate of a marketer, but this is how you ensure they’re successfully implemented across your work over time.

Image Credit: Kevin Indig

Here’s where your personas stop being a strategy deck or training session and start shaping what users experience.

1. Incorporate Persona Data Into Every Content Brief

Your search persona data is there to help you direct every brief beyond target queries and products/services features to mention.

Use it to inform your content producers of the following:

  • Unique, data-backed pain points.
  • Real customer/lead questions that need answering.
  • Proof points needed to reduce hesitation.
  • What authority signals resonate with your target reader.
  • Behaviors that impact interactions with the page.
  • What copy to use on the page.

In every content brief, flag actual language from queries, call transcripts, or reviews that should be used on the page. Create a copy bank that’s tagged into your content briefs that your writers, editors, and LLMs can pull from.

For example, if your persona says “integration headaches,” don’t water it down to “implementation challenges.” Use their words.
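A copy bank can be as simple as verbatim phrases tagged by persona, so writers and LLM prompts pull the customer’s own words rather than a watered-down paraphrase. A minimal sketch, with hypothetical persona names and phrases:

```python
# Hypothetical copy bank: verbatim audience language tagged by persona.
# Phrases come from queries, call transcripts, or reviews -- use them as-is.
copy_bank = {
    "technical-evaluator": [
        "integration headaches",      # use this, not "implementation challenges"
        "does it work with our stack",
    ],
    "economic-buyer": [
        "what's the actual ROI",
        "hidden fees",
    ],
}

def phrases_for(persona):
    """Return the verbatim phrases to surface in a brief for a given persona."""
    return copy_bank.get(persona, [])
```

Tagging the bank into your content briefs lets writers, editors, and LLM workflows look up the exact language a page should echo.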

2. Use Search Persona Data To Inform Page Structure

Match the flow of the page to how specific personas are likely to consume information.

Some personas need trust-driven validation upfront (editorial quality signals, branded logos, stats, testimonials). Others need efficiency first, then a CTA.

Here’s a practical way to estimate what each of your search personas needs on the page:

  • Follow guidance (and use the regex) provided in Personas are critical for AI search to extract GSC long-tail queries that can contain indicators of specific search personas.
  • Select a specific URL or page that comes up for multiple long-tails for a consistent search persona type.
  • Examine on-page user scrolling and clicking behavior via your heatmap tool.
  • Look for places users pause, scroll past, or toggle back and forth between information. Strong behavioral patterns (skips, hesitations, long dwell times) point to places to better optimize page structure based on search persona type.
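The first step above can be sketched as a simple query tagger. The patterns below are illustrative stand-ins, not the regex from the referenced guide; real indicators would come from your own persona research:

```python
import re

# Illustrative persona-indicator patterns for GSC long-tail queries.
# These are hypothetical examples of the idea -- swap in patterns derived
# from your own in-house persona research.
PERSONA_PATTERNS = {
    "efficiency-first": re.compile(r"\b(average cost|price|how much|cheapest)\b", re.I),
    "trust-driven": re.compile(r"\b(compliance|hipaa|is it safe|certified)\b", re.I),
}

def tag_queries(queries):
    """Tag each long-tail query with the persona types its wording suggests."""
    tagged = {}
    for q in queries:
        hits = [p for p, rx in PERSONA_PATTERNS.items() if rx.search(q)]
        if hits:
            tagged[q] = hits
    return tagged
```

Running a GSC query export through a tagger like this surfaces which URLs attract a consistent persona type, which is what the heatmap review in the next steps depends on.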

Once you’re done gathering information based on user behavioral patterns, audit your on-page modules, formats, and design capabilities to ensure you have all pieces needed to create pages that fulfill those specific needs.

Enlist your product and/or web design team to create what’s needed to serve a better on-page experience.

Then, include direction in each brief of what sort of modules and information structuring is needed based on search persona type.

3. Map To Topic Clusters In The Brief

Specific search personas naturally gravitate toward certain topics or proof points.

A searcher who uses technical language in their queries may cluster around integrations and APIs and need to see that clear documentation is available, while a user with economic or decision-making intent may cluster around ROI topics.

Build semantically related internal linking paths that explicitly connect those journeys for your SEO personas. Use your topic map (if you’ve built one) and revisit your keyword universe as needed.

4. Personas Should Inform Your AI-Assisted Workflows

Use search persona details as inputs to LLM prompts and/or incorporate them into your AI-assisted content generation, like AirOps workflows.

Instead of “write an article about X with the search intent of Y,” frame it as “write for a skeptical buyer evaluating vendors – include comparisons and third-party validation.”

Or better yet? Use your persona cards (see Personas Are Critical for AI Search for a detailed guide) to help guide additional prompts personas might use in LLMs when attempting to solve queries related to your brand.
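As a sketch, the reframing above could be generated from a persona card; the card fields and contents below are hypothetical, not a prescribed schema:

```python
# Minimal sketch: turn a persona card into an LLM prompt frame instead of
# a generic "write an article about X" instruction. All fields are hypothetical.
def build_prompt(topic, card):
    return (
        f"Write about {topic} for a {card['mindset']} {card['role']}. "
        f"Address these hesitations: {', '.join(card['objections'])}. "
        f"Include: {', '.join(card['proof_points'])}."
    )

card = {
    "role": "buyer evaluating vendors",
    "mindset": "skeptical",
    "objections": ["vendor lock-in", "integration headaches"],
    "proof_points": ["side-by-side comparisons", "third-party validation"],
}
```

The same card can feed an AirOps-style workflow or any other AI-assisted pipeline, so every generated draft starts from the persona rather than a bare topic.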

Below, take a look at how this could work in practice, using the four distinct AIO intent patterns from the additional analysis of the UX study of AIOs found in Trust Still Lives in Blue Links:

  1. Efficiency-first validations that reward clean, extractable facts (accepting of AIOs).
  2. Trust-driven validations that convert only with credibility (validate AIOs).
  3. Comparative validations that use AIOs but compare with multiple sources.
  4. Skeptical rejections that automatically distrust AIOs for high-stakes queries.

Let’s say you work for a fintech startup that provides easy-to-use business insurance for small to midsize businesses.

Here’s how you might use personas to inform content production for efficiency-first and trust-driven search behaviors:

Example 1: Junior operations coordinator at a 20-person marketing agency → accepting of AIOs (efficiency-first) → queries “What’s the average cost of business insurance for a 20-person company?” → Likely to validate range via the AIO → Takeaway for your brand: Create content geared to businesses with small teams and/or junior learners that includes straightforward facts and ranges that are easily extractable, so it’s cited in AIOs. Make your pricing explanations scannable and structured. Internally link to other knowledge guides for project managers or operations leads at small to midsize businesses.

Example 2: Small business owner in healthcare services → validate AIOs with second-clicks (trust-driven) → queries “Do I need business insurance for HIPAA compliance?” → Likely to read the AIO but won’t act until they see credible signals → citations from legal/insurance authorities → Takeaway for your brand: Position your content with authoritative references (link to .gov or .org sources) and highlight compliance expertise so your page is validated by trust; include case studies and/or social proof of authority; Internally link to other guides for healthcare service businesses.

How To Know Search Persona Implementation Is Working

Watch for these signals:

  • Higher engagement time and more downstream actions on the page.
  • Lower bounce rates on persona-driven pages.
  • More citations and visibility in AIOs and LLM outputs (your copy matches how users ask questions).
  • Increased assisted conversions: Pages designed for a specific persona show up more often in multi-touch journeys or are incorporated strategically and/or organically into follow-up communications by sales/customer teams.
  • Sales/Customer service team feedback loop: Fewer “this didn’t answer my question” moments.

Amanda jumping in here: In March of last year, I led one of my clients to pivot hard to persona-focused content. Not only have we seen an increase in AIO inclusion, AI Mode citations, and LLM visibility for these niche terms, but we’ve also experienced a boost in visits to our core guides that were geared toward our broader audience. After this pivot, we’re seeing anywhere between a 20-60% month-over-month increase in organic visits from ChatGPT, and a ~40% month-over-month increase in visible AIO inclusion, including our older core content. Although some of this growth is likely due to increased overall ChatGPT adoption and Google’s expanded use of AIOs across queries, here’s the takeaway (and my hypothesis): As you create niche content for personas, it’s possible you could also see a lift in your core content as it’s served to these specific groups of searchers – based on what these tools know about (1) the end user and (2) who your brand serves best. But only time (and more experiments) will truly tell.

The reality is, no matter how well you implement search personas into your SEO and content production, SEO and growth marketing teams can’t win on their own.

Search personas have the real opportunity to contribute to results when the rest of the org picks them up and runs with them throughout lead and customer touchpoints.

The trick is to make it dead-simple for every team to see why personas matter for their work and how to apply them.

Plus, a big advantage of bringing other teams on board is that SEO-driven personas – built from real search queries, prompts, social chatter, and call transcripts – arm everyone with the exact language customers use.

That means you can reduce hesitations, preemptively answer questions, and build trust across every channel of communication.

Below, here’s a quick list of guidance to help you collaborate with other teams on how to use search persona data.

And in the next section, I’ll jump into how to create intentional feedback loops so your personas stay fresh, useful, and relevant.

Email Marketing

  • Work with email teams to trigger sequences based on persona signals (query intent, pages visited, topics viewed).
  • Example: If someone hits three pricing-related pages, route them into a nurture path designed for a search-data-informed persona that includes supportive content often visited by those users.
  • Benefit: Aligns your SEO insights with lifecycle marketing, reducing drop-off between discovery and conversion.
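The pricing-page trigger in the example above might look like this in a lifecycle tool’s routing logic; the page paths, threshold, and sequence name are all hypothetical:

```python
# Hedged sketch of the trigger described above: if a contact views three or
# more pricing-related pages, route them into a persona-matched nurture path.
# Paths, threshold, and sequence name are hypothetical examples.
PRICING_PAGES = {"/pricing", "/plans", "/pricing/faq"}

def nurture_path(pages_visited):
    pricing_hits = sum(1 for p in pages_visited if p in PRICING_PAGES)
    if pricing_hits >= 3:
        return "pricing-persona-nurture"   # sequence with supportive content
    return None                            # no persona signal yet
```

The same shape works for any persona signal: count the behavior that indicates intent, then hand off to the sequence built for that persona.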

Paid Media And Advertising

  • Lift search-persona-informed language directly into ad copy → track whether it increases CTR because you’re speaking the way customers search.
  • Map objections to creatives: For example, run ads that emphasize compliance and audits if you have search data illustrating a segment of users who have detailed questions about security of your software.
  • Test messaging by persona to learn faster which angles convert.
  • Benefit: SEO persona research de-risks your paid spend by validating copy before it goes live.

Social And Community

  • Translate persona pain points into campaign themes and engagement prompts.
  • Highlight UGC that shows peers solving the same persona pain point = social proof!
  • Build Reddit or forum campaigns where you provide helpful answers framed through persona lenses.
  • Benefit: Social teams stop guessing what will resonate – they get ready-made hooks from organic customer query data and in-house transcript research.

Sales

  • Use personas to shape sales scripts to reduce organic hesitations, along with your follow-up email templates.
  • Provide a list of key characteristics or organic phrases discovered in your SEO user persona research for sales to easily pick up on what scripts or content to use.
  • Equip reps with content “proof kits” (case studies, calculators, benchmarks) that map to persona objections.
  • Example: Lead comes in from organic content around “integration headaches.” Sales can immediately address hesitations with comparison docs + customer proof.
  • Benefit: SEO insights close the loop. Your leads feel heard because the same language follows them from organic query to sales call.

Customer Support

  • Build FAQs, hub pages, and documentation around persona pain points and natural language so customers can self-serve faster.
  • Train reps on marketing and educational language developed for personas to keep communication consistent across the lifecycle.
  • Feed recurring support questions back to SEO/content as new opportunities.
  • Benefit: Less friction for customers, more organic opportunities uncovered for SEO.

Product And/Or Product Marketing

  • Tie persona insights to feature positioning: “Which persona is this release for?”
  • Test messaging against persona objections to see what sticks before launch.
  • Document frameworks: “For Persona A, highlight speed. For Persona B, highlight compliance.”
  • Benefit: SEO personas become market intelligence, not just marketing intel. This helps product teams ship smarter. Unanswered questions or unsolved organic problems are great opportunities for new features.

One of the biggest pitfalls with doing the work to create search personas is then treating them like static, lifeless relics afterward.

A 2015 B2B study conducted by Cintell found that 71% of companies that exceeded revenue goals had documented personas – and nearly two-thirds of those orgs had updated them within the last six months.

(Listen, I am well aware 2015 is approximately 47 internet years ago – but I’d argue core human decision-making behavior takes much longer to change than a decade.)

No matter the study’s age, the message rings true today: Marketing and user personas win when they’re kept alive.

SEO personas make this easier than traditional personas because they’re rooted in fluid signals, like real search queries, prompts, and customer language that evolve as quickly as the market and trends do.

If you’re closely monitoring GSC data, Semrush, or AIO/LLM interactions, you’ll see shifts in questions and pain points before most competitors.

Image Credit: Kevin Indig

How to operationalize a persona freshness feedback loop across your team:

  • Employ direct communication channels: Create dedicated Slack channels, a shared CRM note hub, or monthly syncs where Sales, Customer Support, and Marketing can drop fresh objections, questions, or hesitations they’re hearing. If you’ve got power users or partners who can drop in routine feedback and thoughts, even better.
  • Develop a regular review cadence: Run a quarterly refresh of persona pain points, objections, and query patterns. Layer in branded search trends, referral data, and AIO/LLM interactions to validate updates.
  • Create an escalation path: Set up a clear process for when a “new pain point” surfaces. Sales hears it first → SEO/content teams get it next → new content or updates ship fast → implement/inform across marketing channels. How do you make room for organic escalations in your SEO/content production systems?
  • Do hesitation check-ins: Bi-weekly or monthly cross-team reviews (Support + Sales + SEO) where you identify the top organic customer/lead hesitations and assign assets to resolve them: case studies, how-to videos, tools and calculators, testimonials/reviews, community feedback on social channels.
  • Hold a regular retro: Tie shipped assets back to KPIs. Which persona-driven pages moved the needle? Which didn’t? Prune or upgrade pages that aren’t solving the problem.

The big takeaway here is search personas are never one-and-done.

They’re a dynamic, qualitative and quantitative data-based operating system for your marketing, sales, and product teams … and if you keep the feedback loop tight, they’ll keep paying dividends.


Featured Image: Paulo Bobita/Search Engine Journal

Finding The Perfect Balance Between AI And Human Control In Google Ads

Google Ads in 2025 looks nothing like it did in 2019. What used to be a hands-on, keyword-driven platform is now powered by AI and machine learning. From bidding strategies and audience targeting to creative testing and budget allocation, automation runs through everything.

Automation brings a lot to the table: efficiency at scale, smarter bidding, faster launches, and less time spent tweaking settings. For busy advertisers or those managing multiple accounts, it is a game-changer.

But left unchecked, automation backfires. Hand over the keys without guardrails and you risk wasted spend, irrelevant placements, or campaigns chasing the wrong metrics. Automation can execute tasks, but it still lacks an understanding of client goals, market nuances, and broader strategy.

In this article, we’ll explore how to balance AI and human oversight. We’ll look at where automation shines, where it falls short, and how to design a hybrid setup that leverages both scale and strategic control.

Measurement First: Feeding The Machine The Right Signals

Automation learns from the conversions you feed it. When tracking is incomplete, Google fills the gaps with modeled conversions. These estimates are useful for directional reporting, but they do not always match the actual numbers in your customer relationship management (CRM).

Chart by author, September 2025

Conversion lag adds another wrinkle. Google attributes conversions to the click date, not the conversion date, which means lead generation accounts often look like they are underperforming mid-week, even though conversions are still being reported. Adding the “Conversions (by conversion time)” column alongside the standard “Conversions” reveals that lag.

Also, you can build a custom column to compare actual cost-per-acquisition (CPA) or return on ad spend (ROAS) against your targets. This makes it clear when Smart Bidding is constrained by overly strict settings rather than failing outright.

For CPA, use the formula (Cost / Conversions) – Target CPA. The result shows how far above or below the goal the campaign is currently running. A positive number means you are over target, often because Smart Bidding is being choked by strict efficiency settings; the system may pull back volume and still miss efficiency, or compromise by bringing in conversions above target. A negative number means you are under target, which suggests automation is performing well and may have room to scale.

For ROAS, use the formula (Conv. Value / Cost) – Target ROAS. A negative result shows Smart Bidding is under-delivering on efficiency and not meeting the target. A positive result means you are beating the target, a signal that the system is thriving.

For example, if your Target CPA is $50 and the custom column shows +12, your campaigns are running $12 above goal, typically because the bidding algorithm is adhering too closely to constraints put in by the advertiser. If it shows -8, you are beating the target by $8, which can mean that the system could scale further.
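The two custom-column formulas are simple enough to sanity-check outside the Google Ads UI. A minimal sketch, with illustrative dollar figures (not taken from any real account):

```python
def cpa_gap(cost, conversions, target_cpa):
    """(Cost / Conversions) - Target CPA: positive = over target, negative = under."""
    return cost / conversions - target_cpa

def roas_gap(conv_value, cost, target_roas):
    """(Conv. Value / Cost) - Target ROAS: negative = under-delivering, positive = beating it."""
    return conv_value / cost - target_roas

# $6,200 spend for 100 conversions against a $50 Target CPA:
print(cpa_gap(6200, 100, 50))      # 12.0 -> running $12 over goal
# $4,000 in conversion value on $1,000 spend against a 3.0 Target ROAS:
print(roas_gap(4000, 1000, 3.0))   # 1.0 -> beating target, room to scale
```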

To get real value from automation, connect it to business outcomes, not just clicks or form fills. Optimize toward revenue, profit margin, customer lifetime value, or qualified opportunities in your CRM. Train automation on shallow signals, and it will chase cheap conversions. Train it on metrics that matter to the business, and it will align more closely with growth goals.

Drawing Lanes For Automation

Automation performs best when campaigns have clear lanes. Mix brand and non-brand queries, or new and returning customers, and the system will almost always chase the easiest wins.

That is why human strategy still matters. Search campaigns should own high-intent queries where control of copy and bidding is critical. Performance Max should focus on prospecting and cross-network reach. Without this separation, the auction can route more impressions to PMax, which often pulls volume away from Search. The scale of overlap is hard to ignore. Optmyzr’s analysis revealed that when PMax cannibalized Search keywords, Search campaigns still performed better 28.37% of the time. In cases where PMax and Search overlapped, Search won outright 32.37% of the time.

The same problem arises with brand traffic. PMax leans heavily toward brand queries because they convert cheaply and inflate reported performance. Even with brand exclusions, impressions slip through. If you’re looking for your brand exclusions to be airtight, add branded negative keywords to your campaigns.

Supervising The Machine

Automation does not announce its mistakes. It drifts quietly, and you have to dig for the information and read the signals yourself.

Bid strategy reports show which signals Smart Bidding relied on. Seeing remarketing lists or high-value audiences is reassuring. Seeing random in-market categories that do not reflect your customer base is a warning that your conversion data is too thin or too noisy.

Google now includes Performance Max search terms in the standard Search Terms report, providing visibility into the actual queries driving clicks and conversions. You can view these within Google Ads and even pull them via the API for deeper analysis. With this update, you can extract performance metrics such as impressions, clicks, click-through rate (CTR), and conversions, and add negative keywords directly from the report, helping you refine your targeting quickly.

Looking at impression share signals completes the picture. A high Lost IS (budget) means your campaign is simply underfunded. A high Lost IS (rank) paired with a low Absolute Top IS usually means your CPA or ROAS targets are too strict, so the system bids too low to win auctions. This tells us that it’s not automation that is failing; it’s automation following the rules you set. The fix is incremental: Loosen targets by 10-15% and reassess after a full learning cycle.
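That triage can be expressed as a small decision helper. The thresholds below are illustrative assumptions, not Google-documented cutoffs; tune them to your account’s baselines.

```python
def triage_impression_share(lost_is_budget, lost_is_rank, abs_top_is):
    """Rough read of impression-share signals, each a fraction between 0 and 1.

    Thresholds are illustrative examples only.
    """
    if lost_is_budget > 0.20:
        return "underfunded: raise the budget"
    if lost_is_rank > 0.40 and abs_top_is < 0.10:
        return "targets too strict: loosen CPA/ROAS by 10-15%, reassess after a learning cycle"
    return "no obvious constraint: check tracking, structure, and creative instead"

print(triage_impression_share(0.35, 0.10, 0.40))  # flags an underfunded campaign
print(triage_impression_share(0.05, 0.55, 0.04))  # flags over-strict targets
```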

Intervening When Context Changes

Even the best automation struggles when conditions change faster than its learning model can adapt. Smart Bidding optimizes based on historical patterns, so when the context shifts suddenly, the system often misreads the signals.

Take seasonality, for example. During Black Friday, conversion rates spike far above normal, and the algorithm raises bids aggressively to capture that “new normal.” When the sale ends, it can take days or weeks for Smart Bidding to recalibrate, overvaluing traffic long after the uplift is gone. Or consider tracking errors. If duplicate conversions fire, the system thinks performance has improved and will start to bid more aggressively, spending money on results that don’t even exist.

That is why guardrails, such as seasonality adjustments and data exclusions, exist: they provide the algorithm with a correction in moments when its model would otherwise drift.

Auto Applied Recommendations: Why They Miss The Mark

Auto-applied recommendations are pitched as a way to streamline account management. On paper, they promise efficiency and better hygiene. In practice, they often do more harm than good, broadening match types, adding irrelevant keywords, or switching bid strategies without context.

Google positions them as helpful, but many practitioners disagree. My view is that AARs are not designed to maximize your profitability at the account level. They are designed to keep budgets flowing efficiently across Google’s limited inventory. The safest approach is to turn them off and review recommendations manually. Keep what aligns with your strategy and ignore the rest. My firm belief is that automation should support your work, not overwrite it.

Scripts That Catch What Automation Misses

Scripts remain one of the simplest ways to hold automation accountable.

The official Google Ads Account Anomaly Detector flags when spend, clicks, or conversions swing far outside historical norms, giving you an early warning when automation starts drifting. The updated n-gram script identifies recurring low-quality terms, such as “free” or “jobs,” allowing you to exclude them before Smart Bidding optimizes toward them. And if you want a simple pacing safeguard, Callie Kessler’s custom column shows how daily spend is tracking against your monthly budget, making volatility visible at a glance.

Together, these lightweight scripts and columns act as additional guardrails. They don’t replace automation, but they catch blind spots and force a human check before wasted spend piles up.
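The n-gram idea is easy to prototype against an exported search-terms report. This is a minimal sketch of the technique, not the actual script mentioned above, and the flag list is just an example:

```python
from collections import Counter

# Example flag list; extend with terms that signal low intent for your account.
LOW_QUALITY = {"free", "jobs", "diy"}

def flagged_ngrams(search_terms, n=1):
    """Count n-grams across search terms and keep those containing a flagged word."""
    counts = Counter()
    for term in search_terms:
        words = term.lower().split()
        for i in range(len(words) - n + 1):
            counts[" ".join(words[i : i + n])] += 1
    return {g: c for g, c in counts.items() if any(w in LOW_QUALITY for w in g.split())}

terms = ["free running shoes", "running shoes jobs", "best running shoes"]
print(flagged_ngrams(terms))  # {'free': 1, 'jobs': 1}
```

Anything this surfaces is a candidate negative keyword to add before Smart Bidding optimizes toward it.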

Where To Let AI Lead And Where To Step In

Automation performs best when it has clean signals, clear lanes, and enough data to learn from. That is when you can lean in with tROAS, Maximize Conversion Value, or new customer goals and let Smart Bidding handle auction-time complexity.

It struggles when data quality is shaky, when intents are mixed in a single campaign, or when efficiency targets are set unrealistically tight. Those are the moments when human oversight matters most: adding negatives, restructuring campaigns, excluding bad data, or easing targets so the system can compete.

Closing Thoughts

Automation is the operating system of Google Ads. The question is not whether it works; it is whether it is working in your favor. Left alone, it will drift toward easy wins and inflated metrics. Supervised properly, it can scale results no human could ever manage.

The balance is recognizing that automation is powerful, but not self-policing. Feed it clean data, define its lanes, and intervene when context shifts. Do that, and you will turn automation from a liability into an edge.

Featured Image: N Universe/Shutterstock

LLMs.txt For AI SEO: Is It A Boost Or A Waste Of Time? via @sejournal, @martinibuster

Many popular WordPress SEO plugins and content management platforms offer the ability to generate LLMs.txt for the purpose of improving visibility in AI search platforms. With so many popular SEO plugins and CMS platforms offering LLMs.txt functionality, one might come away with the impression that it is the new frontier of SEO. The fact, however, is that LLMs.txt is just a proposal, and no AI platform has signed on to use it.

So why are so many companies rushing to support a standard that no one actually uses? Some SEO tools offer it because their users are asking for it, while many users feel they need to adopt LLMs.txt simply because their favorite tools provide it. A recent Reddit discussion on this very topic is a good place to look for answers.

Third Party SEO Tool And LLMs.txt

Google’s John Mueller addressed the LLMs.txt confusion in a recent Reddit discussion. The person asking the question was concerned because an SEO tool flagged the file as a 404 (missing), and they had the impression that the tool implied it was needed.

Their question was:

“Why is SEMRush showing that the /llm.txt is a 404? Yes, I. know I don’t have one for the website, but, I’ve heard it’s useless and not needed. Is that true?

If i need it, how do i build it?

Thanks”

The Redditor seems to be confused by the Semrush audit, which appears to imply that they need an LLMs.txt. I don’t know what they saw in the audit, but this is what the official Semrush audit documentation shares about the usefulness of LLMs.txt:

“If your site lacks a clear llms.txt file it risks being misrepresented by AI systems.

…This new check makes it easy to quickly identify any issues that may limit your exposure in AI search results.”

Their documentation says that it’s a “risk” not to have an LLMs.txt, but the fact is that there is no risk because no AI platform uses it. And that may be why the Redditor was asking, “If i need it, how do I build it?”

LLMs.txt Is Unnecessary

Google’s John Mueller confirmed that LLMs.txt is unnecessary.

He explained:

“Good catch! Especially in SEO, it’s important to catch misleading & bad information early, before you invest time into doing something unnecessary. Question everything.”

Why AI Platforms May Choose Not To Use LLMs.txt

Aside from John Mueller’s many informal statements about the uselessness of LLMs.txt, I don’t think there are any formal statements from AI platforms as to why they don’t use LLMs.txt and their associated .md markdown texts. There are, however, many good reasons why an AI platform would choose not to use it.

The biggest reason not to use LLMs.txt is that it is inherently untrustworthy. On-page content is relatively trustworthy because it is the same for users as it is for an AI bot.

A sneaky SEO could add things to structured data and markdown texts that don’t exist in the regular HTML content in order to get their content to rank better. It is naive to think that an SEO or publisher would not use .md files to trick AI platforms.

For example, unscrupulous SEOs add hidden text and AI prompts within HTML content. A research paper from 2024 (Adversarial Search Engine Optimization for Large Language Models) showed that manipulation of LLMs was possible using a technique they called Preference Manipulation Attacks.

Here’s a quote from that research paper (PDF):

“…an attacker can trick an LLM into promoting their content over competitors. Preference Manipulation Attacks are a new threat that combines elements from prompt injection attacks… Search Engine Optimization (SEO)… and LLM ‘persuasion.’

We demonstrate the effectiveness of Preference Manipulation Attacks on production LLM search engines (Bing and Perplexity) and plugin APIs (for GPT-4 and Claude). Our attacks are black-box, stealthy, and reliably manipulate the LLM to promote the attacker’s content. For example, when asking Bing to search for a camera to recommend, a Preference Manipulation Attack makes the targeted camera 2.5× more likely to be recommended by the LLM.”

The point is that if there’s a loophole to be exploited, someone will think it’s a good idea to take advantage of it, and that’s the problem with creating a separate file for AI chatbots: people will see it as the ideal place to spam LLMs.

It’s safer to rely on on-page content than on a markdown file that can be altered exclusively for AI. This is why I say that LLMs.txt is inherently untrustworthy.

What SEO Plugins Say About LLMs.txt

The makers of the Squirrly WordPress SEO plugin acknowledge that they provided the feature only because their users asked for it, and they assert that it has no influence on AI search visibility.

They write:

“I know that many of you love using Squirrly SEO and want to keep using it. Which is why you’ve asked us to bring this feature.

So we brought it.

But, because I care about you:

– know that LLMs txt will not help you magically appear in AI search. There is currently zero proof that it helps with being promoted by AI search engines.”

They strike a good balance between giving users what they want while also letting them know it’s not actually needed.

While Squirrly is at one end saying (correctly) that LLMs.txt doesn’t boost AI search visibility, Rank Math is on the opposite end saying that AI chatbots actually use the curated version of the content presented in the markdown files.

Rank Math is generally correct in its description of what an LLMs.txt is and how it works, but it overstates the usefulness by suggesting that AI chatbots use the curated LLMs.txt and the associated markdown files.

They write:

“So when an AI chatbot tries to summarize or answer questions based on your site, it doesn’t guess—it refers to the curated version you’ve given it. This increases your chances of being cited properly, represented accurately, and discovered by users in AI-powered results.”

We know for a fact that AI chatbots do not use a curated version of the content. They don’t even use structured data; they just use the regular HTML content.

Yoast SEO is a little more conservative, occupying the middle ground between Squirrly and Rank Math: it explains the purpose of LLMs.txt without overstating the benefits, hedging with words like “can” and “could.” That is a fair way to describe LLMs.txt, although I prefer Squirrly’s approach: you asked for it, here it is, but don’t expect a boost in search performance.

The LLMs.txt Misinformation Loop

The conversation around LLMs.txt has become a self-reinforcing loop: business owners and SEOs feel anxiety over AI visibility and feel they must do something, viewing LLMs.txt as the something they can do.

SEO tool providers are compelled to provide the LLMs.txt option, reinforcing the belief that it’s a necessity, unintentionally perpetuating the cycle of misunderstanding.

Concern over AI visibility has led to the adoption of LLMs.txt, which at this stage is only a proposal for a standard that no AI platform currently uses.

Featured Image by Shutterstock/James Delia

SERP Visibility Decline: How To Grow Brand Awareness When Organic Traffic Stalls

This post was sponsored by AdRoll. The opinions expressed in this article are the sponsor’s own.

Text‑heavy AI Overviews blocking your brand?

The “People Also Ask” box that scrolls on forever, effectively hiding position 1?

Knowledge panels and rich snippets hogging the view?

The majority of people who entered a search query never made it past the top of the search result page (SERP) in 2024.

For users, these updates to Google’s SERPs are technically efficient.

For you, changes like AI Overviews are another strategy to master, and at worst, a direct competitor for attention.

So how do you increase brand awareness and search presence when Google is taking away your bids for top-of-funnel (TOFU) content?

The Rise of Zero-Click: Why Rankings Don’t Equal Traffic Anymore

As search evolves, AI-powered summaries now appear in more than 13% of queries.

This resulted in nearly 60% of Google searches ending without a click last year, dramatically shrinking the traditional flow of search traffic to a website.

Not only are you fighting for space against the usual blue links, you’re now competing with AI-generated answers that package everything up before a user even considers a click.

Which means that “we made it to the top” moment doesn’t guarantee anyone actually sees your brand.

So, even if your brand earns a top ranking, it may never translate into visibility. That’s the reality of today’s zero-click environment, and it is what creates the awareness gap — a challenge that every marketer now has to solve.

What Is Zero Click?

A “zero-click” search happens when a user gets their answer directly on the search results page through featured snippets, knowledge panels, or AI-generated overviews without ever clicking through to a website.

For users, it’s fast and convenient. For brands, it means fewer chances for visitors to actually land on your site, even when you’ve earned a top ranking. Think of it as Google (and increasingly, AI) keeping people inside its own ecosystem rather than sending them out to explore yours.

This is where the awareness gap comes in.

What Is The Awareness Gap?

The awareness gap is the space in which your content is seen, but it is not tied to your brand.

Even if your brand appears in these results, you may never see the traditional signals like traffic or time on site that prove your influence. People might recognize your name or absorb part of your story, but that exposure is not reflected in your metrics.

The gap is the difference between being seen and being measured, and closing it requires a new playbook for visibility and recall.

How Zero-Click Reshapes Discovery

The zero-click trend is most disruptive at the very start of the customer journey. Your website used to be Rome; eventually, all roads led there. Now? Fewer and fewer organic roads exist. That means the earliest brand touchpoints are disappearing.

Here’s what that means for marketers today:

  • Fewer chances for discovery. If users never click, they never see your story. All things that shape early perception, such as your messaging, your visuals, your value props, get skipped.
  • SEO loses some steam. While organic optimization still matters for long-term discoverability (hello, LLMs absorbing and citing content), its ability to drive top-of-funnel awareness isn’t what it used to be. In a zero-click world, amazing content may rank, but still never get seen.
  • Competition gets fiercer. If you’ve relied heavily on organic strategies alone, competitors who invest in paid ads are now likely to edge you out. Ads still sit above AI overviews in many results, and that’s prime real estate that’s hard to ignore.
  • Research shifts elsewhere. With crowded SERPs and often confusing AI answers, users are taking their research off of traditional search platforms to other places. Social media, communities, and unowned channels are becoming important sources for educational content that feels clearer and more trustworthy.

Bottom line: the early doors to discovering your brand are closing faster than they’re opening. It takes a new mix of channels to ensure you’re still part of the conversation.

3 Steps to Reclaim Top-of-Funnel Presence

So what’s a marketer to do? Is all hope lost?

Show up where they are still landing: relevant active sites that deliver clear ad space to your target audience.

Advertising offers a direct and reliable solution to the awareness gap.

Unlike organic results, paid campaigns guarantee an immediate and prominent presence on SERPs and other digital platforms. That means eyeballs on your ads, even if a user doesn’t click on them.

Consider paid campaigns as a type of insurance policy against brand invisibility on the SERP.

Remember: early impressions = stronger recall later in the funnel. The power of showing up first cannot be overstated. Even if a user doesn’t click on your ad, the exposure to your name, logo, or key message fosters familiarity. Early recognition makes your brand more memorable when it comes time to convert.

Step 1: Implement An Awareness-Focused Advertising Strategy

If you’ve made it this far, you’re likely nodding along: zero-click is here, and advertising has to play a bigger role. But where do you start? The good news is you don’t need to overhaul everything overnight. Instead, think of paid as a strategic layer that enhances the visibility you’ve already worked hard to build organically.

Here’s the first step in making that shift in a way that feels purposeful, not scattered:

Leverage common queries

Run search and display ads tied to common zero-click queries. Many of the searches most impacted by zero-click are informational: “what is,” “how to,” and “why does” questions that rarely result in clicks. Instead of letting that traffic disappear into AI overviews, run search and display campaigns against these queries. Your brand may not get the click, but it will get the visibility, ensuring you stay part of the conversation even when Google is trying to keep people on the page.

Connect with tomorrow’s customers today. AdRoll makes brand awareness ads work for you. Get started with a demo.

Use what you already know

Build awareness campaigns in categories where your brand already shows up. If you’ve earned a featured snippet or knowledge panel, don’t leave it unsupported. Pair that organic placement with a targeted ad so your brand appears twice on the same page. This kind of overlap creates a halo effect: users perceive your brand as both authoritative and unavoidable. It’s one of the fastest ways to reinforce recall.

Enhance, don’t replace SEO

Paid advertising isn’t a substitute for a strong organic presence; it’s an amplifier. Use ads to reinforce your authority and extend the reach of your organic work, not cover for it. Think of the two channels as partners: SEO earns you credibility, while ads guarantee visibility. Together, they create a more holistic visibility strategy that keeps you top of mind across formats and touchpoints. And don’t forget: LLMs and AI overviews are still learning from organic signals. If your content isn’t strong, your ads won’t carry the same weight.

At the end of the day, this isn’t about abandoning what has always worked. It’s about making sure your brand shows up where discovery is actually happening, whether that’s in a blue link, a snippet, or a sponsored placement.

Step 2: Measure Zero-Click Strategies The Right Way

Here’s the tricky part: in a zero-click world, traditional metrics don’t always tell the whole story. If you’re only watching organic traffic, it may look like your efforts are failing. But the reality is that influence is happening upstream, before a user ever lands on your site.

Here’s what to measure instead:

  • Branded search volume. If more people are searching for your brand name specifically, you know your awareness strategy is working. This is often the clearest leading indicator of recall.
  • Visibility share. Track how often your brand appears in SERPs, featured snippets, AI overviews, and paid placements, even if it doesn’t result in a click.
  • Impression lift. Ads may not drive immediate conversions, but consistent exposure increases recognition. Measuring impressions alongside recall surveys can help connect the dots.
  • Engagement on unowned channels. As research moves to social and communities, track where your educational content sparks conversations and shares outside of your own site.

The key is to shift from measuring traffic to measuring presence. Visibility in high-authority spaces, whether through organic or paid efforts, is the new top-of-funnel KPI.

Step 3: Connect The C-Suite To Zero-Click Strategies

Of course, metrics only matter if your leadership team understands them. However, many executives are still trained to see organic traffic as the gold standard. So when traffic dips, even for reasons outside your control, it can look like a problem.

This is where your role as translator becomes critical. You need to reframe the conversation from clicks to visibility, from pageviews to presence. The message to the C-suite should sound less like an apology and more like a strategic shift:

  • A decline in organic traffic doesn’t equal a decline in influence. Zero-click means users may never land on your site, but they’re still seeing your brand. Visibility is impact.
  • Your brand may actually be showing up more often. The problem is measurement, not presence. Snippets, AI overviews, and social conversations don’t show up in traffic charts, but they absolutely shape perception.
  • Advertising fills the gap. Paid campaigns guarantee your brand isn’t invisible at the exact moment prospects are forming their first impressions, making it the perfect complement to organic efforts.

The way to make this stick with leaders is through narrative. Show them that early impressions are building brand memory. Connect branded search growth to that recall. Paint the picture that what looks like “less traffic” is often “more visibility in new places.”

Executives care about competitive positioning and long-term growth, not just line graphs. So remind them: being the brand people remember when it’s time to buy is the real win. Presence is what creates that memory, and memory is what drives future pipeline.

Zero-Click Isn’t the End. It’s Your Advantage If You Move First

Zero-click isn’t the end of marketing as we know it. It’s just the latest evolution in how people discover and remember brands. The marketers who win will be the ones who adapt their strategies, blending organic authority with paid presence, reframing their KPIs, and helping their companies understand what visibility really means today.

The awareness gap is real, but it’s also an opportunity. By rethinking how you measure, how you communicate results, and how you show up at the top of the funnel, you can set your brand up to thrive in an environment where discovery no longer depends on a click.

And this is only Part 1. In Part 2, we’ll dig into the real secret weapon in a clickless world: recall. Because the brands that stay top of mind are the ones that get chosen later. Advertising’s biggest power isn’t in driving a click, it’s in building the kind of recognition that lasts.

Check back soon on the AdRoll website for Part 2: How to Build Recall in a Clickless World.

Image Credits

Featured Image: Image by AdRoll. Used with permission.

In-Post Images: Image by AdRoll. Used with permission.

This medical startup uses LLMs to run appointments and make diagnoses

Imagine this: You’ve been feeling unwell, so you call up your doctor’s office to make an appointment. To your surprise, they schedule you in for the next day. At the appointment, you aren’t rushed through describing your health concerns; instead, you have a full half hour to share your symptoms and worries and the exhaustive details of your health history with someone who listens attentively and asks thoughtful follow-up questions. You leave with a diagnosis, a treatment plan, and the sense that, for once, you’ve been able to discuss your health with the care that it merits.

The catch? You might not have spoken to a doctor, or other licensed medical practitioner, at all.

This is the new reality for patients at a small number of clinics in Southern California that are run by the medical startup Akido Labs. These patients—some of whom are on Medicaid—can access specialist appointments on short notice, a privilege typically only afforded to the wealthy few who patronize concierge clinics.

The key difference is that Akido patients spend relatively little time, or even no time at all, with their doctors. Instead, they see a medical assistant, who can lend a sympathetic ear but has limited clinical training. The job of formulating diagnoses and concocting a treatment plan is done by a proprietary, LLM-based system called ScopeAI that transcribes and analyzes the dialogue between patient and assistant. A doctor then approves, or corrects, the AI system’s recommendations.

“Our focus is really on what we can do to pull the doctor out of the visit,” says Jared Goodner, Akido’s CTO. 

According to Prashant Samant, Akido’s CEO, this approach allows doctors to see four to five times as many patients as they could previously. There’s good reason to want doctors to be much more productive. Americans are getting older and sicker, and many struggle to access adequate health care. The pending 15% reduction in federal funding for Medicaid will only make the situation worse.

But experts aren’t convinced that displacing so much of the cognitive work of medicine onto AI is the right way to remedy the doctor shortage. There’s a big gap in expertise between doctors and AI-enhanced medical assistants, says Emma Pierson, a computer scientist at UC Berkeley.  Jumping such a gap may introduce risks. “I am broadly excited about the potential of AI to expand access to medical expertise,” she says. “It’s just not obvious to me that this particular way is the way to do it.”

AI is already everywhere in medicine. Computer vision tools identify cancers during preventive scans, automated research systems allow doctors to quickly sort through the medical literature, and LLM-powered medical scribes can take appointment notes on a clinician’s behalf. But these systems are designed to support doctors as they go about their typical medical routines.

What distinguishes ScopeAI, Goodner says, is its ability to independently complete the cognitive tasks that constitute a medical visit, from eliciting a patient’s medical history to coming up with a list of potential diagnoses to identifying the most likely diagnosis and proposing appropriate next steps.

Under the hood, ScopeAI is a set of large language models, each of which can perform a specific step in the visit—from generating appropriate follow-up questions based on what a patient has said to populating a list of likely conditions. For the most part, these LLMs are fine-tuned versions of Meta’s open-access Llama models, though Goodner says that the system also makes use of Anthropic’s Claude models.

During the appointment, assistants read off questions from the ScopeAI interface, and ScopeAI produces new questions as it analyzes what the patient says. For the doctors who will review its outputs later, ScopeAI produces a concise note that includes a summary of the patient’s visit, the most likely diagnosis, two or three alternative diagnoses, and recommended next steps, such as referrals or prescriptions. It also lists a justification for each diagnosis and recommendation.

ScopeAI is currently being used in cardiology, endocrinology, and primary care clinics and by Akido’s street medicine team, which serves the Los Angeles homeless population. That team—which is led by Steven Hochman, a doctor who specializes in addiction medicine—meets patients out in the community to help them access medical care, including treatment for substance use disorders. 

Previously, in order to prescribe a drug to treat an opioid addiction, Hochman would have to meet the patient in person; now, caseworkers armed with ScopeAI can interview patients on their own, and Hochman can approve or reject the system’s recommendations later. “It allows me to be in 10 places at once,” he says.

Since they started using ScopeAI, the team has been able to get patients access to medications to help treat their substance use within 24 hours—something that Hochman calls “unheard of.”

This arrangement is only possible because homeless patients typically get their health insurance from Medicaid, the public insurance system for low-income Americans. While Medicaid allows doctors to approve ScopeAI prescriptions and treatment plans asynchronously, both for street medicine and clinic visits, many other insurance providers require that doctors speak directly with patients before approving those recommendations. Pierson says that discrepancy raises concerns. “You worry about that exacerbating health disparities,” she says.

Samant is aware of the appearance of inequity, and he says the discrepancy isn’t intentional—it’s just a feature of how the insurance plans currently work. He also notes that being seen quickly by an AI-enhanced medical assistant may be better than dealing with long wait times and limited provider availability, which is the status quo for Medicaid patients. And all Akido patients can opt for traditional doctor’s appointments, if they are willing to wait for them, he says.

Part of the challenge of deploying a tool like ScopeAI is navigating a regulatory and insurance landscape that wasn’t designed for AI systems that can independently direct medical appointments. Glenn Cohen, a professor at Harvard Law School, says that any AI system that effectively acts as a “doctor in a box” would likely need to be approved by the FDA and could run afoul of medical licensure laws, which dictate that only doctors and other licensed professionals can practice medicine.

The California Medical Practice Act says that AI can’t replace a doctor’s responsibility to diagnose and treat a patient, but doctors are allowed to use AI in their work, and they don’t need to see patients in person or in real time before diagnosing them. Neither the FDA nor the Medical Board of California was able to say whether ScopeAI was on solid legal footing based only on a written description of the system.

But Samant is confident that Akido is in compliance, as ScopeAI was intentionally designed to fall short of being a “doctor in a box.” Because the system requires a human doctor to review and approve all of its diagnostic and treatment recommendations, he says, it doesn’t require FDA approval.

At the clinic, this delicate balance between AI and doctor decision making happens entirely behind the scenes. Patients never see the ScopeAI interface directly—instead, they speak with a medical assistant who asks questions the way a doctor might in a typical appointment. That arrangement might make patients feel more comfortable. But Zeke Emanuel, a professor of medical ethics and health policy at the University of Pennsylvania who served in the Obama and Biden administrations, worries that this comfort could obscure from patients the extent to which an algorithm is influencing their care.

Pierson agrees. “That certainly isn’t really what was traditionally meant by the human touch in medicine,” she says.

DeAndre Siringoringo, a medical assistant who works at Akido’s cardiology office in Rancho Cucamonga, says that while he tells the patients he works with that an AI system will be listening to the appointment in order to gather information for their doctor, he doesn’t inform them about the specifics of how ScopeAI works, including the fact that it makes diagnostic recommendations to doctors. 

Because all ScopeAI recommendations are reviewed by a doctor, that might not seem like such a big deal—it’s the doctor who makes the final diagnosis, not the AI. But it’s been widely documented that doctors using AI systems tend to go along with the system’s recommendations more often than they should, a phenomenon known as automation bias. 

At this point, it’s impossible to know whether automation bias is affecting doctors’ decisions at Akido clinics, though Pierson says it’s a risk—especially when doctors aren’t physically present for appointments. “I worry that it might predispose you to sort of nodding along in a way that you might not if you were actually in the room watching this happen,” she says.

An Akido spokesperson says that automation bias is a valid concern for any AI tool that assists a doctor’s decision-making and that the company has made efforts to mitigate that bias. “We designed ScopeAI specifically to reduce bias by proactively countering blind spots that can influence medical decisions, which historically lean heavily on physician intuition and personal experience,” she says. “We also train physicians explicitly on how to use ScopeAI thoughtfully, so they retain accountability and avoid over-reliance.”

Akido evaluates ScopeAI’s performance by testing it on historical data and monitoring how often doctors correct its recommendations; those corrections are also used to further train the underlying models. Before deploying ScopeAI in a given specialty, Akido ensures that when tested on historical data sets, the system includes the correct diagnosis in its top three recommendations at least 92% of the time.
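Akido hasn’t published its evaluation code, but the top-three threshold it cites is straightforward to illustrate. The sketch below shows how such a metric could be computed on a historical test set; the case data, field names, and function are invented for illustration, not Akido’s actual pipeline:

```python
def top3_accuracy(cases):
    """Fraction of cases whose confirmed diagnosis appears among the
    system's three highest-ranked suggestions."""
    hits = sum(
        1 for case in cases
        if case["confirmed"] in case["ranked_suggestions"][:3]
    )
    return hits / len(cases)

# Toy historical data: each case pairs the doctor-confirmed diagnosis
# with the model's ranked list of candidate diagnoses.
cases = [
    {"confirmed": "atrial fibrillation",
     "ranked_suggestions": ["atrial fibrillation", "anxiety", "anemia"]},
    {"confirmed": "hypothyroidism",
     "ranked_suggestions": ["depression", "hypothyroidism", "anemia"]},
    {"confirmed": "migraine",
     "ranked_suggestions": ["tension headache", "sinusitis", "cluster headache"]},
    {"confirmed": "type 2 diabetes",
     "ranked_suggestions": ["type 2 diabetes", "obesity", "prediabetes"]},
]

print(top3_accuracy(cases))  # 3 of 4 cases hit -> 0.75
```

Under this kind of scoring, a specialty would clear Akido’s bar only if the number would come out at 0.92 or higher across the historical data set.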

But Akido hasn’t undertaken more rigorous testing, such as studies that compare ScopeAI appointments with traditional in-person or telehealth appointments, in order to determine whether the system improves—or at least maintains—patient outcomes. Such a study could help indicate whether automation bias is a meaningful concern.

“Making medical care cheaper and more accessible is a laudable goal,” Pierson says. “But I just think it’s important to conduct strong evaluations comparing to that baseline.”

An oil and gas giant signed a $1 billion deal with Commonwealth Fusion Systems

Eni, one of the world’s largest oil and gas companies, just agreed to buy $1 billion in electricity from a power plant being built by Commonwealth Fusion Systems. The deal is the latest to illustrate just how much investment Commonwealth and other fusion companies are courting as they attempt to take fusion power from the lab to the power grid. 

“This is showing in concrete terms that people that use large amounts of energy, that know the energy market—they want fusion power, and they’re willing to contract for it and to pay for it,” said Bob Mumgaard, cofounder and CEO of Commonwealth, on a press call about the deal.   

The agreement will see Eni purchase electricity from Commonwealth’s first commercial fusion power plant, in Virginia. The facility is still in the planning stages but is scheduled to come online in the early 2030s.

The news comes a few weeks after Commonwealth announced an $863 million funding round, bringing its total funding raised to date to nearly $3 billion. The fusion company also announced earlier this year that Google would be its first commercial power customer for the Virginia plant.

Commonwealth, a spinout from MIT’s Plasma Science and Fusion Center, is widely considered one of the leading companies in fusion power. Investment in the company represents nearly one-third of the total global investment in private fusion companies. (MIT Technology Review is owned by MIT but is editorially independent.)

Eni has invested in Commonwealth since 2018 and participated in the latest fundraising round. The vast majority of the company’s business is in oil and gas, but in recent years it’s made investments in technologies like biofuels and renewables.

“A company like us—we cannot stay and wait for things to happen,” says Lorenzo Fiorillo, Eni’s director of technology, research and development, and digital. 

One open question is what, exactly, Eni plans to do with this electricity. When asked about it on the press call, Fiorillo referenced wind and solar plants that Eni owns and said the plan “is not different from what we do in other areas in the US and the world.” (Eni sells electricity from power plants that it owns, including renewable and fossil-fuel plants.)

Commonwealth is building tokamak fusion reactors that use superconducting magnets to hold plasma in place. Inside that plasma, fusion reactions force hydrogen atoms together, releasing large amounts of energy.

The company’s first demonstration reactor, which it calls Sparc, is over 65% complete, and the team is testing components and assembling them. The plan is for the reactor, which is located outside Boston, to make plasma within two years and then demonstrate that it can generate more energy than is required to run it.

While Sparc is still under construction, Commonwealth is working on plans for Arc, its first commercial power plant. That facility should begin construction in 2027 or 2028 and generate electricity for the grid in the early 2030s, Mumgaard says.

Despite the billions of dollars Commonwealth has already raised, the company still needs more money to build Arc, which will be a multibillion-dollar project, Mumgaard said on an August press call about the company’s latest fundraising round.

The latest commitment from Eni could help Commonwealth secure the funding it needs to get Arc built. “These agreements are a really good way to create the right environment for building up more investment,” says Paul Wilson, chair of the department of nuclear engineering and engineering physics at the University of Wisconsin, Madison.

Even though commercial fusion energy is still years away at a minimum, investors and big tech companies have pumped money into the industry and signed agreements to buy power from plants once they’re operational. 

Helion, another leading fusion startup, has plans to produce electricity from its first reactor in 2028 (an aggressive timeline that has some experts expressing skepticism). That facility will have a full generating capacity of 50 megawatts, and in 2023 Microsoft signed an agreement to purchase energy from the facility in order to help power its data centers.

As billions of dollars pour into the fusion industry, there are still many milestones ahead. To date, only the National Ignition Facility at Lawrence Livermore National Laboratory has demonstrated that a fusion reactor can generate more energy than the amount put into the reaction. No commercial project has achieved that yet. 

“There’s a lot of capital going out now to these startup companies,” says Ed Morse, a professor of nuclear engineering at the University of California, Berkeley. “What I’m not seeing is a peer-reviewed scientific article that makes me feel like, boy, we really turned the corner with the physics.”

But others are taking major commercial deals from Commonwealth and others as reasons to be optimistic. “Fusion is moving from the lab to be a proper industry,” says Sehila Gonzalez de Vicente, global director of fusion energy at the nonprofit Clean Air Task Force. “This is very good for the whole sector to be perceived as a real source of energy.”