HIV could infect 1,400 infants every day because of US aid disruptions

Around 1,400 infants are being infected with HIV every day as a result of the new US administration’s funding cuts to AIDS organizations, new modeling suggests.

In an executive order issued January 20, President Donald Trump paused new foreign aid funding to global health programs, and four days later, US Secretary of State Marco Rubio issued a stop-work order on existing foreign aid assistance. Surveys suggest that these changes forced more than a third of global organizations that provide essential HIV services to close within days of the announcements. 

Hundreds of thousands of people are losing access to HIV treatments as a result. Women and girls are missing out on cervical cancer screening and services for gender-based violence, too. A waiver Rubio later issued in an attempt to restore lifesaving services has had very little impact. 

“We are in a crisis,” said Jennifer Sherwood, director of research, public policy, at amfAR, the Foundation for AIDS Research, at a data-sharing event on March 17 at Columbia University in New York. “Even funds that had already been appropriated, that were in the field, in people’s bank accounts, [were] frozen.” 

Rubio approved a waiver for “life-saving” humanitarian assistance on January 28. “This resumption is temporary in nature, and with limited exceptions as needed to continue life-saving humanitarian assistance programs, no new contracts shall be entered into,” he said in a statement at the time.

The US President’s Emergency Plan for AIDS Relief (PEPFAR), which invests millions of dollars in the global AIDS response every year, was also granted a waiver February 1 to continue “life-saving” work. 

Despite this waiver, there have been devastating reports of the impact on health programs across the many low-income countries that relied on the US Agency for International Development (USAID), which oversees PEPFAR, for funding. To get a better sense of the overall impact, amfAR conducted two surveys looking at more than 150 organizations that rely on PEPFAR funding in more than 26 countries. 

“We found really severe disruptions to HIV services,” said Sherwood, who presented the findings at Columbia. “About 90% of our participants said [the cuts] had severely limited their ability to deliver HIV services.” Specifically, 94% of follow-up services designed to monitor people’s progress were either canceled or disrupted. There were similarly dramatic disruptions to services for HIV testing, treatment, and prevention, and 92% of services for gender-based violence were canceled or disrupted.

The cuts have plunged organizations into a “deep financial crisis,” said Sherwood. Almost two-thirds of respondents said community-based staff were laid off before the end of January. When the team asked these organizations how long they could stay open without US funding, 36% said they had already closed. “Only 14% said that they were able to stay open longer than a month,” said Sherwood. “And … this data was collected longer than a month ago.”

The organizations said tens of thousands of the people they serve would lose HIV treatment within a month. For some organizations, that figure was over 100,000, said Sherwood. 

Part of the problem is that the stop-work order came at a time when these organizations were already experiencing “shortages in commodities,” Sherwood said. Typically, centers might give a person a six-month supply of antiretroviral drugs. Before the stop-work order, many organizations were only giving one-month supplies. “Almost all of their clients are due to come back and pick up [more] treatments in this 90-day freeze,” she said. “You can really see the panic this has caused.”

The waiver for “life-saving” treatment didn’t do much to remedy this situation. Only 5% of the organizations received funds under the waiver, while the vast majority either were told they didn’t qualify or had not been told they could restart services. “While the waiver might be one important avenue to restart some services, it cannot, on the whole, save the US HIV program,” said Sherwood. “It is very limited in scope, and it has not been widely communicated to the field.”

AmfAR isn’t the only organization tracking the impact of US funding cuts. At the same event, Sara Casey, assistant professor of population and family health at Columbia, presented results of a survey of 101 people who work in organizations reliant on US aid. They reported seeing disruptions to services in humanitarian responses, gender-based violence, mental health, infectious diseases, essential medicines and vaccines, and more. “Many of these should have been eligible for the ‘life-saving’ waivers,” Casey said.

Casey and her colleagues have also been interviewing people in Colombia, Kenya, and Nepal. In those countries, women of reproductive age, newborns and children, people living with HIV, members of the LGBTQI+ community, and migrants are among those most affected by the cuts, she said, and health workers, who are primarily women, are losing their livelihoods.

“There will be really disproportionate impacts on the world’s most vulnerable,” said Sherwood. Women make up 67% of the health-care workforce, according to the World Health Organization. They also make up 63% of PEPFAR clients. PEPFAR has supported gender equality and services for gender-based violence. “We don’t know if other countries or other donors … can or will pick up these types of programs, especially in the face of competing priorities about keeping people on treatment and keeping people alive,” said Sherwood.

Sherwood and her colleagues at amfAR have also done some modeling work to determine the potential impact of cuts to PEPFAR on women and girls, using data from last year to create their estimates. “Each day that the stop-work order is in place, we estimate that there are 1,400 new HIV infections among infants,” she said. And every day, over 7,000 women stand to miss out on cervical cancer screenings.

The funding cuts have also had a dramatic effect on mental-health services, said Farah Arabe, who serves on the advisory board of the Global Mental Health Action Network. Arabe presented the preliminary findings of an ongoing survey of mental-health organizations from 29 countries that receive US aid. “Unfortunately, this is a very grim picture,” she said. “Only 5% of individuals who were receiving services in 2024 will be able to receive services in 2025.” 

The same goes for children and adolescents. “This is a particularly sad picture because children … are going through brain development,” she said. “Impacts … at this early stage of life have lifelong impacts on academic achievement, economic productivity, mental health, physical health … even the ability to parent the next generation.” 

For now, nonprofits and aid and research organizations are scrambling to try to understand, and potentially limit, the impact of the cuts. Some are hoping to locate new sources of funding, independent of the US. 

“I am deeply concerned that progress in disease eradication, poverty reduction, and gender equality is at risk of being reversed,” said Thoai Ngo of Columbia University’s Mailman School of Public Health, who chaired the event. “Without urgent action, preventable deaths will rise, more people will fall into poverty, and as always, women and girls will bear the heaviest burden.”

On March 10, Rubio announced the results of his department’s review of USAID. “After a 6 week review we are officially cancelling 83% of the programs at USAID,” he shared via the social media platform X.

Bad SEO Advice: 4 Tips to Ignore

Tools to optimize search engine rankings often provide automated audits with reports and recommendations. The findings typically include detailed explanations, which are handy for SEO learners.

But in my experience the findings are often harmful, as they suggest low-priority actions. Those actions are unlikely to improve performance, and they distract from worthwhile tactics that can move the needle.

Do-it-yourself SEO is possible, provided the doer understands helpful practices versus those that waste time.

Here are common SEO recommendations to ignore.

Title Tag Length

Google truncates title tags in search results to roughly 60 characters. Many SEO tools therefore flag longer titles as a weakness.

This is false. Google considers the entire title to assess relevancy, even if partly cropped. Hence a longer title can help a page rank higher.

Instead of shortening title tags, consider rewriting them with the critical keywords at the front to appear prominently in SERPs and attract clicks.
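To make the advice concrete, here is a minimal Python sketch of the check worth running instead of a length audit. It assumes Google’s pixel-based truncation can be approximated at roughly 60 characters, and `keyword_visible` is a hypothetical helper, not part of any SEO tool:

```python
# Sketch: check whether a title's key phrase survives Google's ~60-character
# display cutoff (an approximation; actual truncation is pixel-based).
DISPLAY_LIMIT = 60

def keyword_visible(title: str, keyword: str, limit: int = DISPLAY_LIMIT) -> bool:
    """Return True if the keyword fits entirely within the visible portion."""
    pos = title.lower().find(keyword.lower())
    return pos != -1 and pos + len(keyword) <= limit

# A long title is fine for ranking; the point is to front-load the keyword.
long_title = ("Best Running Shoes for Flat Feet: 12 Podiatrist-Approved "
              "Picks Reviewed and Ranked for 2025")
print(keyword_visible(long_title, "running shoes"))  # keyword appears early -> True
```

The same title with the keyword pushed to the end would fail the check, even though its total length is identical.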

Hreflang Usage

Hreflang is an HTML attribute that tells search engines which language version of a page to show users in a specific region.

SEO tools frequently report the absence of an hreflang tag as an error. In reality, the tag is pointless for single-language sites, and since 2016 Google has ignored it altogether.

Moreover, modern web browsers and accessibility screen readers programmatically determine the language.

Hence, hreflang tags are unnecessary for all sites — single- and multi-language.

HTML Headings

Many themes and content management systems use HTML headings inconsistently. For example, a page could contain two H1 headings, no H2s, or multiple H3s preceding an H2.

Certainly HTML headings should appear sequentially when possible, but it’s sometimes difficult without changing a theme, and it’s unlikely to improve rankings. Thus I advise clients to ignore sequential placement if it requires much time or money.

Make sure to use HTML headings and include keywords, but ignore recommendations for precise order.
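The kind of heading-order finding these tools flag can be reproduced with a few lines of standard-library Python. This is a sketch of the audit check itself, not any particular tool’s implementation:

```python
from html.parser import HTMLParser

class HeadingAudit(HTMLParser):
    """Collect h1-h6 levels in document order (a minimal audit sketch)."""
    def __init__(self):
        super().__init__()
        self.levels = []

    def handle_starttag(self, tag, attrs):
        if len(tag) == 2 and tag[0] == "h" and tag[1].isdigit():
            self.levels.append(int(tag[1]))

def skipped_levels(html: str) -> bool:
    """True if any heading jumps more than one level deeper (e.g. h1 -> h3)."""
    audit = HeadingAudit()
    audit.feed(html)
    return any(b - a > 1 for a, b in zip(audit.levels, audit.levels[1:]))

page = "<h1>Title</h1><h3>Oops, no h2</h3><h2>Section</h2>"
print(skipped_levels(page))  # True: the h1 -> h3 jump is what tools flag
```

If fixing such a jump means paying for theme work, the advice above stands: spend the budget elsewhere.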

Word Count

For years, search engine optimizers have claimed that word count is an important ranking factor. Some studies have claimed longer pages rank higher. Many SEO tools suggest pages of 1,000 words or more.

It’s nonsense. For years Google representatives have stated the number of words (or links) on a page is irrelevant. Likely, bloated pages are actually harmful if humans ignore them, prompting algorithms to classify them as unhelpful.

A page should be the minimum length to satisfy and help a user. If that’s 200 words, so be it. Write for humans, ignore “minimum word count” recommendations, and focus instead on searchers’ intent.

Google Expands AI Overviews To Thousands More Health Queries via @sejournal, @MattGSouthern

Google is expanding AI overviews to “thousands more health topics,” per an announcement at the company’s health-focused ‘The Check Up’ event.

The event included developments spanning research, wearable technology, and medical records.

Here’s more about how Google is refining health results in Search.

AI Overviews For Health Queries

Google is showing AI overviews for more health-related queries.

Health queries have so far triggered fewer AI overviews than other types of questions. Now, these overviews will be available for more queries and in more languages.

Google states:

“Now, using AI and our best-in-class quality and ranking systems, we’ve been able to expand these types of overviews to cover thousands more health topics. We’re also expanding to more countries and languages, including Spanish, Portuguese and Japanese, starting on mobile.”

Google notes that health-focused advancements to its Gemini models will feed into how it summarizes information for health topics.

With these updates, Google claims AI overviews for health queries are “more relevant, comprehensive and continue to meet a high bar for clinical factuality.”

New “What People Suggest” Feature

Google is introducing a new feature for health queries called “What People Suggest.”

It uses AI to organize perspectives from online discussions and to analyze what people with similar health conditions are saying.

For example, someone with arthritis looking for exercise recommendations could use this feature to learn what works for others with the same condition.

See an example below.

Screenshot from: blog.google/technology/health/the-check-up-health-ai-updates-2025/, March 2025.

“What People Suggest” is currently available only on mobile devices in the U.S.

Broader Health AI Initiatives

The search updates were part of a larger set of health technology announcements at The Check Up event. Google also revealed:

  • Medical Records APIs in Health Connect for managing health data across applications
  • FDA clearance for Loss of Pulse Detection on Pixel Watch 3
  • An AI co-scientist built on Gemini 2.0 to help biomedical researchers
  • TxGemma, a collection of open models for AI-powered drug discovery
  • Capricorn, an AI tool for pediatric oncology treatment developed with Princess Máxima Center

Looking Ahead

Hallucination remains a problem for AI models. While Gemini may have upgrades that make it more accurate, it will still be wrong at least sometimes.

Google’s inclusion of personal experiences alongside medical websites marks a shift, recognizing people value both clinical information and real-world perspectives.

Health publishers should be aware that this could affect search visibility but may also increase chances of appearing for more queries or the “What People Suggest” section.

New Cybersecurity Bot Attack Defense Helps SaaS Apps Stay Secure via @sejournal, @martinibuster

Cybersecurity company HUMAN has introduced a new feature for its HUMAN Application Protection service called HUMAN Sightline. Sightline enables users to defend their SaaS applications with detailed analyses of attacker activities and to track changes in bot behavior. The feature is available as a component of Account Takeover Defense, Scraping Defense, and Transaction Abuse Defense at no additional cost.

HUMAN Application Protection is a malicious-traffic analytics and bot-blocking solution that lets analysts understand what bots and humans are doing on an application and block the malicious activity.

According to the HUMAN Sightline announcement:

“Customers have long asked us to provide advanced anomaly reporting—or, in other words, to mark anomalies that represent distinct attacks. But when we started down that path, we realized that simply labeling spikes would not provide the information that customers really need…

…We built a secondary detection engine using purpose-built AI that analyzes all the malicious traffic in aggregate after the initial block or allow decision is made. This engine compares every automated request to every other current and past request in order to construct and track “attack profiles,” groups of requests thought to be from the same attacker based on their characteristics and actions.

Beyond visibility, secondary detection allows HUMAN’s detection to adapt and learn to the attacker’s changing behavior. Now that we can monitor individual profiles over time, the system can react to their specific adaptation, which allows us to continue to track and block the attacker. The number of signatures used by the system for each profile increases over time, and this information is surfaced in the portal.”

Search Engine Journal Asked Human About Their Service

How is this solution implemented?

“HUMAN Sightline will be a new dashboard in HUMAN Application Protection. It will be available in Account Takeover Defense, Scraping Defense, and Transaction Abuse Defense, at no additional cost. No other bot management product on the market has similar capabilities to HUMAN Sightline. HUMAN’s new attack profiling approach segments malicious traffic into distinct profiles, so customers can identify the different profiles that make up each traffic volume. Analysts can understand what each is doing, their sophistication, their capabilities, and the specific characteristics that distinguish them from other humans and bots on the application. This allows HUMAN to bring attack reporting to the next level, serving as both a bot blocking solution and a data-centric, machine learning-driven analyst tool.”

Is it a SaaS solution? Or is it something that lives on a server?

“Our Human Defense Platform safeguards the entire customer journey with high-fidelity decision-making that defends against bots, fraud, and digital threats. HUMAN helps SaaS platforms provide a safe user journey by preserving high-quality customer interactions across online accounts, applications, and websites.”

Is this aimed at enterprise level businesses? How about universities, are they an end user that can implement this solution?

“This solution is aimed at organizations that are interested in expanding its bot traffic analyzing capabilities. Enterprise level businesses and higher education can certainly utilize this solution; again, it depends how committed the organization is to tracking bot traffic. HUMAN has long been helping clients in the higher education sector from evolving cyber threats, and HUMAN Sightline will only benefit these organizations to protect themselves further.”

Read more about HUMAN Sightline:

Human Sightline: A New Era in Bot Visibility

Featured Image by Shutterstock/AntonKhrupinArt

What Content Works Well In LLMs? via @sejournal, @Kevin_Indig

Over the last 12 months, we filled significant gaps in our understanding of AI Chatbots like ChatGPT & Co.

We know:

  1. Adoption is growing rapidly.
  2. AI chatbots send more referrals to websites over time.
  3. Referral traffic from AI chatbots has a higher quality than that from Google.

You can read all about it in the state of AI chatbots and SEO.

But there isn’t much content about examples and success factors of content that drives citations and mentions in AI chatbots.

To get an answer, I analyzed over 7,000 citations across 1,600 URLs to content-heavy sites (think: Integrators) in three AI chatbots (ChatGPT, Perplexity, AI Overviews) in February 2024 with the help of Profound.

My goal is to figure out:

  1. Why some pages are more cited than others, so we can optimize content for AI chatbots.
  2. Whether classic SEO factors matter for AI chatbot visibility, so we can prioritize.
  3. What traps to avoid, so we don’t have to learn the same lessons many times.
  4. If different factors influence mentions and citations, so we can be more targeted in our efforts.

Here are my findings:


The Key To Brand Citation In AI Chatbots: Deep Content

Image Credit: Kevin Indig

🔍 Context: We know that AI chatbots use Retrieval Augmented Generation (RAG) to augment their answers with results from Google and Bing. However, does that mean classic SEO ranking factors also translate to AI chatbot citations? No.

My correlation analysis shows that none of the classic SEO metrics have strong relationships with citations. LLMs have light preferences: Perplexity and AIOs weigh word and sentence count higher, while ChatGPT weighs domain rating and Flesch Score.

💡Takeaway: Classic SEO metrics don’t matter nearly as much for AI chatbot mentions and citations. The best thing you can do for content optimization is to aim for depth, comprehensiveness, and readability (how easy the text is to understand).

The following examples all demonstrate those attributes:

  • https://www.byrdie.com/digital-prescription-services-dermatologist-5179537
  • https://www.healthline.com/nutrition/best-weight-loss-programs
  • https://www.verywellmind.com/we-tried-online-therapy-com-these-were-our-experiences-8780086

Broad correlations didn’t reveal enough meat on the bone and left me with too many open questions.

So, I looked at what the most-cited content does differently than the rest. That approach showed much stronger patterns.

Image Credit: Kevin Indig

🔍Context: Because I didn’t get much out of statistical correlations, I wanted to see how the top 10% of most cited content stacks up against the bottom 90%.

The bigger the difference, the more critical the factor for the top 10%. In other words, the multiplier (x-axis on the chart) indicates what factors LLMs reward with citations.

The results:

  • The two factors that stand out are sentence and word count, followed by the Flesch Score. Metrics related to backlinks and traffic seem to have a negative effect, which doesn’t mean that AI chatbots weigh them negatively but simply that they don’t matter for mentions or citations.
  • The top 10% of most cited pages across all three LLMs have much less traffic, rank for fewer keywords, and get fewer total backlinks. How does that make sense? It almost looks like being strong in traditional SEO metrics is bad for AI chatbot visibility.
  • Copilot (not included in the chart) has the starkest inequality, by the way. The top 10% have 17.6x more citations than the bottom 90%. However, the top 10% also rank for 1.7x more keywords in organic search. So, Copilot seems to have stronger preferences than other AI chatbots.

Splitting the data up by AI Chatbot shows you their unique preferences:

Image Credit: Kevin Indig

💡Takeaway: Content depth (word and sentence count) and readability (Flesch Score) have the biggest impact on citations in AI chatbots.

This is important to understand: Longer content isn’t better because it’s longer, but because it has a higher chance of answering a specific question prompted in an AI chatbot.
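For readers who want to measure the readability side themselves, here is a rough Python sketch of the Flesch Reading Ease formula the analysis relies on. The vowel-group syllable counter is a crude assumption of mine; real readability tools use pronunciation dictionaries:

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count vowel groups (real tools use dictionaries).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease = 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    n = max(1, len(words))
    return 206.835 - 1.015 * (n / sentences) - 84.6 * (syllables / n)

sample = "The cat sat on the mat. It was warm."
print(round(flesch_reading_ease(sample), 1))  # 117.7 - very easy to read
```

Higher scores mean easier text; scores around 50-60, like the examples below, indicate fairly demanding prose that is still readable.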

Examples:

  • www.verywellmind.com/best-online-psychiatrists-5119854 has 187 citations, over 10,000 words, and over 1,500 sentences, with a Flesch Score of 55, and is cited 72 times by ChatGPT.
  • On the other hand, www.onlinetherapy.com/best-online-psychiatrists/ has only three citations, also a low Flesch Score, with 48, but comes “short” with only 3,900 words and 580 sentences.

🔍Context: We don’t yet know the value of a brand being mentioned by an AI chatbot.

Early research indicates it’s high, especially when prompts indicate purchase intent.

However, I wanted to get a step closer by understanding what leads to brand mentions in AI chatbots in the first place.

After matching many metrics with AI chatbot visibility, I found one factor that stands out more than anything else: Brand search volume.

The number of AI chatbot mentions and brand search volume have a correlation of .334 – pretty good in this field. In other words, the popularity of a brand broadly decides how visible it is in AI chatbots.

Image Credit: Kevin Indig

Popularity is the most significant predictor for ChatGPT, which also sends the most traffic and has the highest usage of all AI chatbots.

When breaking it down by AI chatbot, I found ChatGPT has the highest correlation with .542 (strong), but Perplexity (.196) and Google AIOs (.254) have lower correlations.

To be clear, there is a lot of nuance on the prompt and category level. But broadly, a brand’s visibility seems to be severely impacted by how popular it is.

Example of popular brands and their visibility in the health category (Image Credit: Kevin Indig)

However, when brands are mentioned, all AI chatbots prefer popular brands and consistently rank them in the same order.

  • There is a clear link between the categories of the users’ questions (mental health, skincare, weight loss, hair loss, erectile dysfunction) and brands.
  • Early data shows that the most visible brands are digital-first and invest heavily in their online presence with content, SEO, reviews, social media, and digital advertising.

💡Takeaway: Popularity is the biggest criterion that decides whether a brand is mentioned in AI chatbots or not. The way consumers connect brands to product categories also matters.

Comparing brand search volume and product category presence with your competitors gives you the best idea of how competitive you are on ChatGPT & Co.

Examples: All models in my analysis cite Healthline most often. Not a single other domain was in the top 10 citations for all four models, showing their distinctly different tastes and how important it is to keep track of many models as opposed to only ChatGPT – if those models also send you traffic.

Image Credit: Kevin Indig

Other well-cited domains across most models:

  • verywellmind.com
  • onlinedoctor.com
  • medicalnewstoday.com
  • byrdie.com
  • cnet.com
  • ncoa.org

Image Credit: Kevin Indig

Context: Not all AI chatbots mentioned brands with the same frequency. Even though ChatGPT has the highest adoption and sends the most referral traffic to sources, Perplexity mentions the most brands per answer on average.

Prompt structure matters for brand visibility:

  • The word “best” was a strong trigger for brand mentions in 69.71% of prompts.
  • Words like “trusted” (5.77%), “source” (2.88%), “recommend” (0.96%), and “reliable” (0.96%) were also associated with an increased likelihood of brand mentions.
  • Prompts including “recommend” often mention public organizations like the FDA, especially when the prompt includes words like “trusted” or “leading.”
  • Google AIOs show the highest brand diversity, followed by Perplexity, then ChatGPT.

💡Takeaway: Prompt structure has a meaningful impact on the brands that come up in the answer.

However, we’re not yet able to truly know what prompts users actually type. This is important to keep in mind: All prompts we look at and track are just proxies for what users might be doing.

Image Credit: Kevin Indig

🔍Context: In my research, I encountered several ways brands unintentionally sabotage their AI chatbot visibility.

I surface them here because the prerequisite to being visible in LLMs is, of course, their ability to crawl your site, whether directly or through training data.

For example, Copilot doesn’t cite onlinedoctor.com because it’s not indexed in Bing. I couldn’t find indicators that this was done on purpose, so I assume it’s an accident that could quickly be fixed and rewarded with referral traffic.

On the other hand, ChatGPT 4o doesn’t cite cnet.com, and Perplexity doesn’t cite everydayhealth.com because both sites intentionally block the respective LLM in their robots.txt.

But there are also cases in which AI chatbots reference sites even though they technically shouldn’t.

The most cited domain in Perplexity in my dataset is blocked.goodrx.com. GoodRX blocks users from non-U.S. countries, and it seems it accidentally or intentionally blocks Perplexity.

Image Credit: Kevin Indig

It’s important to single out Google’s AI Overviews here: There is no opt-out for AIOs, meaning if you want to get organic traffic from Google, you need to allow it to crawl your site, potentially use your content to train its models and surface it in AI Overviews. Chegg recently filed a lawsuit against Google for this.

💡Takeaway: Monitor your site in Google Search Console and Bing Webmaster Tools, especially to confirm that all wanted URLs are indexed.

Double-check whether you accidentally block an LLM crawler in your robots.txt or through your CDN.

If you intentionally block LLM crawlers, double-check whether you appear in their answers simply by asking them what they know about your domain.
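The robots.txt part of that audit can be sketched with Python’s standard library. The robots.txt content below is a hypothetical site policy; GPTBot and PerplexityBot are the crawler tokens documented by OpenAI and Perplexity at the time of writing:

```python
from urllib.robotparser import RobotFileParser

# Sketch: check which LLM crawlers a robots.txt blocks (hypothetical policy).
robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

for bot in ("GPTBot", "PerplexityBot", "Googlebot"):
    allowed = parser.can_fetch(bot, "https://example.com/article")
    print(f"{bot}: {'allowed' if allowed else 'blocked'}")
```

Run the same check against your live robots.txt (and any CDN-level rules, which this sketch cannot see) before assuming a crawler can reach your content.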

Summary: 6 Key Learnings

  • Classic SEO metrics don’t strongly influence AI chatbot citations.
  • Content depth (higher word and sentence counts) and readability (good Flesch Score) matter more.
  • Different AI chatbots have distinct preferences – monitoring multiple platforms is important.
  • Brand popularity (measured by search volume) is the strongest predictor of brand mentions in AI chatbots, especially in ChatGPT.
  • Prompt structure influences brand visibility, and we don’t yet know how users phrase prompts.
  • Technical issues can sabotage AI visibility – ensure your site isn’t accidentally blocking LLM crawlers through robots.txt or CDN settings.

Featured Image: Paulo Bobita/Search Engine Journal

Google PMax: Inside The Negative Keyword Limit Increase & What’s Next via @sejournal, @adsliaison

As Google’s Ad Product Liaison, I often share updates and insights with the community of digital advertisers and, best of all, get to hear your feedback first-hand.

We heard quite a lot after our recent announcement that, after a period of beta testing, we’re rolling out negative keywords in Performance Max (PMax) campaigns with a restriction.

We had set a cap of 100 negative keywords per campaign.

While the ability to add negative keywords in PMax directly in Google Ads without having to request them through Support or an account rep has been a long-time ask, we heard very quickly that the cap of 100 negative keywords felt too restrictive for many.

Here’s a look behind the scenes at the reasoning behind the initial cap, what we learned from your feedback, and the subsequent decision to increase the limit to 10,000 negative keywords per campaign.

Why The Cap In The First Place?

AI, by its nature, thrives on flexibility, adapting to real-time data and user behavior.

Performance Max is an AI-powered, goal-based campaign type that’s designed to find conversions based on the goals you set.

The intention of capping negative keywords in PMax at 100 was to give advertisers additional control while still giving PMax the flexibility to achieve your campaign’s stated goal – a limit of 100 negatives felt like a reasonable starting point.

To arrive at that number, we analyzed PMax campaigns in which negative keywords had been added via Support or an account rep.

We found that the 100-keyword limit would cover the vast majority of campaigns using negative keywords.

We also saw that the majority of submitted negative keywords had no actual serving impact – their ads already weren’t triggering for terms advertisers had concerns about.

In many other cases, other targeting exclusions would have been more suitable for blocking unwanted traffic.

We saw this in our beta testing as well. In short, 100 felt like a good compromise between offering enough flexibility without dramatically increasing the risk of accidentally blocking valuable traffic.

Negative keywords are just one way to control where your ads show on Search. Other controls, such as brand exclusions, account-level negative keywords, and keyword prioritization, are also available.

The initial cap of 100 negative keywords aimed to:

  • Preserve AI Optimization: Excessive negative keywords can act as rigid constraints, preventing the AI from exploring valuable search paths and hindering its ability to identify emerging trends. Essentially, it can stifle the algorithm’s ability to find the most efficient conversions. Very large negative keyword lists can potentially negatively impact the machine learning systems and hurt performance.
  • Prevent Accidental Traffic Exclusion: We aimed to prevent advertisers from inadvertently excluding valuable traffic through overly broad negative keyword scopes and missing potential high-intent customers.

What Your Feedback Told Us

We heard advertiser feedback loud and clear that while negative keywords are welcomed, the cap of 100 felt too restrictive.

We heard from brands that quickly hit the 100 limit before including the key themes they wanted to negate. In short, it wasn’t a practical solution for many.

After looking at options, the team agreed to align with the limits in Search campaigns and raise the threshold to 10,000 negative keywords per PMax campaign.

That’s obviously a significant jump from 100 and way more than nearly every business will need or should use, but aligning on one common threshold simplifies things and gives advertisers plenty of room to experiment.

Actionable Insights And Considerations For Measuring Impact

Adding negative keywords to a Performance Max campaign can, of course, impact where your ads show on Search and Shopping inventory.

While the increased limit provides greater control, it’s crucial to use negative keywords strategically. Here are several things to keep in mind when applying negative keywords in PMax:

  • Judicious Application: Avoid overly broad exclusions that might hinder the AI’s ability to find valuable conversions. Prioritize high-impact negatives that address specific ROI concerns. Keep in mind that account-level negative keywords you’ve added for brand suitability purposes already apply to your PMax campaigns.
  • Match Type Precision: Understand the nuances of broad, phrase, and exact match negative keywords in PMax. Negative match types work differently than their positive counterparts. For negative broad match keywords, your ad won’t show if the search contains all your negative keyword terms, even if the terms are in a different order. Phrase match negatives exclude queries containing the exact phrase, while exact match excludes only the specific query. Use them strategically to balance precision and reach.
  • Performance Monitoring: Closely monitor key metrics like conversions, conversion value, and conversion rates to ensure negative keywords have a positive rather than negative impact on performance.
  • Conflict Resolution: Be aware that if a user search matches both a positive signal and a negative keyword, the negative keyword will take precedence, and your ad will not be eligible to serve for that query.
  • Beyond Negative Keywords: Remember that PMax offers other control mechanisms to inform when your ads can trigger on Search.
  • Regular Audits: Just as with your Search campaigns, be sure to regularly audit your negative keywords to identify where you might be blocking potential valuable traffic. And Search Term Insights can help you identify query themes and individual search terms you might want to block with negative keywords.
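The negative match-type behavior described above can be modeled in a few lines. This is a simplified sketch of the stated rules only; real ad serving also normalizes close variants, punctuation, and language, which this ignores:

```python
def blocked_by_negative(query: str, negative: str, match_type: str) -> bool:
    """Simplified model of how PMax negative keyword match types behave."""
    q_terms = query.lower().split()
    n_terms = negative.lower().split()

    if match_type == "broad":
        # Blocked if every negative term appears somewhere in the query,
        # in any order.
        return all(term in q_terms for term in n_terms)
    if match_type == "phrase":
        # Blocked if the negative terms appear contiguously, in order.
        n = len(n_terms)
        return any(q_terms[i:i + n] == n_terms
                   for i in range(len(q_terms) - n + 1))
    if match_type == "exact":
        # Blocked only if the query is exactly the negative keyword.
        return q_terms == n_terms
    raise ValueError(f"unknown match type: {match_type}")
```

For example, the broad negative "free running" blocks "running shoes free" (all terms present, any order), the phrase negative blocks "buy free running shoes" but not "running shoes free", and the exact negative blocks only the query "free running" itself.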

Your Questions Answered

I received several questions about this update from advertisers on LinkedIn and X (Twitter) and want to address some of those here.

“The real challenge is how negative keywords interact with PMax’s black-box decision-making. Will we get more visibility into which search terms PMax is actually serving against? And how will negatives impact machine learning optimization long term?”

While PMax is designed to automate many aspects of campaign management, we recognize the importance of providing advertisers with meaningful insights.

The introduction of negative keywords is one of several recent steps towards providing additional controls.

Search Terms Insights for PMax provides a view of the search term categories as well as specific search terms that triggered your ads in Search. You’ll find performance metrics at the search term level.

Search Terms Insights is designed to make analyzing search term data easier by grouping similar searches into broader categories, saving you from sifting through individual search terms one by one.

This data can be downloaded and is also available via scripts and the Google Ads API.

As for the long-term impact of negative keywords on campaign optimization, it’s important to strike a balance.

While negative keywords provide crucial control, an overly restrictive approach could limit the system’s ability to learn and adapt to new opportunities.

As noted above, our recommendation remains to use negative keywords strategically to exclude truly irrelevant traffic, allowing the AI to continue exploring and finding valuable conversions within the defined boundaries you set.

Reporting and insights are areas the team is actively focused on. Stay tuned for more on this.

“Google never needed <100 negative keywords in order to have>

Our intention was never to encourage spending on irrelevant queries.

Performance Max is a goal-based campaign type which means it’s designed to find more of the conversions that you indicate are valuable to your business.

The initial cap of 100 negative keywords was tested in beta and seemed to provide a reasonable level of control while still allowing the AI the necessary flexibility.

We acknowledge that our initial assessment was not sufficient for many advertisers, and that’s why we listened to your feedback and made the significant increase to 10,000.

“Why can’t negative keywords be limitless at any/every account level? Are there technical/operational issues that would be impacted?”

This is a fair question. There are limits on certain entities in Google Ads accounts to help ensure system and process stability. We have more details on various entity limits here.

“Will Google give us the ability to see the negative keyword lists that we previously had applied via Support or our reps?”

Yes, you’ll be able to see and edit negative keywords and negative keyword lists that were previously added by Support or a rep.

“Why weren’t negative keywords available from the very start when PMax launched?”

The core principle behind PMax is leveraging AI to discover conversions across Google’s channels.

When PMax launched in 2021, the vision was to give advertisers a streamlined way to tell Google what they want to optimize for and then allow the system to learn and find those desired customers across all of Google’s inventory.

Exclusions were seen as unnecessary and potential impediments to optimization.

Over time, and with advertiser feedback in mind, features within PMax have expanded. And the pace of new insights and controls has been accelerating in recent months.

“What about negative keyword lists?”

Many of you asked about the possibility of using negative keyword lists within Performance Max campaigns, as you can in Search campaigns.

We are actively working on this and expect to have more to share on support for negative keyword lists in PMax later this year.

How PMax Is Evolving

I recently shared the overview below of many of the recent reporting and control updates for PMax at the Paid Search Association Conference.

These features are aimed at giving you more tools and information to steer PMax to find more of the conversions you want to generate for your business.

Features like brand guidelines help ensure your responsive display ads and auto-generated video ads reflect your brand’s visual identity.

Ginny Marvin presented recent PMax controls and insights updates at the Paid Search Association Conference. Image from author, March 2025

Stay tuned for more on search terms data and analysis capabilities as well as additional insights this year.

This is an area we are actively focused on. And keep the feedback coming.

Featured Image: Gorodenkoff/Shutterstock

The State Of Performance Max: How To Optimize Google Ads In 2025 via @sejournal, @MenachemAni

In the beginning, there was only Search. Then Google said, “Let there be Shopping.” And so began the golden age of search advertising.

Fast forward, and machines now perform, at scale, the more granular and recurring optimizations that we once had to do manually.

Algorithmic campaigns like Performance Max have become Google’s golden goose. Google claims that in the near future, businesses will be able to input their goals and information, and its systems will run their advertising programs for them.

Agencies and marketers have naturally pushed back, claiming that Google wants to put them out of business when they’re still needed. Some even say that machine learning isn’t necessary when brilliant human minds are on the job.

The truth, as always, is somewhere in the middle.

Performance Max isn’t going anywhere, and neither are agencies and marketers. And if you plan to manage Google Ads this year, you will need to accept both sides of that coin.

So, with some very welcome changes from Google behind us, here’s the state of Performance Max and what I envision for it going forward.

Why Does Performance Max Have A Negative Reputation?

PPC marketers have many complaints about Performance Max. Some are valid, and others feel unfair.

The inability to see most of your keyword data is one of the reasons I hear the most.

The introduction of search categories is welcome, but a category is not necessarily the actual keyword a user searched for.

You can expand the category somewhat and get an idea of intent, but it’s not a one-to-one deal like seeing the actual query.

And while longtime advertisers are accustomed to seeing every search term, the reality is that Google has been removing more and more data for years, all in the name of privacy.

This limited and noisy view of what people search for – compared to what we’re used to seeing in the search terms report – is a valid frustration, especially when budgets are limited or the pressure to deliver is particularly high.

There are improvements to take note of, though.

By default, the search terms view shows the last seven days, and you can go back as far as the last 28 days. Google has also added the ability to look at longer time frames for search terms.

The addition of these new capabilities – even if they don’t cover everything we want – tells me that Google sees that the adoption of Performance Max is not going to reach the desired levels unless we have the tools we need to make use of it.

And even though this data only started in March 2023, having it now is helpful.

Another reason why Performance Max has a negative reputation is its attribution shyness. You can’t fully see where success or failure is coming from, which is a challenge in performance marketing.

A campaign could show you 10x return on ad spend, but you may have, at best, a sneaking suspicion that it’s coming primarily from retargeting traffic. There’s no real way to see the data that confirms (or refutes) that hypothesis.

And so the mindset shifts to “it’s not worth the hassle,” compounded by the fact that third-party attribution tools like Triple Whale still can’t weigh Performance Max very well within their systems because they can’t see view data for YouTube the way they do for Meta.

This makes Performance Max look like it’s not working as well as it is.

One of the trickiest pieces of Performance Max is that people have a hard time reconciling the data Google shows with what they actually want from the campaign, which is typically profitable net-new customer acquisition.

By moving back to Shopping – even if it shows a slightly lower return on ad spend (ROAS) – marketers at least know what they’re getting for their money as the reporting and attribution are clearer.

On the flip side, while third-party attribution tools do underreport for Performance Max (likely because of channels like YouTube and Display that affect performance), my experience is that mixing the two – putting some products in Shopping and some in Performance Max – often works well if the campaigns are being used properly.

Do We Need Granular Control In Performance Max?

Playing the devil’s advocate for a minute, I think the whole idea of Performance Max is that you shouldn’t have to add negative keywords.

You’re meant to optimize the campaign based on your bidding strategy, ROAS or cost-per-acquisition (CPA) target, account and campaign structure, landing page, and proprietary data.

This ties into another source of frustration: low-quality ad inventory.

My answer to both complaints is to focus on getting the best performance out of the campaign, or switch back to Search or Shopping.

I think we have to accept some amount of poor traffic and unwanted conversions in exchange for incremental gains in profitable new customer acquisition.

In the bigger picture, individual search terms and placements don’t matter as much as they seem to, because the system will learn to focus less on that kind of traffic if it’s not converting.

Performance Max does take time and money to get going, so it’s fully understandable if your niche or vertical means you can’t justify the investment due to factors like limited budget, low search volumes, unavailability of data inputs, or tight industry regulation.

This is your reminder that Performance Max is an option, not a necessity.

The Resurgence Of Search And Shopping: Why Performance Max Won’t Replace PPC Marketers

Performance Max saw widespread adoption at launch, even though we were coming from Smart Shopping, which worked far better at the time.

Still, we were quick to adopt and switch because Google pushed hard on the narrative that it was the future.

Over time and as reality set in, many advertisers started to move back to Search and Shopping for three primary reasons:

  1. A high proportion of spam and low-quality leads.
  2. For ecommerce, a lack of control over products and campaign behavior.
  3. Cannibalization of non-algorithmic legacy campaigns by Performance Max.

Today, I find that we create the most success for clients by running a mix of Shopping and Performance Max side by side.

We haven’t moved away from the latter completely, but I have heard from others that they’ve returned fully to standard Shopping.

This trend will likely be furthered by recent changes to how Google prioritizes the two campaign types in auctions.

When Performance Max launched, running both campaign types alongside each other meant that Performance Max always took priority and Shopping didn’t enter auctions.

Over the years, there have been some changes to that prioritization. Anything you excluded from Performance Max (such as brand terms) would always fall back to Shopping. And now, Google has announced that Performance Max will not override Shopping.

Both will enter the auctions they qualify for, and ad rank will determine which one shows.

Performance Max In 2025: 5 Optimizations For Better Results

So, how do you regain control when Performance Max takes it away? What can you really do to improve campaign performance, and what options are realistically at your disposal?

Here are five of my Performance Max optimizations to never leave home without.

1. Data input quality is absolutely critical to success with Performance Max and is virtually essential if you run lead generation campaigns.

Offline conversions, audience signals, and enhanced conversions all help improve results.

Synchronizing your customer list and having the campaign focus solely on new customer acquisition is a great way to avoid spending money on people who have already bought from you, improving profitability.

2. Asset group segmentation and how you set up a Performance Max campaign really make a difference in what kind of traffic it brings in.

Without the right decisions here, the campaign will automatically go after traffic that it believes is most likely to convert – site visitors, people searching for your brand, and past/existing customers.

3. The quality of your creative assets and landing pages has a direct impact on your ability to get those big performance lifts that aren’t really possible any longer through old-school account optimization.

You simply must stand out and be relevant in a market where competitive saturation is at its peak, and consumers are bombarded with messages to buy things everywhere on the internet.

4. For ecommerce, feed quality and optimization are non-negotiable for both Performance Max and Shopping.

The feed is the heartbeat of the account – it’s where the system looks for information on your products to help it decide who should see them.

Skipping this step or running a poorly written feed will directly and negatively impact your marketing efficiency.

5. Sculpting options are limited but should still be employed where they make sense. One option is to remove branded traffic using brand exclusions.

You can also add negative keyword lists through Google support and then just block specific keywords. Soon enough, you’ll be able to add campaign-level negatives to Performance Max yourself.

Ultimately, you’ve got to optimize where you can to improve the consumer experience.

This might be something as fundamental as tracking the right conversion actions, writing a sharper landing page with stronger social proof, improving mobile responsiveness, and setting up rules to only advertise products that are in stock.

In short, focus on what you can control and do a wonderful job with those things.

Google Is Listening To PPC Advertisers And Agencies

The PPC community complained about the lack of negative keywords – Google gave them to us. We asked for more detailed reporting – we got it. The cannibalization of Shopping became a problem – Google resolved it.

I think, at this point, Google is due the credit for listening to us.

Not only is it adding more (and more relevant) features to Performance Max, but it is also seeing that agencies and marketers have a role to play in the future of search advertising.

I think the decreased adoption and vocal critique on social media have undoubtedly influenced the decision to give us back a portion of control and visibility.

It’s our turn to adopt these features, adapt to the limitations of Performance Max (when it makes sense for the account), and, most importantly, keep a fair and honest dialog open on social media with Google’s representatives.

Featured Image: Jack Frog/Shutterstock

[SEO & PPC] How To Unlock Hidden Conversion Sources In Your Sales & Marketing Funnel via @sejournal, @calltrac

 This post was sponsored by CallTrackingMetrics. The opinions expressed in this article are the sponsor’s own.

Did you know 92% of all customer interactions are from phone calls?

And yet very few marketers know how to track the conversions that come from those calls.

Brands meticulously track clicks, impressions, and online interactions through SEO, pay-per-click (PPC) ads, and data-driven strategies.

Yet, one critical piece is often missing: offline conversions.

Many high-intent customer interactions, especially in industries like healthcare, legal, home services, and B2B, happen over the phone.

If you’re in an industry that receives any number of calls, you may be struggling to connect these calls to your digital marketing efforts, leading to:

  1. Inefficient marketing strategies.
  2. Wasted ad spend.
  3. Difficulty proving ROI.

How do you fix this? Call tracking.

By leveraging AI-powered tools and advanced attribution technology, marketers can bridge the online-offline gap, ensuring no lead goes unnoticed.

How To Attribute Sales To Phone Calls

TL;DR: Historically, you could not attribute conversions to phone calls; now, you can.

Yes, offline conversions can be tracked.

And despite the high percentage of customer interactions happening over the phone, many brands fail to track which ad or campaign led to those calls.

This could stem from knowledge gaps, tight budgets, or reluctance to integrate more technology into their stack.

Without call attribution, businesses are left guessing about what’s driving revenue.

What Is Offline Conversion Attribution?

Offline conversion attribution is the process of linking your online marketing efforts to offline sales or actions.

It helps you understand which digital marketing channels and campaigns contribute to offline conversions, such as in-store purchases, phone call inquiries, or signed contracts.

How Offline Conversion & Phone Call Attribution Works

By paying attention to phone call conversion data, you can:

1. Connect Online Interactions To A Phone Call: A user clicks on a digital ad, visits a website, fills out a form, or calls a business after seeing an online campaign.
2. Store User Data In One Place: Data from these interactions (such as email, phone number, or a unique tracking ID) is captured and stored.
3. Match Callers With Offline Events: When a purchase or conversion happens in-store, over the phone, or through a sales team, businesses match it back to the initial online touchpoint.
4. Analyze & Optimize Webpages With Content That Converts: You can analyze which digital campaigns, keywords, or ads drive the most offline conversions, optimizing their marketing strategy accordingly.
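The matching step above (step 3) can be sketched in a few lines. The record layouts and field names here are hypothetical illustrations, not any particular vendor’s schema:

```python
# Online touchpoints captured in step 2 (hypothetical data).
online_touchpoints = [
    {"tracking_id": "A1", "phone": "+15550100", "campaign": "ppc_brand"},
    {"tracking_id": "B2", "phone": "+15550101", "campaign": "organic_blog"},
]

# Offline events: phone orders closed by the sales team.
offline_conversions = [
    {"phone": "+15550100", "value": 450.0},
    {"phone": "+15550199", "value": 120.0},  # no known online touchpoint
]

def attribute(conversions, touchpoints):
    """Match each offline conversion back to its online touchpoint by phone."""
    by_phone = {t["phone"]: t for t in touchpoints}
    matched, unmatched = [], []
    for conv in conversions:
        touch = by_phone.get(conv["phone"])
        if touch:
            matched.append({**conv, "campaign": touch["campaign"]})
        else:
            unmatched.append(conv)
    return matched, unmatched

matched, unmatched = attribute(offline_conversions, online_touchpoints)
```

Here the $450 phone order is credited to the hypothetical "ppc_brand" campaign, while the unmatched call is flagged for review rather than silently dropped.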

What You Can Do With Phone Call Conversion Data

When you introduce a tool that acts as Google Analytics for phones, you’ll be able to:

  • Improve ROI Measurement: Helps businesses understand the real impact of digital marketing on offline sales.
  • Enhance Ad Targeting: Enables better retargeting of high-intent users.
  • Optimize Budget Allocation: Allows marketers to invest more in channels that drive actual sales, not just clicks or website visits.
  • Bridge the Online-Offline Gap: Particularly important for industries like retail, automotive, healthcare, and B2B, where many transactions happen offline.

Examples of Offline Conversion Attribution

  1. A customer finds your business through organic search.
  2. They see a retargeting ad on Facebook.
  3. Finally, they click a PPC ad and call to book an appointment.

Without call tracking, the PPC ad might receive full credit, even though SEO and social played key roles. Choosing the right attribution model ensures data-driven marketing decisions.
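As a sketch of how common attribution models would split credit for the three-touch journey above (the channel names and conversion value are illustrative):

```python
# The journey above: organic search -> Facebook retargeting -> PPC ad + call.
journey = ["organic_search", "facebook_retargeting", "ppc_call"]
conversion_value = 300.0

def last_click(journey, value):
    """All credit to the final touch -- what happens without call tracking."""
    return {journey[-1]: value}

def linear(journey, value):
    """Equal credit to every touchpoint."""
    share = value / len(journey)
    return {ch: share for ch in journey}

def position_based(journey, value, first=0.4, last=0.4):
    """40/20/40: first and last touch get 40% each; the middle splits the rest."""
    credit = {ch: 0.0 for ch in journey}
    credit[journey[0]] += value * first
    credit[journey[-1]] += value * last
    middle = journey[1:-1]
    for ch in middle:
        credit[ch] += value * (1 - first - last) / len(middle)
    return credit
```

Under last-click, the PPC ad takes the full $300; under the position-based model, organic search and the PPC call each receive $120 while the retargeting ad receives $60, which better reflects the roles SEO and social played.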

Best Tools for Offline Conversion Tracking

  • Google Ads Offline Conversion Tracking
  • Facebook Offline Conversions API
  • CRMs like HubSpot or Salesforce
  • Call tracking software like CallTrackingMetrics

SEO & Call Tracking: Connecting Organic Efforts To Real-World Conversions

Gain Keyword Attribution Beyond Clicks

SEO success is typically measured by rankings, traffic, and form fills. But what about phone calls? Call tracking technology with dynamic number insertion (DNI) allows businesses to:

  • Identify which organic search queries lead to phone calls
  • Optimize content around real customers’ questions and concerns
  • Understand which landing pages drive the most offline conversions
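Under the hood, DNI tools typically swap the phone number shown on the page based on how the visitor arrived, so a later call maps back to its source. A minimal sketch of that assignment logic, with an invented number pool (real tools manage pools dynamically per visitor):

```python
from typing import Optional

# Hypothetical tracking-number pool, keyed by traffic source.
NUMBER_POOL = {
    "google_organic": "+1-555-010-0001",
    "google_cpc": "+1-555-010-0002",
    "facebook": "+1-555-010-0003",
}
DEFAULT_NUMBER = "+1-555-010-0000"  # shown when the source is unknown

def tracking_number(utm_source: Optional[str], utm_medium: Optional[str]) -> str:
    """Pick the phone number to display so inbound calls map back to a source."""
    key = f"{utm_source}_{utm_medium}" if utm_source and utm_medium else utm_source
    return NUMBER_POOL.get(key or "", DEFAULT_NUMBER)
```

A visitor arriving from a Google ad (`utm_source=google`, `utm_medium=cpc`) sees one number, an organic visitor another, so every call center report doubles as a channel report.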

For example, if multiple callers reference a specific product-related question, that insight can inform new blog topics or FAQ pages to improve SEO efforts, driving even more right-fit traffic into your sales funnel and conversion metrics.

Optimize For True Local SEO

Local search is a major driver of inbound calls. When combined with call tracking, businesses can finally understand:

  • Which local listings (Google Business Profile, Yelp, etc.) generate the most calls
  • What information customers search for before calling
  • How to refine location-based content for higher engagement

How Call Insights Can Strengthen Your SEO Strategy

Phone calls aren’t just conversions—they’re valuable sources of customer insights that your teams can use to refine ad strategies, train teams on sales pitches, and identify areas for growth in your content strategy. Each conversation has the potential to reveal the common questions, pain points, and content gaps that businesses can address to improve their marketing performance.

1. Identify FAQs for Stronger Content

Often, customers call a company’s support phone number when they can’t find information online, either about a product or service they’re considering buying or one they’ve already purchased. By analyzing call transcripts, businesses can spot recurring questions and proactively address them in blog posts, FAQs, or product pages.

For example, if a home services company frequently gets calls asking, “Do you offer emergency repairs on weekends?”, this signals a need to make that information more visible on their website. A dedicated service page or blog post could reduce unnecessary calls while improving customer experience.

2. Refine Your Website Messaging

If callers repeatedly ask about pricing, product differences, or service details, your website messaging probably isn’t clear enough.

For instance, an e-commerce brand selling fitness equipment might notice that callers often ask, “What’s the difference between your basic and premium treadmill?” Adding a simple comparison chart or explainer video can help lessen confusion and improve conversions.

3. Fill Content Gaps To Reduce Sales Friction

Repeated calls about the same topic are a good indicator of missing or unclear content. A B2B SaaS company, for example, might receive frequent inquiries about integrating with a particular CRM or social platform. Instead of solely relying on customer support, the marketing team could identify this pain point and create a step-by-step guide or video tutorial to address it, which would reduce friction and improve self-service for prospects.

PPC & Call Attribution: Maximizing ROI With Better Insights

Tracking clicks alone doesn’t reveal the full ROI of PPC campaigns. Many conversions, especially phone calls, happen offline and go untracked. Without attribution, businesses may waste ad spend and overlook high-intent leads. This section explores how call tracking connects PPC efforts to real conversions, improving marketing efficiency.

Paid Search: Wasted Spend Without the Full Picture

A high cost-per-click (CPC) doesn’t guarantee strong ROI if businesses aren’t tracking offline conversions. Without call tracking, marketers risk:

  • Over-investing in underperforming keywords
  • Missing opportunities to optimize campaigns for call-driven leads
  • Failing to attribute revenue-generating phone calls to PPC efforts

When a business fails to count phone calls toward ROI, it loses the opportunity to accurately calculate its real cost per conversion and allocate resources accordingly.
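To make that concrete, here is a toy calculation; all figures are invented for illustration:

```python
# Hypothetical campaign numbers.
clicks = 1_000
cpc = 2.50
spend = clicks * cpc                      # $2,500 total ad spend

form_conversions = 20                     # what click-based tracking sees
call_conversions = 30                     # invisible without call tracking

cost_per_conversion_clicks_only = spend / form_conversions
cost_per_conversion_with_calls = spend / (form_conversions + call_conversions)

# Clicks-only view: $125 per conversion. With calls counted: $50 per
# conversion -- the same keywords look 2.5x more expensive than they are.
```

A marketer looking only at the $125 figure might pause keywords that are actually delivering conversions at $50 each once phone calls are counted.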

Call Tracking + Google Ads = Smarter Bidding

PPC campaigns are only as effective as the data behind them. Without tracking phone calls, businesses risk misallocating budgets to keywords that drive clicks but not conversions. Integrating call tracking with Google Ads provides a clearer picture by linking calls to the specific campaigns, ad groups, and keywords that drive valuable conversions.

With AI-powered call scoring, marketers can identify high-intent leads and adjust bidding strategies based on actual conversion data—not just clicks. This ensures ad spend is focused on quality leads rather than wasted traffic.

Retargeting with First-Party Data

Not every caller converts immediately. Call tracking allows businesses to retarget high-intent leads with personalized follow-ups. By analyzing call topics, marketers can tailor ads or email sequences to address specific customer concerns, increasing the likelihood of conversion.

Additionally, integrating call data with CRM platforms like HubSpot and Salesforce ensures sales teams can nurture prospects effectively, preventing lost opportunities. By combining PPC insights with offline conversions, businesses gain a clearer understanding of customer behavior, leading to smarter ad spend and more targeted outreach.

Back To Basics: Omnichannel Attribution & The Power Of Call Data

As marketing shifts to a mix of online and offline tactics, attribution models must evolve. By integrating call tracking with Google Analytics, CRM systems, and automation tools, businesses can gain a complete view of the customer journey.

A company that integrates CallTrackingMetrics with Google Analytics and its CRM can:

  • See exactly which campaigns drive calls.
  • Automate follow-ups based on conversation insights.
  • Optimize for higher-value interactions.

AI & Conversation Intelligence

Call tracking is no longer just about recordings or basic attribution. AI-driven call analysis provides deep insights, such as:

  • Customer intent and sentiment analysis.
  • Common objections that impact sales.
  • Automated lead qualification based on real conversations.

By leveraging AI, businesses can better understand customer needs, improve sales strategies, and ensure marketing efforts are driving meaningful engagement. Implementing AI-driven call tracking empowers teams to make data-backed decisions that enhance both customer experience and conversion rates.

Proving Marketing’s True Impact

Marketers are often challenged to prove ROI beyond what we might call “vanity metrics,” like impressions and clicks. Though such metrics have a place in any strategy, they don’t necessarily move the needle toward sales goals.

Call tracking, on the other hand, delivers revenue-focused attribution, showing exactly how digital marketing contributes to bottom-line growth. It can help an entire company analyze past efforts and accurately forecast revenue based on real campaigns, real calls, and real results.

Case Study: This study from CallTrackingMetrics demonstrated how AI-driven call tracking optimized PPC ROAS and improved lead quality.

Want to see how conversation intelligence can improve your marketing performance? Check out our guide to building an effective omnichannel communications strategy.

Ready to get to work? Book a demo with our team and see how CallTrackingMetrics’ products can help you.


Image Credits

Featured Image: Image by CallTrackingMetrics. Used with permission.

Is Google playing catchup on search with OpenAI?

This story originally appeared in The Debrief with Mat Honan, a weekly newsletter about the biggest stories in tech from our editor in chief. Sign up here to get the next one in your inbox.

I’ve been mulling over something that Will Heaven, our senior editor for AI, pointed out not too long ago: that all the big players in AI seem to be moving in the same directions and converging on the same things. Agents. Deep research. Lightweight versions of models. Etc. 

Some of this makes sense in that they’re seeing similar things and trying to solve similar problems. But when I talked to Will about this, he said, “it almost feels like a lack of imagination, right?” Yeah. It does.

What got me thinking about this, again, was a pair of announcements from Google over the past couple of weeks, both related to the ways search is converging with AI language models, something I’ve spent a lot of time reporting on over the past year. Google took direct aim at this intersection by adding new AI features from Gemini to search, and also by adding search features to Gemini. In using both, what struck me more than how well they work is that they are really just about catching up with OpenAI’s ChatGPT. And their belated appearance in March 2025 doesn’t seem like a great sign for Google.

Take AI Mode, which it announced March 5. It’s cool. It works well. But it’s pretty much a follow-along of what OpenAI was already doing. (Also, don’t be confused by the name. Google already had something called AI Overviews in search, but AI Mode is different and deeper.) As the company explained in a blog post, “This new Search mode expands what AI Overviews can do with more advanced reasoning, thinking and multimodal capabilities so you can get help with even your toughest questions.”

Rather than a brief overview with links out, the AI will dig in and offer more robust answers. You can ask follow-up questions too, something AI Overviews doesn’t support. It feels like quite a natural evolution – so much so that it’s curious why it isn’t already widely available. For now, it’s limited to people with paid accounts, and even then only via the experimental sandbox of Search Labs. But more to the point, why wasn’t it available, say, last summer?

The second change is that it added search history to its Gemini chatbot, and promises even more personalization is on the way. On this one, Google says “personalization allows Gemini to connect with your Google apps and services, starting with Search, to provide responses that are uniquely insightful and directly address your needs.”

Much of what these new features are doing, especially AI Mode’s ability to ask follow-up questions and go deep, feels like hitting feature parity with what ChatGPT has been doing for months. It’s also been compared to Perplexity, another generative AI search engine startup.

What neither feature feels like is something fresh and new. Neither feels innovative. ChatGPT has long been building user histories and using the information it has to deliver results. While Gemini could also remember things about you, it’s a little bit shocking to me that Google has taken this long to bring in signals from its other products. Obviously there are privacy concerns to field, but this is an opt-in product we’re talking about. 

The other thing is that, at least as I’ve found so far, ChatGPT is just better at this stuff. Here’s a small example. I tried asking both: “What do you know about me?” ChatGPT replied with a really insightful, even thoughtful, profile based on my interactions with it. These aren’t  just the things I’ve explicitly told it to remember about me, either. Much of it comes from the context of various prompts I’ve fed it. It’s figured out what kind of music I like. It knows little details about my taste in films. (“You don’t particularly enjoy slasher films in general.”) Some of it is just sort of oddly delightful. For example: “You built a small shed for trash cans with a hinged wooden roof and needed a solution to hold it open.”

Google, despite having literal decades of my email, search, and browsing history, a copy of every digital photo I’ve ever taken, and more darkly terrifying insight into the depths of who I really am than I probably have myself, mostly spat back the kind of profile an advertiser would want, versus a person hoping for useful tailored results. (“You enjoy comedy, music, podcasts, and are interested in both current and classic media.”)

I enjoy music, you say? Remarkable! 

I’m also reminded of something an OpenAI executive said to me late last year, as the company was preparing to roll out search. It has more freedom to innovate precisely because it doesn’t have the massive legacy business that Google does. Yes, it’s burning money while Google mints it. But OpenAI has the luxury of being able to experiment (at least until the capital runs out) without worrying about killing a cash cow like Google has with traditional search. 

Of course, it’s clear that Google and its parent company Alphabet can innovate in many areas – see Google DeepMind’s Gemini Robotics announcement this week, for example. Or ride in a Waymo! But can it do so around its core products and business? It’s not the only big legacy tech company with this problem. Microsoft’s AI strategy to date has largely been reliant on its partnership with OpenAI. And Apple, meanwhile, seems completely lost in the wilderness, as this scathing takedown from longtime Apple pundit John Gruber lays bare.

Google has billions of users and piles of cash. It can leverage its existing base in ways OpenAI or Anthropic (which Google also owns a good chunk of) or Perplexity just aren’t capable of. But I’m also pretty convinced that unless Google can be the market leader here, rather than a follower, there are some painful days ahead. But hey, Astra is coming. Let’s see what happens.