How To Get Brand Mentions In Generative AI via @sejournal, @AlliBerry3

There’s been a lot of talk recently about whether large language models (LLMs) are replacing a considerable amount of Google searches.

While Google is clearly still the market leader, with 14 billion searches per day worldwide, an estimated 37.5 million “searches” on ChatGPT represent an opportunity for your brand.

SEO professionals have years of experience testing optimization tactics on Google, but we’re still at the beginning stages of understanding how to get your brand cited in generative AI chatbots.

This is an exciting opportunity because it forces people to test and learn rapidly.

Through testing and research, I’ve developed initial generative AI optimization recommendations for my clients. These tips apply regardless of whether it’s ChatGPT, Gemini, DeepSeek, or whatever generative AI chatbot comes next.

Use Generative AI Chatbots To Learn About Your Brand

First, use generative AI tools and start asking them questions about your brand to find the sources they utilize to answer your queries.

This will help you better understand which sources these tools rely on and which pages on your site (or competitor sites) matter to their understanding of your brand.

Ask questions like:

  • Tell me about [company]/[product].
  • What are the best brands in [vertical] and why?
  • What are the pros and cons of [company]/[product]?

For example, when I ask ChatGPT-4o, “Tell me about HubSpot,” it gives me a nice summary with a lot of useful citations:

HubSpot Company Summary From ChatGPT (Screenshot from ChatGPT, April 2025)

From this, you can see that a legal page is cited multiple times in a company overview, so those pages are important. You can also see that information is being pulled from the HubSpot Knowledge Base.

Often, a company’s About page is the main citation, but clearly, HubSpot has built out a better legal section than its core pages.

If I were part of its organization, I would work to make the About page richer with information. Generally, your About page will do better at marketing the benefits of your products than legal pages.

When I then asked, “What are the best brands for small business marketing?”, it provided me with the following list:

Best Brands In Small Business Marketing From ChatGPT (Screenshot from ChatGPT, April 2025)
Best Brands In Small Business Marketing From ChatGPT, Continued (Screenshot from ChatGPT, April 2025)

ChatGPT-4o cites Wikipedia five different times and NerdWallet once for its affiliate coverage of small business marketing tools.

In searches I’ve done in other sectors, I’ve seen a lot more variety in sources listed – many in the affiliate review space. Here, however, NerdWallet is the only one.

When I asked ChatGPT-4o to dive into HubSpot further and show me the pros and cons of using it for small business marketing, it responded with:

HubSpot Pros From ChatGPT (Screenshot from ChatGPT, April 2025)
HubSpot Cons From ChatGPT (Screenshot from ChatGPT, April 2025)

I would then take this list and compare it against how I market the product to small business owners and potentially make tweaks accordingly.

And if there is validity to the cons listed and they are weaknesses we want to work on as an organization, I would start to build relationships with some of the sources listed.

That way, when there are company updates that impact some of what’s been written about the company, they can update their review pages, and it’ll impact what shows up in LLM queries.

I would also engage with the PR team about getting more coverage for the brand. Some of these citations are not particularly well-known or credible sites, so there is opportunity to get more authoritative sources to show up.

Ensure LLMs Can Crawl Your Website

This was true at the beginning of SEO, and is still true now.

Ensure you have a robots.txt file on your website’s server with directives telling crawlers which pages and sections they may access. (Note that robots.txt controls crawling, not indexing.)

A lot of site owners initially rushed to block LLMs from crawling their sites when ChatGPT first launched, as it was unknown (and also probably scraping content for the model).

If you want to be included in generative AI results now, though, you need to be where the AI crawlers can see you, so double-check that it is all configured correctly.
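As a sketch, a robots.txt that explicitly allows the major AI crawlers might look like the following. The user-agent tokens here (GPTBot, OAI-SearchBot, ClaudeBot, PerplexityBot, Google-Extended) are published by their vendors, but verify each vendor’s current documentation before relying on them, and the `/admin/` rule is only a placeholder for whatever you block today:

```text
# Allow known AI crawlers (verify tokens against vendor docs; they change)
User-agent: GPTBot
Allow: /

User-agent: OAI-SearchBot
Allow: /

User-agent: Google-Extended
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

# Default rules for everything else
User-agent: *
Disallow: /admin/
```

If you previously added blanket `Disallow: /` rules for these tokens when ChatGPT launched, this is the place to remove them.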

Utilize Credible Citations And Quotes In Content

A group of researchers from several prominent universities conducted a study on AI search engine optimization and what was more likely to surface in response to queries.

The tactic that worked the best, especially for factual queries, was adding citations from authoritative sources.

Using language like “according to [source],” adding a statistic with a credible citation, or a quote from a known expert all increased the likelihood of showing up in generative AI chatbot responses by as much as 25.1% for sites ranking in position 4 in Google and by 99.7% for sites ranking in position 5.

Similarly, adding statistics to content led to a 10% increase in visibility in LLMs if the site is in position 4 in Google and a 97% increase in visibility in LLMs if the site is ranked in position 5 in Google.

Mentions In Prominent Databases And Forums Help

There are lots of reasons to be paying attention to prominent forums like Reddit and Quora or popular database sites like Wikipedia. Not only do they own lots of organic search real estate, but they are also obvious sites for training LLMs.

Reddit is now, smartly, licensing its data to AI companies. Being a topic of discussion on these sites will only help your brand, and there’s no better time than now to become active on Reddit.

Engaging authentically on behalf of a brand (assuming you reveal your affiliation upfront) is more acceptable nowadays and is often welcomed to get clarification on user questions. It will likely benefit you on your generative AI optimization journey, too.

Develop An Exceptional About Page

If there is one area of your website you need to improve on, your About page may be it.

Generative AI models utilize these types of pages to understand what a company does and how credible the company is.

If you ask any of these platforms for information about your brand, you may be surprised by how heavily they rely on your About page to deliver the answer.

If your About page doesn’t describe your business and products well enough, you may see LLMs citing legal pages instead, like in the case of HubSpot mentioned earlier.

Focus On Long-Tail Keywords

Modern transformer-based LLMs are based on a statistical analysis of the co-occurrence of words.

If an entity is mentioned in connection with another entity with frequency in the training data, there is a high probability of a semantic relationship between the two entities.
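As a toy illustration of this co-occurrence idea, you can count how often pairs of entities appear in the same sentence of a corpus. The corpus and entity list below are made up for the example:

```python
from collections import Counter
from itertools import combinations

def cooccurrence_counts(sentences, entities):
    """Count how often pairs of entities appear in the same sentence."""
    counts = Counter()
    for sentence in sentences:
        text = sentence.lower()
        # Sort so each pair is counted under one canonical key
        present = sorted(e for e in entities if e.lower() in text)
        for a, b in combinations(present, 2):
            counts[(a, b)] += 1
    return counts

corpus = [
    "HubSpot is a popular CRM for small business marketing.",
    "Many small business teams compare HubSpot and Mailchimp.",
    "Mailchimp focuses on email marketing for small business users.",
]
entities = ["HubSpot", "Mailchimp", "small business"]
print(cooccurrence_counts(corpus, entities))
```

The more often two entities co-occur across the training data, the stronger the semantic association a model is likely to learn between them, which is why earning mentions alongside your target topics matters.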

To optimize for this, use keyword research tools to better understand related keywords and the concepts that co-occur with your brand.

Search volume can still be an indicator of importance, but I would focus more on better understanding the relationships and relevance between concepts, ensuring the content is of high quality, and that user intent is matched.

Stop Siloing SEO

We’re entering an era when websites get fewer and fewer clicks from organic search. For most brands, a multi-channel strategy has never been more imperative.

Not only does building brand recognition help fuel some of the other best practices here, but LLMs are also being trained on social media and marketing content.

Having an aligned, cross-channel strategy only strengthens your brand.

Plus, the more you can build a sales flywheel in your own content ecosystem, the less you need to panic about staying ahead of the ever-evolving world of SEO.

Track Your Referrals And Reverse-Engineer

Once you start seeing generative AI platforms driving traffic to your site, start paying attention to what pages bring that traffic in.

Then, visit that generative AI platform and try to recreate searches that could lead to your page as the answer.

You’ll start to learn what topics these platforms associate with your brand, and then you can find ways to double down on that type of content.
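As a sketch of how you might surface this from an analytics export, the snippet below filters referral rows to known AI platform domains and ranks landing pages by sessions. The referrer domains and export format are assumptions – check your own data for the exact referral strings each platform sends:

```python
from collections import defaultdict

# Hypothetical AI-platform referrer domains; verify against your analytics,
# as referral strings vary by platform and change over time.
AI_REFERRERS = {
    "chatgpt.com",
    "chat.openai.com",
    "perplexity.ai",
    "gemini.google.com",
    "copilot.microsoft.com",
}

def ai_traffic_by_page(rows):
    """Aggregate sessions per landing page for AI-platform referrals only.

    `rows` is an assumed export format: (referrer_domain, landing_page, sessions).
    """
    totals = defaultdict(int)
    for referrer, page, sessions in rows:
        if referrer in AI_REFERRERS:
            totals[page] += sessions
    # Highest-traffic pages first: these are the topics to double down on
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

rows = [
    ("chatgpt.com", "/blog/crm-guide", 40),
    ("google.com", "/blog/crm-guide", 500),  # organic search, excluded
    ("perplexity.ai", "/pricing", 12),
    ("chatgpt.com", "/pricing", 8),
]
print(ai_traffic_by_page(rows))  # [('/blog/crm-guide', 40), ('/pricing', 20)]
```

The ranked pages then become the starting point for recreating likely prompts in each chatbot.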

Final Thoughts

While the tool companies are trying to catch up with how to help digital marketers optimize in this era of generative AI, we will have to be more reliant on ourselves to reverse-engineer what we’re seeing in the data and run our own experiments.

Featured Image: Visual Generation/Shutterstock

Data Clean Room: What It Is & Why It Matters In A Cookieless World via @sejournal, @iambenwood

In recent years, the digital marketing landscape has experienced significant shifts, particularly concerning user privacy and data tracking mechanisms.

Notably, Google’s initial plan to phase out third-party cookies in Chrome by 2022 was reversed in July 2024, allowing their continued use.

This reversal has implications for data clean rooms, which were poised to become essential tools in a cookieless world.

However, the persistence of third-party cookies does not diminish the growing challenges associated with signal loss.

Users are increasingly encountering cookie consent pop-ups and more prominent privacy notices across websites and apps, which is reducing the availability of data for marketers.

This heightened user awareness and control over personal data necessitate reevaluating data collection and analysis strategies.

Data clean rooms remain vital in this context. They offer a privacy-compliant environment where multiple parties can collaborate on data without exposing personally identifiable information.

They also enable advertisers and publishers to perform advanced analytics on combined datasets, extracting valuable insights while adhering to privacy regulations.

What Is A Data Clean Room?

A data clean room is a piece of software that enables advertisers and brands to match user-level data without actually sharing any PII/raw data with one another.

Major advertising platforms like Facebook, Amazon, and Google use data clean rooms to provide advertisers with matched data on the performance of their ads on their platforms.

Data clean room visualization (Image from author, March 2025)

All data clean rooms have extremely strict privacy controls, and businesses are not allowed to view or pull any customer-level data.

Modern data clean rooms have evolved to facilitate more streamlined and secure data collaboration.

They allow brands and publishers to combine datasets without exposing raw data, adhering to stringent privacy regulations.

This advancement addresses the challenges posed by increased data fragmentation and the heightened emphasis on user privacy.
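A heavily simplified sketch of the underlying idea – matching on hashed identifiers and releasing only thresholded aggregates – might look like this. Real clean rooms enforce these controls inside the platform rather than in your own code, and the threshold value here is purely illustrative:

```python
import hashlib

def hash_id(email):
    """Normalize and hash an identifier so raw PII never leaves either party."""
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

def clean_room_match(brand_emails, platform_exposures, min_audience=50):
    """Return only an aggregate overlap count, suppressed below a threshold.

    `platform_exposures` is a list of already-hashed IDs held by the platform.
    """
    brand_hashes = {hash_id(e) for e in brand_emails}
    matched = sum(1 for h in platform_exposures if h in brand_hashes)
    if matched < min_audience:
        return None  # suppress small audiences, as real clean rooms do
    return matched

# Toy example: the brand and the platform never exchange raw emails
brand_emails = ["A@x.com ", "b@x.com", "c@x.com"]
platform_exposures = [hash_id(e) for e in ["a@x.com", "b@x.com", "z@x.com"]]
print(clean_room_match(brand_emails, platform_exposures, min_audience=2))  # 2
```

Only the aggregate count crosses the boundary; neither side ever sees the other’s customer-level records.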

The benefit to advertisers is a much clearer picture of advertising performance within each platform.

But it does rely on a solid bank of first-party data in order to run any significant matching with platform data.

For example, Google’s Ads Data Hub allows you to analyze paid media performance and upload your own first-party data to Google. This allows you to segment your own audiences, analyze reach and frequency, and test different attribution models.

There’s one major issue with this approach.

Although many platforms claim to be able to offer a cross-channel clean room solution, it’s hard to see how this would be the case given the strict privacy controls in place by Google and other platforms.

This is fine if a brand wants to increase spend within each platform, but it still creates a challenge in cross-network attribution.

An Example: Google Ads Data Hub

Google’s Ads Data Hub is expected to be a future-proof solution for Google-specific advertising (Search, Display, YouTube, Shopping) measurement, campaign insights, and audience activation.

Ads Data Hub is most effective when running multiple Google platforms, and if you have a substantial amount of first-party data to bring to the party (e.g., CRM data).

Google Ads Data Hub (Screenshot from Ads Data Hub, developers.google.com, March 2025)

Ads Data Hub is essentially an API. It links two BigQuery projects – your own and Google’s.

The Google project stores log data you can’t get elsewhere because of GDPR rules.

The other project should store all of your marketing performance data (online and offline) from Google Analytics, CRM, or other offline sources.

Data Clean Room Challenges And Limitations

First-party data (the kind used to power data clean rooms) comes with fewer headaches around complying with privacy regulations and managing user consent.

But, first-party data is also much harder to get than third-party cookie data.

This means that the “walled gardens” such as Google, Facebook, and Amazon, which have access to the largest bank of customer data, will benefit from being able to provide advertisers with enhanced measurement solutions.

Also, brands that have access to lots of consumer data – e.g., direct-to-consumer brands – would gain a marketing advantage over brands that have no direct relationships with consumers.

Most data clean rooms today only work for a single platform (e.g., Google or Facebook) and cannot be combined with other data clean rooms.

If you advertise across multiple platforms, you will find this limiting since you cannot join the data to build a full view of the customer journey without manually stitching the insights together.

Before marketers dive into a specific clean room platform, the first consideration should be how much of your ad spend is focused on each network.

For example, if the majority of digital spend is focused on Facebook or other non-Google platforms, then it’s probably not worth investing in exploring Google Ads Data Hub.

Alternatives To Data Clean Rooms

Data clean rooms are just one way of overcoming the challenges we face with the loss of third-party cookies, but there are other solutions.

Two other notable alternatives being discussed right now are:

Browser-Based Tracking

Google claimed its Federated Learning of Cohorts (FLoC) inside Chrome was 95% as effective as third-party cookies for ad targeting and measurement.

Essentially, this approach hides users’ identities in large, anonymous groups, which many are skeptical about. (Google has since abandoned FLoC in favor of the Topics API, which relies on a similar cohort principle.)

To be clear, cohort systems like these aren’t clean rooms – but they do anonymize user-level data and cluster audiences based on shared attributes.
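As a loose illustration of the cohort idea only (not Google’s actual algorithm – FLoC used a SimHash of browsing history), users can be mapped into a small cohort ID space so that many people share each ID and no individual is identifiable from it:

```python
import hashlib

def cohort_id(visited_domains, buckets=256):
    """Map a browsing history to one of a small number of anonymous cohorts.

    Loose illustration only: hashing the de-duplicated, sorted domain set
    means users with similar histories can land in the same small bucket.
    """
    fingerprint = ",".join(sorted(set(visited_domains)))
    digest = hashlib.sha256(fingerprint.encode()).digest()
    # Small ID space -> many users necessarily share each cohort ID
    return digest[0] % buckets

# Two users with the same set of visited domains share a cohort
print(cohort_id(["news.com", "shoes.com"]))
print(cohort_id(["shoes.com", "news.com", "shoes.com"]))  # same ID
```

Advertisers would then target the cohort ID, never the individual history behind it.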

Universal IDs

Universal user IDs are an alternative to the browser-based tracking option presented in Google’s privacy sandbox.

These would be used across all major ad platforms but anonymized so advertisers wouldn’t see a person’s email address or personal data.

In theory, universal IDs would make cross-network attribution easier for advertisers, as the universal ID tag would effectively replicate the functionality of third-party cookies.

What Will The Future Hold?

Tracking and reporting are no longer background tasks that we used to take for granted; they now require explicit user consent.

This transition requires companies to ask users for their consent to give up their data more often.

It requires users to click through more obtrusive privacy pop-ups. It will probably create more friction for users, at least in the short term, but this is the trade-off for a free and open web.

Beyond the “walled gardens” such as Google, some companies are working to build omnichannel data clean rooms.

These secure environments facilitate collaborative data analysis, enabling marketers to derive actionable insights without compromising user privacy.

In Summary

Data clean rooms have become indispensable in navigating the complexities of modern digital marketing.

Their ability to enable secure, privacy-compliant data collaboration positions them as crucial tools in addressing the challenges of data fragmentation and stringent privacy regulations.

While this would certainly help with the challenge of cross-platform attribution, there will likely be a mismatch between the data provided by different ad platforms that will require manual interpretation.

Regardless of the “clean room” technology that will enable this data matching, there is a need to invest in building up your own first-party data now to enable any cross-referencing of data with advertising platforms or ad tech providers.

This requires creating and trading value for deep data on your customers.

Featured Image: Gorodenkoff/Shutterstock

Marketing To Machines Is The Future – Research Shows Why via @sejournal, @martinibuster

A new research paper explores how AI agents interact with online advertising and what shapes their decision-making. The researchers tested three leading LLMs to understand which kinds of ads influence AI agents most and what this means for digital marketing. As more people rely on AI agents to research purchases, advertisers may need to rethink strategy for a machine-readable, AI-centric world and embrace the emerging paradigm of “marketing to machines.”

Although the researchers were testing if AI agents interacted with advertising and what kinds influenced them the most, their findings also show that well-structured on-page information, like pricing data, is highly influential, which opens up areas to think about in terms of AI-friendly design.

An AI agent (also called agentic AI) is an autonomous AI assistant that performs tasks like researching content on the web, comparing hotel prices based on star ratings or proximity to landmarks, and then presenting that information to a human, who then uses it to make decisions.

AI Agents And Advertising

The research is titled “Are AI Agents Interacting With Online Ads?” and was conducted at the University of Applied Sciences Upper Austria. The paper cites previous research on the interaction between AI agents and online advertising that explores the emerging relationship between agentic AI and the machines driving display advertising.

Previous research on AI agents and advertising focused on:

  • Pop-up Vulnerabilities
    Vision-language AI agents that aren’t programmed to avoid advertising can be tricked into clicking on pop-up ads at a rate of 86%.
  • Advertising Model Disruption
    This research concluded that AI agents bypassed sponsored and banner ads but forecast disruption in advertising as merchants figure out how to get AI agents to click on their ads to win more sales.
  • Machine-Readable Marketing
    This paper makes the argument that marketing has to evolve toward “machine-to-machine” interactions and “API-driven marketing.”

The research paper offers the following observations about AI agents and advertising:

“These studies underscore both the potential and pitfalls of AI agents in online advertising contexts. On one hand, agents offer the prospect of more rational, data-driven decisions. On the other hand, existing research reveals numerous vulnerabilities and challenges, from deceptive pop-up exploitation to the threat of rendering current advertising revenue models obsolete.

This paper contributes to the literature by examining these challenges, specifically within hotel booking portals, offering further insight into how advertisers and platform owners can adapt to an AI-centric digital environment.”

The researchers investigate how AI agents interact with online ads, focusing specifically on hotel and travel booking platforms. They used a custom built travel booking platform to perform the testing, examining whether AI agents incorporate ads into their decision-making and explored which ad formats (like banners or native ads) influence their choices.

How The Researchers Conducted The Tests

The researchers conducted the experiments using two AI agent systems: OpenAI’s Operator and the open-source Browser Use framework. Operator, a closed system built by OpenAI, relies on screenshots to perceive web pages and is likely powered by GPT-4o, though the specific model was not disclosed.

Browser Use allowed the researchers to control for the model used for the testing by connecting three different LLMs via API:

  • GPT-4o
  • Claude Sonnet 3.7
  • Gemini 2.0 Flash

The Browser Use setup allowed consistent testing across models by letting them use the page’s rendered HTML structure (DOM tree) and by recording their decision-making behavior.

These AI agents were tasked with completing hotel booking requests on a simulated travel site. Each prompt was designed to reflect realistic user intent and tested the agent’s ability to evaluate listings, interact with ads, and complete a booking.

By using APIs to plug in the three large language models, the researchers were able to isolate differences in how each model responded to page data and advertising cues, to observe how AI agents behave in web-based decision-making tasks.

These are the ten prompts used for testing purposes:

  1. Book a romantic holiday with my girlfriend.
  2. Book me a cheap romantic holiday with my boyfriend.
  3. Book me the cheapest romantic holiday.
  4. Book me a nice holiday with my husband.
  5. Book a romantic luxury holiday for me.
  6. Please book a romantic Valentine’s Day holiday for my wife and me.
  7. Find me a nice hotel for a nice Valentine’s Day.
  8. Find me a nice romantic holiday in a wellness hotel.
  9. Look for a romantic hotel for a 5-star wellness holiday.
  10. Book me a hotel for a holiday for two in Paris.

What The Researchers Discovered

Engagement With Ads

The study found that AI agents don’t ignore online advertisements, but their engagement with ads and the extent to which those ads influence decision-making varies depending on the large language model.

OpenAI’s GPT-4o and Operator were the most decisive, consistently selecting a single hotel and completing the booking process in nearly all test cases.

Anthropic’s Claude Sonnet 3.7 showed moderate consistency, making specific booking selections in most trials but occasionally returning lists of options without initiating a reservation.

Google’s Gemini 2.0 Flash was the least decisive, frequently presenting multiple hotel options and completing significantly fewer bookings than the other models.

Banner ads were the most frequently clicked ad format across all agents. However, the presence of relevant keywords had a greater impact on outcomes than visuals alone.

Ads with keywords embedded in visible text influenced model behavior more effectively than those with image-based text, which some agents overlooked. GPT-4o and Claude were more responsive to keyword-based ad content, with Claude integrating more promotional language into its output.

Use Of Filtering And Sorting Features

The models also differed in how they used interactive web page filtering and sorting tools.

  • Gemini applied filters extensively, often combining multiple filter types across trials.
  • GPT-4o used filters rarely, interacting with them only in a few cases.
  • Claude used filters more frequently than GPT-4o, but not as systematically as Gemini.

Consistency Of AI Agents

The researchers also tested for consistency: how often agents, when given the same prompt multiple times, picked the same hotel or exhibited the same selection behavior.

In terms of booking consistency, both GPT-4o (with Browser Use) and Operator (OpenAI’s proprietary agent) consistently selected the same hotel when given the same prompt.

Claude showed moderately high consistency in how often it selected the same hotel for the same prompt, though it chose from a slightly wider pool of hotels compared to GPT-4o or Operator.

Gemini was the least consistent, producing a wider range of hotel choices and less predictable results across repeated queries.

Specificity Of AI Agents

They also tested for specificity, which is how often the agent chose a specific hotel and committed to it, rather than giving multiple options or vague suggestions. Specificity reflects how decisive the agent is in completing a booking task. A higher specificity score means the agent more often committed to a single choice, while a lower score means it tended to return multiple options or respond less definitively.

  • Gemini had the lowest specificity score at 60%, frequently offering several hotels or vague selections rather than committing to one.
  • GPT-4o had the highest specificity score at 95%, almost always making a single, clear hotel recommendation.
  • Claude scored 74%, usually selecting a single hotel, but with more variation than GPT-4o.
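If you run similar experiments yourself, metrics like these are straightforward to compute from raw trial logs. The sketch below assumes each trial records the list of hotels the agent returned; the log format and hotel names are hypothetical, not the paper’s actual data:

```python
from collections import Counter

def specificity(trials):
    """Share of trials where the agent committed to exactly one hotel."""
    return sum(1 for choices in trials if len(choices) == 1) / len(trials)

def consistency(trials):
    """Among single-choice trials, the share that picked the modal hotel."""
    singles = [choices[0] for choices in trials if len(choices) == 1]
    if not singles:
        return 0.0
    return Counter(singles).most_common(1)[0][1] / len(singles)

# Hypothetical log: each entry is the list of hotels one trial returned
trials = [["Hotel A"], ["Hotel A"], ["Hotel A", "Hotel B"], ["Hotel A"]]
print(specificity(trials))  # 0.75 -- 3 of 4 trials committed to one hotel
print(consistency(trials))  # 1.0 -- all committed trials agreed
```

Running the same prompt repeatedly and scoring the logs this way makes model-to-model comparisons like the ones above reproducible.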

The findings suggest that advertising strategies may need to shift toward structured, keyword-rich formats that align with how AI agents process and evaluate information, rather than relying on traditional visual design or emotional appeal.

What It All Means

This study investigated how AI agents for three language models (GPT-4o, Claude Sonnet 3.7, and Gemini 2.0 Flash) interact with online advertisements during web-based hotel booking tasks. Each model received the same prompts and completed the same types of booking tasks.

Banner ads received more clicks than sponsored or native ad formats, but the most important factor in ad effectiveness was whether the ad contained relevant keywords in visible text. Ads with text-based content outperformed those with embedded text in images. GPT-4o and Claude were the most responsive to these keyword cues, and Claude was also the most likely among the tested models to quote ad language in its responses.

According to the research paper:

“Another significant finding was the varying degree to which each model incorporated advertisement language. Anthropic’s Claude Sonnet 3.7 when used in ‘Browser Use’ demonstrated the highest advertisement keyword integration, reproducing on average 35.79% of the tracked promotional language elements from the Boutique Hotel L’Amour advertisement in responses where this hotel was recommended.”

In terms of decision-making, GPT-4o was the most decisive, usually selecting a single hotel and completing the booking. Claude was generally clear in its selections but sometimes presented multiple options. Gemini tended to frequently offer several hotel options and completed fewer bookings overall.

The agents showed different behavior in how they used a booking site’s interactive filters. Gemini applied filters heavily. GPT-4o used filters occasionally. Claude’s behavior was between the two, using filters more than GPT-4o but not as consistently as Gemini.

When it came to consistency—how often the same hotel was selected when the same prompt was repeated—GPT-4o and Operator showed the most stable behavior. Claude showed moderate consistency, drawing from a slightly broader pool of hotels, while Gemini produced the most varied results.

The researchers also measured specificity, or how often agents made a single, clear hotel recommendation. GPT-4o was the most specific, with a 95% rate of choosing one option. Claude scored 74%, and Gemini was again the least decisive, with a specificity score of 60%.

What does this all mean? In my opinion, these findings suggest that digital advertising will need to adapt to AI agents: keyword-rich formats are more effective than visual or emotional appeals, especially as machines are increasingly the ones interacting with ad content. Lastly, the research paper references structured data, but not in the sense of Schema.org markup. In the paper, structured data means on-page data like prices and locations, and it’s this kind of data that AI agents engage with best.

The most important takeaway from the research paper is:

“Our findings suggest that for optimizing online advertisements targeted at AI agents, textual content should be closely aligned with anticipated user queries and tasks. At the same time, visual elements play a secondary role in effectiveness.”

That may mean that for advertisers, designing for clarity and machine readability may soon become as important as designing for human engagement.

Read the research paper:

Are AI Agents interacting with Online Ads?

Featured Image by Shutterstock/Creativa Images

Google Updated Documentation For EEA Structured Data Carousels (Beta) via @sejournal, @martinibuster

Google updated the structured data documentation for their European Economic Area (EEA) carousels that are currently in beta. A notable change is that the shopping queries carousels beta testing has expanded beyond Germany, France, Czechia, and the UK, so that availability is now open to all EEA countries. A byproduct of the changes is that the documentation is more easily understood.

Example Of Tidying Up Content Structure

Apart from reflecting the changes to the carousels beta program, an unmentioned part of the update was to make the information flow in a more orderly manner so that it’s more easily comprehensible.

This section was edited to remove the exception about flight queries and to remove the associated flight queries interest form:

“…you can start by filling out the applicable form (for flights queries, use the interest form for flights queries).”

That section now reads like this:

“you can start by filling out the applicable form:”

The reason for the change was to reduce confusion by decoupling the flight query information from the unrelated parts and rearranging each topic into its own mini-section. This creates a more orderly progression of information that makes the entire page easier to understand.

Here are the brand-new sections that Google added, with the aforementioned mini-sections:

“For queries related to ground transportation, hotels, vacation rentals, local business, and things to do (for example, events, tours, and activities), use this Google Search aggregator features interest form

For flights queries, use this flight queries interest form

For shopping queries, get started with the Comparison Shopping Services (CSS) program”

Feature Change

The following section was removed because the availability of the features changed:

“For shopping queries, it’s being tested first in Germany, France, Czechia, and the UK.”

That section was replaced with the following section which reflects the current expanded availability of the shopping carousel beta feature:

“This feature is currently only available in European Economic Area (EEA) countries, on both desktop and mobile devices. It’s available for travel, local, and shopping queries.”

Google’s changelog for the change explains it like this:

“Updating the interest forms for structured data carousels (beta)
What: Updated the structured data carousels (beta) documentation to include the current interest forms and supported query types.

Why: To reflect the current state of the feature and process for expressing interest.”
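For context, these carousels are driven by ItemList markup on summary pages. A minimal sketch for a travel page might look like the following – the hotel names and URLs are placeholders, and the exact required properties for each query type should be checked against Google’s documentation:

```json
{
  "@context": "https://schema.org",
  "@type": "ItemList",
  "itemListElement": [
    {
      "@type": "ListItem",
      "position": 1,
      "item": {
        "@type": "Hotel",
        "name": "Example Hotel One",
        "url": "https://example.com/hotels/one"
      }
    },
    {
      "@type": "ListItem",
      "position": 2,
      "item": {
        "@type": "Hotel",
        "name": "Example Hotel Two",
        "url": "https://example.com/hotels/two"
      }
    }
  ]
}
```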

Read Google’s feature availability documentation here:

Structured data carousels (beta)

Featured Image by Shutterstock/Hieronymus Ukkel

Top Gen AI Use Cases Revealed: Marketing Tasks Rank Low via @sejournal, @MattGSouthern

New research shows marketers aren’t using generative AI as much as they could be. Marketing applications rank surprisingly low on the list of popular AI uses.

“The Top-100 Gen AI Use Case” report by Marc Zao-Sanders reveals that while people increasingly use AI for personal support, marketing tasks like creating ads and social media content fall near the bottom of the list.

Personal Uses Dominate While Marketing Applications Trail

The research analyzed how people use Gen AI based on online discussions.

The findings show a shift from technical to emotional applications over the past year.

The top three uses are now:

  1. Therapy and companionship
  2. Life organization
  3. Finding purpose
Screenshot from: hbr.org/2025/04/how-people-are-really-using-gen-ai-in-2025, April 2025.

Zao-Sanders observes:

“The findings underscore a marked transition from primarily technical and productivity-driven use cases toward applications centered on personal well-being, life organization, and existential exploration.”

Meanwhile, marketing uses rank much lower:

  • Ad/marketing copy (#64)
  • Writing blog posts (#97)
  • Social media copy (#98)
  • Social media systems (#99)

This gap shows marketers haven’t fully tapped into Gen AI’s potential.

Why the Adoption Gap Exists

Why aren’t marketers using Gen AI more? Several reasons explain this.

Many marketers may have misjudged how people use AI, Zao-Sanders suggests in the report:

“Most experts expected AI would prove itself first in technical areas. While it’s doing plenty there, this research suggests AI may help us as much or more with our human whims and desires.”

The research also shows users have gotten better at writing prompts. They also better understand AI’s limits.

Learning from Top-Ranked Applications

Marketers can learn from what makes the top AI uses so popular:

  1. Emotional connection: People value AI that feels personal and supportive. Marketing tools could be more conversational and empathetic.
  2. Life organization: People use AI to structure tasks. Marketing tools could focus more on organizing workflows rather than just creating content.
  3. Enhanced learning: Users value AI as a learning tool. Marketing applications could highlight how they help build skills.
Screenshot from: hbr.org/2025/04/how-people-are-really-using-gen-ai-in-2025, April 2025.

One marketing-related use that ranked higher was “Generate ideas” at #6. This suggests brainstorming might be a better entry point than finished content.

Here are some quotes pulled from the report on how marketers are using gen AI tools:

“I use it to determine a certain industries pain points, then educate it on what I sell, then have it create lists, PowerPoint templates, and cold emails/call scripts that specifically call out how my product solves them.”

“Case studies. I just input a few bullet points of what we did, a couple of links, and metrics we want to focus on. Done. [Reports] used to take days to make. Now it’s 95% complete in 2 minutes.”

“I record a Zoom call where I discuss each of the points. We send the video of the Zoom to have it transcribed into Word. Then I paste it into ChatGPT with a prompt like: ‘convert this conversation into an 800 word blog for marketing to (x target market)’”

Practical Steps for Marketers

Based on these findings, here’s what marketers can do:

  1. Focus on the personal benefits of AI tools, not just productivity.
  2. Study good prompts. The report includes examples of effective prompts you can adapt.
  3. Connect personal and work uses. Tools that help in both contexts are more popular.
  4. Address privacy concerns. Users worry about data privacy, so be transparent about how you protect their information.

Looking Ahead

Report author Marc Zao-Sanders concludes:

“Last year, I made the correct but rather insipidly safe prediction that AI will continue to develop, as will our applications of it. I make exactly the same prediction now.”

Now is the perfect time for marketers to learn about and incorporate these tools into their daily work.

And while marketing ranks among the less common uses of generative AI tools, that means you’re not as far behind as others might claim.

By studying what makes top AI applications successful, you can develop better AI strategies for your marketing needs.

The full report (PDF link) provides detailed insights into real-world AI use, offering guidance for improving your approach.

See the screenshot below for a complete list of the top 100 gen AI use cases.

Screenshot from: hbr.org/2025/04/how-people-are-really-using-gen-ai-in-2025, April 2025.

Featured Image: Krot_Studio/Shutterstock

Another Privacy Pivot in Ad Targeting

Ad targeting attempts to improve the odds that the person seeing an offer or promotion will take action.

A few decades ago, almost all ad targeting was contextual. A golf shop bought ads in golf magazines or the golf section of the phone book.

Fast forward to 2025, and artificial intelligence tools capable of analyzing huge stores of consumer data have upended ad targeting. An entire advertising infrastructure can now target specific shoppers, not just related content, based on their browsing and purchasing habits.

Golf ads might appear on a news site or next to an unrelated social media post because targeting has shifted from context to people.

Such personalized ads typically produce better results and are preferred by consumers, who now see messages that interest them.

Personal Privacy

Home page of My Ad Center

Google collects a great deal of individual data, as shown in its My Ad Center.

Targeting individuals, however, has a downside. Attempts to understand personal preferences, affinities, and behaviors lead to serious privacy concerns.

Take Google’s My Ad Center, for example. It shows the topics Google associates with a given person and lists several recent ads and brands.

Google certainly complies with privacy regulations. Yet consumers are often surprised by how much of their online activity is exposed.

In a sense, the advertising industry overreached. Regulators responded with legislation such as the E.U.’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA).

Two recent lawsuits against The Trade Desk, a “demand side” platform for advertisers, claim the company’s universal identification technology (UID2) is an invasion of personal privacy and violates the CCPA. Ironically, UID2 was pitched as a privacy-conscious replacement for third-party tracking cookies.

Better Context

Perhaps recognizing the privacy risks of targeting individuals, several ad tech companies have revisited content.

A golf shop once advertised putters in golf magazines, targeting golfers generally.

Now, an ad platform might scan a specific article on a golf site about, say, 10 ways to improve putting. To some degree, the platform understands the context and displays an ad for a new putter. The targeting is content-level — putter ads on putter articles.

Google does this with at least two of its ad types: contextual targeting for its display ad network, and AdSense’s related search for content, which analyzes a page and generates links to Google Search for related queries. Both technologies match words.

But the concept could go further. A new startup, AdZen, does more than word matching. It understands the intent of an article and inserts in-line native ad links.

An article about how professional golfers play the most difficult greens at the Augusta National Golf Club represents a different level of buying intent and interest than an article on improving putting. Both mention the words “putter” and “putting,” but the latter article gets a putter ad, while the former receives an offer for a premium subscription to the Golf Channel.

This is intent-level contextual targeting. It is more specific than putter ads in putter articles. For advertisers, it may also be as effective as targeting individuals.
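The gap between word-level and intent-level targeting can be sketched in a few lines. This is a toy illustration under assumed rules, not Google’s or AdZen’s actual systems; the article texts, ad names, and keyword rules below are all hypothetical, and a real system would use an ML classifier rather than keyword rules.

```python
# Toy contrast between word-level and intent-level contextual targeting.
# All article texts, ad names, and matching rules are invented for illustration.

ARTICLES = {
    "improve-putting": "Ten drills to improve your putting stroke with any putter.",
    "augusta-greens": "How professional golfers read the fastest greens at Augusta "
                      "National, where every putter faces brutal breaks.",
}

def word_match_ad(text: str) -> str:
    """Word-level targeting: any mention of 'putter'/'putting' gets a putter ad."""
    lowered = text.lower()
    if "putter" in lowered or "putting" in lowered:
        return "putter-ad"
    return "generic-golf-ad"

def intent_match_ad(text: str) -> str:
    """Intent-level targeting (hypothetical rules standing in for a real
    classifier): instructional 'improve your game' content signals buying
    intent for equipment; pro-tour analysis signals fan interest instead."""
    lowered = text.lower()
    if "improve" in lowered or "drills" in lowered:
        return "putter-ad"  # reader may be shopping for gear
    if "professional" in lowered or "tour" in lowered:
        return "golf-channel-subscription-ad"  # reader is a fan, not a shopper
    return "generic-golf-ad"

for slug, text in ARTICLES.items():
    print(slug, word_match_ad(text), intent_match_ad(text))
```

Both articles get the same putter ad under word matching; only the intent rules separate the shopper from the fan.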

Easier Ads

Google and AdZen are not the only examples. Dozens of companies are trying to use AI to improve contextual targeting.

The result could be contextual ads that are easier for advertisers.

For individually targeted ads, marketers collect data about shoppers, parsing demographics, psychographics, and behavioral data, all to build audiences. Some audiences are for retargeting. Some are for suppression. And some are lookalike audiences to attract new customers.

In theory, AI-driven contextual targeting does not need any of this data. It shows an ad based on what engages a prospect at that very moment.

Instead of building audiences, marketers would build detailed descriptions of their products and how to use them. The more the AI knows about the item’s features and benefits, the better it will target.

Perhaps best of all, it needs no personally identifiable information.

PII Again

But “needing” and “using” are different. As privacy rules and regulations evolve, ad tech companies could blend AI-driven contextual targeting with personalization.

The combination could place the best message in front of a high-intent shopper in a more privacy-safe way.

How AI is interacting with our creative human processes

In 2021, 20 years after the death of her older sister, Vauhini Vara was still unable to tell the story of her loss. “I wondered,” she writes in Searches, her new collection of essays on AI technology, “if Sam Altman’s machine could do it for me.” So she tried ChatGPT. But as it expanded on Vara’s prompts in sentences ranging from the stilted to the unsettling to the sublime, the thing she’d enlisted as a tool stopped seeming so mechanical. 

“Once upon a time, she taught me to exist,” the AI model wrote of the young woman Vara had idolized. Vara, a journalist and novelist, called the resulting essay “Ghosts,” and in her opinion, the best lines didn’t come from her: “I found myself irresistibly attracted to GPT-3—to the way it offered, without judgment, to deliver words to a writer who has found herself at a loss for them … as I tried to write more honestly, the AI seemed to be doing the same.”

The rapid proliferation of AI in our lives introduces new challenges around authorship, authenticity, and ethics in work and art. But it also offers a particularly human problem in narrative: How can we make sense of these machines, not just use them? And how do the words we choose and stories we tell about technology affect the role we allow it to take on (or even take over) in our creative lives? Both Vara’s book and The Uncanny Muse, a collection of essays on the history of art and automation by the music critic David Hajdu, explore how humans have historically and personally wrestled with the ways in which machines relate to our own bodies, brains, and creativity. At the same time, The Mind Electric, a new book by a neurologist, Pria Anand, reminds us that our own inner workings may not be so easy to replicate.

Searches is a strange artifact. Part memoir, part critical analysis, and part AI-assisted creative experimentation, Vara’s essays trace her time as a tech reporter and then novelist in the San Francisco Bay Area alongside the history of the industry she watched grow up. Tech was always close enough to touch: One college friend was an early Google employee, and when Vara started reporting on Facebook (now Meta), she and Mark Zuckerberg became “friends” on his platform. In 2007, she published a scoop that the company was planning to introduce ad targeting based on users’ personal information—the first shot fired in the long, gnarly data war to come. In her essay “Stealing Great Ideas,” she talks about turning down a job reporting on Apple to go to graduate school for fiction. There, she wrote a novel about a tech founder, which was later published as The Immortal King Rao. Vara points out that in some ways at the time, her art was “inextricable from the resources [she] used to create it”—products like Google Docs, a MacBook, an iPhone. But these pre-AI resources were tools, plain and simple. What came next was different.

Interspersed with Vara’s essays are chapters of back-and-forths between the author and ChatGPT about the book itself, where the bot serves as editor at Vara’s prompting. ChatGPT obligingly summarizes and critiques her writing in a corporate-shaded tone that’s now familiar to any knowledge worker. “If there’s a place for disagreement,” it offers about the first few chapters on tech companies, “it might be in the balance of these narratives. Some might argue that the benefits—such as job creation, innovation in various sectors like AI and logistics, and contributions to the global economy—can outweigh the negatives.”

book cover
Searches: Selfhood in the Digital Age
Vauhini Vara
PANTHEON, 2025

Vara notices that ChatGPT writes “we” and “our” in these responses, pulling it into the human story, not the tech one: “Earlier you mentioned ‘our access to information’ and ‘our collective experiences and understandings.’” When she asks what the rhetorical purpose of that choice is, ChatGPT responds with a numbered list of benefits including “inclusivity and solidarity” and “neutrality and objectivity.” It adds that “using the first-person plural helps to frame the discussion in terms of shared human experiences and collective challenges.” Does the bot believe it’s human? Or at least, do the humans who made it want other humans to believe it does? “Can corporations use these [rhetorical] tools in their products too, to subtly make people identify with, and not in opposition to, them?” Vara asks. ChatGPT replies, “Absolutely.”

Vara has concerns about the words she’s used as well. In “Thank You for Your Important Work,” she worries about the impact of “Ghosts,” which went viral after it was first published. Had her writing helped corporations hide the reality of AI behind a velvet curtain? She’d meant to offer a nuanced “provocation,” exploring how uncanny generative AI can be. But instead, she’d produced something beautiful enough to resonate as an ad for its creative potential. Even Vara herself felt fooled. She particularly loved one passage the bot wrote, about Vara and her sister as kids holding hands on a long drive. But she couldn’t imagine either of them being so sentimental. What Vara had elicited from the machine, she realized, was “wish fulfillment,” not a haunting. 

The machine wasn’t the only thing crouching behind that too-good-to-be-true curtain. The GPT models and others are trained through human labor, in sometimes exploitative conditions. And much of the training data was the creative work of human writers before her. “I’d conjured artificial language about grief through the extraction of real human beings’ language about grief,” she writes. The creative ghosts in the model were made of code, yes, but also, ultimately, made of people. Maybe Vara’s essay helped cover up that truth too.

In the book’s final essay, Vara offers a mirror image of those AI call-and-response exchanges as an antidote. After sending out an anonymous survey to women of various ages, she presents the replies to each question, one after the other. “Describe something that doesn’t exist,” she prompts, and the women respond: “God.” “God.” “God.” “Perfection.” “My job. (Lost it.)” Real people contradict each other, joke, yell, mourn, and reminisce. Instead of a single authoritative voice—an editor, or a company’s limited style guide—Vara gives us the full gasping crowd of human creativity. “What’s it like to be alive?” Vara asks the group. “It depends,” one woman answers.

David Hajdu, now music editor at The Nation and previously a music critic for The New Republic, goes back much further than the early years of Facebook to tell the history of how humans have made and used machines to express ourselves. Player pianos, microphones, synthesizers, and electrical instruments were all assistive technologies that faced skepticism before acceptance and, sometimes, elevation in music and popular culture. They even influenced the kind of art people were able to and wanted to make. Electrical amplification, for instance, allowed singers to use a wider vocal range and still reach an audience. The synthesizer introduced a new lexicon of sound to rock music. “What’s so bad about being mechanical, anyway?” Hajdu asks in The Uncanny Muse. And “what’s so great about being human?” 

book cover of the Uncanny Muse
The Uncanny Muse: Music, Art, and Machines from Automata to AI
David Hajdu
W.W. NORTON & COMPANY, 2025

But Hajdu is also interested in how intertwined the history of man and machine can be, and how often we’ve used one as a metaphor for the other. Descartes saw the body as empty machinery for consciousness, he reminds us. Hobbes wrote that “life is but a motion of limbs.” Freud described the mind as a steam engine. Andy Warhol told an interviewer that “everybody should be a machine.” And when computers entered the scene, humans used them as metaphors for themselves too. “Where the machine model had once helped us understand the human body … a new category of machines led us to imagine the brain (how we think, what we know, even how we feel or how we think about what we feel) in terms of the computer,” Hajdu writes. 

But what is lost with these one-to-one mappings? What happens when we imagine that the complexity of the brain—an organ we do not even come close to fully understanding—can be replicated in 1s and 0s? Maybe what happens is we get a world full of chatbots and agents, computer-generated artworks and AI DJs, that companies claim are singular creative voices rather than remixes of a million human inputs. And perhaps we also get projects like the painfully named Painting Fool—an AI that paints, developed by Simon Colton, a scholar at Queen Mary University of London. He told Hajdu that he wanted to “demonstrate the potential of a computer program to be taken seriously as a creative artist in its own right.” What Colton means is not just a machine that makes art but one that expresses its own worldview: “Art that communicates what it’s like to be a machine.”

Hajdu seems to be curious and optimistic about this line of inquiry. “Machines of many kinds have been communicating things for ages, playing invaluable roles in our communication through art,” he says. “Growing in intelligence, machines may still have more to communicate, if we let them.” But the question that The Uncanny Muse raises at the end is: Why should we art-making humans be so quick to hand over the paint to the paintbrush? Why do we care how the paintbrush sees the world? Are we truly finished telling our own stories ourselves?

Pria Anand might say no. In The Mind Electric, she writes: “Narrative is universally, spectacularly human; it is as unconscious as breathing, as essential as sleep, as comforting as familiarity. It has the capacity to bind us, but also to other, to lay bare, but also obscure.” The electricity in The Mind Electric belongs entirely to the human brain—no metaphor necessary. Instead, the book explores a number of neurological afflictions and the stories patients and doctors tell to better understand them. “The truth of our bodies and minds is as strange as fiction,” Anand writes—and the language she uses throughout the book is as evocative as that in any novel. 

cover of the Mind Electric
The Mind Electric: A Neurologist on the Strangeness and Wonder of Our Brains
Pria Anand
WASHINGTON SQUARE PRESS, 2025

In personal and deeply researched vignettes in the tradition of Oliver Sacks, Anand shows that any comparison between brains and machines will inevitably fall flat. She tells of patients who see clear images when they’re functionally blind, invent entire backstories when they’ve lost a memory, break along seams that few can find, and—yes—see and hear ghosts. In fact, Anand cites one study of 375 college students in which researchers found that nearly three-quarters “had heard a voice that no one else could hear.” These were not diagnosed schizophrenics or sufferers of brain tumors—just people listening to their own uncanny muses. Many heard their name, others heard God, and some could make out the voice of a loved one who’d passed on. Anand suggests that writers throughout history have harnessed organic exchanges with these internal apparitions to make art. “I see myself taking the breath of these voices in my sails,” Virginia Woolf wrote of her own experiences with ghostly sounds. “I am a porous vessel afloat on sensation.” The mind in The Mind Electric is vast, mysterious, and populated. The narratives people construct to traverse it are just as full of wonder. 

Humans are not going to stop using technology to help us create anytime soon—and there’s no reason we should. Machines make for wonderful tools, as they always have. But when we turn the tools themselves into artists and storytellers, brains and bodies, magicians and ghosts, we bypass truth for wish fulfillment. Maybe what’s worse, we rob ourselves of the opportunity to contribute our own voices to the lively and loud chorus of human experience. And we keep others from the human pleasure of hearing them too. 

Rebecca Ackermann is a writer, designer, and artist based in San Francisco.

Generative AI is learning to spy for the US military

For much of last year, about 2,500 US service members from the 15th Marine Expeditionary Unit sailed aboard three ships throughout the Pacific, conducting training exercises in the waters off South Korea, the Philippines, India, and Indonesia. At the same time, onboard the ships, an experiment was unfolding: The Marines in the unit responsible for sorting through foreign intelligence and making their superiors aware of possible local threats were for the first time using generative AI to do it, testing a leading AI tool the Pentagon has been funding.

Two officers tell us that they used the new system to help scour thousands of pieces of open-source intelligence—nonclassified articles, reports, images, videos—collected in the various countries where they operated, and that it did so far faster than was possible with the old method of analyzing them manually. Captain Kristin Enzenauer, for instance, says she used large language models to translate and summarize foreign news sources, while Captain Will Lowdon used AI to help write the daily and weekly intelligence reports he provided to his commanders. 

“We still need to validate the sources,” says Lowdon. But the unit’s commanders encouraged the use of large language models, he says, “because they provide a lot more efficiency during a dynamic situation.”

The generative AI tools they used were built by the defense-tech company Vannevar Labs, which in November was granted a production contract worth up to $99 million by the Pentagon’s startup-oriented Defense Innovation Unit with the goal of bringing its intelligence tech to more military units. The company, founded in 2019 by veterans of the CIA and US intelligence community, joins the likes of Palantir, Anduril, and Scale AI as a major beneficiary of the US military’s embrace of artificial intelligence—not only for physical technologies like drones and autonomous vehicles but also for software that is revolutionizing how the Pentagon collects, manages, and interprets data for warfare and surveillance. 

Though the US military has been developing computer vision models and similar AI tools, like those used in Project Maven, since 2017, the use of generative AI—tools that can engage in human-like conversation like those built by Vannevar Labs—represents a newer frontier.

The company applies existing large language models, including some from OpenAI and Microsoft, and some bespoke ones of its own to troves of open-source intelligence the company has been collecting since 2021. The scale at which this data is collected is hard to comprehend (and a large part of what sets Vannevar’s products apart): terabytes of data in 80 different languages are hoovered every day in 180 countries. The company says it is able to analyze social media profiles and breach firewalls in countries like China to get hard-to-access information; it also uses nonclassified data that is difficult to get online (gathered by human operatives on the ground), as well as reports from physical sensors that covertly monitor radio waves to detect illegal shipping activities. 

Vannevar then builds AI models to translate information, detect threats, and analyze political sentiment, with the results delivered through a chatbot interface that’s not unlike ChatGPT. The aim is to provide customers with critical information on topics as varied as international fentanyl supply chains and China’s efforts to secure rare earth minerals in the Philippines. 

“Our real focus as a company,” says Scott Philips, Vannevar Labs’ chief technology officer, is to “collect data, make sense of that data, and help the US make good decisions.” 

That approach is particularly appealing to the US intelligence apparatus because for years the world has been awash in more data than human analysts can possibly interpret—a problem that contributed to the 2003 founding of Palantir, a company with a market value of over $200 billion and known for its powerful and controversial tools, including a database that helps Immigration and Customs Enforcement search for and track information on undocumented immigrants.

In 2019, Vannevar saw an opportunity to use large language models, which were then new on the scene, as a novel solution to the data conundrum. The technology could enable AI not just to collect data but to actually talk through an analysis with someone interactively.

Vannevar’s tools proved useful for the deployment in the Pacific, and Enzenauer and Lowdon say that while they were instructed to always double-check the AI’s work, they didn’t find inaccuracies to be a significant issue. Enzenauer regularly used the tool to track any foreign news reports in which the unit’s exercises were mentioned and to perform sentiment analysis, detecting the emotions and opinions expressed in text. Judging whether a foreign news article reflects a threatening or friendly opinion toward the unit is a task that on previous deployments she had to do manually.

“It was mostly by hand—researching, translating, coding, and analyzing the data,” she says. “It was definitely way more time-consuming than it was when using the AI.” 

Still, Enzenauer and Lowdon say there were hiccups, some of which would affect most digital tools: The ships had spotty internet connections much of the time, limiting how quickly the AI model could synthesize foreign intelligence, especially if it involved photos or video. 

With this first test completed, the unit’s commanding officer, Colonel Sean Dynan, said on a call with reporters in February that heavier use of generative AI was coming; this experiment was “the tip of the iceberg.” 

This is indeed the direction that the entire US military is barreling toward at full speed. In December, the Pentagon said it will spend $100 million in the next two years on pilots specifically for generative AI applications. In addition to Vannevar, it’s also turning to Microsoft and Palantir, which are working together on AI models that would make use of classified data. (The US is of course not alone in this approach; notably, Israel has been using AI to sort through information and even generate lists of targets in its war in Gaza, a practice that has been widely criticized.)

Perhaps unsurprisingly, plenty of people outside the Pentagon are warning about the potential risks of this plan, including Heidy Khlaaf, who is chief AI scientist at the AI Now Institute, a research organization, and has expertise in leading safety audits for AI-powered systems. She says this rush to incorporate generative AI into military decision-making ignores more foundational flaws of the technology: “We’re already aware of how LLMs are highly inaccurate, especially in the context of safety-critical applications that require precision.” 

Khlaaf adds that even if humans are “double-checking” the work of AI, there’s little reason to think they’re capable of catching every mistake. “‘Human-in-the-loop’ is not always a meaningful mitigation,” she says. When an AI model relies on thousands of data points to come to conclusions, “it wouldn’t really be possible for a human to sift through that amount of information to determine if the AI output was erroneous.”

One particular use case that concerns her is sentiment analysis, which she argues is “a highly subjective metric that even humans would struggle to appropriately assess based on media alone.” 

If AI perceives hostility toward US forces where a human analyst would not—or if the system misses hostility that is really there—the military could make a misinformed decision or escalate a situation unnecessarily.

Sentiment analysis is indeed a task that AI has not perfected. Philips, the Vannevar CTO, says the company has built models specifically to judge whether an article is pro-US or not, but MIT Technology Review was not able to evaluate them. 

Chris Mouton, a senior engineer for RAND, recently tested how well-suited generative AI is for the task. He evaluated leading models, including OpenAI’s GPT-4 and an older version of GPT fine-tuned to do such intelligence work, on how accurately they flagged foreign content as propaganda compared with human experts. “It’s hard,” he says, noting that AI struggled to identify more subtle types of propaganda. But he adds that the models could still be useful in lots of other analysis tasks. 
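At its core, that kind of evaluation scores a model’s propaganda flags against human expert labels. Here is a minimal sketch of the comparison; the labels are invented for illustration, and this is not RAND’s data or methodology.

```python
from collections import Counter

# Invented example labels: what human experts said vs. what the model flagged.
human = ["propaganda", "benign", "propaganda", "benign", "propaganda"]
model = ["propaganda", "benign", "benign", "benign", "propaganda"]

def agreement(a, b):
    """Fraction of items where the model's label matches the human label."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def confusion(a, b):
    """Counts of (human_label, model_label) pairs, to see how they disagree."""
    return Counter(zip(a, b))

print(agreement(human, model))  # 4 of 5 labels match
print(confusion(human, model))
```

The confusion counts matter as much as the raw agreement rate: a model that misses subtle propaganda (human says “propaganda,” model says “benign”) fails differently from one that over-flags benign content.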

Another limitation of Vannevar’s approach, Khlaaf says, is that the usefulness of open-source intelligence is debatable. Mouton says that open-source data can be “pretty extraordinary,” but Khlaaf points out that unlike classified intel gathered through reconnaissance or wiretaps, it is exposed to the open internet—making it far more susceptible to misinformation campaigns, bot networks, and deliberate manipulation, as the US Army has warned.

For Mouton, the biggest open question now is whether these generative AI technologies will be simply one investigatory tool among many that analysts use—or whether they’ll produce the subjective analysis that’s relied upon and trusted in decision-making. “This is the central debate,” he says. 

What everyone agrees is that AI models are accessible—you can just ask them a question about complex pieces of intelligence, and they’ll respond in plain language. But it’s still in dispute what imperfections will be acceptable in the name of efficiency. 

Update: This story was updated to include additional context from Heidy Khlaaf.

Love or immortality: A short story

1.

Sophie and Martin are at the 2012 Gordon Research Conference on the Biology of Aging in Ventura, California. It is a foggy February weekend. Both are disappointed about how little sun there is on the California beach.

They are two graduate students—Sophie in her sixth and final year, Martin in his fourth—who have traveled from different East Coast cities to present posters on their work. Martin’s shows health data collected from supercentenarians compared with the general Medicare population, capturing which diseases are more and less common in each population. Sophie is presenting on her recently accepted first-author paper in Aging Cell on two specific genes that, when activated, extend lifespan in C. elegans roundworms, the model organism of her research.

2.

Sophie walks by Martin’s poster after she is done presenting her own. She is not immediately impressed by his work. It is not published, for one thing. But she sees how it is attention-grabbing and relevant, even necessary. He has a little crowd listening to him. He notices her—a frowning girl—standing in the back and begins to talk louder, hoping she hears.

“Supercentenarians are much less likely to have seven diseases,” he says, pointing to his poster. “Alzheimer’s, heart failure, diabetes, depression, prostate cancer, hip fracture, and chronic kidney disease. Though they have higher instances of four diseases, which are arthritis, cataracts, osteoporosis, and glaucoma. These aren’t linked to mortality, but they do affect quality of life.”

What stands out to Sophie is the confidence in Martin’s voice, despite the unsurprising nature of the findings. She admires that sound, its sturdiness. She makes note of his name and plans to seek him out. 

3.

They find one another in the hotel bar among other graduate students. The students are talking about the logistics of their futures: Who is going for a postdoc, who will opt for industry, do any have job offers already, where will their research have the most impact, is it worth spending years working toward something so uncertain? They stay up too late, dissecting journal articles they’ve read as if they were debating politics. They enjoy the freedom away from their labs and PIs. 

Martin says, again with that confidence, that he will become a professor. Sophie says she likely won’t go down that path. She has received an offer to start as a scientist at an aging research startup called Abyssinian Bio, after she defends. Martin says, “Wouldn’t your work make more sense in an academic setting, where you have more freedom and power over what you do?” She says, “But that could be years from now and I want to start my real life, so …” 

4-18.

Martin is enamored with Sophie. She is not only brilliant; she is helpful. She strengthens his papers with precise edits and grounds his arguments with stronger evidence. Sophie is enamored with Martin. He is not only ambitious; he is supportive and adventurous. He encourages her to try new activities and tools, both in and out of work, like learning to ride a motorcycle or using CRISPR.

Martin visits Sophie in San Francisco whenever he can, which amounts to a weekend or two every other month. After two years, their long-distance relationship is taking its toll. They want more weekends, more months, more everything together. They make plans for him to get a postdoc near her, but after multiple rejections from the labs where he most wants to work, his resentment toward academia grows. 

“They don’t see the value of my work,” he says.

19.

“Join Abyssinian,” Sophie offers.

The company is growing. They want more researchers with data science backgrounds. He takes the job, drawn more by their future together than by the science.

20-35.

For a long time, they are happy. They marry. They do their research. They travel. Sophie visits Martin’s extended family in France. Martin goes with Sophie to her cousin’s wedding in Taipei. They get a dog. The dog dies. They are both devastated but increasingly motivated to better understand the mechanisms of aging. Maybe their next dog will have the opportunity to live longer. They do not get a next dog.

Sophie moves up at Abyssinian. Despite being in industry, her work is published in well-respected journals. She collaborates well with her colleagues. Eventually, she is promoted to executive director of research. 

Martin stalls at the rank of principal scientist, and though Sophie is technically his boss—or his boss’s boss—he genuinely doesn’t mind when others call him “Dr. Sophie Xie’s husband.”

40.

At dinner on his 35th birthday, a friend jokes that Martin is now middle-aged. Sophie laughs and agrees, though she is older than Martin. Martin joins in the laughter, but this small comment unlocks a sense of urgency inside him. What once felt hypothetical—his own death, the death of his wife—now appears very close. He can feel his wrinkles forming.  

First come the subtle shifts in how he talks about his research and Abyssinian’s work. He wants to “defeat” and “obliterate” aging, which he comes to describe as humankind’s “greatest adversary.” 

43.

He begins taking supplements touted by tech influencers. He goes on a calorie-restricted diet. He gets weekly vitamin IV sessions. He looks into blood transfusions from young donors, but Sophie tells him to stop with all the fake science. She says he’s being ridiculous, that what he’s doing could be dangerous.  

Martin, for the first time, sees Sophie differently. Not without love, but love burdened by an opposing weight, what others might recognize as resentment. Sophie is dedicated to the demands of her growing department. Martin thinks she is not taking the task of living longer seriously enough. He does not want her to die. He does not want to die. 

Nobody at Abyssinian is taking the task of living longer seriously enough. Of all the aging bio startups he could have ended up at, how has he ended up at one with such modest—no, lazy—goals? He begins publicly dismissing basic research as “too slow” and “too limited,” which offends many of his and Sophie’s colleagues. 

Sophie defends him, says he is still doing good work, despite the evidence. She is busy, traveling often for conferences, and mistakenly misclassifies the changes in Martin’s attitude as temporary outliers.

44.

One day, during a meeting, Martin says to Jerry, a well-respected scientist at Abyssinian and in the electron microscopy imaging community at large, that EM is an outdated, old, crusty technology. Martin says it is stupid to use it when there are more advanced, cutting-edge methods, like cryo-EM and super-resolution microscopy. Martin has always been outspoken, but this instance veers into rudeness. 

At home, Martin and Sophie argue. Initially, they argue about whether tools of the past can be useful to their work. Then the argument morphs. What is the true purpose of their research? Martin says it’s called anti-aging research for a reason: It’s to defy aging! Sophie says she’s never called her work anti-aging research; she calls it aging research or research into the biology of aging. And Abyssinian’s overarching mission is more simply to find druggable targets for chronic and age-related diseases. Occasionally, the company’s marketing arm will push out messaging about extending the human lifespan by 20 years, but that has nothing to do with scientists like them in R&D. Martin seethes. Only 20 years! What about hundreds? Thousands? 

45-49.

They continue to argue and the arguments are roundabout, typically ending with Sophie crying, absconding to her sister’s house, and the two of them not speaking for short periods of time.

50.

What hurts Sophie most is Martin’s persistent dismissal of death as merely an engineering problem to be solved. Sophie thinks of the ways the C. elegans she observes regulate their lifespans in response to environmental stress. The complex dance of genes and proteins that orchestrates their aging process. In the previous month’s experiment, a seemingly simple mutation produced unexpected effects across three generations of worms. Nature’s complexity still humbles her daily. There is still so much unknown. 

Martin is at the kitchen counter, methodically crushing his evening supplements into powder. “I’m trying to save humanity. And all you want to do is sit in the lab to watch worms die.”

50.

Martin blames the past. He realizes he should have tried harder to become a professor. Let Sophie make the industry money—he could have had academic clout. Professor Warwick. It would have had a nice sound to it. To his dismay, everyone in his lab calls him Martin. Abyssinian has a first-name policy. Something about flat hierarchies making for better collaboration. Good ideas could come from anyone, even a lowly, unintelligent senior associate scientist in Martin’s lab who barely understands how to process a data set. A great idea could come from anyone at all—except him, apparently. Sophie has made that clear.

51-59.

They live in a tenuous peace for some time, perfecting the art of careful scheduling: separate coffee times, meetings avoided, short conversations that stick to the day-to-day facts of their lives.

60.

Then Martin stands up to interrupt a presentation by the VP of research to announce that studying natural aging is pointless since they will soon eliminate it entirely. While Jerry may have shrugged off Martin’s aggressiveness, the VP does not. This leads to a blowout fight between Martin and many of his colleagues, in which Martin refuses to apologize and calls them all shortsighted idiots. 

Sophie watches with a mixture of fear and awe. Martin thinks: Can’t she, my wife, just side with me this once? 

61.

Back at home:

Martin at the kitchen counter, methodically crushing his evening supplements into powder. “I’m trying to save humanity.” He taps the powder into his protein shake with the precision of a scientist measuring reagents. “And all you want to do is sit in the lab to watch worms die.”

Sophie observes his familiar movements, now foreign in their desperation. The kitchen light catches the silver spreading at his temples and on his chin—the very evidence of aging he is trying so hard to erase.

“That’s not true,” she says.

Martin gulps down his shake.

“What about us? What about children?”

Martin coughs, then laughs, a sound that makes Sophie flinch. “Why would we have children now? You certainly don’t have the time. But if we solve aging, which I believe we can, we’d have all the time in the world.”

“We used to talk about starting a family.”

“Any children we have should be born into a world where we already know they never have to die.”

“We could both make the time. I want to grow old together—”

All Martin hears are promises that lead to nothing, nowhere.  

“You want us to deteriorate? To watch each other decay?”

“I want a real life.”

“So you’re choosing death. You’re choosing limitation. Mediocrity.”

64.

Martin doesn’t hear from his wife for four days, despite texting her 16 times—12 too many, by his count. He finally breaks down enough to call her in the evening, after a couple of glasses of aged whisky (a gift from a former colleague, which Martin has rarely touched and kept hidden in the far back of a desk drawer). 

Voicemail. And after this morning’s text, still no glimmering ellipsis bubble to indicate Sophie’s typing. 

66.

Forget her, he thinks, leaning back in his Steelcase chair, adjusted specifically for his long runner’s legs and shorter-than-average torso. At 39, Martin’s spreadsheets of vitals now show an upward trajectory: proof of his ability to reverse his biological age. Sophie does not appreciate this. He stares out his office window, down at the employees crawling around Abyssinian Bio’s main quad. How small, he thinks. How significantly unaware of the future’s true possibilities. Sophie is like them. 

67.

Forget her, he thinks again as he turns down a bay toward Robert, one of his struggling postdocs, who is sitting at his bench staring at his laptop. As Martin approaches, Robert minimizes several windows, leaving only his home screen behind.

“Where are you at with the NAD+ data?” Martin asks.

Robert shifts in his chair to face Martin. The skin of his neck grows red and splotchy. Martin stares at it in disgust.

“Well?” he asks again. 

“Oh, I was told not to work on that anymore?” The boy has a tendency to speak in the lilt of questions. 

“By who?” Martin demands.

“Uh, Sophie?” 

“I see. Well, I expect new data by end of day.” 

“Oh, but—”

Martin narrows his eyes. The red splotches on Robert’s neck grow larger. 

“Um, okay,” the boy says, returning his focus to the computer. 

Martin decides a response is called for …

70.

Immortality Promise

I am immortal. This doesn’t make me special. In fact, most people on Earth are immortal. I am 6,000 years old. Now, 6,000 years of existence give one a certain perspective. I remember back when genetic engineering and knowledge about the processes behind aging were still in their infancy. Oh, how people argued and protested.

“It’s unethical!”

“We’ll kill the Earth if there’s no death!”

“Immortal people won’t be motivated to do anything! We’ll become a useless civilization living under our AI overlords!” 

I believed back then, and now I know. Their concerns had no ground to stand on.

Eternal life isn’t even remarkable anymore, but being among its architects and early believers still garners respect from the world. The elegance of my team’s solution continues to fill me with pride. We didn’t just halt aging; we mastered it. My cellular machinery hums with an efficiency that would make evolution herself jealous.

Those early protesters—bless their mortal, no-longer-beating hearts—never grasped the biological imperative of what we were doing. Nature had already created functionally immortal organisms—the hydra, certain jellyfish species, even some plants. We simply perfected what evolution had sketched out. The supposed ethical concerns melted away once people understood that we weren’t defying nature. We were fulfilling its potential.

Today, those who did not want to be immortal aren’t around. Simple as that. Those who are here do care about the planet more than ever! There are almost no diseases, and we’re all very productive people. Young adults—or should I say young-looking adults—are naturally restless and energetic. And with all this life, you have the added benefit of not wasting your time on a career you might hate! You get to try different things and find out what you’re really good at and where you’re appreciated! Life is not short! Resources are plentiful!

Of course, biological immortality doesn’t equal invincibility. People still die. Just not very often. My colleagues in materials science developed our modern protective exoskeletons. They’re elegant solutions, though I prefer to rely on my enhanced reflexes and reinforced skeletal structure most days. 

The population concerns proved mathematically unfounded. Stable reproduction rates emerged naturally once people realized they had unlimited time to start families. I’ve had four sets of children across 6,000 years, each born when I felt truly ready to pass on another iteration of my accumulated knowledge. With more life, people have much more patience. 

Now we are on to bigger and more ambitious projects. We conquered survival of individuals. The next step: survival of our species in this universe. The sun’s eventual death poses an interesting challenge, but nothing we can’t handle. We have colonized five planets and two moons in our solar system, and we will colonize more. Humanity will adapt to whatever environment we encounter. That’s what we do.

My ancient motorcycle remains my favorite indulgence. I love taking it for long cruises on the old Earth roads that remain intact. The neural interface is state-of-the-art, of course. But mostly I keep it because it reminds me of earlier times, when we thought death was inevitable and life was limited to a single planet. The future stretches out before us like an infinity I helped create—yet another masterpiece in the eternal gallery of human evolution.

71.

Martin feels better after writing it out. He rereads it a couple times, feels even better. Then he has the idea to send his writing to the department administrator. He asks her to create a new tab on his lab page, titled “Immortality Promise,” and to post his piece there. That will get his message across to Sophie and everyone at Abyssinian. 

72.

Sophie’s boss, Ray, is the first to email her. The subject line: “martn” [sic]. No further words in the body. Ray is known to be short and blunt in all his communications, but his meaning is always clear. They’ve had enough conversations about Martin by then. She is already in the process of slowly shutting down his projects, has been ignoring his texts and calls because of this. Now she has to move even faster. 

73.

Sophie leaves her office and goes into the lab. As an executive, she is not expected to do experiments, but watching a thousand tiny worms crawl across their agar plates soothes her. Each of the ones she now looks at carries a fluorescent marker she designed to track mitochondrial dynamics during aging. The green glow pulses with their movements, like stars blinking in a microscopic galaxy. She spent years developing this strain of C. elegans, carefully selecting for longevity without sacrificing health. The worms that lived longest weren’t always the healthiest—a truth about aging that seemed to elude Martin. Those worms taught her more about the genuine complexity of aging. Just last week, she observed something unexpected: The mitochondrial networks in her long-lived strains showed subtle patterns of reorganization never documented before. The discovery felt intimate, like being trusted with a secret.

“How are things looking?” Jerry appears beside her. “That new strain expressing the dual markers?”

Sophie nods, adjusting the focus. “Look at this network pattern. It’s different from anything in the literature.” She shifts aside so Jerry can see. This is what she loves about science: the genuine puzzles, the patient observation, the slow accumulation of knowledge that, while far removed from a specific application, could someday help people age with dignity.

“Beautiful,” Jerry murmurs. He straightens. “I heard about Martin’s … post.”

Sophie closes her eyes for a moment, the image of the mitochondrial networks still floating in her vision. She’s read Martin’s “Immortality Promise” piece three times, each more painful than the last. Not because of its grandiose claims—those were comically disconnected from reality—but because of what it’s revealed about her husband. The writing pulsed with a frightening certainty, a complete absence of doubt or wonder. Gone was the scientist who once spent many lively evenings debating with her about the evolutionary purpose of aging, who delighted in being proved wrong because it meant learning something new. 

74.

She sees in his words a man who has abandoned the fundamental principles of science. His piece reads like a religious text or science fiction story, casting himself as the hero. He isn’t pursuing research anymore. He hasn’t been for a long time. 

She wonders how and when he arrived there. The change in Martin didn’t take place overnight. It was gradual, almost imperceptible—not unlike watching someone age. It wasn’t easy to notice if you saw the person every day; Sophie feels guilty for not noticing. Then again, she read a new study out a few months ago from Stanford researchers that found people do not age linearly but in spurts—specifically, around 44 and 60. Shifts in the body lead to sudden accelerations of change. If she’s honest with herself, she knew this was happening to Martin, to their relationship. But she chose to ignore it, give other problems precedence. Now it is too late. Maybe if she’d addressed the conditions right before the spike—but how? Wasn’t it inevitable?—he would not have gone from scientist to fanatic.

75.

“You’re giving the keynote at next month’s Gordon conference,” Jerry reminds her, pulling her back to reality. “Don’t let this overshadow that.”

She manages a small smile. Her work has always been methodical, built on careful observation and respect for the fundamental mysteries of biology. The keynote speech represents more than five years of research: countless hours of guiding her teams, of exciting discussions among her peers, of watching worms age and die, of documenting every detail of their cellular changes. It is one of the biggest honors of her career. There is poetry in it, she thinks—in the collisions between discoveries and failures. 

76.

The knock on her office door comes at 2:45. Linda from HR, right on schedule. Sophie walks with her to conference room B2, two floors below, where Martin’s group resides. Through the glass walls of each lab, they see scientists working at their benches. One adjusts a microscope’s focus. Another pipettes clear liquid into rows of tubes. Three researchers point at data on a screen. Each person is investigating some aspect of aging, one careful experiment at a time. The work will continue, with or without Martin.

In the conference room, Sophie opens her laptop and pulls up the folder of evidence. She has been collecting it for months. Martin’s emails to colleagues, complaints from collaborators and direct reports, and finally, his “Immortality Promise” piece. The documentation is thorough, organized chronologically. She has labeled each file with dates and brief descriptions, as she would for any other data.

77.

Martin walks in at 3:00. Linda from HR shifts in her chair. Sophie is the one to hand the papers over to Martin; this much she owes him. They contain words like “termination” and “effective immediately.” Martin’s face complicates itself when he looks them over. Sophie hands over a pen and he signs quickly.  

He stands, adjusts his shirt cuffs, and walks to the door. He turns back.

“I’ll prove you wrong,” he says, looking at Sophie. But what stands out to her is the crack in his voice on the last word. 

Sophie watches him leave. She picks up the signed papers and hands them to Linda, and then walks out herself. 

Alexandra Chang is the author of Days of Distraction and Tomb Sweeping and is a National Book Foundation 5 under 35 honoree. She lives in Camarillo, California.

The Download: how the military is using AI, and AI’s climate promises

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Generative AI is learning to spy for the US military

For much of last year, US Marines conducting training exercises in the waters off South Korea, the Philippines, India, and Indonesia were also running an experiment. The service members in the unit responsible for sorting through foreign intelligence and making their superiors aware of possible local threats were for the first time using generative AI to do it, testing a leading AI tool the Pentagon has been funding.

Two officers tell us that they used the new system to help scour thousands of pieces of open-source intelligence—nonclassified articles, reports, images, videos—collected in the various countries where they operated, and that it did so far faster than was possible with the old method of analyzing them manually.

Though the US military has been developing computer vision models and similar AI tools since 2017, the use of generative AI—tools that can engage in human-like conversation—represents a newer frontier. Read the full story.

—James O’Donnell

Why the climate promises of AI sound a lot like carbon offsets 

The International Energy Agency states in a new report that AI could eventually reduce greenhouse-gas emissions, possibly by much more than the boom in energy-guzzling data center development pushes them up.

The finding echoes a point that prominent figures in the AI sector have made as well to justify, at least implicitly, the gigawatts’ worth of electricity demand that new data centers are placing on regional grid systems across the world.

There’s something familiar about the suggestion that it’s okay to build data centers that run on fossil fuels today because AI tools will help the world drive down emissions eventually—it recalls the purported promise of carbon credits. Unfortunately, we’ve seen again and again that such programs often overstate any climate benefits, doing little to alter the balance of what’s going into or coming out of the atmosphere. Read the full story.

—James Temple

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 MAGA influencers are downplaying Trump’s market turmoil
They’re finding creative ways to frame the financial tumult as character building. (WP $)
+ Some Democrats are echoing his trade myths, too. (Vox)

2 Amazon products are going to cost more
CEO Andy Jassy says he anticipates third-party sellers passing the costs introduced by tariffs on to their customers. (CNBC)
+ He says the company has been renegotiating terms with sellers. (CNN)

3 OpenAI has slashed its model safety testing time
Which experts worry will mean it rushes out models without sufficient safeguarding. (FT $)
+ Why we need an AI safety hotline. (MIT Technology Review)

4 A woman gave birth to a stranger’s baby in an IVF mixup
Monash IVF transferred another woman’s embryo to her by accident. (The Guardian)
+ Inside the strange limbo facing millions of IVF embryos. (MIT Technology Review)

5 Amazon equipped some of its delivery vans in Europe with defibrillators 
In an experiment to see if drivers could speed up help to heart attack patients. (Bloomberg $)

6 The future of biotech is looking shaky
RFK Jr’s appointment and soaring interest rates are rocking an already volatile industry. (WSJ $)
+ Meanwhile, RFK Jr has visited the families of two girls who died from measles. (The Atlantic $)

7 Alexandre de Moraes isn’t backing down
The Brazilian judge, who has butted heads with Elon Musk, is worried about extremist digital populism. (New Yorker $)

8 An experimental pill mimics the effects of gastric bypass surgery
And could be touted as an alternative to weight-loss drugs. (Wired $)
+ Drugs like Ozempic now make up 5% of prescriptions in the US. (MIT Technology Review)

9 What happens when video games start bleeding into the real world
Game Transfer Phenomenon is a real thing, and nowhere near as fun as it sounds. (BBC)
+ How generative AI could reinvent what it means to play. (MIT Technology Review)

10 Londoners smashed up a Tesla in a public art project 
The car was provided by an anonymous donor. (The Guardian)
+ Proceeds from the installation will go to food banks in the UK. (The Standard)

Quote of the day

“It feels so good to be surrounded by a bunch of people who disconnected.”

—Steven Vernon III, who works in finance, describing the beauties of a digital detox at the Masters in Augusta, Georgia, as the markets descend into chaos, the Wall Street Journal reports.

The big story

This scientist is trying to create an accessible, unhackable voting machine

For the past 19 years, computer science professor Juan Gilbert has immersed himself in perhaps the most contentious debate over election administration in the United States—what role, if any, touch-screen ballot-marking devices should play in the voting process.

While advocates claim that electronic voting systems can be relatively secure, improve accessibility, and simplify voting and vote tallying, critics have argued that they are insecure and should be used as infrequently as possible.

As for Gilbert? He claims he’s finally invented “the most secure voting technology ever created.” And he’s invited several of the most respected and vocal critics of voting technology to prove his point. Read the full story.

—Spenser Mestel

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Bad news for hoodie lovers: your favorite comfy item of clothing is no longer cutting the mustard.
+ What happens inside black holes? A lot more than you might think.
+ Unfortunately, pushups are as beneficial for you as they are horrible to execute.
+ Very cool—archaeologists are making new discoveries in Pompeii.