The State Of AI In Marketing: 6 Key Findings From Marketing Leaders via @sejournal, @theshelleywalsh

AI is being rapidly implemented, but that doesn’t mean it’s being used effectively.

The current lack of clear benchmarks and data about AI usage has meant that everyone has been operating in the dark.

This led us to create our first State Of AI In Marketing report, so that chief marketing officers and marketing decision-makers can have insights to make better informed decisions as they navigate the fast-moving developments in our industry.

We asked eight key questions about generative AI in marketing to a selection of U.S.-based decision-makers and leaders.

We received 155 responses, mostly from senior marketers, directors, and C-suite executives, offering fresh insights into how industry leaders perceive AI and how they are using it right now.

While some marketers are unlocking major gains in efficiency, others are struggling with poor output quality, lack of brand voice consistency, and legal uncertainties.

Our whitepaper presents their responses, broken down across five core themes:

  • Which AI tools are most broadly adopted among marketers.
  • How marketers are using AI.
  • The results they’re seeing.
  • The challenges they face.
  • Where they plan to invest next.

Whether you’re leading a team or building a roadmap, this report is designed to help you benchmark your AI strategy to make confident decisions as our industry moves at an unprecedented pace.

6 Key Findings From The Report

1. ChatGPT Is Currently The Dominant Tool

Over 83% of marketers said ChatGPT has positively impacted their efficiency or effectiveness.

But it’s not the only player: Tools like Gemini, Perplexity, Claude, and Canva AI are also making their mark, with many marketers assembling AI “stacks” that combine different strengths across platforms.

2. Content Has Seen The Most Tangible Impact From AI

Unsurprisingly, the area where our respondents have seen the most impact so far is content: 64.5% reported value from content creation, 43.9% from content optimization, and 43.9% from idea generation.

However, AI is not replacing creativity; it’s augmenting it. Marketing teams are using tools to speed up, optimize and break through creative blocks, not to replace human insight.

The report also shares other key areas where AI has delivered the most value to those surveyed.

3. Time Savings Were The Single Most Improved Outcome

The majority of respondents (76.8%) cited time savings as the biggest improvement since adopting AI.

By enhancing productivity and efficiency, marketers are gaining hours back that they can reallocate to more strategic work.

4. Direct ROI-Linked Results Are Lacking

While operational efficiency is clearly impacted, strategic metrics like customer lifetime value, lead quality, and attribution remain largely unchanged.

In other words, AI is streamlining how we work, but not necessarily improving what we deliver without human oversight and a sound strategy.

5. Output Quality Remains A Top Concern

More than half (54.2%) of respondents identified inaccurate, unreliable, or inconsistent output quality as the biggest limitation in using AI for marketing.

This highlights a central theme that AI still requires substantial human oversight to produce marketing-ready content.

6. Misinformation Is The No. 1 Concern

The most cited concern about AI’s rise in marketing wasn’t job loss; it was the risk of misinformation.

A full 62.6% of respondents flagged AI-generated misinformation as their top worry, revealing the importance of trust, accuracy, and reputation for AI-powered content.

The report also highlights the other areas of concern where marketers are experiencing limitations and inefficiencies.

More Key Findings In The State Of AI Report

Marketing Leaders Are Planning To Invest In These Key Areas

Marketing decision-makers surveyed are prioritizing AI investments where value has already been proven. The report breaks down how that investment is distributed across analytics, customer experience, SEO, marketing attribution, and content production, among other areas.

How Marketing Leaders Are Restructuring Their Teams

The report findings also indicate whether and how our respondents restructured to accommodate AI within their organization.

Where The Biggest Impact Will Be Over The Next 12 Months

Possibly the most insightful section is where respondents shared their thoughts on what AI’s biggest impact on marketing will be over the next 12 months.

Many expect a content explosion, where the market is flooded with AI-generated assets, raising the bar for originality and quality.

Others foresee a reshaped search industry and reduced roles, with those who don’t embrace AI getting left behind.

But, not all forecasts are negative. Several marketers believe AI will level the playing field for small businesses, increase access to high-quality tools, and empower individuals to do the work of many.

You can find many more comments and predictions in the full report.

The State Of AI In Marketing Report For 2025 Can Help Shed Light

Right now is one of the most challenging times our industry has faced, and marketing leaders have hard decisions to make.

Hopefully, this whitepaper will help to shed light on how and where leaders can move forward.

In the report, you can find:

  • A real-world look at what tools marketers are using now and how they’re stacking them.
  • Insights into which areas of marketing AI is transforming, and where it’s still falling short.
  • A clear view into future investments, from personalization to performance tracking.
  • Actionable recommendations for how to evaluate, deploy, and govern AI effectively.
  • Perspectives from marketers navigating AI’s challenges.

Download the full AI in marketing survey report to make confident decisions in your AI implementation strategy.


Featured Image: Paulo Bobita/Search Engine Journal

Human-Centered Marketing: The Right Message To The Right People

This edited excerpt is from Human-Centered Marketing by Ashley Faus ©2025 and is reproduced and adapted with permission from Kogan Page Ltd.

Mapping content to the traditional funnel adds no value, since the journey behaves more like a playground than a linear progression.

Pitfalls of the traditional funnel include assuming that every person in the audience intends to and will become a customer, underestimating the sophistication of the buyer, and offering limited options for post-purchase retention strategies.

Building a seamless, delightful journey on a foundation of trust means that we must fundamentally rethink our framework for the audience journey.

Most marketers are familiar with the traditional funnel to outline the buyer’s journey, focusing on three key phases: awareness, consideration, and decision.

The funnel assumes that the audience journey begins with awareness, when, in fact, the audience journey begins long before marketers recognize that this person is on a journey.

Introducing The Playground

We need to think about the journey as a playground: people can go up, down, sideways, and around. They can go to the equipment (content) in any order. They can enter and exit as they please. And they can use the content in the “wrong” way.

How many times do you force your audience to go through unnecessary steps because you’re trying to make them buy when they’re not ready; or, worse, you add friction to the buying process because you need to check the boxes on providing a white paper and a demo and a case study?

Why does this happen? It stems from the idea that we need to push prospects down the funnel to become leads and keep moving them through until they become customers.

Pitfalls Of The Customer Journey Map

Ultimately, every customer journey map ends with the prospect becoming a lead and deciding to purchase the company’s offering. This linear journey map ignores retention, cross-sell, up-sell, and expansion opportunities.

While the looping journey does, at least, acknowledge the additional post-purchase phase, it fails to capture the complexity.

For example, many software-as-a-service companies have user limits for different tiers of their product offerings.

With the rise of product-led growth (PLG) as a key go-to-market motion, many SaaS products include a free tier, with user gates, feature gates, or both, to prompt free users to become paying customers.

Traditional journey maps obscure the messy middle of the customer journey, with weird hacks to stay under the user limit, lengthy negotiations on larger contracts for seat expansion, and fighting competitors when it comes time for the customer to renew.

This highlights another pitfall of the funnel and the journey maps built to move people through it: it’s a retrospective measurement tool, not a forward-looking strategy tool.

If you map the journeys of people who did, in fact, become customers, then you are correct in starting with a prospect having a problem, searching for a solution, and ultimately, choosing your offering.

If, however, you are trying to build a net-new audience, hone your narratives to resonate with that audience, and map your content and distribution strategy, you can’t simply look at what happened in the past, on owned platforms, in the condensed time period where the buying process “officially” started.

In addition, consider a post-purchase scenario where the marketer makes the journey more difficult for a customer precisely because they want to track the interaction in more detail.

Collecting more information gives us a false sense of security. If we know just a bit more about this person, surely we can convince them to spend more money.

Traditional funnel models also fail to recognize the differences between a user and an economic buyer. Many marketers recognize that buying involves multiple different people, but they assume that each stakeholder joins the process in a linear way.

For example, in larger companies, an economic buyer might need to go through a procurement process that includes a security assessment, compliance checklist, and legal or contract review before bringing in a new tool.

The linear funnel assumes that these stakeholders need to be addressed in the “decision” phase of the buying process.

And yet, ask anyone who’s been through procurement in a large enterprise, and they’ll tell you that it’s difficult, and, often, a deterrent to even starting a buying process.

In order to convince me to buy, you need to convince me that I’ll be able to buy. If you make it easy for me to make it through the procurement process, I’m much more likely to choose you as a vendor because I know that I’ll be successful in completing the process.

Consider another scenario, where individual teams are empowered to purchase tools and services on their own. These teams are all in the “post-purchase” phase.

At some point, the invoices might be large enough to warrant consolidation, which might trigger a wider vendor review. In that case, you’ve won over many users, but the economic buyer is now in the “awareness” phase, as they’ve just discovered you as a vendor.

Or, they might need to be convinced that solving this problem should continue to be a priority at all. Alternatively, the buyers might immediately move to the “consideration” phase by opening a request for proposal (RFP) or researching competitors.

Maybe they’ve bought in on the problem, but they want to explore different solutions. They might need to learn about different possible solutions, even though there’s already a vendor solving this problem.

Once you decide to consolidate a contract, the spend might be big enough to require a more thorough vetting by the procurement, security, compliance, and legal teams.

At this point, who knows which phase of the funnel you’re in? Is it “retention” with the users who no longer actually have buying power?

Is it awareness or consideration with the economic buyer? Is it awareness or decision with teams who have the ability to block the deal, but aren’t the economic buyer?

As you can see, attempting to map content to a linear funnel by also mapping linear personas becomes quite a challenge!

These scenarios also minimize or ignore the sophistication of the buyer. In a B2B (business-to-business) context, most buyers are quite sophisticated. They’re well-versed in the problem space, and might have purchased solutions in the past.

They’re equipped to do their own research, and often prefer working through the initial vetting phases before reaching out to a company to initiate a buying process.

In fact, TrustRadius found that, in 2021, 43% of buyers reported consulting with vendor representatives, and that number dropped to an average of one out of four buyers in all but the largest deal sizes.

Instead, buyers preferred to conduct their own research, with a bias towards non-vendor-provided material.

Buyers favored free trials or accounts (56%), user reviews (55%), and community forums (37%) over vendor-provided materials such as customer references (15%), blogs (14%), and marketing collateral (14%).[1]

This trend continued in a 2024 report from 6sense, a company that arms revenue teams with data to accelerate deal conversions. It found that, when B2B buyers directly engage sellers, they are already 70% through their buying process.[2]

We see over and over that, by the time a marketer becomes aware that someone is in the buying process, they’re significantly behind the buyer’s knowledge of the problem space, research into the solution space, and affinity for a select list of solution providers.

They’re not coming to the company website cold, or blindly reaching out to a salesperson.

Instead, they’ve consulted a curated list of trusted sources, including conversations with their personal network, crowdsourcing information, and recommendations from peers on social media and forums, and they’ve read about the pros and cons of different providers from people like themselves.

To read the full book, SEJ readers have an exclusive 25% discount code and free shipping to the US and UK. Use promo code SEJ25 at koganpage.com.



[1] TrustRadius (2022), 2022 Buying Disconnect: The Age of the Self-Serve Buyer, go.trustradius.com/rs/827-FOI-687/images/TrustRadius_2022_B2B_Buying_Disconnect_6.27.22.pdf (archived at https://perma.cc/TG6X-UU8T)

[2] 6sense Research (2023), Out of Sight, Almost Out of Time: The 2023 6sense B2B Buyer Experience Report, 6sense, 6sense.com/report/buyer-experience (archived at https://perma.cc/XJ3Z-ULJ4)


Featured Image: Natalya Kosarevich/Shutterstock

The Smart SEO Team’s Guide To Timing & Executing A Large-Scale Site Migration via @sejournal, @inmotionhosting

This post was sponsored by InMotion Hosting. The opinions expressed in this article are the sponsor’s own.

We’ve all felt it, that sinking feeling in your stomach when your site starts crawling instead of sprinting.

Page speed reports start flashing red. Search Console is flooding your inbox with errors.

You know it’s time for better hosting, but here’s the thing: moving a large website without tanking your SEO is like trying to change tires while your car is still moving.

We’ve seen too many migrations go sideways, which is why we put together this guide.

Let’s walk through a migration plan that works. One that’ll future-proof your site without disrupting your rankings or overburdening your team.


Step 1: Set Your Performance Goals & Audit Your Environment

Establish Performance Benchmarks

Before you touch a single line of code, you need benchmarks. Think of these as your “before” pictures in a website makeover.

If you skip this step, you’ll regret it later. How will you know if your migration was successful if you don’t know where you started?

Gather your current page speed numbers, uptime percentages, and server response times. These will serve as proof that the migration was worth it.

Document Current Site Architecture

Next, let’s identify what’s working for your site and what’s holding it back. Keep a detailed record of your current setup, including your content management system (CMS), plugins, traffic patterns, and peak periods.

Large sites often have unusual, hidden connections that only reveal themselves at the worst possible moments during migrations. Trust us, documenting this now prevents those 2 AM panic attacks later.

Define Your Website Migration Goals

Let’s get specific about what success looks like. Saying “we want the site to be faster” is like saying “we want more leads.” It sounds great, but how do you measure it?

Aim for concrete targets, such as:

  • Load times under 2 seconds on key pages (we like to focus on product pages first).
  • 99.99% uptime guarantees (because every minute of downtime is money down the drain).
  • Server response times under 200ms.
  • 30% better crawl efficiency (so Google sees your content updates).

We recommend running tests with Google Lighthouse and GTmetrix at different times of day. You’d be surprised how performance can vary between your morning coffee and afternoon slump.

Your top money-making pages deserve special attention during migration, so keep tabs on those.
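
If you want to keep tabs on those pages in between Lighthouse runs, a small script can log a baseline to diff against after the cutover. Here is a minimal sketch in Python: the URLs and the output file are placeholders for your own top pages, and it only records simple full-response fetch times, so treat it as a supplement to Lighthouse and GTmetrix scores, not a replacement.

```python
"""Record a simple pre-migration baseline for key pages.

Minimal sketch: KEY_PAGES and baseline.csv are placeholders. Lighthouse and
GTmetrix give richer data; this only captures response times you can compare
against the same script's output after the cutover.
"""
import csv
import time
from datetime import datetime, timezone
from urllib.request import urlopen

KEY_PAGES = [  # hypothetical examples; list your own money-making pages
    "https://www.example.com/",
    "https://www.example.com/products/best-seller",
]

def measure(url: str, attempts: int = 3) -> float:
    """Average time in seconds to fetch the full response body."""
    timings = []
    for _ in range(attempts):
        start = time.perf_counter()
        with urlopen(url) as response:
            response.read()
        timings.append(time.perf_counter() - start)
    return sum(timings) / len(timings)

if __name__ == "__main__":
    with open("baseline.csv", "a", newline="") as handle:
        writer = csv.writer(handle)
        for page in KEY_PAGES:
            average = measure(page)
            writer.writerow([datetime.now(timezone.utc).isoformat(), page, f"{average:.3f}"])
            print(f"{page}: {average:.3f}s average over 3 requests")
```

Run it a few times a day during the week before migration so the baseline reflects both the morning-coffee and afternoon-slump traffic patterns.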

Step 2: Choose The Right Hosting Fit

Not all hosting options can handle the big leagues.

We’ve seen too many migrations fail because someone picked a hosting plan better suited for a personal blog than an enterprise website.

Match Your Needs To Solutions

Let’s break down what we’ve found works best.

Managed VPS is excellent for medium-sized sites. If you’re receiving 100,000 to 500,000 monthly visitors, this might be your sweet spot. You’ll have the control you need without the overkill.

Dedicated servers are what we recommend for the major players. If you’re handling millions of visitors or running complex applications, this is for you.

What we appreciate about dedicated resources is that they eliminate the “noisy neighbor” problem, where someone else’s traffic spike can tank your performance. Enterprise sites on dedicated servers load 40-60% faster and rarely experience those resource-related outages.

WordPress-optimized hosting is ideal if you’re running WordPress. These environments come pre-tuned with built-in caching and auto-updates. Why reinvent the wheel, right?

Understand The Must-Have Features Checklist

Let’s talk about what your web hosting will need for SEO success.


NVMe SSDs are non-negotiable these days. They’re about six times faster than regular storage for database work, and you’ll feel the difference immediately.

A good CDN is essential if you want visitors from different regions to have the same snappy experience. Server-level caching makes a huge difference, as it reduces processing work and speeds up repeat visits and search crawls.

Illustration: how caching works on a website. Image created by InMotion Hosting, June 2025.

Staging environments aren’t optional for big migrations. They’re your safety net. Keep in mind that emergency fixes can cost significantly more than setting up staging beforehand.

And please ensure you have 24/7 migration support from actual humans. Not chatbots, real engineers who answer the phone when things go sideways at midnight.

Key Considerations for Growth

Think about where your site is headed, not just where it is now.

Are you launching in new markets? Planning a big PR push? Your hosting should handle growth without making you migrate again six months later.

One thing that often gets overlooked: redirect limits. Many platforms cap at 50,000-100,000 redirects, which sounds like a lot until you’re migrating a massive product catalog.

Step 3: Prep for Migration – The Critical Steps

Preparation separates smooth migrations from disasters. This phase makes or breaks your project.

Build Your Backup Strategy

First things first: backups, backups, backups. We’re talking complete copies of both files and databases.

Don’t dump everything into one giant folder labeled “Site Stuff.” Organize backups by date and type. Include the entire file system, database exports, configuration files, SSL certificates, and everything else.

Here’s a mistake we see all too often: not testing the restore process before migration day. A backup you can’t restore is wasted server space. Always conduct a test restore on a separate server to ensure everything works as expected.
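
One lightweight way to prove a test restore matches the original is to checksum the backup set before and after. The sketch below is a hedged example, assuming your backup job has already exported everything into a dated directory (the path is a placeholder): run it against the original backup, run it again against the restored copy, and diff the two manifests.

```python
"""Build a checksum manifest for a backup set.

Minimal sketch: the /backups/<date>/<type> layout is an assumption, not a
requirement. Re-run against a restored copy and diff the manifests to confirm
the restore is byte-for-byte complete.
"""
import hashlib
from datetime import date
from pathlib import Path

BACKUP_DIR = Path("/backups") / str(date.today()) / "site"  # hypothetical layout

def sha256(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def write_manifest(root: Path) -> Path:
    manifest = root / "manifest.sha256"
    with manifest.open("w") as out:
        for item in sorted(root.rglob("*")):
            if item.is_file() and item != manifest:
                out.write(f"{sha256(item)}  {item.relative_to(root)}\n")
    return manifest

if __name__ == "__main__":
    print(f"Wrote {write_manifest(BACKUP_DIR)}")
```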

Set Up the New Environment and Test in Staging

Your new hosting environment should closely mirror your production environment. Match PHP versions, database settings, security rules, everything. This isn’t the time to upgrade seven different things at once (we’ve seen that mistake before).

Run thorough pre-launch tests on staging. Check site speed on different page types. Pull out your phone and verify that the mobile display works.

Use Google’s testing tools to confirm that your structured data remains intact. The goal is no surprises on launch day.

Map Out DNS Cutover and Minimize TTL for a Quick Switch

DNS strategy might sound boring, but it can make or break your downtime window.

Here’s what works: reduce your TTL to 300 seconds (five minutes) or lower about 48 hours before migration. This makes DNS changes propagate quickly when you flip the switch.

Have all your DNS records prepared in advance: A records, CNAMEs for subdomains, MX records for email, and TXT records for verification. Keep a checklist and highlight the mission-critical ones that would cause panic if forgotten.
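
To confirm the lowered TTL has actually propagated, and later to confirm resolvers are serving the new values, you can spot-check records from a script rather than eyeballing a DNS panel. Here is a minimal sketch using the third-party dnspython package; the domain and record list are placeholders for your own checklist.

```python
"""Spot-check DNS TTLs and record values before and after cutover.

Minimal sketch using dnspython (pip install dnspython). The records below are
placeholders; extend the list with every entry on your DNS checklist.
"""
import dns.resolver

RECORDS = [  # hypothetical record set
    ("example.com", "A"),
    ("www.example.com", "CNAME"),
    ("example.com", "MX"),
    ("example.com", "TXT"),
]

def check(name: str, rdtype: str) -> None:
    try:
        answer = dns.resolver.resolve(name, rdtype)
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN) as exc:
        print(f"{name} {rdtype}: no usable answer ({exc.__class__.__name__})")
        return
    values = ", ".join(rdata.to_text() for rdata in answer)
    print(f"{name} {rdtype}: ttl={answer.rrset.ttl}s -> {values}")

if __name__ == "__main__":
    for record_name, record_type in RECORDS:
        check(record_name, record_type)
```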

Freeze Non-Essential Site Updates Before Migration

This might be controversial, but we’re advocates for freezing all content and development changes for at least 48 hours before migration.

The last thing you need is someone publishing a new blog post right as you’re moving servers.

You can use this freeze time for team education. It’s a perfect moment to run workshops on technical SEO or explain how site speed affects rankings. Turn downtime into learning time.

Step 4: Go-Live Without the Guesswork

Migration day! This is where all your planning pays off, or where you realize what you forgot.

Launch Timing Is Everything

Choose your timing carefully. You should aim for when traffic is typically lowest.

For global sites, consider the “follow-the-sun” approach. This means migrating region by region during their lowest traffic hours. While it takes longer, it dramatically reduces risk.

Coordinate Your Teams

Clear communication is everything. Everyone should know exactly what they’re doing and when.

Define clear go/no-go decision points. Who makes the call if something looks off? What’s the threshold for rolling back vs. pushing through?

Having these conversations before you’re in the middle of a migration saves a ton of stress.

Live Performance Monitoring

Once you flip the switch, monitoring becomes your best friend. Here are the key items to monitor:

  • Watch site speed across different page types and locations.
  • Set up email alerts for crawl errors in Search Console.
  • Monitor 404 error rates and redirect performance.

Sudden spikes in 404 errors or drops in speed need immediate attention. They’re usually signs that something didn’t migrate correctly.

The faster you catch these issues, the less impact they’ll have on your rankings.

Post-Migration Validation

After launch, run through a systematic checklist:

  • Test redirect chains (we recommend Screaming Frog for this; a scripted spot-check follows this checklist).
  • Make sure internal links work.
  • Verify your analytics tracking (you’d be surprised how often this breaks).
  • Check conversion tracking.
  • Validate SSL certificates.
  • Watch server logs for crawl issues.
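
Here is the scripted redirect spot-check mentioned in the checklist. It is a minimal sketch, not a replacement for a full crawl: it assumes you keep a simple CSV of old-URL/expected-URL pairs (the filename is a placeholder) and uses the third-party requests package to flag missing redirects, redirect chains, and 404s.

```python
"""Spot-check redirects and 404s after go-live.

Minimal sketch using requests (pip install requests). redirect_map.csv is a
placeholder: each row is "old_url,expected_final_url".
"""
import csv

import requests

def audit(mapping_file: str = "redirect_map.csv") -> None:
    with open(mapping_file, newline="") as handle:
        for old_url, expected_url in csv.reader(handle):
            response = requests.get(old_url, allow_redirects=True, timeout=15)
            hops = [r.status_code for r in response.history]
            if response.status_code == 404:
                print(f"BROKEN  {old_url} -> 404")
            elif response.url.rstrip("/") != expected_url.rstrip("/"):
                print(f"WRONG   {old_url} -> {response.url} (expected {expected_url})")
            elif len(hops) > 1:
                print(f"CHAIN   {old_url} -> {hops} -> {response.url}")
            else:
                print(f"OK      {old_url} -> {response.status_code} via {hops}")

if __name__ == "__main__":
    audit()
```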

One step people often forget: resubmitting your sitemap in Search Console as soon as possible. This helps Google discover your new setup faster.

Even with a perfect migration, most large sites take 3-6 months for complete re-indexing, so patience is key.

Step 5: Optimize, Tune, and Report: How To Increase Wins

The migration itself is just the beginning. Post-migration tuning is where the magic happens.

Fine-Tune Your Configuration

Now that you’re observing real traffic patterns, you can optimize your setup.

Start by enhancing caching rules based on actual user behavior. Adjust compression settings, and optimize those database queries that seemed fine during testing but are sluggish in production.

Handling redirects at the server level, rather than through plugins or CMS settings, is faster and reduces server load.

Automate Performance Monitoring

Set up alerts for issues before they become problems. We recommend monitoring the following (a minimal alerting sketch follows this list):

  • Page speed drops by over 10%.
  • Uptime drops.
  • Changes in crawl rates.
  • Spikes in server resource usage.
  • Organic traffic drops by over 20%.
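
Here is the minimal alerting sketch mentioned above, covering the first two checks (speed drops and uptime). The baseline numbers, URLs, and the alert hook are placeholders; in a real setup you would wire alert() into email, Slack, or your monitoring platform and run the script on a schedule such as cron.

```python
"""Alert when live performance drifts from the pre-migration baseline.

Minimal sketch: BASELINE_SECONDS holds hypothetical pre-migration averages
(for example, from the Step 1 baseline script) and alert() is a stub.
"""
import time
from urllib.error import URLError
from urllib.request import urlopen

BASELINE_SECONDS = {  # placeholder URLs and baseline timings
    "https://www.example.com/": 0.42,
    "https://www.example.com/products/best-seller": 0.61,
}
SPEED_DROP_THRESHOLD = 0.10  # alert on a slowdown of more than 10%

def alert(message: str) -> None:
    # Stub: send to email, Slack, or a pager service in production.
    print(f"ALERT: {message}")

def check_page(url: str, baseline: float) -> None:
    start = time.perf_counter()
    try:
        with urlopen(url, timeout=15) as response:
            response.read()
    except URLError as exc:
        alert(f"{url} is unreachable: {exc}")
        return
    elapsed = time.perf_counter() - start
    if elapsed > baseline * (1 + SPEED_DROP_THRESHOLD):
        alert(f"{url} took {elapsed:.2f}s vs {baseline:.2f}s baseline")

if __name__ == "__main__":
    for page, baseline_value in BASELINE_SECONDS.items():
        check_page(page, baseline_value)
```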

Automation saves you from constantly checking dashboards, allowing you to focus on improvements instead of firefighting.

Analyze for SEO Efficiency

Server logs tell you a lot about how well your migration went from an SEO perspective. Look for fewer crawl errors, faster Googlebot response times, and better crawl budget usage.

Improvements in crawl efficiency mean Google can discover and index your new content much faster.

Measure and Report Success

Compare your post-migration performance to those baseline metrics you wisely collected.

When showing results to executives, connect each improvement to business outcomes. For example:

  • “Faster pages reduced our bounce rate by 15%, which means more people are staying on the site.”
  • “Better uptime means we’re not losing sales during peak hours.”
  • “Improved crawl efficiency means our new products get indexed faster.”

Pro tip: Build easy-to-read dashboards that executives can access at any time. This helps build confidence and alleviate concerns.

Ready to Execute Your High-Performance Migration?

You don’t need more proof that hosting matters. Every slow page load and server hiccup already demonstrates it. What you need is a plan that safeguards your SEO investment while achieving tangible improvements.

This guide provides you with that playbook. You now know how to benchmark, choose the right solutions, and optimize for success.

This approach can be applied to sites of all sizes, ranging from emerging e-commerce stores to large enterprise platforms. The key lies in preparation and partnering with the right support team.

If you’re ready to take action, consider collaborating with a hosting provider that understands the complexities of large-scale migrations. Look for a team that manages substantial redirect volumes and builds infrastructure specifically for high-traffic websites. Your future rankings will thank you!

Image Credits

Featured Image: Image by InMotion Hosting. Used with permission.

In-Post Image: Images by InMotion Hosting. Used with permission.

See the stunning first images from the Vera C. Rubin Observatory

The first spectacular images taken by the Vera C. Rubin Observatory have been released for the world to peruse: a panoply of iridescent galaxies and shimmering nebulas. “This is the dawn of the Rubin Observatory,” says Meg Schwamb, a planetary scientist and astronomer at Queen’s University Belfast in Northern Ireland.

Much has been written about the observatory’s grand promise: to revolutionize our understanding of the cosmos by revealing a once-hidden population of far-flung galaxies, erupting stars, interstellar objects, and elusive planets. And thanks to its unparalleled technical prowess, few doubted its ability to make good on that. But over the past decade, during its lengthy construction period, “everything’s been in the abstract,” says Schwamb.

Today, that promise has become a staggeringly beautiful reality. 

Rubin’s view of the universe is unlike any that preceded it—an expansive vision of the night sky replete with detail, including hazy envelopes of matter coursing around galaxies and star-paved bridges arching between them. “These images are truly stunning,” says Pedro Bernardinelli, an astronomer at the University of Washington.

During its brief perusal of the night sky, Rubin even managed to spy more than 2,000 never-before-seen asteroids, demonstrating that it should be able to spotlight even the sneakiest denizens, and darkest corners, of our own solar system.

A small section of the Vera C. Rubin Observatory’s view of the Virgo Cluster. Three merging galaxies can be seen on the upper right. The view also includes two striking spiral galaxies (lower right), distant galaxies, and many Milky Way stars.
NSF-DOE VERA C. RUBIN OBSERVATORY

Today’s reveal is a mere amuse-bouche compared with what’s to come: Rubin, funded by the US National Science Foundation and the Department of Energy, is set for at least 10 years of planned observations. But this moment, and these glorious inaugural images, are worth celebrating for what they represent: the culmination of over a decade of painstaking work. 

“This is a direct demonstration that Rubin is no longer in the future,” says Bernardinelli. “It’s the present.”

The observatory is named after the late Vera Rubin, an astronomer who uncovered strong evidence for dark matter, a mysterious and as-yet-undetected something that’s binding galaxies together more strongly than the gravity of ordinary, visible matter alone can explain. Trying to make sense of dark matter—and its equally mysterious, universe-stretching cousin, dubbed dark energy—is a monumental task, one that cannot be addressed by just one line of study or scrutiny of one type of cosmic object.

That’s why Rubin was designed to document anything and everything that shifts or sparkles in the night sky. Sitting atop Chile’s Cerro Pachón mountain range, it boasts a 7,000-pound, 3,200-megapixel digital camera that can take detailed snapshots of a large patch of the night sky; a house-size cradle of mirrors that can drink up extremely distant and faint starlight; and a maze of joints and pistons that allow it to swivel about with incredible speed and precision. A multinational computer network permits its sky surveys to be largely automated, its images speedily processed, any new objects easily detected, and the relevant groups of astronomers quickly alerted.

All that technical wizardry allows Rubin to take a picture of the entire visible night sky once every few days, filling in the shadowed gaps and unseen activity between galaxies. “The sky [isn’t] static. There are asteroids zipping by, and supernovas exploding,” says Yusra AlSayyad, Rubin’s overseer of image processing. By conducting a continuous survey over the next decade, the facility will create a three-dimensional movie of the universe’s ever-changing chaos that could help address all sorts of astronomic queries. What were the very first galaxies like? How did the Milky Way form? Are there planets hidden in our own solar system’s backyard?

Rubin’s first glimpse of the firmament is predictably bursting with galaxies and stars. But the resolution, breadth, and depth of the images have taken astronomers aback. “I’m very impressed with these images. They’re really incredible,” says Christopher Conselice, an extragalactic astronomer at the University of Manchester in England.

One shot, created from 678 individual exposures, showcases the Trifid and Lagoon nebulas—two oceans of luminescent gas and dust where stars are born. Others depict a tiny portion of Rubin’s view of the Virgo Cluster, a zoo of galaxies. Hues of blue are coming from relatively nearby whirlpools of stars, while red tints emanate from remarkably distant and primeval galaxies. 

The rich detail in these images is already proving to be illuminating. “As galaxies merge and interact, the galaxies are pulling stars away from each other,” says Conselice. This behavior can be seen in plumes of diffuse light erupting from several galaxies, creating halos around them or illuminated bridges between them—records of these ancient galaxies’ pasts.

Images like these are also likely to contain several supernovas, the explosive final moments of sizable stars. Not only do supernovas seed the cosmos with all the heavy elements that planets—and life—rely on, but they can also hint at how the universe has expanded over time. 

Anais Möller, an astrophysicist at the Swinburne University of Technology in Melbourne, Australia, is a supernova hunter. “I search for exploding stars in very far away galaxies,” she says. Older sky surveys have found plenty, but they can lack context: You can see the explosion, but not what galaxy it’s from. Thanks to Rubin’s resolution—amply demonstrated by the Virgo Cluster set of images—astronomers can now “find where those exploding stars live,” says Möller.

Another small section of the observatory’s view of the Virgo Cluster. The image includes many distant galaxies along with stars from our own Milky Way galaxy.
NSF-DOE VERA C. RUBIN OBSERVATORY

While taking these images of the distant universe, Rubin also discovered 2,104 asteroids flitting about in our own solar system—including seven whose orbits hew close to Earth’s own. This number may sound impressive, but it’s just par for the course for Rubin. In just a few months, it will find over a million new asteroids—doubling the current known tally. And over the course of its decadal survey, Rubin is projected to identify 89,000 near-Earth asteroids, 3.7 million asteroids in the belt between Mars and Jupiter, and 32,000 icy objects beyond Neptune. 

Finding more than 2,000 previously hidden asteroids in just a few hours of observations, then, “wasn’t even hard” for Rubin, says Mario Jurić, an astronomer at the University of Washington. “The asteroids really popped out.”

Rubin’s comprehensive inventorying of the solar system has two benefits. The first is scientific: All those lumps of rocks and ice are the remnants of the solar system’s formative days, which means astronomers can use them to understand how everything around us was pieced together. 

The second benefit is security. Somewhere out there, there could be an asteroid on an Earthbound trajectory—one whose impact could devastate an entire city or even several countries. Engineers are working on defensive tech designed to either deflect or obliterate such asteroids, but if astronomers don’t know where they are, those defenses are useless. In quickly finding so many asteroids, Rubin has clearly shown that it will bolster Earth’s planetary defense capabilities like no other ground-based telescope.

Altogether, Rubin’s debut has validated the hopes of countless astronomers: The observatory won’t just be an incremental improvement on what’s come before. “I think it’s a generational leap,” says Möller. It is a ruthlessly efficient, discovery-making behemoth—and a firehose of astronomic delights is about to inundate the scientific community. “It’s very scary,” says Möller. “But very exciting at the same time.”

It’s going to be a very hectic decade. As Schwamb puts it, “The roller-coaster starts now.”

Book review: Surveillance & privacy

Privacy only matters to those with something to hide. So goes one of the more inane and disingenuous justifications for mass government and corporate surveillance. There are others, of course, but the “nothing to hide” argument remains a popular way to rationalize or excuse what’s become standard practice in our digital age: the widespread and invasive collection of vast amounts of personal data.

One common response to this line of reasoning is that everyone, in fact, has something to hide, whether they realize it or not. If you’re unsure of whether this holds true for you, I encourage you to read Means of Control by Byron Tau. 

Means of Control: How the Hidden Alliance of Tech and Government Is Creating a New American Surveillance State
Byron Tau
CROWN, 2024

Midway through his book, Tau, an investigative journalist, recalls meeting with a disgruntled former employee of a data broker—a shady company that collects, bundles, and sells your personal data to other (often shadier) third parties, including the government. This ex-employee had managed to make off with several gigabytes of location data representing the precise movements of tens of thousands of people over the course of a few weeks. “What could I learn with this [data]—­theoretically?” Tau asks the former employee. The answer includes a laundry list of possibilities that I suspect would make even the most enthusiastic oversharer uncomfortable.


Did someone in this group recently visit an abortion clinic? That would be easy to figure out, says the ex-employee. Anyone attend an AA meeting or check into inpatient drug rehab? Again, pretty simple to discern. Is someone being treated for erectile dysfunction at a sexual health clinic? If so, that would probably be gleanable from the data too. Tau never opts to go down that road, but as Means of Control makes very clear, others certainly have done so and will.

While most of us are at least vaguely aware that our phones and apps are a vector for data collection and tracking, both the way in which this is accomplished and the extent to which it happens often remain murky. Purposely so, argues Tau. In fact, one of the great myths Means of Control takes aim at is the very idea that what we do with our devices can ever truly be anonymized. Each of us has habits and routines that are completely unique, he says, and if an advertiser knows you only as an alphanumeric string provided by your phone as you move about the world, and not by your real name, that still offers you virtually no real privacy protection. (You’ll perhaps not be surprised to learn that such “anonymized ad IDs” are relatively easy to crack.)

“I’m here to tell you if you’ve ever been on a dating app that wanted your location, or if you ever granted a weather app permission to know where you are 24/7, there’s a good chance a detailed log of your precise movement patterns has been vacuumed up and saved in some data bank somewhere that tens of thousands of total strangers have access to,” writes Tau.

Unraveling the story of how these strangers—everyone from government intelligence agents and local law enforcement officers to private investigators and employees of ad tech companies—gained access to our personal information is the ambitious task Tau sets for himself, and he begins where you might expect: the immediate aftermath of 9/11.

At no other point in US history was the government’s appetite for data more voracious than in the days after the attacks, says Tau. It was a hunger that just so happened to coincide with the advent of new technologies, devices, and platforms that excelled at harvesting and serving up personal information that had zero legal privacy protections. 

Over the course of 22 chapters, Tau gives readers a rare glimpse inside the shadowy industry, “built by corporate America and blessed by government lawyers,” that emerged in the years and decades following the 9/11 attacks. In the hands of a less skilled reporter, this labyrinthine world of shell companies, data vendors, and intelligence agencies could easily become overwhelming or incomprehensible. But Tau goes to great lengths to connect dots and plots, explaining how a perfect storm of business motivations, technological breakthroughs, government paranoia, and lax or nonexistent privacy laws combined to produce the “digital panopticon” we are all now living in.

Means of Control doesn’t offer much comfort or reassurance for privacy­-minded readers, but that’s arguably the point. As Tau notes repeatedly throughout his book, this now massive system of persistent and ubiquitous surveillance works only because the public is largely unaware of it. “If information is power, and America is a society that’s still interested in the guarantee of liberty, personal dignity, and the individual freedom of its citizens, a serious conversation is needed,” he writes. 

As another new book makes clear, this conversation also needs to include student data. Lindsay Weinberg’s Smart University: Student Surveillance in the Digital Age reveals how the motivations and interests of Big Tech are transforming higher education in ways that are increasingly detrimental to student privacy and, arguably, education as a whole.

Smart University: Student Surveillance in the Digital Age
Lindsay Weinberg
JOHNS HOPKINS UNIVERSITY PRESS, 2024

By “smart university,” Weinberg means the growing number of public universities across the country that are being restructured around “the production and capture of digital data.” Similar in vision and application to so-called “smart cities,” these big-data-pilled institutions are increasingly turning to technologies that can track students’ movements around campus, monitor how much time they spend on learning management systems, flag those who seem to need special “advising,” and “nudge” others toward specific courses and majors. “What makes these digital technologies so seductive to higher education administrators, in addition to promises of cost cutting, individualized student services, and improved school rankings, is the notion that the integration of digital technology on their campuses will position universities to keep pace with technological innovation,” Weinberg writes. 

Readers of Smart University will likely recognize a familiar logic at play here. Driving many of these academic tracking and data-gathering initiatives is a growing obsession with efficiency, productivity, and convenience. The result is a kind of Silicon Valley optimization mindset, but applied to higher education at scale. Get students in and out of university as fast as possible, minimize attrition, relentlessly track performance, and do it all under the guise of campus modernization and increased personalization. 

Under this emerging system, students are viewed less as self-empowered individuals and more as “consumers to be courted, future workers to be made employable for increasingly smart workplaces, sources of user-generated content for marketing and outreach, and resources to be mined for making campuses even smarter,” writes Weinberg. 

At the heart of Smart University seems to be a relatively straightforward question: What is an education for? Although Weinberg doesn’t provide a direct answer, she shows that how a university (or society) decides to answer that question can have profound impacts on how it treats its students and teachers. Indeed, as the goal of education becomes less to produce well-rounded humans capable of thinking critically and more to produce “data subjects capable of being managed and who can fill roles in the digital economy,” it’s no wonder we’re increasingly turning to the dumb idea of smart universities to get the job done.  

If books like Means of Control and Smart University do an excellent job exposing the extent to which our privacy has been compromised, commodified, and weaponized (which they undoubtedly do), they can also start to feel a bit predictable in their final chapters. Familiar codas include calls for collective action, buttressed by a hopeful anecdote or two detailing previously successful pro-privacy wins; nods toward a bipartisan privacy bill in the works or other pieces of legislation that could potentially close some glaring surveillance loophole; and, most often, technical guides that explain how each of us, individually, might better secure or otherwise take control and “ownership” of our personal data.

The motivations behind these exhortations and privacy-centric how-to guides are understandable. After all, it’s natural for readers to want answers, advice, or at least some suggestion that things could be different—especially after reading about the growing list of degradations suffered under surveillance capitalism. But it doesn’t take a skeptic to start to wonder if they’re actually advancing the fight for privacy in the way that its advocates truly want.

For one thing, technology tends to move much faster than any one smartphone privacy guide or individual law could ever hope to keep up with. Similarly, framing rampant privacy abuses as a problem we each have to be responsible for addressing individually seems a lot like framing the plastic pollution crisis as something Americans could have somehow solved by recycling. It’s both a misdirection and a misunderstanding of the problem.     

It’s to his credit, then, that Lowry Pressly doesn’t include a “What is to be done” section at the end of The Right to Oblivion: Privacy and the Good Life. In lieu of offering up any concrete technical or political solutions, he simply reiterates an argument he has carefully and convincingly built over the course of his book: that privacy is important “not because it empowers us to exercise control over our information, but because it protects against the creation of such information in the first place.” 

The Right to Oblivion: Privacy and the Good Life
Lowry Pressly
HARVARD UNIVERSITY PRESS, 2024

For Pressly, a Stanford instructor, the way we currently understand and value privacy has been tainted by what he calls “the ideology of information.” “This is the idea that information has a natural existence in human affairs,” he writes, “and that there are no aspects of human life which cannot be translated somehow into data.” This way of thinking not only leads to an impoverished sense of our own humanity—it also forces us into the conceptual trap of debating privacy’s value using a framework (control, consent, access) established by the companies whose business model is to exploit it.

The way out of this trap is to embrace what Pressly calls “oblivion,” a kind of state of unknowing, ambiguity, and potential—or, as he puts it, a realm “where there is no information or knowledge one way or the other.” While he understands that it’s impossible to fully escape a modern world intent on turning us into data subjects, Pressly’s book suggests we can and should support the idea that certain aspects of our (and others’) subjective interior lives can never be captured by information. Privacy is important because it helps to both protect and produce these ineffable parts of our lives, which in turn gives them a sense of dignity, depth, and the possibility for change and surprise. 

Reserving or cultivating a space for oblivion in our own lives means resisting the logic that drives much of the modern world. Our inclination to “join the conversation,” share our thoughts, and do whatever it is we do when we create and curate a personal brand has become so normalized that it’s practically invisible to us. According to Pressly, all that effort has only made our lives and relationships shallower, less meaningful, and less trusting.

Calls for putting our screens down and stepping away from the internet are certainly nothing new. And while The Right to Oblivion isn’t necessarily prescriptive about such things, Pressly does offer a beautiful and compelling vision of what can be gained when we retreat not just from the digital world but from the idea that we are somehow knowable to that world in any authentic or meaningful way. 

If all this sounds a bit philosophical, well, it is. But it would be a mistake to think of The Right to Oblivion as a mere thought exercise on privacy. Part of what makes the book so engaging and persuasive is the way in which Pressly combines a philosopher’s knack for uncovering hidden assumptions with a historian’s interest in and sensitivity to older (often abandoned) ways of thinking, and how they can often enlighten and inform modern problems.

Pressly isn’t against efforts to pass more robust privacy legislation, or even to learn how to better protect our devices against surveillance. His argument is that in order to guide such efforts, you have to both ask the right questions and frame the problem in a way that gives you and others the moral clarity and urgency to act. Your phone’s privacy settings are important, but so is understanding what you’re protecting when you change them. 

Bryan Gardiner is a writer based in Oakland, California. 

The Download: the Vera C. Rubin Observatory’s first pictures, and reframing privacy

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

See the stunning first images from the Vera C. Rubin Observatory

The first spectacular images taken by the Vera C. Rubin Observatory have been released for the world to peruse: a panoply of iridescent galaxies and shimmering nebulas.

Much has been written about the observatory’s grand promise: to revolutionize our understanding of the cosmos by revealing a once-hidden population of far-flung galaxies, erupting stars, interstellar objects, and elusive planets. And thanks to its unparalleled technical prowess, few doubted its ability to make good on that. But over the past decade, during its lengthy construction period, everything’s been in the abstract.

Today, that promise has become a staggeringly beautiful reality. Read the full story.

—Robin George Andrews

Back in January, we selected the Vera C. Rubin Observatory as one of our 10 Breakthrough Technologies of 2025. Read more about why it’s such a promising tool for enhancing our understanding of the universe.

Why we need to think differently about privacy 

Privacy only matters to those with something to hide. So goes one of the more inane and disingenuous justifications for mass government and corporate surveillance. It remains a popular way to rationalize or excuse what’s become standard practice in our digital age: the widespread and invasive collection of vast amounts of personal data.

One common response to this line of reasoning is that everyone, in fact, has something to hide, whether they realize it or not. If you’re unsure of whether this holds true for you, three new books examine the rise of the surveillance state, its infiltration of higher education, and why we need a new framework for thinking about privacy. Read the full story.

—Bryan Gardiner

This story is from the next print edition of MIT Technology Review, which explores power—who has it, and who wants it. It’s set to go live this Wednesday, so subscribe & save 25% to read it and get a copy of the issue when it lands!

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Satellite images reveal the damage America’s bombs caused in Iran
The attack focused on three nuclear sites in the country. (Wired $) 
+ Iran has insisted its nuclear program will not be stopped. (The Guardian)
+ Here’s how the US bunker-busting bombs work. (Economist $)
+ The risks of a nuclear accident appear low, for now. (New Scientist $)

2 Tesla has launched its Texas robotaxi service
But for now, at least, it’s pretty restricted. (NYT $)
+ Elon Musk says the firm is “super paranoid about safety.” (WP $)
+ But there are still plenty of unanswered questions about how it’ll work. (TechCrunch)

3 OpenAI and Jony Ive’s startup are facing a trademark dispute
It appears to be over their use of the IO name. (Bloomberg $)
+ OpenAI has scrubbed all mention of the partnership online. (Insider $) 

4 Meta is throwing tens of millions of dollars at top AI talent
Mark Zuckerberg is on a personal mission to recruit for its Superintelligence lab. (WSJ $)
+ Alexandr Wang of Scale will lead the charge. (Fortune $)

5 Elon Musk wants to retrain xAI’s Grok
Foundation AI models contain too much garbage, apparently. (Insider $)
+ Investors aren’t keen to sink money into xAI. (Reuters)
+ Why does AI hallucinate? (MIT Technology Review)

6 Donald Trump’s phone network is based in Florida
Seven-year-old Liberty Mobile Wireless buys network capacity from bigger players. (FT $)

7 Reddit is reportedly considering using World ID to verify users
The controversial firm claims to preserve users’ anonymity while also confirming they are human. (Semafor)
+ How the startup recruited its first half a million test users. (MIT Technology Review)

8 What happens inside the phones of 25 teenagers
Life isn’t always easy for the first generation of social media natives. (The Guardian)
+ What it’s like to have never owned a smartphone. (The Atlantic $)
+ How to log off. (MIT Technology Review)

9 A dead NASA satellite let off a powerful radio pulse 🛰
So powerful, it briefly outshone everything else in the sky. (New Scientist $)

10 What does AI mean for the future of fonts?
They could eventually swim into focus, or shift during the day. (The Verge)

Quote of the day

“We’re not playing a kid’s game here. We’re not naming Care Bears.”

—Ira Winkler, chief information security officer at cybersecurity firm CYE Security, decries cybersecurity’s obsession with cutesy names to the Wall Street Journal.

One more thing

What is death?

Just as birth certificates note the time we enter the world, death certificates mark the moment we exit it. This practice reflects traditional notions about life and death as binaries. We are here until, suddenly, like a light switched off, we are gone.

But while this idea of death is pervasive, evidence is building that it is an outdated social construct, not really grounded in biology. Dying is in fact a process—one with no clear point demarcating the threshold across which someone cannot come back.

Scientists and many doctors have already embraced this more nuanced understanding of death. And as society catches up, the implications for the living could be profound. Read the full story.

—Rachel Nuwer

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Have you booked your tickets to 28 Years Later yet?
+ If you happen to be planning a flying visit to Rome, here’s a guide to cramming in as much of its breathtaking art as possible.
+ What community gardens can give us.
+ Who really runs New York? The bodega cats ($)

Scaling integrated digital health

Around the world, countries are facing the challenges of aging populations, growing rates of chronic disease, and workforce shortages, leading to a growing burden on health care systems. From diagnosis to treatment, AI and other digital solutions can enhance the efficiency and effectiveness of health care, easing the burden on straining systems. According to the World Health Organization (WHO), spending an additional $0.24 per patient per year on digital health interventions could save more than two million lives from non-communicable diseases over the next decade.

To work most effectively, digital solutions need to be scaled and embedded in an ecosystem that ensures a high degree of interoperability, data security, and governance. If not, the proliferation of point solutions—where specialized software or tools focus on just one specific area or function—could lead to silos and digital canyons, complicating rather than easing the workloads of health care professionals, and potentially impacting patient treatment. Importantly, technologies that enhance workforce productivity should keep humans in the loop, aiming to augment their capabilities, rather than replace them.

Through a survey of 300 health care executives and a program of interviews with industry experts, startup leaders, and academic researchers, this report explores the best practices for success when implementing integrated digital solutions into health care, and how these can support decision-makers in a range of settings, including laboratories and hospitals. 

Key findings include: 

Health care is primed for digital adoption. The global pandemic underscored the benefits of value-based care and accelerated the adoption of digital and AI-powered technologies in health care. Overwhelmingly, 96% of the survey respondents say they are “ready and resourced” to use digital health, while one in four say they are “very ready.” However, 91% of executives agree interoperability is a challenge, with a majority (59%) saying it will be “tough” to solve. Two in five leaders say balancing security with usability is the biggest challenge for digital health. With the adoption of cloud solutions, organizations can enjoy the benefits of modernized IT infrastructure: 36% of the survey respondents believe scalability is the main benefit, followed by improved security (28%). 

Digital health care can help health care institutions transform patient outcomes—if built on the right foundations. Solutions like AI-powered diagnostics, telemedicine, and remote monitoring can offer measurable impact across the patient journey, from improving early disease detection to reducing hospital readmission rates. However, these technologies can only support fully connected health care when scaled up and embedded in ecosystems with robust data governance, interoperability, and security. 

Health care data has immense potential—but fragmentation and poor interoperability hinder impact. Health care systems generate vast quantities of data, yet much of it remains siloed or unusable due to inconsistent formats and incompatible IT systems, limiting scalability. 

Digital tools must augment, not overload, the workforce. With global health care workforce shortages worsening, digital solutions like clinical decision support tools, patient prediction, and remote monitoring can be seen as essential aids rather than threats to the workforce. Successful deployment depends on usability, clinician engagement, and training. 

Regulatory evolution, open data policies, and economic sustainability are key to scaling digital health. Even the best digital tools struggle to scale without reimbursement frameworks, regulatory support, and viable business models. Open data ecosystems are needed to unleash the clinical and economic value of innovation. Regulatory and reimbursement innovation is also critical to transitioning from pilot projects to high-impact, system-wide adoption.

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This content was researched, designed, and written entirely by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

A Chinese firm has just launched a constantly changing set of AI benchmarks

When testing an AI model, it’s hard to tell if it is reasoning or just regurgitating answers from its training data. Xbench, a new benchmark developed by the Chinese venture capital firm HSG, or HongShan Capital Group, might help to sidestep that issue. That’s thanks to the way it evaluates models not only on the ability to pass arbitrary tests, like most other benchmarks, but also on the ability to execute real-world tasks, which is more unusual. It will be updated on a regular basis to try to keep it evergreen. 

This week the company is making part of its question set open-source and letting anyone use it for free. The team has also released a leaderboard comparing how mainstream AI models stack up when tested on Xbench. (ChatGPT o3 ranked first across all categories, though ByteDance’s Doubao, Gemini 2.5 Pro, and Grok all still did pretty well, as did Claude Sonnet.)

Development of the benchmark at HongShan began in 2022, following ChatGPT’s breakout success, as an internal tool for assessing which models are worth investing in. Since then, led by partner Gong Yuan, the team has steadily expanded the system, bringing in outside researchers and professionals to help refine it. As the project grew more sophisticated, they decided to release it to the public.

Xbench approached the problem with two different systems. One is similar to traditional benchmarking: an academic test that gauges a model’s aptitude on various subjects. The other is more like a technical interview round for a job, assessing how much real-world economic value a model might deliver.

Xbench’s methods for assessing raw intelligence currently include two components: Xbench-ScienceQA and Xbench-DeepResearch. ScienceQA isn’t a radical departure from existing postgraduate-level STEM benchmarks like GPQA and SuperGPQA. It includes questions spanning fields from biochemistry to orbital mechanics, drafted by graduate students and double-checked by professors. Scoring rewards not only the right answer but also the reasoning chain that leads to it.

DeepResearch, by contrast, focuses on a model’s ability to navigate the Chinese-language web. Ten subject-matter experts created 100 questions in music, history, finance, and literature—questions that can’t just be googled but require significant research to answer. Scoring favors breadth of sources, factual consistency, and a model’s willingness to admit when there isn’t enough data. A question in the publicized collection is “How many Chinese cities in the three northwestern provinces border a foreign country?” (It’s 12, and only 33% of models tested got it right, if you are wondering.)

On the company’s website, the researchers said they want to add more dimensions to the test—for example, aspects like how creative a model is in its problem solving, how collaborative it is when working with other models, and how reliable it is.

The team has committed to updating the test questions once a quarter and to maintaining a half-public, half-private data set.
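To make that evergreen, half-public setup concrete, here is a minimal sketch of the kind of harness that could score models against the open question split. It is an illustration only, not Xbench’s released code: the question format, the file name, and the caller-supplied askModel function are all assumptions.

```typescript
// Hypothetical sketch of an "evergreen" benchmark harness (not Xbench's actual code).
// Assumes a public question file refreshed quarterly and a caller-supplied askModel().
import { readFileSync } from "fs";

interface Question {
  id: string;
  prompt: string; // e.g. the published Chinese-cities question
  answer: string; // reference answer, e.g. "12"
}

type AskModel = (prompt: string) => Promise<string>;

// Returns accuracy on the public split using naive string matching.
// A real grader would also score reasoning chains, source breadth, and abstention.
async function evaluate(questionFile: string, askModel: AskModel): Promise<number> {
  const questions: Question[] = JSON.parse(readFileSync(questionFile, "utf8"));
  let correct = 0;
  for (const q of questions) {
    const reply = await askModel(q.prompt);
    if (reply.includes(q.answer)) correct++;
  }
  return questions.length ? correct / questions.length : 0;
}
```

Because the question file is swapped out each quarter, a harness like this can simply re-run against the newest public split without code changes; the private half would be scored the same way but never published.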

To assess models’ real-world readiness, the team worked with experts to develop tasks modeled on actual workflows, initially in recruitment and marketing. For example, one task asks a model to source five qualified battery engineer candidates and justify each pick. Another asks it to match advertisers with appropriate short-video creators from a pool of over 800 influencers.

The website also teases upcoming categories, including finance, legal, accounting, and design. The question sets for these categories have not yet been open-sourced.

ChatGPT o3 again ranks first in both of the current professional categories. For recruiting, Perplexity Search and Claude 3.5 Sonnet take second and third place, respectively. For marketing, Claude, Grok, and Gemini all perform well.

“It is really difficult for benchmarks to include things that are so hard to quantify,” says Zihan Zheng, the lead researcher on a new benchmark called LiveCodeBench Pro and a student at NYU. “But Xbench represents a promising start.”

How to Extract ChatGPT’s Fan-Out Queries

Large language models such as ChatGPT generate answers to prompts by issuing multiple queries against massive bodies of publicly available data. The queries cover the initial prompt plus additional info likely relevant to users’ needs; the model then consolidates the findings into a single, comprehensive response.

The additional queries extend beyond the literal text of the prompt. Hence Google coined the term “fan-out queries” earlier this year to describe the tactic.

Knowing the fan-out queries associated with an initial prompt helps publishers understand the platform’s interpretations and priorities.

Two new tools reveal those queries in ChatGPT. These tools work only for ChatGPT’s web searches and only in Google Chrome.

The SEO Pub

The SEO Pub is a newsletter and community from Mike Friedman, a search engine optimizer and owner of Clicked Consulting. His “ChatGPT Search Reasoning and Query Extractor” is a free Chrome bookmarklet to access the platform’s fan-out web queries and reasoning when compiling an answer.
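For readers unfamiliar with the format: a bookmarklet is an ordinary browser bookmark whose “URL” is a snippet of JavaScript prefixed with javascript:, which runs against the current page when clicked. The sketch below is a generic illustration of that structure, not The SEO Pub’s actual code; the link-scraping placeholder stands in for whatever page data the real tool reads.

```typescript
// Generic bookmarklet sketch (illustration only; not The SEO Pub's actual code).
// A bookmarklet is a bookmark whose "URL" is the keyword javascript: followed by
// code like this, usually minified onto a single line.
(() => {
  // Placeholder: collect some data from the current page. The real tool reads
  // ChatGPT's search and reasoning data rather than plain links.
  const links = Array.from(document.querySelectorAll<HTMLAnchorElement>("a"))
    .map((a) => a.href)
    .slice(0, 20);

  // Open a new tab and render whatever was collected as a simple list.
  const tab = window.open("", "_blank");
  if (tab) {
    tab.document.title = "Extracted data";
    tab.document.body.innerHTML =
      "<ul>" + links.map((href) => `<li>${href}</li>`).join("") + "</ul>";
  }
})();
```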

For example, for a prompt of “best headphones for running,” ChatGPT’s fan-out searches, per the bookmarklet, were:

  • “Shokz OpenRun Pro 2 review running 2025”
  • “Beats Fit Pro 2025 review running”
  • “Jabra Elite 8 Active review sweatproof”
  • “Soundcore Sport X20 earbuds review”
  • “Bose Ultra Open Earbuds running review”
  • “best running headphones 2025 Runner’s World”
  • “AirPods Pro USB-C 2nd gen running sweat test review”
  • “Sony LinkBuds S running comfort review 2024”
  • “JLab Go Air Sport review running 2024.”

And ChatGPT’s reasoning was enlightening, especially for content marketers and product developers:

The user is asking for a comparison of the best headphones for running. This likely means recommending models with features like a secure fit, being sweatproof, and having a transparency mode. I’ll need to gather the latest info for 2025 models. It’s not location-specific, but it feels like a shopping query, so I should focus on up-to-date reviews and comparison of features, pricing, and performance. I’ll browse for the most current models to include in the recommendation.

The bookmarklet enables users to view and export the searches and reasoning on an easy-to-read page.

[Screenshot: a sample export from the bookmarklet, showing ChatGPT’s fan-out searches and reasoning.]

To install and use the bookmarklet:

  • Create a new bookmark in Chrome.
  • Assign a name such as “ChatGPT Extractor.”
  • Replace the URL field with the code from the SEO Pub page.
  • Go to ChatGPT and enter a prompt that triggers a web search.
  • Once you have results, click the bookmarklet. It will open a new tab showing ChatGPT’s searches and reasoning.

Fan-out Helper

Another free tool, a Chrome extension called “ChatGPT Search Fan-out Helper,” saves ChatGPT’s fan-out queries and sources, providing the URLs, titles, and descriptions.
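The extension’s exact export format isn’t documented here, but conceptually each saved source is a small record pairing a fan-out query with a URL, title, and description. The sketch below uses hypothetical field names to show one way such records could be flattened to CSV for review in a spreadsheet.

```typescript
// Hypothetical record shape for a saved fan-out source (field names are assumptions).
interface FanOutSource {
  query: string; // the fan-out search that surfaced the page
  url: string;
  title: string;
  description: string;
}

// Flatten records to CSV so they can be reviewed in a spreadsheet.
function toCsv(sources: FanOutSource[]): string {
  const escape = (value: string) => `"${value.replace(/"/g, '""')}"`;
  const header = "query,url,title,description";
  const rows = sources.map((s) =>
    [s.query, s.url, s.title, s.description].map(escape).join(",")
  );
  return [header, ...rows].join("\n");
}
```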