Yoast SEO announced a new feature that enables SEO and readability analysis within Google Docs, allowing publishers and teams to apply search marketing best practices at the moment content is created rather than as an after-the-fact editing step.
Two Functionalities Carry Over To Google Docs
Yoast SEO is providing SEO optimization and readability feedback within the Google Docs editing environment.
The SEO feedback uses the familiar traffic light system, offering visual confirmation that the content meets Yoast SEO’s metrics for keyword use, structure, and optimization.
The readability analysis offers feedback on paragraph structure, sentence length, and headings to help the writer create engaging content, which is increasingly important as search engines prioritize high-quality content.
According to Yoast SEO:
“The Google Docs add-on tool is available to all Yoast SEO Premium subscribers, offering them a range of advanced optimization tools. For those not yet subscribed to Yoast Premium, the add-on is also available as a single purchase, making it accessible to a broader audience.
For those managing multiple team members, additional Google accounts can be linked for just $5 a month per account or annually for a 10% discount ($54). This flexibility ensures that anyone who writes content and in-house marketing teams managing multiple projects can benefit from high-quality SEO guidance.”
This new offering is an interesting step for Yoast SEO. Previously known as the developer of the Yoast SEO WordPress plugin, it’s expanded to Shopify and now it’s breaking out of the CMS paradigm to encompass the optimization process that happens before the content gets into the CMS.
Internet Marketing Ninjas has been acquired by SEO consultancy Previsible, an industry leader co-founded by a former head of SEO at eBay. The acquisition brings link building and digital PR expertise to Previsible. While both companies are now under shared ownership, they will continue to operate as separate brands.
Internet Marketing Ninjas
Founded in 1999 by Jim Boykin as We Build Pages, the Internet Marketing Ninjas consultancy has a story of steady innovation and pivoting in response to changes brought by Google. In my opinion, Jim’s talent was his ability to scale the latest tactics so they could be offered to a large number of clients, and to nimbly ramp up new strategies in response to changes at Google. The names of the people he employed read like a who’s who of legendary marketers.
In the early days of SEO, when reciprocal linking was all the rage, it was Jim Boykin who became known as a bulk provider of that service, and when directories became a hot service, he was able to scale that tactic and make it easy for business owners to pick up links fast. Over time, providing links became increasingly harder, and yet Jim Boykin kept innovating with strategies that made it easy for customers to attain links. I’ve long been an admirer of Boykin because he is the rare individual who can be both a brilliant SEO strategist and a savvy business person.
Jordan Koene, CEO and co-founder at Previsible, commented:
“Previsible believes that the future of discovery and search lies at the intersection of trust and visibility. Our acquisition of Internet Marketing Ninjas brings one of the most experienced trusted-link and digital PR teams into our ecosystem. As search continues to evolve beyond keywords into authority, reputation, and real-world relevance, link strategies are essential for brands to stand out.”
Previsible and Internet Marketing Ninjas will continue to operate as separate brands, leveraging Boykin’s existing team for their expertise.
Jim Boykin explained:
“Combining forces with Previsible kicks off an incredibly exciting new chapter for Internet Marketing Ninjas. We’re not just an SEO company anymore, we’re at the forefront of the future of digital visibility. Together with Previsible, we’re leading the charge in both search and AI-driven discovery.
By merging decades of deep SEO expertise with bold, forward-thinking innovation, we’re meeting the future of online marketing head-on. From Google’s AI Overviews to ChatGPT and whatever comes next, our newly united team is perfectly positioned to help brands get found, build trust, and be talked about across the entire digital landscape. I’m absolutely stoked about what we’re building together and how we’re going to shape the next era of internet marketing.”
Previsible’s acquisition of Internet Marketing Ninjas brings together decades of link building experience while retaining the distinct brands and teams that make each consultancy a search marketing leader. The partnership will enable clients to increase visibility by combining the expertise of both companies.
YouTube has responded to concerns surrounding its upcoming monetization policy update, clarifying that the July 15 changes are aimed at improving the detection of inauthentic content.
The update isn’t a crackdown on popular formats like reaction videos or clip compilations.
The clarification comes from Rene Ritchie, a creator liaison at YouTube, after a wave of confusion and concern followed the initial announcement.
“If you’re seeing posts about a July 2025 update to the YouTube Partner Program monetization policies and you’re concerned it’ll affect your reaction or clips or other type of channel. This is a minor update to YouTube’s long-standing YPP policies to help better identify when content is mass-produced or repetitive.”
Clarifying What’s Changing
Ritchie explained that the types of content targeted by the update, mass-produced and repetitious material, were already ineligible for monetization under the YouTube Partner Program (YPP).
The update doesn’t change the rules but is intended to enhance how YouTube enforces them.
That distinction is important: while the policy itself isn’t new, enforcement may reach creators who were previously flying under the radar.
Why Creators Were Concerned
YouTube’s original announcement said the platform would “better identify mass-produced and repetitious content,” but didn’t clearly define those terms or how the update would be applied.
This vagueness led to speculation that reaction videos, clip compilations, or commentary content might be targeted, especially if those formats reuse footage or follow repetitive structures.
Ritchie’s clarification helps narrow the scope of the update, but it doesn’t explicitly exempt all reaction or clips channels. Channels relying on recycled content without significant added value may run into issues.
Understanding The Policy Context
YouTube’s Partner Program has always required creators to produce “original” and “authentic” content to qualify for monetization.
The July 15 update reiterates that standard, while providing more clarity around what the platform considers inauthentic today.
According to the July 2 announcement:
“On July 15, 2025, YouTube is updating our guidelines to better identify mass-produced and repetitious content. This update better reflects what ‘inauthentic’ content looks like today.”
YouTube emphasized two patterns in particular:
Mass-produced content
Repetitious content
While some reaction or commentary videos could fall under these categories, Ritchie’s statement suggests that the update is not meant to penalize formats that include meaningful creative input.
What This Means
Transformative content, such as reactions, commentary, and curated clips with original insights or editing, is still eligible for monetization.
But creators using these formats should ensure they’re offering something new or valuable in each upload.
The update appears aimed at:
Auto-generated or templated videos with minimal variation
Reposted or duplicated content with little editing or context
Channels that publish near-identical videos in large quantities
For creators who invest in original scripting, commentary, editing, or creative structure, this update likely won’t require changes. But those leaning on low-effort or highly repetitive content strategies may be at increased risk of losing monetization.
Looking Ahead
The updated policy will take effect on July 15. Channels that continue to publish content flagged as mass-produced or repetitive after this date may face removal from the Partner Program.
While Ritchie’s clarification aims to calm fears, it doesn’t override the enforcement language in the original announcement. Creators still have time to review their libraries and adjust strategies to ensure compliance.
For most of its history, SEO has been a reactive discipline: practitioners are asked to “make it rank” once a site is built, with little input into the process.
Even crazier, most SEO professionals are assigned a set of key performance indicators (KPIs) for which they are accountable, metrics tied to visibility, engagement, and revenue.
Yet they have no real control over the underlying systems that affect them. These metrics often rely on the performance of disconnected teams, including content, engineering, brand, and product, which don’t always share the same objectives.
When my previous agency, Global Strategies, was acquired by Ogilvy, I recommended that our team be viewed as building inspectors, not just an SEO package upsell added at the end, but involved at key phases when architects, engineers, and tradespeople had laid out the structural components.
Ideally, we’d come in after the site framing (wireframes) was complete, reviewing the plumbing (information architecture), electrical (navigation and links), and foundation (technical performance), but before the drywall and paint obscured what lies beneath.
We’d validate that the right materials were used and that construction followed a standard fit for long-term performance.
However, in reality, we were rarely invited into the planning stages because that was creative, and we were just SEO. We were usually brought in only after launch, tasked with fixing what had already been buried behind a visually appealing design.
Despite fighting for it, I was never a complete fan of this model; it made sense in the early days of search, when websites were simple and ranking factors were more forgiving.
SEO practitioners identified crawl issues, adjusted metadata, optimized titles, fixed broken links, and retrofitted pages with keywords and internal links.
That said, I have long advocated for eliminating the need for most SEO actions by integrating the fixes into the roles and workflows that initially broke them.
Through education, process change, and content management system (CMS) innovation, much of what SEO fixes could, and should, become standard practice.
However, this has been a challenging sell, as SEO has often been viewed as less important than design, development, or content creation.
It was easier to assign SEO the role of cleanup crew rather than bake best practices into upstream systems and roles. We worked around CMS limitations, cleaned up after redesigns, and tried to reverse-engineer what Google wanted from the outside in.
But that role of identifying and fixing defects is no longer enough. And in the AI-driven search environment, it’s becoming obsolete.
Search Has Changed. Our Role Must Too.
Search engines today do far more than index and rank webpages. They extract answers, synthesize responses, and generate real-time content previews.
What used to be a linear search journey (query > list of links > website) has become a multi-layered ecosystem of zero-click answers, AI summaries, featured snippets, and voice responses.
Traditional SEO tactics, indexability, content relevance, and backlinks still matter in this environment, but only as part of a larger system.
The new currency of visibility is semantic clarity, machine-readability, and multi-system integration. SEO is no longer about optimizing a page. It’s about orchestrating a system.
This complexity requires us to transition from being just an inspector to becoming the Commissioning Authority (CxA).
What Is A Commissioning Authority?
In modern architecture and construction, a Commissioning Authority is a specialized professional who ensures that all building systems, including HVAC, electrical, plumbing, safety, and lighting, function as intended in combination.
They are brought in not just to inspect but also to validate, test, and orchestrate performance.
They work on behalf of the building owner, aligning the construction output with the original design intent and operational goals. They look at interoperability, performance efficiency, long-term sustainability, and documentation.
They are not passive checkers. They are active enablers of success.
Why SEO Needs Commissioning Authorities
The modern website is no longer a standalone asset. It is a network of interconnected systems.
Today’s SEO, whatever the latest alphabet-soup acronym du jour, and especially tomorrow’s, must be a Commissioning Authority for these systems. That means:
Being involved at the blueprint stage, not just post-launch.
Advocating for search visibility as a performance outcome.
Ensuring that semantic signals, not just visual elements, are embedded in every page.
Testing and validating that the site performs in AI environments, not just traditional search engine results pages (SERPs).
The Rise Of The Relevance Engineer
A key function within this evolved CxA role is that of the Relevance Engineer, a concept and term introduced by Mike King of iPullRank.
Mike has been one of the most vocal and insightful leaders on the transformation of SEO in the AI era, and his view is clear: The discipline must fundamentally evolve, both in practice and in how it is positioned within organizations.
Mike King’s perspective underscores that treating AI-driven search as simply an extension of traditional SEO is dangerously misguided.
Instead, we must embrace a new function, Relevance Engineering, which focuses on optimizing for semantic alignment, passage-level competitiveness, and probabilistic rankings, rather than deterministic keyword-based tactics.
The Relevance Engineer ensures:
Each content element is structured and chunked for generative AI consumption.
Content addresses layered user intent, from informational to transactional.
Schema markup and internal linking reinforce topical authority and entity associations.
The site’s architecture supports passage-level understanding and AI summarization.
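To make the schema and internal linking point concrete, here is a minimal sketch of entity-reinforcing markup; every name and value is a placeholder, not a prescribed implementation:

```html
<!-- Hypothetical example: JSON-LD tying an article to the entities it covers -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "A Guide to Waterproof Jackets",
  "about": { "@type": "Thing", "name": "Waterproof jackets" },
  "mentions": [
    { "@type": "Brand", "name": "Blue Mountain" }
  ],
  "author": { "@type": "Organization", "name": "Example Outfitters" }
}
</script>
```

Markup like this states explicitly what a page is about and which entities it relates to, rather than leaving that inference entirely to the machine.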
In many ways, the Relevance Engineer is the semantic strategist of the SEO team, working hand-in-hand with designers, developers, and content creators to ensure that relevance is not assumed but engineered.
In construction terms, this might resemble a systems integration specialist. This expert ensures that electrical, plumbing, HVAC, and automation systems function individually and operate cohesively within an innovative building environment.
Relevance Engineering is more than a title; it’s a mindset shift. It emphasizes that SEO must now live at the intersection of information science, user experience, and machine interpretability.
From Inspector To CxA: How The Role Shifts
SEO Pillar | Old Role: Building Inspector | New Role: Commissioning Authority
Indexability | Check crawl blocks after build | Design architecture for accessibility and rendering
Relevance | Patch in keywords post-launch | Map content to entity models and query intent upfront, guided by a Relevance Engineer
Authority | Chase links to weak content | Build a structured reputation and concept ownership
Clickability | Tweak titles and meta descriptions | Structure content for AI previews, snippets, and voice answers
User Experience | Flag issues in testing | Embed UX, speed, and clarity into the initial design
Looking Ahead: The Next Generation Of SEO
As AI continues to reshape search behavior, SEO pros must adapt again. We will need to:
Understand how content is deconstructed and repackaged by large language models (LLMs).
Ensure that our information is structured, chunked, and semantically aligned to be eligible for synthesis.
Advocate for knowledge modeling, not just keyword optimization.
Encourage cross-functional integration between content, engineering, design, and analytics.
The next generation of SEO leaders will not be optimization specialists.
They will be systems thinkers, semantic strategists, digital performance architects, storytellers, performance coaches, and importantly, master negotiators to advocate and steer the necessary organizational, infrastructural, and content changes to thrive.
They will also be force multipliers – individuals or teams who amplify the effectiveness of everyone else in the process.
By embedding structured, AI-ready practices into the workflow, they enable content teams, developers, and marketers to do their jobs better and more efficiently.
The Relevance Engineer and Commissioning Authority roles are not just tactical additions but strategic leverage points that unlock exponential impact across the digital organization.
Final Thought
Too much article space has been wasted arguing over what to call this new era – whether SEO is dead, what the acronym should be, or what might or might not be part of the future.
Meanwhile, far too little attention has been devoted to the structural and intellectual shifts organizations must make to remain competitive in a search environment reshaped by AI.
If we, as an industry, do not start changing the rules, roles, and mindset now, we’ll again be scrambling when the CEO demands to know why the company missed profitability targets, only to realize we’re buying back traffic we should have earned.
We’ve spent 30 years trying to retrofit what others built into something functional for search engines – pushing massive boulders uphill to shift monoliths into integrated digital machines. That era is over.
The brands that will thrive in the AI search era are those that elevate SEO from a reactive function to a strategic discipline with a seat at the planning table.
The professionals who succeed will be those who speak the language of systems, semantics, and sustained performance – and who take an active role in shaping the digital infrastructure.
The future of SEO is not about tweaking; it’s about taking the reins. It’s about stepping into the role of Commissioning Authority, aligning stakeholders, systems, and semantics.
And at its core, it will be driven by the precision of relevance engineering, and amplified by the force multiplier effect of integrated, strategic influence.
For all the noise around keywords, content strategy, and AI-generated summaries, technical SEO still determines whether your content gets seen in the first place.
You can have the most brilliant blog post or perfectly phrased product page, but if your site architecture looks like an episode of “Hoarders” or your crawl budget is wasted on junk pages, you’re invisible.
If you’re still treating it like a one-time setup or a background task for your dev team, you’re leaving visibility (and revenue) on the table.
This isn’t about obsessing over Lighthouse scores or chasing 100s in Core Web Vitals. It’s about making your site easier for search engines to crawl, parse, and prioritize, especially as AI transforms how discovery works.
Crawl Efficiency Is Your SEO Infrastructure
Before we talk tactics, let’s align on a key truth: Your site’s crawl efficiency determines how much of your content gets indexed, updated, and ranked.
Crawl efficiency means how well search engines can access and process the pages that actually matter.
The longer your site’s been around, the more likely it’s accumulated detritus – outdated pages, redirect chains, orphaned content, bloated JavaScript, pagination issues, parameter duplicates, and entire subfolders that no longer serve a purpose. Every one of these gets in Googlebot’s way.
Improving crawl efficiency doesn’t mean “getting more crawled.” It means helping search engines waste less time on garbage so they can focus on what matters.
Technical SEO Areas That Actually Move The Needle
Let’s skip the obvious stuff and get into what’s actually working in 2025, shall we?
1. Optimize For Discovery, Not “Flatness”
There’s a long-standing myth that search engines prefer flat architecture. Let’s be clear: Search engines prefer accessible architecture, not shallow architecture.
A deep, well-organized structure doesn’t hurt your rankings. It helps everything else work better.
Logical nesting supports crawl efficiency, elegant redirects, and robots.txt rules, and makes life significantly easier when it comes to content maintenance, analytics, and reporting.
Fix it: Focus on internal discoverability.
If a critical page is five clicks away from your homepage, that’s the problem, not whether the URL lives at /products/widgets/ or /docs/api/v2/authentication.
Use curated hubs, cross-linking, and HTML sitemaps to elevate key pages. But resist flattening everything into the root – that’s not helping anyone.
Example: A product page like /products/waterproof-jackets/mens/blue-mountain-parkas provides clear topical context, simplifies redirects, and enables smarter segmentation in analytics.
By contrast, dumping everything into the root turns Google Analytics 4 analysis into a nightmare.
Want to measure how your documentation is performing? That’s easy if it all lives under /documentation/. Nearly impossible if it’s scattered across flat, ungrouped URLs.
Pro tip: For blogs, I prefer categories or topical tags in the URL (e.g., /blog/technical-seo/structured-data-guide) instead of timestamps.
Dated URLs make content look stale – even if it’s fresh – and provide no value in understanding performance by topic or theme.
In short: organized ≠ buried. Smart nesting supports clarity, crawlability, and conversion tracking. Flattening everything for the sake of myth-based SEO advice just creates chaos.
2. Eliminate Crawl Waste
Google has a crawl budget for every site. The bigger and more complex your site, the more likely you’re wasting that budget on low-value URLs.
Common offenders:
Calendar pages (hello, faceted navigation).
Internal search results.
Staging or dev environments accidentally left open.
Infinite scroll that generates URLs but not value.
Endless UTM-tagged duplicates.
Fix it: Audit your crawl logs.
Disallow junk in robots.txt. Use canonical tags correctly. Prune unnecessary indexable pages. And yes, finally remove that 20,000-page tag archive that no one – human or robot – has ever wanted to read.
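As an illustration, the robots.txt rules for the offenders above might look something like this; the paths are hypothetical, so audit your own logs before blocking anything:

```
# Hypothetical robots.txt excerpt; verify patterns against your own URLs
User-agent: *
# Internal search results
Disallow: /search/
Disallow: /*?s=
# Faceted/calendar URLs that generate endless combinations
Disallow: /events/calendar/
Disallow: /*?filter=
```

Keep in mind that robots.txt controls crawling, not indexing: a blocked URL that’s linked externally can still end up indexed, which is why UTM-tagged duplicates are usually better handled with canonical tags.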
3. Fix Your Redirect Chains
Redirects are often slapped together in emergencies and rarely revisited. But every extra hop adds latency, wastes crawl budget, and can fracture link equity.
Fix it: Run a redirect map quarterly.
Collapse chains into single-step redirects. Wherever possible, update internal links to point directly to the final destination URL instead of bouncing through a series of legacy URLs.
Clean redirect logic makes your site faster, clearer, and far easier to maintain, especially when doing platform migrations or content audits.
And yes, elegant redirect rules require structured URLs. Flat sites make this harder, not easier.
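A lightweight way to surface chains during that quarterly review is a script that follows each hop and flags anything with more than one redirect. A minimal sketch in Python, where the URL list is a placeholder for your own sitemap or crawl export:

```python
import requests

# Placeholder URLs; in practice, load these from a sitemap or crawl export
urls = [
    "https://www.example.com/old-page",
    "https://www.example.com/legacy/products/",
]

for url in urls:
    resp = requests.get(url, allow_redirects=True, timeout=10)
    hops = resp.history  # one Response object per redirect followed
    if len(hops) > 1:
        # More than one hop is a chain worth collapsing into a single redirect
        chain = " -> ".join([h.url for h in hops] + [resp.url])
        print(f"{len(hops)} hops: {chain}")
```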
4. Don’t Hide Links Inside JavaScript
Google can render JavaScript, but large language models generally don’t. And even Google doesn’t render every page immediately or consistently.
If your key links are injected via JavaScript or hidden behind search boxes, modals, or interactive elements, you’re choking off both crawl access and AI visibility.
Fix it: Expose your navigation, support content, and product details via crawlable, static HTML wherever possible.
LLMs like those powering AI Overviews, ChatGPT, and Perplexity don’t click or type. If your knowledge base or documentation is only accessible after a user types into a search box, LLMs won’t see it – and won’t cite it.
Real talk: If your official support content isn’t visible to LLMs, they’ll pull answers from Reddit, old blog posts, or someone else’s guesswork. That’s how incorrect or outdated information becomes the default AI response for your product.
Solution: Maintain a static, browsable version of your support center. Use real anchor links, not JavaScript-triggered overlays. Make your help content easy to find and even easier to crawl.
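The difference can be as simple as the following sketch (the paths are hypothetical): the first link exists in the static HTML that crawlers and LLMs fetch, while the second only materializes if a script runs.

```html
<!-- Crawlable: a real anchor present in the static HTML -->
<a href="/support/billing/refund-policy">Refund policy</a>

<!-- Not reliably crawlable: the content only appears after JavaScript runs -->
<span onclick="openOverlay('refund-policy')">Refund policy</span>
```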
Invisible content doesn’t just miss out on rankings. It gets overwritten by whatever is visible. If you don’t control the narrative, someone else will.
5. Handle Pagination And Parameters With Intention
Infinite scroll, poorly handled pagination, and uncontrolled URL parameters can clutter crawl paths and fragment authority.
It’s not just an indexing issue. It’s a maintenance nightmare and a signal dilution risk.
Fix it: Prioritize crawl clarity and minimize redundant URLs.
While rel="next"/rel="prev" still gets thrown around in technical SEO advice, Google retired support years ago, and most content management systems don’t implement it correctly anyway.
Instead, focus on:
Using crawlable, path-based pagination formats (e.g., /blog/page/2/) instead of query parameters like ?page=2. Google often crawls but doesn’t index parameter-based pagination, and LLMs will likely ignore it entirely.
Ensuring paginated pages contain unique or at least additive content, not clones of page one.
Avoiding canonical tags that point every paginated page back to page one, which tells search engines to ignore the rest of your content.
Using robots.txt or meta noindex for thin or duplicate parameter combinations (especially in filtered or faceted listings).
Defining parameter behavior in Google Search Console only if you have a clear, deliberate strategy. Otherwise, you’re more likely to shoot yourself in the foot.
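Pulling those points together, a paginated listing page might carry a self-referencing canonical and plain anchor links, as in this hypothetical sketch:

```html
<!-- On /blog/page/2/ : the canonical points to itself, not back to page one -->
<link rel="canonical" href="https://www.example.com/blog/page/2/">

<!-- Plain, crawlable pagination links -->
<a href="/blog/">Newer posts</a>
<a href="/blog/page/3/">Older posts</a>
```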
Pro tip: Don’t rely on client-side JavaScript to build paginated lists. If your content is only accessible via infinite scroll or rendered after user interaction, it’s likely invisible to both search crawlers and LLMs.
Good pagination quietly supports discovery. Bad pagination quietly destroys it.
Crawl Optimization And AI: Why This Matters More Than Ever
You might be wondering, “With AI Overviews and LLM-powered answers rewriting the SERP, does crawl optimization still matter?”
Yes. More than ever.
Why? AI-generated summaries still rely on indexed, trusted content. If your content doesn’t get crawled, it doesn’t get indexed. If it’s not indexed, it doesn’t get cited. And if it’s not cited, you don’t exist in the AI-generated answer layer.
AI search agents (Google, Perplexity, ChatGPT with browsing) don’t pull full pages; they extract chunks of information. Paragraphs, sentences, lists. That means your content architecture needs to be extractable. And that starts with crawlability.
If you want to understand how that content gets interpreted – and how to structure yours for maximum visibility – this guide on how LLMs interpret content breaks it down step by step.
Remember, you can’t show up in AI Overviews if Google can’t reliably crawl and understand your content.
Bonus: Crawl Efficiency For Site Health
Efficient crawling is more than an indexing benefit. It’s a canary in the coal mine for technical debt.
If your crawl logs show thousands of pages no longer relevant, or crawlers are spending 80% of their time on pages you don’t care about, it means your site is disorganized. It’s a signal.
Clean it up, and you’ll improve everything from performance to user experience to reporting accuracy.
What To Prioritize This Quarter
If you’re short on time and resources, focus here:
Crawl Budget Triage: Review crawl logs and identify where Googlebot is wasting time (see the sketch after this list).
Internal Link Optimization: Ensure your most important pages are easily discoverable.
Remove Crawl Traps: Close off dead ends, duplicate URLs, and infinite spaces.
JavaScript Rendering Review: Use tools like Google’s URL Inspection Tool to verify what’s visible.
Eliminate Redirect Hops: Especially on money pages and high-traffic sections.
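For the crawl budget triage item, here is a minimal sketch of what reviewing crawl logs can look like, assuming the common/combined access log format; adjust the field positions and bot detection for your own setup:

```python
from collections import Counter

# Minimal sketch: tally the paths Googlebot requests most often.
# Assumes the common/combined log format; adjust field indexes as needed.
# Note: anyone can spoof the Googlebot user agent; verify hits via
# reverse DNS if you need a rigorous audit.
hits = Counter()
with open("access.log") as f:
    for line in f:
        if "Googlebot" not in line:
            continue
        fields = line.split()
        if len(fields) > 6:
            hits[fields[6]] += 1  # request path in common log format

# The top of this list is where crawl budget actually goes; junk URLs
# here are candidates for robots.txt rules, canonicals, or pruning.
for path, count in hits.most_common(20):
    print(count, path)
```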
These are not theoretical improvements. They translate directly into better rankings, faster indexing, and more efficient content discovery.
TL;DR: Keywords Matter Less If You’re Not Crawlable
Technical SEO isn’t the sexy part of search, but it’s the part that enables everything else to work.
If you’re not prioritizing crawl efficiency, you’re asking Google to work harder to rank you. And in a world where AI-powered search demands clarity, speed, and trust – that’s a losing bet.
Filing link disavows is generally a futile way to deal with spammy links, but they are useful for dealing with unnatural links an SEO or a publisher is responsible for creating, which can require urgent action. But how long does Google take to process them? Someone asked John Mueller that exact question, and his answer provides insight into how link disavows are handled internally at Google.
Google’s Link Disavow Tool
The link disavow tool is a way for publishers and SEOs to manage unwanted backlinks that they don’t want Google to count against them. It literally means that the publisher disavows the links.
The tool was created by Google in response to requests by SEOs for an easy way to disavow paid links they were responsible for obtaining and were unable to remove from the websites in which they were placed. The link disavow tool is accessible via Google Search Console and enables users to upload a plain text file listing the URLs or domains whose links they don’t want counted against them in Google’s index.
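For reference, the disavow file itself uses a simple documented format, one entry per line; the URLs and domains below are placeholders:

```
# Comment lines start with a hash
# Disavow a specific page
http://spam.example.com/paid-links/page1.html
# Disavow an entire domain
domain:link-farm.example
```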
Google’s official guidance for the disavow tool has always been that it’s for use by SEOs and publishers who want to disavow paid or otherwise unnatural links that they are responsible for obtaining and are unable to have removed. Google expressly says that the vast majority of sites do not need to use the tool, especially for low-quality links that they had nothing to do with.
How Google Processes The Link Disavow Tool
A person asked Mueller on Bluesky for details about how Google processes domains newly added to a disavow file.
“When we add domains to the disavow, i.e top up the list. Can I assume the new domains are treated separately as new additions.
You don’t reprocess the whole thing?”
John Mueller answered that the order of the domains and URLs on the list didn’t matter.
His response:
“The order in the disavow file doesn’t matter. We don’t process the file per-se (it’s not an immediate filter of “the index”), we take it into account when we recrawl other sites naturally.”
The answer is interesting because he says that Google doesn’t process the link disavow file “per-se,” by which he likely means that it isn’t acted on at that moment. The “filtering” of a disavowed link happens later, when the site hosting the link is naturally recrawled.
So another way to look at it is that the link disavow file doesn’t trigger anything, but the data contained in the file is acted upon during the normal course of crawling.
If you were told that the odds of something were 3.1%, it really wouldn’t seem like much. But for the people charged with protecting our planet, it was huge.
On February 18, astronomers determined that a 130- to 300-foot-long asteroid had a 3.1% chance of crashing into Earth in 2032. Never had an asteroid of such dangerous dimensions stood such a high chance of striking the planet. For those following this developing story in the news, the revelation was unnerving. For many scientists and engineers, though, it turned out to be—despite its seriousness—a little bit exciting.
While possible impact locations included patches of empty ocean, the space rock, called 2024 YR4, also had several densely populated cities in its possible crosshairs, including Mumbai, Lagos, and Bogotá. If the asteroid did in fact hit such a metropolis, the best-case scenario was severe damage; the worst case was outright, total ruin. And for the first time, a group of United Nations–backed researchers began to have high-level discussions about the fate of the world: If this asteroid was going to hit the planet, what sort of spaceflight mission might be able to stop it? Would they ram a spacecraft into it to deflect it? Would they use nuclear weapons to try to swat it away or obliterate it completely?
At the same time, planetary defenders all over the world crewed their battle stations to see if we could avoid that fate—and despite the sometimes taxing new demands on their psyches and schedules, they remained some of the coolest customers in the galaxy. “I’ve had to cancel an appointment saying, I cannot come—I have to save the planet,” says Olivier Hainaut, an astronomer at the European Southern Observatory and one of those who tracked down 2024 YR4.
Then, just as quickly as history was made, experts declared that the danger had passed. On February 24, asteroid trackers issued the all-clear: Earth would be spared, just as many planetary defense researchers had felt assured it would.
How did they do it? What was it like to track the rising (and rising and rising) danger of this asteroid, and to ultimately determine that it’d miss us?
This is the inside story of how, over a span of just two months, a sprawling network of global astronomers found, followed, mapped, planned for, and finally dismissed 2024 YR4, the most dangerous asteroid ever found—all under the tightest of timelines and, for just a moment, with the highest of stakes.
“It was not an exercise,” says Hainaut. This was the real thing: “We really [had] to get it right.”
IN THE BEGINNING
December 27, 2024
THE ASTEROID TERRESTRIAL-IMPACT LAST ALERT SYSTEM, HAWAII
Long ago, an asteroid in the space-rock highway between Mars and Jupiter felt a disturbance in the force: the gravitational pull of Jupiter itself, king of the planets. After some wobbling back and forth, this asteroid was thrown out of the belt, skipped around the sun, and found itself on an orbit that overlapped with Earth’s own.
“I was the first one to see the detections of it,” Larry Denneau, of the University of Hawai‘i, recalls. “A tiny white pixel on a black background.”
Denneau is one of the principal investigators at the NASA-funded Asteroid Terrestrial-impact Last Alert System (ATLAS) telescopic network. It may have been just two days after Christmas, but he followed procedure as if it were any other day of the year and sent the observations of the tiny pixel onward to another NASA-funded facility, the Minor Planet Center (MPC) in Cambridge, Massachusetts.
There’s an alternate reality in which none of this happened. Fortunately, in our timeline, various space agencies—chiefly NASA, but also the European Space Agency and the Japan Aerospace Exploration Agency—invest millions of dollars every year in asteroid-spotting efforts.
And while multiple nations host observatories capable of performing this work, the US clearly leads the way: Its planetary defense program provides funding to a suite of telescopic facilities solely dedicated to identifying potentially hazardous space rocks. (At least, it leads the way for the moment. The White House’s proposal for draconian budget cuts to NASA and the National Science Foundation mean that several observatories and space missions linked to planetary defense are facing funding losses or outright terminations.)
Astronomers working at these observatories are tasked with finding threatening asteroids before they find us—because you can’t fight what you can’t see. “They are the first line of planetary defense,” says Kelly Fast, the acting planetary defense officer at NASA’s Planetary Defense Coordination Office in Washington, DC.
ATLAS is one part of this skywatching project, and it consists of four telescopes: two in Hawaii, one in Chile, and another in South Africa. They don’t operate the way you’d think, with astronomers peering through them all night. Instead, they operate “completely robotically and automatically,” says Denneau. Driven by coding scripts that he and his colleagues have developed, these mechanical eyes work in harmony to watch out for any suspicious space rocks. Astronomers usually monitor their survey of the sky from a remote location.
ATLAS telescopes are small, so they can’t see particularly distant objects. But they have a wide field of view, allowing them to see large patches of space at any one moment. “As long as the weather is good, we’re constantly monitoring the night sky, from the North Pole to the South Pole,” says Denneau.
Larry Denneau is a principal investigator at the Asteroid Terrestrial-impact Last Alert System telescopic network.
If they detect the starlight reflecting off a moving object, an operator, such as Denneau, gets an alert and visually verifies that the object is real and not some sort of imaging artifact. When a suspected asteroid (or comet) is identified, the observations are sent to the MPC, which is home to a bulletin board featuring (among other things) orbital data on all known asteroids and comets.
If the object isn’t already listed, a new discovery is announced, and other astronomers can perform follow-up observations.
In just the past few years, ATLAS has detected more than 1,200 asteroids with near-Earth orbits. Finding ultimately harmless space rocks is routine work—so much so that when the new near-Earth asteroid was spotted by ATLAS’s Chilean telescope that December day, it didn’t even raise any eyebrows.
Denneau had simply been sitting at home, doing some late-night work on his computer. At the time, of course, he didn’t know that his telescope had just spied what would soon become a history-making asteroid—one that could alter the future of the planet.
The MPC quickly confirmed the new space rock hadn’t already been “found,” and astronomers gave it a provisional designation: 2024 YR4.
CATALINA SKY SURVEY, ARIZONA
Around the same time, the discovery was shared with another NASA-funded facility: the Catalina Sky Survey, a nest of three telescopes in the Santa Catalina Mountains north of Tucson that works out of the University of Arizona. “We run a very tight operation,” says Kacper Wierzchoś, one of its comet and asteroid spotters. Unlike ATLAS, these telescopes (although aided by automation) often have an in-person astronomer available to quickly alter the surveys in real time.
So when Catalina was alerted about what its peers at ATLAS had spotted, staff deployed its Schmidt telescope—a smaller one that excels at seeing bright objects moving extremely quickly. As they fed their own observations of 2024 YR4 to the MPC, Catalina engineer David Rankin looked back over imagery from the previous days and found the new asteroid lurking in a night-sky image taken on December 26. Around then, ATLAS also realized that it had caught sight of 2024 YR4 in a photograph from December 25.
The combined observations confirmed it: The asteroid had made its closest approach to Earth on Christmas Day, meaning it was already heading back out into space. But where, exactly, was this space rock going? Where would it end up after it swung around the sun?
CENTER FOR NEAR-EARTH OBJECT STUDIES, CALIFORNIA
If the answer to that question was Earth, Davide Farnocchia would be one of the first to know. You could say he’s one of NASA’s watchers on the wall.
And he’s remarkably calm about his duties. When he first heard about 2024 YR4, he barely flinched. It was just another asteroid drifting through space not terribly far from Earth. It was another box to be ticked.
Once it was logged by the MPC, it was Farnocchia’s job to try to plot out 2024 YR4’s possible paths through space, checking to see if any of them overlapped with our planet’s. He works at NASA’s Center for Near-Earth Object Studies (CNEOS) in California, where he’s partly responsible for keeping track of all the known asteroids and comets in the solar system. “We have 1.4 million objects to deal with,” he says, matter-of-factly.
In the past, astronomers would have had to stitch together multiple images of this asteroid and plot out its possible trajectories. Today, fortunately, Farnocchia has some help: He oversees the digital brain Sentry, an autonomous system he helped code. (Two other facilities in Italy perform similar work: the European Space Agency’s Near-Earth Object Coordination Centre, or NEOCC, and the privately owned Near-Earth Objects Dynamics Site, or NEODyS.)
To chart their courses, Sentry uses every new observation of every known asteroid or comet listed on the MPC to continuously refine the orbits of all those objects, using the immutable laws of gravity and the gravitational influences of any planets, moons, or other sizable asteroids they pass. A recent update to the software means that even the ever-so-gentle push afforded by sunlight is accounted for. That allows Sentry to confidently project the motions of all these objects at least a century into the future.
Davide Farnocchia helps track all the known asteroids and comets in the solar system at NASA’s Center for Near-Earth Object Studies.
Almost all newly discovered asteroids are quickly found to pose no impact risk. But those that stand even an infinitesimally small chance of smashing into our planet within the next 100 years are placed on the Sentry Risk List until additional observations can rule out those awful possibilities. Better safe than sorry.
In late December, with just a limited set of data, Sentry concluded that there was a non-negligible chance 2024 YR4 would strike Earth in 2032. Aegis, the equivalent software at Europe’s NEOCC site, agreed. No bother. More observations would very likely remove 2024 YR4 from the Risk List. Just another day at the office for Farnocchia.
It’s worth noting that an asteroid heading toward Earth isn’t always a problem. Small rocks burn up in the planet’s atmosphere several times a day; you’ve probably seen one already this year, on a moonless night. But above a certain size, these rocks turn from innocuous shooting stars into nuclear-esque explosions.
Reflected starlight is great for initially spotting asteroids, but it’s a terrible way to determine how big they are. A large, dull rock reflects as much light as a bright, tiny rock, making them appear the same to many telescopes. And that’s a problem, considering that a rock around 30 feet long will explode loudly but inconsequentially in Earth’s atmosphere, while a 3,000-foot-long asteroid would slam into the ground and cause devastation on a global scale, imperiling all of civilization. Roughly speaking, if you double the size of an asteroid, it becomes eight times more energetic upon impact—so finding out the size of an Earthbound asteroid is of paramount importance.
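To spell out the arithmetic behind “double the size, eight times the energy”: for a roughly spherical rock of diameter d, density ρ, and arrival speed v, impact energy scales with mass, and mass scales with volume.

$$
E = \tfrac{1}{2} m v^2, \qquad m = \rho \, \tfrac{\pi}{6} d^3
\quad\Longrightarrow\quad E \propto d^3, \qquad (2d)^3 = 8\,d^3
$$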
In those first few hours after it was discovered, and before anyone knew how shiny or dull its surface was, 2024 YR4 was estimated by astronomers to be as small as 65 feet across or as large as 500 feet. An object of the former size would blow up in mid-air, shattering windows over many miles and likely injuring thousands of people. At the latter size it would vaporize the heart of any city it struck, turning solid rock and metal into liquid and vapor, while its blast wave would devastate the rest of it, killing hundreds of thousands or even millions in the process.
So now the question was: Just how big was 2024 YR4?
REFINING THE PICTURE
Mid-January 2025
VERY LARGE TELESCOPE, CHILE
Understandably dissatisfied with that level of imprecision, the European Southern Observatory’s Very Large Telescope (VLT), high up on the Cerro Paranal mountain in Chile’s Atacama Desert, entered the chat. As the name suggests, this flagship facility is vast, and it’s capable of really zooming in on distant objects. Or to put it another way: “The VLT is the largest, biggest, best telescope in the world,” says Hainaut, one of the facility’s operators, who usually commands it from half a world away in Germany.
In reality, the VLT—which lends a hand to the European Space Agency in its asteroid-hunting duties—is actually made up of four massive telescopes, each fixed on a separate corner of the sky. They can be combined to act as a huge light bucket, allowing astronomers to see very faint asteroids. Four additional, smaller, movable telescopes can also team up with their bigger siblings to provide remarkably high-resolution images of even the stealthiest space rocks.
In this sequence of infrared images taken by ESO’s VLT, the individual image frames have been aligned so that the asteroid remains in the center as other stars appear to move around it.
With so much tech to oversee, the control room of the VLT looks a bit like the inside of the Death Star. “You have eight consoles, each of them with a dozen screens. It’s big, it’s large, it’s spectacular,” says Hainaut.
In mid-January, the European Space Agency asked the VLT to study several asteroids that had somewhat suspicious near-Earth orbits—including 2024 YR4. With just a few lines of code, the VLT could easily train its sharp eyes on an asteroid like 2024 YR4, allowing astronomers to narrow down its size range. It was found to be at least 130 feet long (big enough to cause major damage in a city) and as much as 300 feet (able to annihilate one).
January 29, 2025
INTERNATIONAL ASTEROID WARNING NETWORK
Marco Fenucci is a near-Earth-object dynamicist at the European Space Agency’s Near-Earth Object Coordination Centre.
By the end of the month, there was no mistaking it: 2024 YR4 stood a greater than 1% chance of impacting Earth on December 22, 2032.
“It’s not something you see very often,” says Marco Fenucci, a near-Earth-object dynamicist at NEOCC. He admits that although it was “a serious thing,” this escalation was also “exciting to see”—something straight out of a sci-fi flick.
Sentry and Aegis, along with the systems at NEODyS, had been checking one another’s calculations. “There was a lot of care,” says Farnocchia, who explains that even though their programs worked wonders, their predictions were manually verified by multiple experts. When a rarity like 2024 YR4 comes along, he says, “you kind of switch gears, and you start being more cautious. You start screening everything that comes in.”
At this point, the klaxon emanating from these three data centers pushed the International Asteroid Warning Network (IAWN), a UN-backed planetary defense awareness group, to issue a public alert to the world’s governments: The planet may be in peril. For the most part, it was at this moment that the media—and the wider public—became aware of the threat. Earth, we may have a problem.
Denneau, along with plenty of other astronomers, received an urgent email from Fast at NASA’s Planetary Defense Coordination Office, requesting that all capable observatories track this hazardous asteroid. But there was one glaring problem. When 2024 YR4 was discovered on December 27, it was already two days after it had made its closest approach to Earth. And since it was heading back out into the shadows of space, it was quickly fading from sight.
Once it gets too faint, “there’s not much ATLAS can do,” Denneau says. By the time of IAWN’s warning, planetary defenders had just weeks to try to track 2024 YR4 and refine the odds of its hitting Earth before they’d lose it to the darkness.
And if their scopes failed, the odds of an Earth impact would have stayed uncomfortably high until 2028, when the asteroid was due to make another flyby of the planet. That’d be just four short years before the space rock might actually hit.
“In that situation, we would have been … in trouble,” says NEOCC’s Fenucci.
The hunt was on.
PREPARING FOR THE WORST
February 5 and February 6, 2025
SPACE MISSION PLANNING ADVISORY GROUP, AUSTRIA
In early February, spaceflight mission specialists, including those at the UN-supported Space Mission Planning Advisory Group in Vienna, began high-level talks designed to sketch out ways in which 2024 YR4 could be either deflected away from Earth or obliterated—you know, just in case.
A range of options were available—including ramming it with several uncrewed spacecraft or assaulting it with nuclear weapons—but there was no silver bullet in this situation. Nobody had ever launched a nuclear explosive device into deep space before, and the geopolitical ramifications of any nuclear-armed nations doing so in the present day would prove deeply unwelcome. Asteroids are also extremely odd objects; some, perhaps including 2024 YR4, are less like single chunks of rock and more akin to multiple cliffs flying in formation. Hit an asteroid like that too hard and you could fail to deflect it—and instead turn an Earthbound cannonball into a spray of shotgun pellets.
It’s safe to say that early on, experts were concerned about whether they could prevent a potential disaster. Crucially, eight years was not actually much time to plan something of this scale. So they were keen to better pinpoint how likely, or unlikely, it was that 2024 YR4 was going to collide with the planet before any complex space mission planning began in earnest.
The people involved with these talks—from physicists at some of America’s most secretive nuclear weapons research laboratories to spaceflight researchers over in Europe—were not feeling close to anything resembling panic. But “the timeline was really short,” admits Hainaut. So there was an unprecedented tempo to their discussions. This wasn’t a drill. This was the real deal. What would they do to defend the planet if an asteroid impact couldn’t be ruled out?
Luckily, over the next few days, a handful of new observations came in. Each helped Sentry, Aegis, and the system at NEODyS rule out more of 2024 YR4’s possible future orbits. Unluckily, Earth remained a potential port of call for this pesky asteroid—and over time, our planet made up a higher proportion of those remaining possibilities. That meant that the odds of an Earth impact “started bubbling up,” says Denneau.
By February 6, they jumped to 2.3%—a one-in-43 chance of an impact.
“How much anxiety someone should feel over that—it’s hard to say,” Denneau says, with a slight shrug.
In the past, several elephantine asteroids have been found to stand a small chance of careening unceremoniously into the planet. Such incidents tend to follow a pattern. As more observations come in and the asteroid’s orbit becomes better known, an Earth impact trajectory remains a possibility while other outlying orbits are removed from the calculations—so for a time, the odds of an impact rise. Finally, with enough observations in hand, it becomes clear that the space rock will miss our world entirely, and the impact odds plummet to zero.
Astronomers expected this to repeat itself with 2024 YR4. But there was no guarantee. There’s no escaping the fact that one day, sooner or later, scientists will discover a dangerous asteroid that will punch Earth in the face—and raze a city in the process.
After all, asteroids capable of trashing a city have found their way to Earth plenty of times before, and not just in the very distant past. In 1908, an 800-square-mile patch of forest in Siberia—one that was, fortunately, very sparsely populated—was decimated by a space rock just 180 feet long. It didn’t even hit the ground; it exploded in midair with the force of a 15-megaton blast.
But only one other asteroid comparable in size to 2024 YR4 had ever beaten that 2.3% figure: in 2004, Apophis—capable of causing continental-scale damage—had briefly stood a 2.7% chance of impacting Earth in 2029.
Rapidly approaching uncharted waters, the powers that be at NASA decided to play a space-based wild card: the James Webb Space Telescope, or JWST.
THE JAMES WEBB SPACE TELESCOPE, DEEP SPACE, ONE MILLION MILES FROM EARTH
A large dull asteroid reflects the same amount of light as a small shiny one, but that doesn’t mean astronomers sizing up an asteroid are helpless. If you view both asteroids in the infrared, the larger one glows brighter than the smaller one no matter the surface coating—making infrared, or the thermal part of the electromagnetic spectrum, a much better gauge of a space rock’s proportions.
Observatories on Earth do have infrared capabilities, but our planet’s atmosphere gets in their way, making it hard for them to offer highly accurate readings of an asteroid’s size.
But the James Webb Space Telescope (JWST), hanging out in space, doesn’t have that problem.
Asteroid 2024 YR4 is the smallest object targeted by JWST to date, and one of the smallest objects to have its size directly measured. Observations were taken using both its NIRCam (Near-Infrared Camera) and MIRI (Mid-Infrared Instrument) to study the thermal properties of the asteroid.
This observatory, which sits at a gravitationally stable point about a million miles from Earth, is polymathic. Its sniper-like scope can see in the infrared and allows it to peer at the edge of the observable universe, meaning it can study galaxies that formed not long after the Big Bang. It can even look at the light passing through the atmospheres of distant planets to ascertain their chemical makeups. And its remarkably sharp eye means it can also track the thermal glow of an asteroid long after all ground-based telescopes lose sight of it.
In a fortuitous bit of timing, by the time 2024 YR4 came along, planetary defenders had recently reasoned that JWST could theoretically be used to track ominous asteroids using its own infrared scope, should the need arise. So after IAWN’s warning went out, operators of JWST ran an analysis: though the asteroid would vanish from most scopes by late March, JWST might be able to see the rock until sometime in May, which would allow researchers to greatly refine their assessment of the asteroid’s orbit and its odds of making Earth impact.
Understanding 2024 YR4’s trajectory was important, but “the size was the main motivator,” says Andy Rivkin, an astronomer at Johns Hopkins University’s Applied Physics Laboratory, who led the proposal to use JWST to observe the asteroid. The hope was that even if the impact odds remained high until 2028, JWST would find that 2024 YR4 was on the smaller side of the 130-to-300-feet size range—meaning it would still be a danger, but a far less catastrophic one.
The JWST proposal was accepted by NASA on February 5. But the earliest it could conduct its observations was early March. And time really wasn’t on Earth’s side.
February 7, 2025
GEMINI SOUTH TELESCOPE, CHILE
“At this point, [2024 YR4] was too faint for the Catalina telescopes,” says Catalina’s Wierzchoś. “In our opinion, this was a big deal.”
So Wierzchoś and his colleagues put in a rare emergency request to commandeer the Gemini Observatory, an internationally funded and run facility featuring two large, eagle-eyed telescopes—one in Chile and one atop Hawaii’s Mauna Kea volcano. Their request was granted, and on February 7, they trained the Chile-based Gemini South telescope onto 2024 YR4.
This composite image was captured by a team of astronomers using the Gemini Multi-Object Spectrograph (GMOS). The hazy dot at the center is asteroid 2024 YR4.
The odds of Earth impact dropped ever so slightly, to 2.2%—a minor, but still welcome, development.
Mid-February 2025
MAGDALENA RIDGE OBSERVATORY, NEW MEXICO
By this point, the roster of 2024 YR4 hunters also included the tiny team operating the Magdalena Ridge Observatory (MRO), which sits atop a tranquil mountain in New Mexico.
“It’s myself and my husband,” says Eileen Ryan, the MRO director. “We’re the only two astronomers running the telescope. We have a daytime technician. It’s kind of a mom-and-pop organization.”
Still, the scope shouldn’t be underestimated. “We can see maybe a cell-phone-size object that’s illuminated at geosynchronous orbit,” Ryan says, referring to objects 22,000 miles away. But its most impressive feature is its mobility. While other observatories have slowly swiveling telescopes, MRO’s scope can move like the wind. “We can track the fastest objects,” she says, with a grin—noting that the telescope was built in part to watch missiles for the US Air Force. Its agility and long-distance vision explain why the Space Force is one of MRO’s major clients: It can be used to spy on satellites and spacecraft anywhere from low Earth orbit right out to the lunar regions. And that meant spying on the super-speedy, super-sneaky 2024 YR4 wasn’t a problem for MRO, whose own observations were vital in refining the asteroid’s impact odds.
Eileen Ryan is the director of the Magdalena Ridge Observatory in New Mexico.
COURTESY PHOTO
Then, in mid-February, MRO and all ground-based observatories came up against an unsolvable problem: The full moon was out, shining so brightly that it blinded any telescope that dared point at the night sky. “During the full moon, the observatories couldn’t observe for a week or so,” says NEOCC’s Fenucci. To most of us, the moon is a beautiful silvery orb. But to astronomers, it’s a hostile actor. “We abhor the moon,” says Denneau.
All any of them could do was wait. Those tracking 2024 YR4 vacillated between excitement and mild trepidation. The thought that the asteroid could still stand a decent chance of impacting Earth after it faded from view weighed a little on their minds.
Nevertheless, Farnocchia maintained his characteristic sangfroid throughout. “I try to stress about the things I can control,” he says. “All we can do is to explain what the situation is, and that we need new data to say more.”
February 18, 2025
CENTER FOR NEAR-EARTH OBJECT STUDIES, CALIFORNIA
As the full moon finally faded into a crescent of light, the world’s largest telescopes sprang back into action for one last shot at glory. “The dark time came again,” says Hainaut, with a smile.
New observations finally began to trickle in, and Sentry, Aegis, and NEODyS readjusted their forecasts. It wasn’t great news: The odds of an Earth impact in 2032 jumped up to 3.1%, officially making 2024 YR4 the most dangerous asteroid ever discovered.
This declaration made headlines across the world—and certainly made the curious public sit up and wonder if they had yet another apocalyptic concern to fret about. But, as ever, the asteroid hunters held fast in their prediction that sooner or later—ideally sooner—more observations would cause those impact odds to drop.
“We kept at it,” says Ryan. But time was running short; they started to “search for out-of-the-box ways to image this asteroid,” says Fenucci.
Planetary defense researchers soon realized that 2024 YR4 wasn’t too far away from NASA’s Lucy spacecraft, a planetary science mission making a series of flybys of various asteroids. If Lucy could be redirected to catch up to 2024 YR4 instead, it would give humanity its best look at the rock, allowing Sentry and company to confirm or dispel our worst fears.
Sadly, NASA ran the numbers, and it proved to be a nonstarter: 2024 YR4 was too speedy and too far for Lucy to pursue.
It was really starting to look as if JWST would be the last, best hope to track 2024 YR4.
A CHANGE OF FATE
February 19, 2025
VERY LARGE TELESCOPE, CHILE and MAGDALENA RIDGE OBSERVATORY, NEW MEXICO
Just one day after 2024 YR4 made history, the VLT, MRO, and others caught sight of it again, and Sentry, Aegis, and NEODyS voraciously consumed their new data.
The odds of an Earth impact suddenly dropped to 1.5%.
Astronomers didn’t really have time to react to the possibility that this was a good sign—they just kept sending their observations onward.
February 20, 2025
SUBARU TELESCOPE, HAWAII
Yet another observatory had been itching to get into the game for weeks, but it wasn’t until February 20 that Tsuyoshi Terai, an astronomer at Japan’s Subaru Telescope, sitting atop Mauna Kea, finally caught 2024 YR4 shifting between the stars. He added his data to the stream.
And all of a sudden, the asteroid lost its lethal luster. The odds of its hitting Earth were now just 0.3%.
At this point, you might expect that all those tracking it would be in a single control room somewhere, eyes glued to their screens, watching the odds drop before bursting into cheers and rapturous applause. But no—the astronomers who had spent so long observing this asteroid remained scattered across the globe. And instead of erupting into cheers, they exchanged modestly worded emails of congratulations—the digital equivalent of a nod or a handshake.
In late February, data from Tsuyoshi Terai, an astronomer at Japan’s Subaru Telescope, which sits atop Mauna Kea, confirmed that 2024 YR4 was not so lethal after all.
NAOJ
“It was a relief,” says Terai. “I was very pleased that our data contributed to put an end to the risk of 2024 YR4.”
February 24, 2025
INTERNATIONAL ASTEROID WARNING NETWORK
Just a few days later, thanks to the observations that continued to flood in, IAWN issued the all-clear. This once-ominous asteroid’s odds of inconveniencing our planet were down to 0.004%—one in 25,000. Today, the odds of an Earth impact in 2032 are exactly zero.
“It was kinda fun while it lasted,” says Denneau.
Planetary defenders may have had a blast defending the world, but these astronomers still took the cosmic threat deeply seriously—and never once took their eyes off the prize. “In my mind, the observers and orbit teams were the stars of this story,” says Fast, NASA’s acting planetary defense officer.
Farnocchia shrugs off the entire thing. “It was the expected outcome,” he says. “We just didn’t know when that would happen.”
Looking back on it now, though, some 2024 YR4 trackers are allowing themselves to dwell on just how close this asteroid came to being a major danger. “It’s wild to watch it all play out,” says Denneau. “We were weeks away from having to spin up some serious mitigation planning.” But there was no need to work out how to save the world. It turned out that 2024 YR4 was never a threat to begin with—it just took a while to check.
And the experience of handling a dicey space rock will only serve to make the world a safer place. One day, an asteroid very much like 2024 YR4 will be seen heading straight for Earth. And those tasked with tracking it will be battle-tested, and better prepared than ever to do what needs to be done.
A POSTSCRIPT
March 27, 2025
JAMES WEBB SPACE TELESCOPE, DEEP SPACE, ONE MILLION MILES FROM EARTH
But the story of 2024 YR4 is not quite over—in fact, if this were a movie, it would have an after-credits scene.
After the Earth-impact odds fell off a cliff, JWST went ahead with its observations in March anyway. It found that 2024 YR4 was 200 feet across—so large that a direct strike on a city would have proved horrifically lethal. Earth just didn’t have to worry about it anymore.
But the moon might. Thanks in part to JWST, astronomers calculated a 3.8% chance that 2024 YR4 will impact the lunar surface in 2032. Additional JWST observations in May bumped those odds up slightly, to 4.3%, and the orbit can no longer be refined until the asteroid’s next Earth flyby in 2028.
“It may hit the moon!” says Denneau. “Everybody’s still very excited about that.”
A lunar collision would give astronomers a wonderful opportunity not only to study the physics of an asteroid impact, but also to demonstrate to the public just how good they are at precisely predicting the future motions of potentially lethal space rocks. “It’s a thing we can plan for without having to defend the Earth,” says Denneau.
If 2024 YR4 is truly going to smash into the moon, the impact—likely on the side facing Earth—would unleash an explosion equivalent to hundreds of nuclear bombs. An expansive crater would be carved out in the blink of an eye, and a shower of debris would erupt in all directions.
None of this supersonic wreckage would pose any danger to Earth, but it would look spectacular: You’d be able to see the bright flash of the impact from terra firma with the naked eye.
“If that does happen, it’ll be amazing,” says Denneau. It will be a spectacular way to see the saga of 2024 YR4—once a mere speck on his computer screen—come to an explosive end, from a front-row seat.
Robin George Andrews is an award-winning science journalist and doctor of volcanoes based in London. He regularly writes about the Earth, space, and planetary sciences, and is the author of two critically acclaimed books: Super Volcanoes (2021) and How to Kill An Asteroid (2024).
Today’s AI landscape is defined by the ways in which neural networks are unlike human brains. A toddler learns how to communicate effectively with only a thousand calories a day and regular conversation; meanwhile, tech companies are reopening nuclear power plants, polluting marginalized communities, and pirating terabytes of books in order to train and run their LLMs.
But neural networks are, after all, neural—they’re inspired by brains. Despite their vastly different appetites for energy and data, large language models and human brains do share a good deal in common. They’re both made up of millions of subcomponents: biological neurons in the case of the brain, simulated “neurons” in the case of networks. They’re the only two things on Earth that can fluently and flexibly produce language. And scientists barely understand how either of them works.
I can testify to those similarities: I came to journalism, and to AI, by way of six years of neuroscience graduate school. It’s a common view among neuroscientists that building brainlike neural networks is one of the most promising paths for the field, and that attitude has started to spread to psychology. Last week, the prestigious journal Nature published a pair of studies showcasing the use of neural networks for predicting how humans and other animals behave in psychological experiments. Both studies propose that these trained networks could help scientists advance their understanding of the human mind. But predicting a behavior and explaining how it came about are two very different things.
In one of the studies, researchers transformed a large language model into what they refer to as a “foundation model of human cognition.” Out of the box, large language models aren’t great at mimicking human behavior—they behave logically in settings where humans abandon reason, such as casinos. So the researchers fine-tuned Llama 3.1, one of Meta’s open-source LLMs, on data from 160 psychology experiments, which involved tasks like choosing from a set of “slot machines” to get the maximum payout or remembering sequences of letters. They called the resulting model Centaur.
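At a high level, the recipe described is ordinary supervised fine-tuning: render each experimental trial and the participant’s response as text, then train the model to predict those responses. Here is a minimal sketch of that pattern using Hugging Face’s transformers library. It is illustrative only: the checkpoint name, the toy “slot machine” transcripts, and the hyperparameters are assumptions, not the Centaur team’s actual data format or training setup.

```python
from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Hypothetical checkpoint choice: the paper used Llama 3.1, but the exact
# model size is an assumption here (Meta's weights are gated and need access).
model_name = "meta-llama/Llama-3.1-8B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship without one
model = AutoModelForCausalLM.from_pretrained(model_name)

# Toy stand-ins for behavioral transcripts: each record describes one trial
# and the choice the human participant actually made.
records = {
    "text": [
        "Machine A has paid out often; machine B rarely. The participant chose A.",
        "Machine A has paid out rarely; machine B often. The participant chose B.",
    ]
}

dataset = Dataset.from_dict(records).map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True,
    remove_columns=["text"],
)

# Standard causal-language-modeling objective: the collator pads each batch
# and copies input tokens to labels, so the model learns to reproduce the
# participants' recorded choices.
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="centaur-sketch", num_train_epochs=1),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```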
Compared with conventional psychological models, which use simple math equations, Centaur did a far better job of predicting behavior. Accurate predictions of how humans respond in psychology experiments are valuable in and of themselves: For example, scientists could use Centaur to pilot their experiments on a computer before recruiting, and paying, human participants. In their paper, however, the researchers propose that Centaur could be more than just a prediction machine. By interrogating the mechanisms that allow Centaur to effectively replicate human behavior, they argue, scientists could develop new theories about the inner workings of the mind.
But some psychologists doubt whether Centaur can tell us much about the mind at all. Sure, it’s better than conventional psychological models at predicting how humans behave—but it also has a billion times more parameters. And just because a model behaves like a human on the outside doesn’t mean that it functions like one on the inside. Olivia Guest, an assistant professor of computational cognitive science at Radboud University in the Netherlands, compares Centaur to a calculator, which can effectively predict the response a math whiz will give when asked to add two numbers. “I don’t know what you would learn about human addition by studying a calculator,” she says.
Even if Centaur does capture something important about human psychology, scientists may struggle to extract any insight from the model’s millions of neurons. Though AI researchers are working hard to figure out how large language models work, they’ve barely managed to crack open the black box. Understanding an enormous neural-network model of the human mind may not prove much easier than understanding the thing itself.
One alternative approach is to go small. The second of the two Nature studies focuses on minuscule neural networks—some containing only a single neuron—that nevertheless can predict behavior in mice, rats, monkeys, and even humans. Because the networks are so small, it’s possible to track the activity of each individual neuron and use that data to figure out how the network is producing its behavioral predictions. And while there’s no guarantee that these models function like the brains they were trained to mimic, they can, at the very least, generate testable hypotheses about human and animal cognition.
There’s a cost to comprehensibility. Unlike Centaur, which was trained to mimic human behavior in dozens of different tasks, each tiny network can only predict behavior in one specific task. One network, for example, is specialized for making predictions about how people choose among different slot machines. “If the behavior is really complex, you need a large network,” says Marcelo Mattar, an assistant professor of psychology and neural science at New York University who led the tiny-network study and also contributed to Centaur. “The compromise, of course, is that now understanding it is very, very difficult.”
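For contrast, the tiny-network approach can be sketched just as briefly. The model below is a toy stand-in, not Mattar’s published architecture: a recurrent network with a single hidden unit that predicts a person’s next slot-machine choice from the previous choice and reward. The class name, sizes, and training data are all invented for illustration.

```python
import torch
import torch.nn as nn

class TinyBanditNet(nn.Module):
    """A one-hidden-unit recurrent network for a two-armed bandit task."""

    def __init__(self):
        super().__init__()
        # A single recurrent unit: its scalar state can be read out and
        # plotted trial by trial, which is the whole point of going small.
        self.rnn = nn.GRU(input_size=2, hidden_size=1, batch_first=True)
        self.readout = nn.Linear(1, 2)  # logits over the two slot machines

    def forward(self, prev_choice, prev_reward):
        # prev_choice, prev_reward: (batch, trials) tensors of 0s and 1s.
        x = torch.stack([prev_choice, prev_reward], dim=-1)
        hidden, _ = self.rnn(x)      # (batch, trials, 1)
        return self.readout(hidden)  # (batch, trials, 2)

# One gradient step on random stand-in data, predicting each next choice
# (torch.roll wraps the final trial around, which is fine for a toy example).
net = TinyBanditNet()
choices = torch.randint(0, 2, (8, 50)).float()
rewards = torch.randint(0, 2, (8, 50)).float()
logits = net(choices, rewards)
targets = torch.roll(choices, shifts=-1, dims=1).long()
loss = nn.functional.cross_entropy(logits.reshape(-1, 2), targets.reshape(-1))
loss.backward()
```

Because the hidden state is a single number per trial, its trajectory can be inspected directly and compared against classic learning rules, which is how networks this small can generate testable hypotheses.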
This trade-off between prediction and understanding is a key feature of neural-network-driven science. (I also happen to be writing a book about it.) Studies like Mattar’s are making some progress toward closing that gap—as tiny as his networks are, they can predict behavior more accurately than traditional psychological models. So is the research into LLM interpretability happening at places like Anthropic. For now, however, our understanding of complex systems—from humans to climate systems to proteins—is lagging farther and farther behind our ability to make predictions about them.
This story originally appeared in The Algorithm, our weekly newsletter on AI.
Fusion energy holds the potential to shift a geopolitical landscape that is currently configured around fossil fuels. Harnessing fusion will deliver the energy resilience, security, and abundance needed for all modern industrial and service sectors. But these benefits will be controlled by the nation that leads in both developing the complex supply chains required and building fusion power plants at scales large enough to drive down economic costs.
The US and other Western countries will have to build strong supply chains across a range of technologies in addition to creating the fundamental technology behind practical fusion power plants. Investing in supply chains and scaling up complex production processes has increasingly been a strength of China’s and a weakness of the West, resulting in the migration of many critical industries from the West to China. With fusion, we run the risk that history will repeat itself. But it does not have to go that way.
The US and Europe were the dominant public funders of fusion energy research and are home to many of the world’s pioneering private fusion efforts. The West has consequently developed many of the basic technologies that will make fusion power work. But in the past five years China’s support of fusion energy has surged, threatening to allow the country to dominate the industry.
The industrial base available to support China’s nascent fusion energy industry could enable it to climb the learning curve much faster and more effectively than the West. Commercialization requires know-how, capabilities, and complementary assets, including supply chains and workforces in adjacent industries. And especially in comparison with China, the US and Europe have significantly under-supported the industrial assets needed for a fusion industry, such as thin-film processing and power electronics.
To compete, the US, allies, and partners must invest more heavily not only in fusion itself—which is already happening—but also in those adjacent technologies that are critical to the fusion industrial base.
China’s trajectory to dominating fusion and the West’s potential route to competing can be understood by looking at today’s most promising scientific and engineering pathway to achieve grid-relevant fusion energy. That pathway relies on the tokamak, a technology that uses a magnetic field to confine ionized gas—called plasma—and ultimately fuse nuclei. This process releases energy that is converted from heat to electricity. Tokamaks consist of several critical systems, including plasma confinement and heating, fuel production and processing, blankets and heat flux management, and power conversion.
A close look at the adjacent industries needed to build these critical systems clearly shows China’s advantage while also providing a glimpse into the challenges of building a fusion industrial base in the US or Europe. China has leadership in three of these six key industries, and the West is at risk of losing leadership in two more. China’s industrial might in thin-film processing, large metal-alloy structures, and power electronics provides a strong foundation to establish the upstream supply chain for fusion.
The importance of thin-film processing is evident in the plasma confinement system. Tokamaks use strong electromagnets to keep the fusion plasma in place, and the magnetic coils must be made from superconducting materials. Rare-earth barium copper oxide (REBCO) superconductors are the highest-performing materials available in sufficient quantity to be viable for use in fusion.
The REBCO industry, which relies on thin-film processing technologies, currently has low production volumes spread across globally distributed manufacturers. However, as the fusion industry grows, the manufacturing base for REBCO will likely consolidate among the industry players who are able to rapidly take advantage of economies of scale. China is today’s world leader in thin-film, high-volume manufacturing for solar panels and flat-panel displays, with the associated expert workforce, tooling sector, infrastructure, and upstream materials supply chain. Without significant attention and investment on the part of the West, China is well positioned to dominate REBCO thin-film processing for fusion magnets.
The electromagnets in a full-scale tokamak are as tall as a three-story building. Structures made using strong metal alloys are needed to hold these electromagnets around the large vacuum vessel that physically contains the magnetically confined plasma. Similar large-scale, complex metal structures are required for shipbuilding, aerospace, oil and gas infrastructure, and turbines. But fusion plants will require new versions of the alloys that are radiation-tolerant, able to withstand cryogenic temperatures, and corrosion-resistant. China’s manufacturing capacity and its metallurgical research efforts position it well to outcompete other global suppliers in making the necessary specialty metal alloys and machining them into the complex structures needed for fusion.
A tokamak also requires large-scale power electronics. Here again China dominates. Similar systems are found in the high-speed rail (HSR) industry, renewable microgrids, and arc furnaces. As of 2024, China had deployed over 48,000 kilometers of HSR. That is three times the length of Europe’s HSR network and 55 times as long as the Acela network in the US, which is slower than HSR. While other nations have a presence, China’s expertise is more recent and is being applied on a larger scale.
But this is not the end of the story. The West still has an opportunity to lead the other three adjacent industries important to the fusion supply chain: cryo-plants, fuel processing, and blankets.
The electromagnets in an operational tokamak need to be kept at cryogenic temperatures of around 20 Kelvin to remain superconducting. This requires large-scale, multi-megawatt cryogenic cooling plants. Here, which country is best positioned to lead is less clear. The two major global suppliers of cryo-plants are Europe-based Linde Engineering and Air Liquide Engineering; the US has Air Products and Chemicals and Chart Industries. But they are not alone: China’s domestic champions in the cryogenic sector include Hangyang Group, SASPG, Kaifeng Air Separation, and SOPC. Each of these regions already has an industrial base that could scale up to meet the demands of fusion.
Fuel production for fusion is a nascent part of the industrial base requiring processing technologies for light-isotope gases—hydrogen, deuterium, and tritium. Some processing of light-isotope gases is already done at small scale in medicine, hydrogen weapons production, and scientific research in the US, Europe, and China. But the scale needed for the fusion industry does not exist in today’s industrial base, presenting a major opportunity to develop the needed capabilities.
Similarly, blankets and heat flux management are an opportunity for the West. The blanket is the medium used to absorb energy from the fusion reaction and to breed tritium. Commercial-scale blankets will require entirely novel technology. To date, no adjacent industries have relevant commercial expertise in liquid lithium, lead-lithium eutectic, or fusion-specific molten salts that are required for blanket technology. Some overlapping blanket technologies are in early-stage development by the nuclear fission industry. As the largest producer of beryllium in the world, the US has an opportunity to capture leadership because that element is a key material in leading fusion blanket concepts. But the use of beryllium must be coupled with technology development programs for the other specialty blanket components.
These six industries will prove critical to scaling fusion energy. In some, such as thin-film processing and large metal-alloy structures, China already has a sizable advantage. Crucially, China recognizes the importance of these adjacent industries and is actively harnessing them in its fusion efforts. For example, China launched a fusion consortium that consists of industrial giants spanning the steel, machine tooling, electric grid, power generation, and aerospace sectors. It will be extremely difficult for the West to catch up in these areas, but policymakers and business leaders must pay attention and try to create robust alternative supply chains.
Cryo-plants, the industrial area where the West is strongest, could continue to be an opportunity for Western leadership. Bolstering Western cryo-plant production by creating demand for natural-gas liquefaction will be a major boon to the future cryo-plant supply chain that will support fusion energy.
The US and European countries also have an opportunity to lead in the emerging industrial areas of fuel processing and blanket technologies. Doing so will require policymakers to work with companies to ensure that public and private funding is allocated to these critical emerging supply chains. Governments may well need to serve as early customers and provide debt financing for significant capital investment. Governments can also do better to incentivize private capital and equity financing—for example, through favorable capital-gains taxation. In lagging areas of thin-film and alloy production, the US and Europe will likely need partners, such as South Korea and Japan, that have the industrial bases to compete globally with China.
The need to connect and capitalize multiple industries and supply chains will require long-term thinking and clear leadership. A focus on the demand side of these complementary industries is essential. Fusion is a decade away from maturation, so its supplier base must be derisked and made profitable in the near term by focusing on other primary demand markets that contribute to our economic vitality. For example, policymakers can support modernization of the grid to bolster domestic demand for power electronics, and domestic semiconductor manufacturing to support thin-film processing.
The West must also focus on the demand for energy production itself. As the world’s largest energy consumer, China will leverage demand from its massive domestic market to climb the learning curve and bolster national champions. This is a strategy that China has wielded with tremendous success to dominate global manufacturing, most recently in the electric-vehicle industry. Taken together, supply- and demand-side investment have been a winning strategy for China.
The competition to lead the future of fusion energy is here. Now is the moment for the US and its Western allies to start investing in the foundational innovation ecosystem needed for a vibrant and resilient industrial base to support it.
Daniel F. Brunner is a co-founder of Commonwealth Fusion Systems and a Partner at Future Tech Partners.
Edlyn V. Levine is the co-founder of a stealth-mode technology startup and an affiliate of the MIT Sloan School of Management.
Fiona E. Murray is a professor of entrepreneurship at the MIT Sloan School of Management and Vice Chair of the NATO Innovation Fund.
Rory Burke is a graduate of MIT Sloan and a former summer scholar with ARPA-E.