Finix, a payment processing company, has launched a WooCommerce plugin that enables WordPress merchants to integrate embedded payments directly into their stores. The plugin lets WooCommerce merchants accept all major credit cards, as well as Apple Pay and bank transfers. Setup via the plugin is said to take only ten minutes before a store can start accepting payments.
“Flexible Payment Methods: Accept major credit and debit cards, Apple Pay, and bank transfers. Offer flexibility customers expect and reduce checkout friction.
Transparent Pricing: Finix uses interchange-plus pricing for clear, detailed fee breakdowns, ideal for high-volume merchants.
Apple Pay Integration: Enable Apple Pay on supported browsers like Safari and Chrome, with customizable button styles and types that blend seamlessly into your storefront.
Customizable Checkout Display: Match your brand’s voice by tailoring the look and language of each payment method for a more intuitive customer experience.
WooCommerce Blocks Checkout Compatible: Fully supports WooCommerce’s new block-based checkout and the classic flow, keeping your store aligned with the latest updates.
Automated Dispute & Bank Return Handling: Reduce operational overhead with automatic order status updates triggered by webhook events.”
Finix is a payment processor that was founded in San Francisco in 2015. It has received funding from major Silicon Valley venture capitalists and is regarded as a rising competitor to companies like Stripe.
Finix claims that merchants report faster payouts using its systems and that it offers a streamlined checkout flow.
An investigation by SEO professional James Brockbank reveals that ChatGPT may be recommending businesses based on content from hacked websites and expired domains.
The findings aren’t a comprehensive study but the result of personal testing and observations. Brockbank, who serves as Managing Director at Digitaloft, says his report emerged from exploring how brands gain visibility in ChatGPT’s responses.
His analysis suggests that some actors are successfully gaming the system by publishing content on compromised or repurposed domains that retain high authority signals.
This content, despite being irrelevant or deceptive, can surface in ChatGPT-generated business recommendations.
“I believe that the more we understand about why certain citations get surfaced, even if these are spammy and manipulative, the better we understand how these new platforms work.”
How Manipulated Content Appears In ChatGPT Responses
Brockbank identified two main tactics that appear to influence ChatGPT’s business recommendations:
1. Hacked Websites
In multiple examples, ChatGPT surfaced gambling recommendations that traced back to legitimate websites that had been compromised.
One case involved a California-based domestic violence attorney whose site was found hosting a listicle about online slots.
Other examples included a United Nations youth coalition website and a U.S. summer camp site. They were both seemingly hijacked to host gambling-related content, including pages using white text on a white background to evade detection.
2. Expired Domains
The second tactic involves acquiring expired domains with strong backlink profiles and rebuilding them to promote unrelated content.
In one case, Brockbank discovered a site with over 9,000 referring domains from sources like BBC, CNN, and Bloomberg. The domain, once owned by a UK arts charity, had been repurposed to promote gambling.
Brockbank explained:
“There’s no question that it’s the site’s authority that’s causing it to be used as a source. The issue is that the domain changed hands and the site totally switched up.”
He also found domains that previously belonged to charities and retailers now being used to publish casino recommendations.
Why This Content Is Surfacing
Brockbank suggests that ChatGPT favors domains with perceived authority and recent publication dates.
Additionally, he finds that ChatGPT’s recommendation system may not sufficiently evaluate whether the content aligns with the original site’s purpose.
Brockbank observed:
“ChatGPT prefers recent sources, and the fact that these listicles aren’t topically relevant to what the domain is (or should be) about doesn’t seem to matter.”
Brockbank acknowledges that being featured in authentic “best of” listicles or media placements can help businesses gain visibility in AI-generated results.
However, leveraging hacked or expired domains to manipulate source credibility crosses an ethical line.
Brockbank writes:
“Injecting your brand or content into a hacked site or rebuilding an expired domain solely to fool a language model into citing it? That’s manipulation, and it undermines the credibility of the platform.”
What This Means
While Brockbank’s findings are based on individual testing rather than a formal study, they surface a real concern: ChatGPT may be citing manipulated sources without fully understanding their origins or context.
The takeaway isn’t just about risk, it’s also about responsibility. As platforms like ChatGPT become more influential in how users discover businesses, building legitimate authority through trustworthy content and earned media will matter more than ever.
At the same time, the investigation highlights an urgent need for companies to improve how these systems detect and filter deceptive content. Until that happens, both users and businesses should approach AI-generated recommendations with a dose of skepticism.
Brockbank concluded:
“We’re not yet at the stage where we can trust ChatGPT recommendations without considering where it’s sourced these from.”
Yoast SEO announced a new feature that brings SEO and readability analysis into Google Docs, allowing publishers and teams to apply search marketing best practices at the moment content is created rather than as an after-the-fact editing step.
Two Functionalities Carry Over To Google Docs
Yoast SEO is providing SEO optimization and readability feedback within the Google Docs editing environment.
SEO feedback uses the familiar traffic light system, offering visual confirmation that the content is search optimized according to Yoast SEO’s content metrics for keywords, structure, and optimization.
The readability analysis offers feedback on paragraph structure, sentence length, and headings to help the writer create engaging content, which is increasingly important as search engines prioritize high-quality content.
According to Yoast SEO:
“The Google Docs add-on tool is available to all Yoast SEO Premium subscribers, offering them a range of advanced optimization tools. For those not yet subscribed to Yoast Premium, the add-on is also available as a single purchase, making it accessible to a broader audience.
For those managing multiple team members, additional Google accounts can be linked for just $5 a month per account or annually for a 10% discount ($54). This flexibility ensures that anyone who writes content and in-house marketing teams managing multiple projects can benefit from high-quality SEO guidance.”
This new offering is an interesting step for Yoast SEO. Previously known as the developer of the Yoast SEO WordPress plugin, the company expanded to Shopify and is now breaking out of the CMS paradigm to cover the optimization work that happens before content reaches the CMS.
Most brands don’t know they’re wasting money on branded ads. Are you one of them?
What if your Google Ads strategy is quietly draining your budget? Many advertisers are paying high CPCs even when there’s no real competition. It’s often because they’re unknowingly bidding against themselves.
Join BrandPilot AI on July 17, 2025 for a live session with Jenn Paterson and John Beresford, as they explain The Uncontested Paid Search Problem and how to stop it before it eats into your performance.
In this data-backed session, you’ll learn:
Why CPCs rise even without competitor bidding
How to detect branded ad waste in your own account
What this hidden flaw is costing your brand
Tactical strategies to reclaim lost budget and improve your results
Why this matters:
Brands are overspending on Google Ads without knowing the real reason. If you’re running branded search campaigns, this session will show you how to identify and fix what’s costing you the most.
Register today to protect your spend and improve performance. If you can’t attend live, sign up anyway and we’ll send you the full recording after the event.
Internet Marketing Ninjas has been acquired by SEO consultancy Previsible, an industry leader co-founded by a former head of SEO at eBay. The acquisition brings link building and digital PR expertise to Previsible. While both companies are now under shared ownership, they will continue to operate as separate brands.
Internet Marketing Ninjas
Founded in 1999 by Jim Boykin as We Build Pages, Internet Marketing Ninjas has a story of steady innovation and pivoting in response to changes brought by Google. In my opinion, Jim’s talent was his ability to scale the latest tactics so they could be offered as services to a large number of clients, and to nimbly ramp up new strategies in response to changes at Google. The names of the people he employed are a who’s who of legendary marketers.
In the early days of SEO, when reciprocal linking was all the rage, it was Jim Boykin who became known as a bulk provider of that service, and when directories became a hot service, he was able to scale that tactic and make it easy for business owners to pick up links fast. Over time, providing links became increasingly difficult, yet Jim Boykin kept innovating with strategies that made it easy for customers to attain links. I’ve long been an admirer of Boykin because he is the rare individual who can be both a brilliant SEO strategist and a savvy business person.
Jordan Koene, CEO and co-founder at Previsible, commented:
“Previsible believes that the future of discovery and search lies at the intersection of trust and visibility. Our acquisition of Internet Marketing Ninjas brings one of the most experienced trusted-link and digital PR teams into our ecosystem. As search continues to evolve beyond keywords into authority, reputation, and real-world relevance, link strategies are essential for brands to stand out.”
Previsible and Internet Marketing Ninjas will continue to operate as separate brands, leveraging Boykin’s existing team for their expertise.
Jim Boykin explained:
“Combining forces with Previsible kicks off an incredibly exciting new chapter for Internet Marketing Ninjas. We’re not just an SEO company anymore, we’re at the forefront of the future of digital visibility. Together with Previsible, we’re leading the charge in both search and AI-driven discovery.
By merging decades of deep SEO expertise with bold, forward-thinking innovation, we’re meeting the future of online marketing head-on. From Google’s AI Overviews to ChatGPT and whatever comes next, our newly united team is perfectly positioned to help brands get found, build trust, and be talked about across the entire digital landscape. I’m absolutely stoked about what we’re building together and how we’re going to shape the next era of internet marketing.”
Previsible’s acquisition of Internet Marketing Ninjas combines the two firms’ long-standing link building experience while retaining the distinct brands and teams that make each consultancy a search marketing leader. The partnership will enable clients to increase visibility by bringing the expertise of both companies together.
YouTube has responded to concerns surrounding its upcoming monetization policy update, clarifying that the July 15 changes are aimed at improving the detection of inauthentic content.
The update isn’t a crackdown on popular formats like reaction videos or clip compilations.
The clarification comes from Renee Richie, a creator liaison at YouTube, after a wave of confusion and concern followed the initial announcement.
“If you’re seeing posts about a July 2025 update to the YouTube Partner Program monetization policies and you’re concerned it’ll affect your reaction or clips or other type of channel. This is a minor update to YouTube’s long-standing YPP policies to help better identify when content is mass-produced or repetitive.”
Clarifying What’s Changing
Richie explained that the types of content targeted by the update, mass-produced and repetitious material, were already ineligible for monetization under the YouTube Partner Program (YPP).
The update doesn’t change the rules but is intended to enhance how YouTube enforces them.
That distinction is important: while the policy itself isn’t new, enforcement may reach creators who were previously flying under the radar.
Why Creators Were Concerned
YouTube’s original announcement said the platform would “better identify mass-produced and repetitious content,” but didn’t clearly define those terms or how the update would be applied.
This vagueness led to speculation that reaction videos, clip compilations, or commentary content might be targeted, especially if those formats reuse footage or follow repetitive structures.
Richie’s clarification helps narrow the scope of the update, but it doesn’t explicitly exempt all reaction or clips channels. Channels relying on recycled content without significant added value may run into issues.
Understanding The Policy Context
YouTube’s Partner Program has always required creators to produce “original” and “authentic” content to qualify for monetization.
The July 15 update reiterates that standard, while providing more clarity around what the platform considers inauthentic today.
According to the July 2 announcement:
“On July 15, 2025, YouTube is updating our guidelines to better identify mass-produced and repetitious content. This update better reflects what ‘inauthentic’ content looks like today.”
YouTube emphasized two patterns in particular:
Mass-produced content
Repetitious content
While some reaction or commentary videos could fall under these categories, Richie’s statement suggests that the update is not meant to penalize formats that include meaningful creative input.
What This Means
Transformative content, such as reactions, commentary, and curated clips with original insights or editing, is still eligible for monetization.
But creators using these formats should ensure they’re offering something new or valuable in each upload.
The update appears aimed at:
Auto-generated or templated videos with minimal variation
Reposted or duplicated content with little editing or context
Channels that publish near-identical videos in large quantities
For creators who invest in original scripting, commentary, editing, or creative structure, this update likely won’t require changes. But those leaning on low-effort or highly repetitive content strategies may be at increased risk of losing monetization.
Looking Ahead
The updated policy will take effect on July 15. Channels that continue to publish content flagged as mass-produced or repetitive after this date may face removal from the Partner Program.
While Richie’s clarification aims to calm fears, it doesn’t override the enforcement language in the original announcement. Creators still have time to review their libraries and adjust strategies to ensure compliance.
For most of its history, SEO has been a reactive discipline, being asked to “make it rank” once a site is built, with little input into the process.
Even crazier, most SEO professionals are assigned a set of key performance indicators (KPIs) for which they are accountable, metrics tied to visibility, engagement, and revenue.
Still, they have no real control over the underlying systems that affect them. These metrics often rely on the performance of disconnected teams, including content, engineering, brand, and product, which don’t always share the same objectives.
When my previous agency, Global Strategies, was acquired by Ogilvy, I recommended that our team be viewed as building inspectors, not just an SEO package upsell added at the end, but involved at key phases when architects, engineers, and tradespeople had laid out the structural components.
Ideally, we’d come in after the site framing (wireframes) was complete, reviewing the plumbing (information architecture), electrical (navigation and links), and foundation (technical performance), but before the drywall and paint obscured what lies beneath.
We’d validate that the right materials were used and that construction followed a standard fit for long-term performance.
However, in reality, we were rarely invited into the planning stages because that was creative, and we were just SEO. We were usually brought in only after launch, tasked with fixing what had already been buried behind a visually appealing design.
Despite fighting for it, I was never a complete fan of this model; it made sense in the early days of search, when websites were simple, and ranking factors were more forgiving.
SEO practitioners identified crawl issues, adjusted metadata, optimized titles, fixed broken links, and retrofitted pages with keywords and internal links.
That said, I have long advocated for eliminating the need for most SEO actions by integrating the fixes into the roles and workflows that initially broke them.
Through education, process change, and content management system (CMS) innovation, much of what SEO fixes could, and should, become standard practice.
However, this has been a challenging sell, as SEO has often been viewed as less important than design, development, or content creation.
It was easier to assign SEO the role of cleanup crew rather than bake best practices into upstream systems and roles. We worked around CMS limitations, cleaned up after redesigns, and tried to reverse-engineer what Google wanted from the outside in.
But that role of identifying and fixing defects is no longer enough. And in the AI-driven search environment, it’s becoming obsolete.
Search Has Changed. Our Role Must Too.
Search engines today do far more than index and rank webpages. They extract answers, synthesize responses, and generate real-time content previews.
What used to be a linear search journey (query > list of links > website) has become a multi-layered ecosystem of zero-click answers, AI summaries, featured snippets, and voice responses.
Traditional SEO tactics, indexability, content relevance, and backlinks still matter in this environment, but only as part of a larger system.
The new currency of visibility is semantic clarity, machine-readability, and multi-system integration. SEO is no longer about optimizing a page. It’s about orchestrating a system.
Meeting the demands of this shift requires us to transition from being just an inspector to becoming the Commissioning Authority (CxA).
What Is A Commissioning Authority?
In modern architecture and construction, a Commissioning Authority is a specialized professional who ensures that all building systems, including HVAC, electrical, plumbing, safety, and lighting, function as intended in combination.
They are brought in not just to inspect but also to validate, test, and orchestrate performance.
They work on behalf of the building owner, aligning the construction output with the original design intent and operational goals. They look at interoperability, performance efficiency, long-term sustainability, and documentation.
They are not passive checkers. They are active enablers of success.
Why SEO Needs Commissioning Authorities
The modern website is no longer a standalone asset. It is a network of interconnected systems.
Today’s SEO professional, whatever the latest alphabet soup acronym du jour, and especially tomorrow’s, must be a Commissioning Authority for these systems. That means:
Being involved at the blueprint stage, not just post-launch.
Advocating for search visibility as a performance outcome.
Ensuring that semantic signals, not just visual elements, are embedded in every page.
Testing and validating that the site performs in AI environments, not just traditional search engine results pages (SERPs).
The Rise Of The Relevance Engineer
A key function within this evolved CxA role is that of the Relevance Engineer, a concept and term introduced by Mike King of iPullRank.
Mike has been one of the most vocal and insightful leaders on the transformation of SEO in the AI era, and his view is clear: The discipline must fundamentally evolve, both in practice and in how it is positioned within organizations.
Mike King’s perspective underscores that treating AI-driven search as simply an extension of traditional SEO is dangerously misguided.
Instead, we must embrace a new function, Relevance Engineering, which focuses on optimizing for semantic alignment, passage-level competitiveness, and probabilistic rankings, rather than deterministic keyword-based tactics.
The Relevance Engineer ensures:
Each content element is structured and chunked for generative AI consumption.
Content addresses layered user intent, from informational to transactional.
Schema markup and internal linking reinforce topical authority and entity associations.
The site’s architecture supports passage-level understanding and AI summarization.
In many ways, the Relevance Engineer is the semantic strategist of the SEO team, working hand-in-hand with designers, developers, and content creators to ensure that relevance is not assumed but engineered.
In construction terms, this might resemble a systems integration specialist. This expert ensures that electrical, plumbing, HVAC, and automation systems function individually and operate cohesively within an innovative building environment.
Relevance Engineering is more than a title; it’s a mindset shift. It emphasizes that SEO must now live at the intersection of information science, user experience, and machine interpretability.
From Inspector To CxA: How The Role Shifts
SEO Pillar | Old Role: Building Inspector | New Role: Commissioning Authority
Indexability | Check crawl blocks after build | Design architecture for accessibility and rendering
Relevance | Patch in keywords post-launch | Map content to entity models and query intent upfront, guided by a Relevance Engineer
Authority | Chase links to weak content | Build a structured reputation and concept ownership
Clickability | Tweak titles and meta descriptions | Structure content for AI previews, snippets, and voice answers
User Experience | Flag issues in testing | Embed UX, speed, and clarity into the initial design
Looking Ahead: The Next Generation Of SEO
As AI continues to reshape search behavior, SEO pros must adapt again. We will need to:
Understand how content is deconstructed and repackaged by large language models (LLMs).
Ensure that our information is structured, chunked, and semantically aligned to be eligible for synthesis.
Advocate for knowledge modeling, not just keyword optimization.
Encourage cross-functional integration between content, engineering, design, and analytics.
The next generation of SEO leaders will not be optimization specialists.
They will be systems thinkers, semantic strategists, digital performance architects, storytellers, performance coaches, and importantly, master negotiators to advocate and steer the necessary organizational, infrastructural, and content changes to thrive.
They will also be force multipliers – individuals or teams who amplify the effectiveness of everyone else in the process.
By embedding structured, AI-ready practices into the workflow, they enable content teams, developers, and marketers to do their jobs better and more efficiently.
The Relevance Engineer and Commissioning Authority roles are not just tactical additions but strategic leverage points that unlock exponential impact across the digital organization.
Final Thought
Too much article space has been wasted arguing over what to call this new era – whether SEO is dead, what the acronym should be, or what might or might not be part of the future.
Meanwhile, far too little attention has been devoted to the structural and intellectual shifts organizations must make to remain competitive in a search environment reshaped by AI.
If we, as an industry, do not start changing the rules, roles, and mindset now, we’ll again be scrambling when the CEO demands to know why the company missed profitability targets, only to realize we’re buying back traffic we should have earned.
We’ve spent 30 years trying to retrofit what others built into something functional for search engines – pushing massive boulders uphill to shift monoliths into integrated digital machines. That era is over.
The brands that will thrive in the AI search era are those that elevate SEO from a reactive function to a strategic discipline with a seat at the planning table.
The professionals who succeed will be those who speak the language of systems, semantics, and sustained performance – and who take an active role in shaping the digital infrastructure.
The future of SEO is not about tweaking; it’s about taking the reins. It’s about stepping into the role of Commissioning Authority, aligning stakeholders, systems, and semantics.
And at its core, it will be driven by the precision of relevance engineering, and amplified by the force multiplier effect of integrated, strategic influence.
For all the noise around keywords, content strategy, and AI-generated summaries, technical SEO still determines whether your content gets seen in the first place.
You can have the most brilliant blog post or perfectly phrased product page, but if your site architecture looks like an episode of “Hoarders” or your crawl budget is wasted on junk pages, you’re invisible.
If you’re still treating it like a one-time setup or a background task for your dev team, you’re leaving visibility (and revenue) on the table.
This isn’t about obsessing over Lighthouse scores or chasing 100s in Core Web Vitals. It’s about making your site easier for search engines to crawl, parse, and prioritize, especially as AI transforms how discovery works.
Crawl Efficiency Is Your SEO Infrastructure
Before we talk tactics, let’s align on a key truth: Your site’s crawl efficiency determines how much of your content gets indexed, updated, and ranked.
Crawl efficiency means how well search engines can access and process the pages that actually matter.
The longer your site’s been around, the more likely it’s accumulated detritus – outdated pages, redirect chains, orphaned content, bloated JavaScript, pagination issues, parameter duplicates, and entire subfolders that no longer serve a purpose. Every one of these gets in Googlebot’s way.
Improving crawl efficiency doesn’t mean “getting more crawled.” It means helping search engines waste less time on garbage so they can focus on what matters.
Technical SEO Areas That Actually Move The Needle
Let’s skip the obvious stuff and get into what’s actually working in 2025, shall we?
1. Optimize For Discovery, Not “Flatness”
There’s a long-standing myth that search engines prefer flat architecture. Let’s be clear: Search engines prefer accessible architecture, not shallow architecture.
A deep, well-organized structure doesn’t hurt your rankings. It helps everything else work better.
Logical nesting supports crawl efficiency, elegant redirects, and robots.txt rules, and makes life significantly easier when it comes to content maintenance, analytics, and reporting.
Fix it: Focus on internal discoverability.
If a critical page is five clicks away from your homepage, that’s the problem, not whether the URL lives at /products/widgets/ or /docs/api/v2/authentication.
Use curated hubs, cross-linking, and HTML sitemaps to elevate key pages. But resist flattening everything into the root – that’s not helping anyone.
Example: A product page like /products/waterproof-jackets/mens/blue-mountain-parkas provides clear topical context, simplifies redirects, and enables smarter segmentation in analytics.
By contrast, dumping everything into the root turns Google Analytics 4 analysis into a nightmare.
Want to measure how your documentation is performing? That’s easy if it all lives under /documentation/. Nearly impossible if it’s scattered across flat, ungrouped URLs.
Pro tip: For blogs, I prefer categories or topical tags in the URL (e.g., /blog/technical-seo/structured-data-guide) instead of timestamps.
Dated URLs make content look stale – even if it’s fresh – and provide no value in understanding performance by topic or theme.
In short: organized ≠ buried. Smart nesting supports clarity, crawlability, and conversion tracking. Flattening everything for the sake of myth-based SEO advice just creates chaos.
2. Eliminate Crawl Waste
Google has a crawl budget for every site. The bigger and more complex your site, the more likely you’re wasting that budget on low-value URLs.
Common offenders:
Calendar pages (hello, faceted navigation).
Internal search results.
Staging or dev environments accidentally left open.
Infinite scroll that generates URLs but not value.
Endless UTM-tagged duplicates.
Fix it: Audit your crawl logs.
Disallow junk in robots.txt. Use canonical tags correctly. Prune unnecessary indexable pages. And yes, finally remove that 20,000-page tag archive that no one – human or robot – has ever wanted to read.
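As a minimal sketch of what that audit can look like, the script below tallies where Googlebot actually spends its requests and flags hits on junk URLs. The log format assumed here is the common combined access-log layout, and the junk patterns are hypothetical examples; adjust both to your own server and site.

```python
import re
from collections import Counter

# Hypothetical junk patterns -- tune these to your own site.
JUNK_PATTERNS = [
    re.compile(r"[?&]utm_"),    # UTM-tagged duplicates
    re.compile(r"^/search\?"),  # internal search results
    re.compile(r"^/tag/"),      # sprawling tag archives
]

# Extracts the request path and user agent from a combined-log-format line.
LOG_LINE = re.compile(r'"(?:GET|POST) (?P<path>\S+) HTTP/[\d.]+".*"(?P<ua>[^"]*)"$')

def audit_crawl_log(lines):
    """Count Googlebot hits per top-level site section and flag junk URLs."""
    sections, junk_hits = Counter(), 0
    for line in lines:
        m = LOG_LINE.search(line)
        if not m or "Googlebot" not in m.group("ua"):
            continue
        path = m.group("path")
        if any(p.search(path) for p in JUNK_PATTERNS):
            junk_hits += 1
        # Group by first path segment, e.g. /blog/post-1 -> /blog
        section = "/" + path.lstrip("/").split("/", 1)[0].split("?", 1)[0]
        sections[section] += 1
    return sections, junk_hits
```

If the junk share is a large fraction of total Googlebot hits, that is crawl budget being spent on pages you never wanted indexed.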
3. Fix Your Redirect Chains
Redirects are often slapped together in emergencies and rarely revisited. But every extra hop adds latency, wastes crawl budget, and can fracture link equity.
Fix it: Run a redirect map quarterly.
Collapse chains into single-step redirects. Wherever possible, update internal links to point directly to the final destination URL instead of bouncing through a series of legacy URLs.
Clean redirect logic makes your site faster, clearer, and far easier to maintain, especially when doing platform migrations or content audits.
And yes, elegant redirect rules require structured URLs. Flat sites make this harder, not easier.
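The chain-collapsing step itself is mechanical once you have a redirect map. A sketch, assuming the map is held as a simple old-URL-to-new-URL dictionary (your real map may live in server config or a CMS):

```python
def collapse_redirects(redirects):
    """Collapse multi-hop redirect chains into single-step redirects.

    `redirects` maps each source URL to its immediate target, e.g.
    {"/old": "/interim", "/interim": "/final"} collapses to
    {"/old": "/final", "/interim": "/final"}.
    """
    collapsed = {}
    for source in redirects:
        seen, target = {source}, redirects[source]
        # Follow the chain until we reach a URL that no longer redirects.
        while target in redirects:
            if target in seen:
                raise ValueError(f"Redirect loop involving {target}")
            seen.add(target)
            target = redirects[target]
        collapsed[source] = target
    return collapsed
```

Running this quarterly against your redirect map surfaces both the chains to flatten and any loops that crept in during migrations.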
4. Don’t Hide Links Inside JavaScript
Google can render JavaScript, but large language models generally don’t. And even Google doesn’t render every page immediately or consistently.
If your key links are injected via JavaScript or hidden behind search boxes, modals, or interactive elements, you’re choking off both crawl access and AI visibility.
Fix it: Expose your navigation, support content, and product details via crawlable, static HTML wherever possible.
LLMs like those powering AI Overviews, ChatGPT, and Perplexity don’t click or type. If your knowledge base or documentation is only accessible after a user types into a search box, LLMs won’t see it – and won’t cite it.
Real talk: If your official support content isn’t visible to LLMs, they’ll pull answers from Reddit, old blog posts, or someone else’s guesswork. That’s how incorrect or outdated information becomes the default AI response for your product.
Solution: Maintain a static, browsable version of your support center. Use real anchor links, not JavaScript-triggered overlays. Make your help content easy to find and even easier to crawl.
Invisible content doesn’t just miss out on rankings. It gets overwritten by whatever is visible. If you don’t control the narrative, someone else will.
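One way to sanity-check this is to look at what a non-rendering crawler actually receives: parse the raw HTML your server returns and list the anchor links present before any JavaScript runs. A minimal sketch using only the Python standard library (the fetching step is left out; feed it your server's HTML response):

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect href values from <a> tags in static HTML --
    roughly what a crawler or LLM sees without executing JavaScript."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            # Skip fragment-only and javascript: pseudo-links.
            if href and not href.startswith(("#", "javascript:")):
                self.links.append(href)

def static_links(html):
    parser = LinkExtractor()
    parser.feed(html)
    return parser.links
```

If a key support article's URL never appears in this list, only a JavaScript-capable renderer can reach it, and most LLM crawlers will never see it.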
5. Handle Pagination And Parameters With Intention
Infinite scroll, poorly handled pagination, and uncontrolled URL parameters can clutter crawl paths and fragment authority.
It’s not just an indexing issue. It’s a maintenance nightmare and a signal dilution risk.
Fix it: Prioritize crawl clarity and minimize redundant URLs.
While rel="next"/rel="prev" still gets thrown around in technical SEO advice, Google retired support years ago, and most content management systems don’t implement it correctly anyway.
Instead, focus on:
Using crawlable, path-based pagination formats (e.g., /blog/page/2/) instead of query parameters like ?page=2. Google often crawls but doesn’t index parameter-based pagination, and LLMs will likely ignore it entirely.
Ensuring paginated pages contain unique or at least additive content, not clones of page one.
Avoiding canonical tags that point every paginated page back to page one, which tells search engines to ignore the rest of your content.
Using robots.txt disallow rules or a meta noindex tag for thin or duplicate parameter combinations (especially in filtered or faceted listings) – but not both on the same URLs, since a crawler blocked by robots.txt never sees the noindex.
Remembering that Google retired the URL Parameters tool from Search Console in 2022, so parameter behavior is now controlled through your site architecture, robots.txt, and canonical tags rather than console settings.
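A compact example of those crawl rules for a hypothetical store (all paths and parameter names are illustrative):

```
# robots.txt — keep crawlers out of thin, parameterized duplicates
# (faceted filters like ?sort= and ?color= on an example store)
User-agent: *
Disallow: /*?sort=
Disallow: /*?color=

# Meanwhile, /blog/page/2/ keeps a self-referencing canonical in its HTML:
#   <link rel="canonical" href="https://www.example.com/blog/page/2/">
```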
Pro tip: Don’t rely on client-side JavaScript to build paginated lists. If your content is only accessible via infinite scroll or rendered after user interaction, it’s likely invisible to both search crawlers and LLMs.
Good pagination quietly supports discovery. Bad pagination quietly destroys it.
Crawl Optimization And AI: Why This Matters More Than Ever
You might be wondering, “With AI Overviews and LLM-powered answers rewriting the SERP, does crawl optimization still matter?”
Yes. More than ever.
Why? AI-generated summaries still rely on indexed, trusted content. If your content doesn’t get crawled, it doesn’t get indexed. If it’s not indexed, it doesn’t get cited. And if it’s not cited, you don’t exist in the AI-generated answer layer.
AI search agents (Google, Perplexity, ChatGPT with browsing) don’t pull full pages; they extract chunks of information. Paragraphs, sentences, lists. That means your content architecture needs to be extractable. And that starts with crawlability.
If you want to understand how that content gets interpreted – and how to structure yours for maximum visibility – this guide on how LLMs interpret content breaks it down step by step.
Remember, you can’t show up in AI Overviews if Google can’t reliably crawl and understand your content.
Bonus: Crawl Efficiency For Site Health
Efficient crawling is more than an indexing benefit. It’s a canary in the coal mine for technical debt.
If your crawl logs show thousands of pages that are no longer relevant, or crawlers spending 80% of their time on pages you don’t care about, your site is disorganized. That’s the signal.
Clean it up, and you’ll improve everything from performance to user experience to reporting accuracy.
What To Prioritize This Quarter
If you’re short on time and resources, focus here:
Crawl Budget Triage: Review crawl logs and identify where Googlebot is wasting time.
Internal Link Optimization: Ensure your most important pages are easily discoverable.
Remove Crawl Traps: Close off dead ends, duplicate URLs, and infinite spaces.
JavaScript Rendering Review: Use tools like Google’s URL Inspection Tool to verify what’s visible.
Eliminate Redirect Hops: Especially on money pages and high-traffic sections.
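To make the crawl budget triage step concrete, here is a minimal sketch that summarizes where Googlebot spends its requests in a combined-format access log. The sample log lines, paths, and the simple substring user-agent check are all illustrative assumptions; production triage should also verify Googlebot via reverse DNS.

```python
import re
from collections import Counter
from urllib.parse import urlsplit

# Combined log format: IP - - [date] "METHOD /path HTTP/1.1" status size "referer" "user-agent"
LOG_LINE = re.compile(
    r'"(?:GET|HEAD) (?P<url>\S+) HTTP/[\d.]+" (?P<status>\d{3}) .*"(?P<agent>[^"]*)"$'
)

def googlebot_summary(lines):
    """Count Googlebot hits per path and measure the share wasted on parameterized URLs."""
    paths = Counter()
    parameterized = 0
    total = 0
    for line in lines:
        m = LOG_LINE.search(line)
        if not m or "Googlebot" not in m.group("agent"):
            continue  # naive check; real triage should verify Googlebot via reverse DNS
        total += 1
        parts = urlsplit(m.group("url"))
        if parts.query:
            parameterized += 1  # ?sort=, ?page=, faceted filters, etc.
        paths[parts.path] += 1
    return {
        "total": total,
        "parameterized_share": parameterized / total if total else 0.0,
        "top_paths": paths.most_common(10),
    }

# Illustrative sample lines, not real traffic
sample = [
    '1.2.3.4 - - [01/Jan/2025:00:00:00 +0000] "GET /blog/page/2/ HTTP/1.1" 200 512 "-" "Mozilla/5.0 (compatible; Googlebot/2.1)"',
    '1.2.3.4 - - [01/Jan/2025:00:00:01 +0000] "GET /shop?color=red&sort=price HTTP/1.1" 200 512 "-" "Mozilla/5.0 (compatible; Googlebot/2.1)"',
    '5.6.7.8 - - [01/Jan/2025:00:00:02 +0000] "GET /shop HTTP/1.1" 200 512 "-" "Mozilla/5.0"',
]
report = googlebot_summary(sample)
```

A high `parameterized_share` in a report like this is one quick signal that crawl budget is leaking into faceted or duplicate URLs.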
These are not theoretical improvements. They translate directly into better rankings, faster indexing, and more efficient content discovery.
TL;DR: Keywords Matter Less If You’re Not Crawlable
Technical SEO isn’t the sexy part of search, but it’s the part that enables everything else to work.
If you’re not prioritizing crawl efficiency, you’re asking Google to work harder to rank you. And in a world where AI-powered search demands clarity, speed, and trust – that’s a losing bet.
Filing link disavows is generally a futile way to deal with spammy links, but disavows are useful for dealing with unnatural links that an SEO or a publisher is responsible for creating, a situation that can require urgent action. But how long does Google take to process them? Someone asked John Mueller that exact question, and his answer provides insight into how link disavows are handled internally at Google.
Google’s Link Disavow Tool
The link disavow tool is a way for publishers and SEOs to manage unwanted backlinks that they don’t want Google to count against them. It literally means that the publisher disavows the links.
The tool was created by Google in response to requests by SEOs for an easy way to disavow paid links they were responsible for obtaining and were unable to remove from the websites on which they were placed. The link disavow tool is accessible via Google Search Console and enables users to upload a plain text file listing the URLs or domains whose links they don’t want counted against them in Google’s index.
Google’s official guidance for the disavow tool has always been that it’s for use by SEOs and publishers who want to disavow paid or otherwise unnatural links that they are responsible for obtaining and are unable to have removed. Google expressly says that the vast majority of sites do not need to use the tool, especially for low-quality links that they had nothing to do with.
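For reference, the disavow file is a plain UTF-8 .txt upload with one entry per line, per Google’s documented format: a `domain:` prefix disavows every link from that site, a bare URL disavows a single page, and lines beginning with `#` are comments. The domains below are placeholders.

```
# Paid placements we were unable to get removed
domain:spammy-directory.example
domain:paid-links.example

# Disavow one specific page rather than a whole site
https://example.org/old-sponsored-post/
```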
How Google Processes Link Disavow Files
A person asked Mueller on Blue Sky for details about how Google processes links newly added to a disavow file.
“When we add domains to the disavow, i.e top up the list. Can I assume the new domains are treated separately as new additions.
You don’t reprocess the whole thing?”
John Mueller answered that the order of the domains and URLs on the list didn’t matter.
His response:
“The order in the disavow file doesn’t matter. We don’t process the file per-se (it’s not an immediate filter of “the index”), we take it into account when we recrawl other sites naturally.”
The answer is interesting because he says that Google doesn’t process the link disavow file “per se”; what he likely means is that the file isn’t acted on at the moment of upload. The “filtering” of a disavowed link happens later, when the site hosting it is next crawled.
So another way to look at it is that uploading the file doesn’t trigger anything on its own, but the data it contains is acted upon during the normal course of crawling.
If you were told that the odds of something were 3.1%, it really wouldn’t seem like much. But for the people charged with protecting our planet, it was huge.
On February 18, astronomers determined that a 130- to 300-foot-long asteroid had a 3.1% chance of crashing into Earth in 2032. Never had an asteroid of such dangerous dimensions stood such a high chance of striking the planet. For those following this developing story in the news, the revelation was unnerving. For many scientists and engineers, though, it turned out to be—despite its seriousness—a little bit exciting.
While possible impact locations included patches of empty ocean, the space rock, called 2024 YR4, also had several densely populated cities in its possible crosshairs, including Mumbai, Lagos, and Bogotá. If the asteroid did in fact hit such a metropolis, the best-case scenario was severe damage; the worst case was outright, total ruin. And for the first time, a group of United Nations–backed researchers began to have high-level discussions about the fate of the world: If this asteroid was going to hit the planet, what sort of spaceflight mission might be able to stop it? Would they ram a spacecraft into it to deflect it? Would they use nuclear weapons to try to swat it away or obliterate it completely?
At the same time, planetary defenders all over the world crewed their battle stations to see if we could avoid that fate—and despite the sometimes taxing new demands on their psyches and schedules, they remained some of the coolest customers in the galaxy. “I’ve had to cancel an appointment saying, I cannot come—I have to save the planet,” says Olivier Hainaut, an astronomer at the European Southern Observatory and one of those who tracked down 2024 YR4.
Then, just as quickly as history was made, experts declared that the danger had passed. On February 24, asteroid trackers issued the all-clear: Earth would be spared, just as many planetary defense researchers had felt assured it would.
How did they do it? What was it like to track the rising (and rising and rising) danger of this asteroid, and to ultimately determine that it’d miss us?
This is the inside story of how, over a span of just two months, a sprawling network of global astronomers found, followed, mapped, planned for, and finally dismissed 2024 YR4, the most dangerous asteroid ever found—all under the tightest of timelines and, for just a moment, with the highest of stakes.
“It was not an exercise,” says Hainaut. This was the real thing: “We really [had] to get it right.”
IN THE BEGINNING
December 27, 2024
THE ASTEROID TERRESTRIAL-IMPACT LAST ALERT SYSTEM, HAWAII
Long ago, an asteroid in the space-rock highway between Mars and Jupiter felt a disturbance in the force: the gravitational pull of Jupiter itself, king of the planets. After some wobbling back and forth, this asteroid was thrown out of the belt, skipped around the sun, and found itself on an orbit that overlapped with Earth’s own.
“I was the first one to see the detections of it,” Larry Denneau, of the University of Hawai‘i, recalls. “A tiny white pixel on a black background.”
Denneau is one of the principal investigators at the NASA-funded Asteroid Terrestrial-impact Last Alert System (ATLAS) telescopic network. It may have been just two days after Christmas, but he followed procedure as if it were any other day of the year and sent the observations of the tiny pixel onward to another NASA-funded facility, the Minor Planet Center (MPC) in Cambridge, Massachusetts.
There’s an alternate reality in which none of this happened. Fortunately, in our timeline, various space agencies—chiefly NASA, but also the European Space Agency and the Japan Aerospace Exploration Agency—invest millions of dollars every year in asteroid-spotting efforts.
And while multiple nations host observatories capable of performing this work, the US clearly leads the way: Its planetary defense program provides funding to a suite of telescopic facilities solely dedicated to identifying potentially hazardous space rocks. (At least, it leads the way for the moment. The White House’s proposal for draconian budget cuts to NASA and the National Science Foundation mean that several observatories and space missions linked to planetary defense are facing funding losses or outright terminations.)
Astronomers working at these observatories are tasked with finding threatening asteroids before they find us—because you can’t fight what you can’t see. “They are the first line of planetary defense,” says Kelly Fast, the acting planetary defense officer at NASA’s Planetary Defense Coordination Office in Washington, DC.
ATLAS is one part of this skywatching project, and it consists of four telescopes: two in Hawaii, one in Chile, and another in South Africa. They don’t operate the way you’d think, with astronomers peering through them all night. Instead, they operate “completely robotically and automatically,” says Denneau. Driven by coding scripts that he and his colleagues have developed, these mechanical eyes work in harmony to watch out for any suspicious space rocks. Astronomers usually monitor their survey of the sky from a remote location.
ATLAS telescopes are small, so they can’t see particularly distant objects. But they have a wide field of view, allowing them to see large patches of space at any one moment. “As long as the weather is good, we’re constantly monitoring the night sky, from the North Pole to the South Pole,” says Denneau.
Larry Denneau is a principal investigator at the Asteroid Terrestrial-impact Last Alert System telescopic network.
COURTESY PHOTO
If they detect the starlight reflecting off a moving object, an operator, such as Denneau, gets an alert and visually verifies that the object is real and not some sort of imaging artifact. When a suspected asteroid (or comet) is identified, the observations are sent to the MPC, which is home to a bulletin board featuring (among other things) orbital data on all known asteroids and comets.
If the object isn’t already listed, a new discovery is announced, and other astronomers can perform follow-up observations.
In just the past few years, ATLAS has detected more than 1,200 asteroids with near-Earth orbits. Finding ultimately harmless space rocks is routine work—so much so that when the new near-Earth asteroid was spotted by ATLAS’s Chilean telescope that December day, it didn’t even raise any eyebrows.
Denneau had simply been sitting at home, doing some late-night work on his computer. At the time, of course, he didn’t know that his telescope had just spied what would soon become a history-making asteroid—one that could alter the future of the planet.
The MPC quickly confirmed the new space rock hadn’t already been “found,” and astronomers gave it a provisional designation: 2024 YR4.
CATALINA SKY SURVEY, ARIZONA
Around the same time, the discovery was shared with another NASA-funded facility: the Catalina Sky Survey, a nest of three telescopes in the Santa Catalina Mountains north of Tucson that works out of the University of Arizona. “We run a very tight operation,” says Kacper Wierzchoś, one of its comet and asteroid spotters. Unlike ATLAS, these telescopes (although aided by automation) often have an in-person astronomer available to quickly alter the surveys in real time.
So when Catalina was alerted about what its peers at ATLAS had spotted, staff deployed its Schmidt telescope—a smaller one that excels at seeing bright objects moving extremely quickly. As they fed their own observations of 2024 YR4 to the MPC, Catalina engineer David Rankin looked back over imagery from the previous days and found the new asteroid lurking in a night-sky image taken on December 26. Around then, ATLAS also realized that it had caught sight of 2024 YR4 in a photograph from December 25.
The combined observations confirmed it: The asteroid had made its closest approach to Earth on Christmas Day, meaning it was already heading back out into space. But where, exactly, was this space rock going? Where would it end up after it swung around the sun?
CENTER FOR NEAR-EARTH OBJECT STUDIES, CALIFORNIA
If the answer to that question was Earth, Davide Farnocchia would be one of the first to know. You could say he’s one of NASA’s watchers on the wall.
And he’s remarkably calm about his duties. When he first heard about 2024 YR4, he barely flinched. It was just another asteroid drifting through space not terribly far from Earth. It was another box to be ticked.
Once it was logged by the MPC, it was Farnocchia’s job to try to plot out 2024 YR4’s possible paths through space, checking to see if any of them overlapped with our planet’s. He works at NASA’s Center for Near-Earth Object Studies (CNEOS) in California, where he’s partly responsible for keeping track of all the known asteroids and comets in the solar system. “We have 1.4 million objects to deal with,” he says, matter-of-factly.
In the past, astronomers would have had to stitch together multiple images of this asteroid and plot out its possible trajectories. Today, fortunately, Farnocchia has some help: He oversees the digital brain Sentry, an autonomous system he helped code. (Two other facilities in Italy perform similar work: the European Space Agency’s Near-Earth Object Coordination Centre, or NEOCC, and the privately owned Near-Earth Objects Dynamics Site, or NEODyS.)
To chart their courses, Sentry uses every new observation of every known asteroid or comet listed on the MPC to continuously refine the orbits of all those objects, using the immutable laws of gravity and the gravitational influences of any planets, moons, or other sizable asteroids they pass. A recent update to the software means that even the ever-so-gentle push afforded by sunlight is accounted for. That allows Sentry to confidently project the motions of all these objects at least a century into the future.
Davide Farnocchia helps track all the known asteroids and comets in the solar system at NASA’s Center for Near-Earth Object Studies.
COURTESY PHOTO
Almost all newly discovered asteroids are quickly found to pose no impact risk. But those that stand even an infinitesimally small chance of smashing into our planet within the next 100 years are placed on the Sentry Risk List until additional observations can rule out those awful possibilities. Better safe than sorry.
In late December, with just a limited set of data, Sentry concluded that there was a non-negligible chance 2024 YR4 would strike Earth in 2032. Aegis, the equivalent software at Europe’s NEOCC site, agreed. No bother. More observations would very likely remove 2024 YR4 from the Risk List. Just another day at the office for Farnocchia.
It’s worth noting that an asteroid heading toward Earth isn’t always a problem. Small rocks burn up in the planet’s atmosphere several times a day; you’ve probably seen one already this year, on a moonless night. But above a certain size, these rocks turn from innocuous shooting stars into nuclear-esque explosions.
Reflected starlight is great for initially spotting asteroids, but it’s a terrible way to determine how big they are. A large, dull rock reflects as much light as a bright, tiny rock, making them appear the same to many telescopes. And that’s a problem, considering that a rock around 30 feet long will explode loudly but inconsequentially in Earth’s atmosphere, while a 3,000-foot-long asteroid would slam into the ground and cause devastation on a global scale, imperiling all of civilization. Roughly speaking, if you double the size of an asteroid, it becomes eight times more energetic upon impact—so finding out the size of an Earthbound asteroid is of paramount importance.
In those first few hours after it was discovered, and before anyone knew how shiny or dull its surface was, 2024 YR4 was estimated by astronomers to be as small as 65 feet across or as large as 500 feet. An object of the former size would blow up in mid-air, shattering windows over many miles and likely injuring thousands of people. At the latter size it would vaporize the heart of any city it struck, turning solid rock and metal into liquid and vapor, while its blast wave would devastate the rest of it, killing hundreds of thousands or even millions in the process.
So now the question was: Just how big was 2024 YR4?
REFINING THE PICTURE
Mid-January 2025
VERY LARGE TELESCOPE, CHILE
Understandably dissatisfied with that level of imprecision, the European Southern Observatory’s Very Large Telescope (VLT), high up on the Cerro Paranal mountain in Chile’s Atacama Desert, entered the chat. As the name suggests, this flagship facility is vast, and it’s capable of really zooming in on distant objects. Or to put it another way: “The VLT is the largest, biggest, best telescope in the world,” says Hainaut, one of the facility’s operators, who usually commands it from half a world away in Germany.
In reality, the VLT—which lends a hand to the European Space Agency in its asteroid-hunting duties—is made up of four massive telescopes, each fixed on its own corner of the sky. They can be combined to act as a huge light bucket, allowing astronomers to see very faint asteroids. Four additional, smaller, movable telescopes can also team up with their bigger siblings to provide remarkably high-resolution images of even the stealthiest space rocks.
In this sequence of infrared images taken by ESO’s VLT, the individual image frames have been aligned so that the asteroid remains in the center as other stars appear to move around it.
ESO/O. HAINAUT ET AL.
With so much tech to oversee, the control room of the VLT looks a bit like the inside of the Death Star. “You have eight consoles, each of them with a dozen screens. It’s big, it’s large, it’s spectacular,” says Hainaut.
In mid-January, the European Space Agency asked the VLT to study several asteroids that had somewhat suspicious near-Earth orbits—including 2024 YR4. With just a few lines of code, the VLT could easily train its sharp eyes on an asteroid like 2024 YR4, allowing astronomers to narrow down its size range. It was found to be at least 130 feet long (big enough to cause major damage in a city) and as much as 300 feet (able to annihilate one).
January 29, 2025
INTERNATIONAL ASTEROID WARNING NETWORK
Marco Fenucci is a near-Earth-object dynamicist at the European Space Agency’s Near-Earth Object Coordination Centre.
COURTESY PHOTO
By the end of the month, there was no mistaking it: 2024 YR4 stood a greater than 1% chance of impacting Earth on December 22, 2032.
“It’s not something you see very often,” says Marco Fenucci, a near-Earth-object dynamicist at NEOCC. He admits that although it was “a serious thing,” this escalation was also “exciting to see”—something straight out of a sci-fi flick.
Sentry and Aegis, along with the systems at NEODyS, had been checking one another’s calculations. “There was a lot of care,” says Farnocchia, who explains that even though their programs worked wonders, their predictions were manually verified by multiple experts. When a rarity like 2024 YR4 comes along, he says, “you kind of switch gears, and you start being more cautious. You start screening everything that comes in.”
At this point, the klaxon emanating from these three data centers pushed the International Asteroid Warning Network (IAWN), a UN-backed planetary defense awareness group, to issue a public alert to the world’s governments: The planet may be in peril. For the most part, it was at this moment that the media—and the wider public—became aware of the threat. Earth, we may have a problem.
Denneau, along with plenty of other astronomers, received an urgent email from Fast at NASA’s Planetary Defense Coordination Office, requesting that all capable observatories track this hazardous asteroid. But there was one glaring problem. When 2024 YR4 was discovered on December 27, it was already two days after it had made its closest approach to Earth. And since it was heading back out into the shadows of space, it was quickly fading from sight.
Once it gets too faint, “there’s not much ATLAS can do,” Denneau says. By the time of IAWN’s warning, planetary defenders had just weeks to try to track 2024 YR4 and refine the odds of its hitting Earth before they’d lose it to the darkness.
And if their scopes failed, the odds of an Earth impact would have stayed uncomfortably high until 2028, when the asteroid was due to make another flyby of the planet. That’d be just four short years before the space rock might actually hit.
“In that situation, we would have been … in trouble,” says NEOCC’s Fenucci.
The hunt was on.
PREPARING FOR THE WORST
February 5 and February 6, 2025
SPACE MISSION PLANNING ADVISORY GROUP, AUSTRIA
In early February, spaceflight mission specialists, including those at the UN-supported Space Mission Planning Advisory Group in Vienna, began high-level talks designed to sketch out ways in which 2024 YR4 could be either deflected away from Earth or obliterated—you know, just in case.
A range of options were available—including ramming it with several uncrewed spacecraft or assaulting it with nuclear weapons—but there was no silver bullet in this situation. Nobody had ever launched a nuclear explosive device into deep space before, and the geopolitical ramifications of any nuclear-armed nations doing so in the present day would prove deeply unwelcome. Asteroids are also extremely odd objects; some, perhaps including 2024 YR4, are less like single chunks of rock and more akin to multiple cliffs flying in formation. Hit an asteroid like that too hard and you could fail to deflect it—and instead turn an Earthbound cannonball into a spray of shotgun pellets.
It’s safe to say that early on, experts were concerned about whether they could prevent a potential disaster. Crucially, eight years was not actually much time to plan something of this scale. So they were keen to better pinpoint how likely, or unlikely, it was that 2024 YR4 was going to collide with the planet before any complex space mission planning began in earnest.
The people involved with these talks—from physicists at some of America’s most secretive nuclear weapons research laboratories to spaceflight researchers over in Europe—were not feeling close to anything resembling panic. But “the timeline was really short,” admits Hainaut. So there was an unprecedented tempo to their discussions. This wasn’t a drill. This was the real deal. What would they do to defend the planet if an asteroid impact couldn’t be ruled out?
Luckily, over the next few days, a handful of new observations came in. Each helped Sentry, Aegis, and the system at NEODyS rule out more of 2024 YR4’s possible future orbits. Unluckily, Earth remained a potential port of call for this pesky asteroid—and over time, our planet made up a higher proportion of those remaining possibilities. That meant that the odds of an Earth impact “started bubbling up,” says Denneau.
By February 6, they jumped to 2.3%—a one-in-43 chance of an impact.
“How much anxiety someone should feel over that—it’s hard to say,” Denneau says, with a slight shrug.
In the past, several elephantine asteroids have been found to stand a small chance of careening unceremoniously into the planet. Such incidents tend to follow a pattern. As more observations come in and the asteroid’s orbit becomes better known, an Earth impact trajectory remains a possibility while other outlying orbits are removed from the calculations—so for a time, the odds of an impact rise. Finally, with enough observations in hand, it becomes clear that the space rock will miss our world entirely, and the impact odds plummet to zero.
Astronomers expected this to repeat itself with 2024 YR4. But there was no guarantee. There’s no escaping the fact that one day, sooner or later, scientists will discover a dangerous asteroid that will punch Earth in the face—and raze a city in the process.
After all, asteroids capable of trashing a city have found their way to Earth plenty of times before, and not just in the very distant past. In 1908, an 800-square-mile patch of forest in Siberia—one that was, fortunately, very sparsely populated—was decimated by a space rock just 180 feet long. It didn’t even hit the ground; it exploded in midair with the force of a 15-megaton blast.
But only one other asteroid comparable to 2024 YR4 had ever beaten its 2.3% figure: in 2004, Apophis—capable of causing continental-scale damage—had (briefly) stood a 2.7% chance of impacting Earth in 2029.
Rapidly approaching uncharted waters, the powers that be at NASA decided to play a space-based wild card: the James Webb Space Telescope, or JWST.
THE JAMES WEBB SPACE TELESCOPE, DEEP SPACE, ONE MILLION MILES FROM EARTH
A large dull asteroid reflects the same amount of light as a small shiny one, but that doesn’t mean astronomers sizing up an asteroid are helpless. If you view both asteroids in the infrared, the larger one glows brighter than the smaller one no matter the surface coating—making infrared, or the thermal part of the electromagnetic spectrum, a much better gauge of a space rock’s proportions.
Observatories on Earth do have infrared capabilities, but our planet’s atmosphere gets in their way, making it hard for them to offer highly accurate readings of an asteroid’s size.
But the James Webb Space Telescope (JWST), hanging out in space, doesn’t have that problem.
Asteroid 2024 YR4 is the smallest object targeted by JWST to date, and one of the smallest objects to have its size directly measured. Observations were taken using both its NIRCam (Near-Infrared Camera) and MIRI (Mid-Infrared Instrument) to study the thermal properties of the asteroid.
NASA, ESA, CSA, A. RIVKIN (APL), A. PAGAN (STSCI)
This observatory, which sits at a gravitationally stable point about a million miles from Earth, is polymathic. Its sniper-like scope can see in the infrared and allows it to peer at the edge of the observable universe, meaning it can study galaxies that formed not long after the Big Bang. It can even look at the light passing through the atmospheres of distant planets to ascertain their chemical makeups. And its remarkably sharp eye means it can also track the thermal glow of an asteroid long after all ground-based telescopes lose sight of it.
In a fortuitous bit of timing, by the time 2024 YR4 came along, planetary defenders had recently reasoned that JWST could theoretically be used to track ominous asteroids with its infrared scope, should the need arise. So after IAWN’s warning went out, JWST’s operators ran an analysis: though the asteroid would vanish from most scopes by late March, JWST might be able to see the rock until sometime in May, which would allow researchers to greatly refine their assessment of the asteroid’s orbit and its odds of hitting Earth.
Understanding 2024 YR4’s trajectory was important, but “the size was the main motivator,” says Andy Rivkin, an astronomer at Johns Hopkins University’s Applied Physics Laboratory, who led the proposal to use JWST to observe the asteroid. The hope was that even if the impact odds remained high until 2028, JWST would find that 2024 YR4 was on the smaller side of the 130-to-300-feet size range—meaning it would still be a danger, but a far less catastrophic one.
The JWST proposal was accepted by NASA on February 5. But the earliest it could conduct its observations was early March. And time really wasn’t on Earth’s side.
February 7, 2025
GEMINI SOUTH TELESCOPE, CHILE
“At this point, [2024 YR4] was too faint for the Catalina telescopes,” says Catalina’s Wierzchoś. “In our opinion, this was a big deal.”
So Wierzchoś and his colleagues put in a rare emergency request to commandeer the Gemini Observatory, an internationally funded and run facility featuring two large, eagle-eyed telescopes—one in Chile and one atop Hawaii’s Mauna Kea volcano. Their request was granted, and on February 7, they trained the Chile-based Gemini South telescope onto 2024 YR4.
This composite image was captured by a team of astronomers using the Gemini Multi-Object Spectrograph (GMOS). The hazy dot at the center is asteroid 2024 YR4.
INTERNATIONAL GEMINI OBSERVATORY/NOIRLAB/NSF/AURA/M. ZAMANI
The odds of Earth impact dropped ever so slightly, to 2.2%—a minor, but still welcome, development.
Mid-February 2025
MAGDALENA RIDGE OBSERVATORY, NEW MEXICO
By this point, the roster of 2024 YR4 hunters also included the tiny team operating the Magdalena Ridge Observatory (MRO), which sits atop a tranquil mountain in New Mexico.
“It’s myself and my husband,” says Eileen Ryan, the MRO director. “We’re the only two astronomers running the telescope. We have a daytime technician. It’s kind of a mom-and-pop organization.”
Still, the scope shouldn’t be underestimated. “We can see maybe a cell-phone-size object that’s illuminated at geosynchronous orbit,” Ryan says, referring to objects 22,000 miles away. But its most impressive feature is its mobility. While other observatories have slowly swiveling telescopes, MRO’s scope can move like the wind. “We can track the fastest objects,” she says, with a grin—noting that the telescope was built in part to watch missiles for the US Air Force. Its agility and long-distance vision explain why the Space Force is one of MRO’s major clients: It can be used to spy on satellites and spacecraft anywhere from low Earth orbit right out to the lunar regions. And that meant spying on the super-speedy, super-sneaky 2024 YR4 wasn’t a problem for MRO, whose own observations were vital in refining the asteroid’s impact odds.
Eileen Ryan is the director of the Magdalena Ridge Observatory in New Mexico.
COURTESY PHOTO
Then, in mid-February, MRO and all ground-based observatories came up against an unsolvable problem: The full moon was out, shining so brightly that it blinded any telescope that dared point at the night sky. “During the full moon, the observatories couldn’t observe for a week or so,” says NEOCC’s Fenucci. To most of us, the moon is a beautiful silvery orb. But to astronomers, it’s a hostile actor. “We abhor the moon,” says Denneau.
All any of them could do was wait. Those tracking 2024 YR4 swung between excitement and mild trepidation. The thought that the asteroid could still stand a decent chance of impacting Earth after it faded from view weighed a little on their minds.
Nevertheless, Farnocchia maintained his characteristic sangfroid throughout. “I try to stress about the things I can control,” he says. “All we can do is to explain what the situation is, and that we need new data to say more.”
February 18, 2025
CENTER FOR NEAR-EARTH OBJECT STUDIES, CALIFORNIA
As the full moon finally faded into a crescent of light, the world’s largest telescopes sprang back into action for one last shot at glory. “The dark time came again,” says Hainaut, with a smile.
New observations finally began to trickle in, and Sentry, Aegis, and NEODyS readjusted their forecasts. It wasn’t great news: The odds of an Earth impact in 2032 jumped up to 3.1%, officially making 2024 YR4 the most dangerous asteroid ever discovered.
This declaration made headlines across the world—and certainly made the curious public sit up and wonder if they had yet another apocalyptic concern to fret about. But, as ever, the asteroid hunters held fast in their prediction that sooner or later—ideally sooner—more observations would cause those impact odds to drop.
“We kept at it,” says Ryan. But time was running short; they started to “search for out-of-the-box ways to image this asteroid,” says Fenucci.
Planetary defense researchers soon realized that 2024 YR4 wasn’t too far away from NASA’s Lucy spacecraft, a planetary science mission making a series of flybys of various asteroids. If Lucy could be redirected to catch up to 2024 YR4 instead, it would give humanity its best look at the rock, allowing Sentry and company to confirm or dispel our worst fears.
Sadly, NASA ran the numbers, and it proved to be a nonstarter: 2024 YR4 was too speedy and too far for Lucy to pursue.
It was really starting to look as if JWST would be the last, best hope to track 2024 YR4.
A CHANGE OF FATE
February 19, 2025
VERY LARGE TELESCOPE, CHILE and MAGDALENA RIDGE OBSERVATORY, NEW MEXICO
Just one day after 2024 YR4 made history, the VLT, MRO, and others caught sight of it again, and Sentry, Aegis, and NEODyS voraciously consumed their new data.
The odds of an Earth impact suddenly dropped to 1.5%.
Astronomers didn’t really have time to react to the possibility that this was a good sign—they just kept sending their observations onward.
February 20, 2025
SUBARU TELESCOPE, HAWAII
Yet another observatory had been itching to get into the game for weeks, but it wasn’t until February 20 that Tsuyoshi Terai, an astronomer at Japan’s Subaru Telescope, sitting atop Mauna Kea, finally caught 2024 YR4 shifting between the stars. He added his data to the stream.
And all of a sudden, the asteroid lost its lethal luster. The odds of its hitting Earth were now just 0.3%.
At this point, you might expect that all those tracking it would be in a single control room somewhere, eyes glued to their screens, watching the odds drop before bursting into cheers and rapturous applause. But no—the astronomers who had spent so long observing this asteroid remained scattered across the globe. And instead of erupting into cheers, they exchanged modestly worded emails of congratulations—the digital equivalent of a nod or a handshake.
In late February, data from Tsuyoshi Terai, an astronomer at Japan’s Subaru Telescope, which sits atop Mauna Kea, confirmed that 2024 YR4 was not so lethal after all.
NAOJ
“It was a relief,” says Terai. “I was very pleased that our data contributed to put an end to the risk of 2024 YR4.”
February 24, 2025
INTERNATIONAL ASTEROID WARNING NETWORK
Just a few days later, and thanks to a litany of observations continuing to flood in, IAWN issued the all-clear. This once-ominous asteroid’s odds of inconveniencing our planet were at 0.004%—one in 25,000. Today, the odds of an Earth impact in 2032 are exactly zero.
“It was kinda fun while it lasted,” says Denneau.
Planetary defenders may have had a blast defending the world, but these astronomers still took the cosmic threat deeply seriously—and never once took their eyes off the prize. “In my mind, the observers and orbit teams were the stars of this story,” says Fast, NASA’s acting planetary defense officer.
Farnocchia shrugs off the entire thing. “It was the expected outcome,” he says. “We just didn’t know when that would happen.”
Looking back on it now, though, some 2024 YR4 trackers are allowing themselves to dwell on just how close this asteroid came to being a major danger. “It’s wild to watch it all play out,” says Denneau. “We were weeks away from having to spin up some serious mitigation planning.” But there was no need to work out how to save the world. It turned out that 2024 YR4 was never a threat to begin with—it just took a while to check.
And these experiences in handling a dicey space rock will only serve to make the world a safer place to live. One day, an asteroid very much like 2024 YR4 will be seen heading straight for Earth. And those tasked with tracking it will be officially battle-tested, and better prepared than ever to do what needs to be done.
A POSTSCRIPT
March 27, 2025
JAMES WEBB SPACE TELESCOPE, DEEP SPACE, ONE MILLION MILES FROM EARTH
But the story of 2024 YR4 is not quite over—in fact, if this were a movie, it would have an after-credits scene.
After the Earth-impact odds fell off a cliff, JWST went ahead with its observations in March anyway. It found that 2024 YR4 was some 200 feet across—so large that a direct strike on a city would have proved horrifically lethal. Earth just didn’t have to worry about it anymore.
But the moon might. Thanks in part to JWST, astronomers calculated a 3.8% chance that 2024 YR4 will impact the lunar surface in 2032. Additional JWST observations in May bumped those odds up slightly, to 4.3%, and the orbit can no longer be refined until the asteroid’s next Earth flyby in 2028.
“It may hit the moon!” says Denneau. “Everybody’s still very excited about that.”
A lunar collision would give astronomers a wonderful opportunity not only to study the physics of an asteroid impact, but also to demonstrate to the public just how good they are at precisely predicting the future motions of potentially lethal space rocks. “It’s a thing we can plan for without having to defend the Earth,” says Denneau.
If 2024 YR4 is truly going to smash into the moon, the impact—likely on the side facing Earth—would unleash an explosion equivalent to hundreds of nuclear bombs. An expansive crater would be carved out in the blink of an eye, and a shower of debris would erupt in all directions.
None of this supersonic wreckage would pose any danger to Earth, but it would look spectacular: You’d be able to see the bright flash of the impact from terra firma with the naked eye.
“If that does happen, it’ll be amazing,” says Denneau. It will be a spectacular way to see the saga of 2024 YR4—once a mere speck on his computer screen—come to an explosive end, from a front-row seat.
Robin George Andrews is an award-winning science journalist and doctor of volcanoes based in London. He regularly writes about the Earth, space, and planetary sciences, and is the author of two critically acclaimed books: Super Volcanoes (2021) and How to Kill an Asteroid (2024).