A Smarter SEO Content Audit: Aligning For Performance, Purpose & LLM Visibility via @sejournal, @coreydmorris

Content is a major category, focus, or pillar of SEO (as I have defined it for decades). It influences a range of on-page factors, but more importantly, it has built authentic context and authority status over the years. Content has been an engine of so much SEO, and it is now a focal point in the shift from keyword-focused optimization to integrated thinking about visibility across LLMs, AI search results, and organic search results.

With a focus on today's content needs, combined with those of the past few years, a popular way to understand content's effectiveness is to conduct an SEO content audit. As we look at content auditing in a more versatile way for broader visibility, I believe it is important to address the fact that audits often fall into one of two extremes:

  • Too shallow to be useful – using an automated tool and lacking data and a point of view.
  • Too deep and detailed to be usable – so much data, so much crawling, and so many topics that it's difficult to identify the actual focus and act on it.

With AI and LLMs changing how content is discovered and interacted with, we can’t afford to rest on the content we have created in the past and to assume past performance will provide future positive results. I believe a better model is a performance and purpose-driven audit that prioritizes actions based on business impact and newer visibility models.

SEO content audits must evolve to stay relevant in today's search and AI environment, and they need to account for the fact that search behavior is shifting. I'm not going to unpack the stats or talk about search market share in this article, but trust that you're seeing the impact in your stats and dashboards. As we shift with the market, we have to think more about answers and authority signals.

Even if we have a finely tuned content machine that has every possible AI-driven efficiency built into it, we can’t afford wasted efforts and content bloat. Flooding search engines and LLMs with bloat, whether human-generated or AI-generated (or some combo), is wasted if it isn’t working for us. This is especially true for B2B and lead-generation-focused companies that have longer customer journeys and sales cycles.

Marketing and corporate executives expect performance and find out too late that outdated or ineffective content didn’t translate from keyword rankings to AI visibility. Leveraging a content audit that balances having enough depth, but being actionable and focused on business value, is as important as ever.

How To Conduct A Performance-Driven, LLM-Aware Content Audit

I’m advocating a modern and repeatable framework that replaces traditional SEO content audits with one that is more useful and aligned to how things work today.

1. Define Purpose

We have to start by getting on the same page about what spurred us to do an audit and what our ultimate goal for the effort is, whether we're trying to clean up legacy content overall, shift focus to LLM visibility we want to improve, get more conversions out of existing content, or pursue other worthwhile goals.

It is important to understand what "good" looks like, whether that is visibility, traffic, authority, engagement, or some other measurable outcome.

2. Segment By Type And Funnel Stage

A challenge of content reviews and analysis is how specific content is prioritized. We want to avoid a one-size-fits-all approach.

That means we need to break down the categories of content for the audit by type. That can include blog posts vs. core landing pages vs. gated assets. However you classify the types of content on your site and the content your team creates, you'll want to use this as a filter.

Additionally, you want to look at your content the same way you consider your funnel. Whether you use top, middle, and bottom-of-funnel content, or a different view of customer journeys and classifications, use this as a second important filter and prioritize what you want to analyze and why (going back to the defined purpose of the content audit).

3. Score Content On The 3 Ps (Purpose, Performance, Potential)

This is where our audits and processes start to take a more custom approach based on the steps we’ve completed so far. You’ll need your own custom scoring system. It could be as simple as a 1-3 scale for the categories of Purpose, Performance, and Potential.

Purpose:

  • What is this content meant to do?
  • Is it aligned with:
    • Brand?
    • Positioning?
    • Goals?

Performance:

  • How does it drive:
    • Traffic?
    • Conversions?
    • Citations?
    • Engagement?
  • Does it actually:
    • Bring people in?
    • Move them forward?

Potential:

  • Could it rank or be surfaced in AI answers with updates?
  • Could it be:
    • Repurposed?
    • Repositioned?

As third-party tools continue to add to their data sets and measurement capabilities, you can also do your own checks, combining Google Analytics 4, Google Search Console, and direct prompts in ChatGPT to gauge which content appears useful to LLMs.
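
To make the 1-3 scale concrete, here is a minimal sketch of how the three scores could be recorded and combined into a single priority number. The weights and example URLs are assumptions for illustration, not part of any standard audit tool.

```python
# Minimal 3 Ps scoring sketch. Each category is scored 1-3;
# the weighted total is used to prioritize pages for review.
# Weights and example pages are hypothetical.

WEIGHTS = {"purpose": 0.3, "performance": 0.4, "potential": 0.3}

pages = [
    {"url": "/blog/old-keyword-post", "purpose": 1, "performance": 1, "potential": 2},
    {"url": "/services/core-landing-page", "purpose": 3, "performance": 2, "potential": 3},
    {"url": "/resources/gated-guide", "purpose": 2, "performance": 3, "potential": 2},
]

def priority_score(page: dict) -> float:
    """Combine the three 1-3 scores into a weighted score between 1 and 3."""
    return sum(page[category] * weight for category, weight in WEIGHTS.items())

for page in sorted(pages, key=priority_score, reverse=True):
    print(f"{page['url']}: {priority_score(page):.2f}")
```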

4. Determine What Stays

At this juncture, it is time to add a business-focused or aligned lens. Consider whether the content helps us get found for the right reasons, whether it resonates with our primary audience, and whether it would be perceived as expert and authoritative by other stakeholders (current clients, journalists, industry colleagues).

For each piece of content that is reviewed within the audit and analysis, arrive at a final decision:

  • Remove: With no performance, future, or purpose, this content can be removed.
  • Combine: This category is typically for topics that are competing or have cannibalization.
  • Update: Content that isn't optimized, is misaligned in its current iteration, or needs some other identified improvement. LLMs prefer sources that are timely, so refreshing content on a regular basis to keep it as up-to-date as possible can help improve the longevity of a piece being cited by AI.
  • Keep: This category is for content that needs no change and that you’ll keep as-is currently.

5. Optimize For Search & LLM Visibility

For the content you have determined stays or gets updated, you'll want to consider both search engines and LLMs and what each rewards so your content and brand can be found.

For search engines, starting with intent can help you avoid getting bogged down in old-school thinking about keywords and keep the focus on topics and the opportunity that exists for visibility in organic search results.

For AI, while this article isn’t a primer for what matters for being found in LLMs, there are things like content structure, clear and authoritative answers, brand signals, and external validation (PR, etc.) that are important here, too, in the edits and updates that you make.

6. Create Prioritized Action Plan

While it might feel like, at this point, the heavy lifting is done and that you’ve got a solid spreadsheet, list, or way that you’ve organized the work so far, this is where the follow-through and implementation can get derailed quickly.

You need to work at this juncture to score or plan out what is required for implementation based on effort vs. impact. Additionally, you need to layer in your team’s capacity, skill sets, and cost (or opportunity cost) of resources. Lastly, you need to organize the effort into sprints or milestones to do over time so it doesn’t become a never-ending project or one that is too big to accomplish.
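
A lightweight way to organize that follow-through is to rank items by estimated impact relative to effort and then cut the ranked list into sprints based on available capacity. The sketch below uses invented 1-5 impact and effort estimates and an assumed per-sprint capacity; it illustrates the idea rather than prescribing a methodology.

```python
# Hypothetical effort-vs-impact prioritization: rank audit actions by
# impact/effort, then group them into sprints limited by an assumed
# capacity of effort points per sprint.

items = [
    {"task": "Update pricing page FAQ", "impact": 5, "effort": 2},
    {"task": "Consolidate three overlapping blog posts", "impact": 4, "effort": 3},
    {"task": "Rewrite legacy product comparison", "impact": 3, "effort": 4},
    {"task": "Remove outdated event recaps", "impact": 2, "effort": 1},
]

SPRINT_CAPACITY = 5  # assumed effort points per sprint

ranked = sorted(items, key=lambda item: item["impact"] / item["effort"], reverse=True)

sprints, current, used = [], [], 0
for item in ranked:
    if used + item["effort"] > SPRINT_CAPACITY and current:
        sprints.append(current)
        current, used = [], 0
    current.append(item["task"])
    used += item["effort"]
if current:
    sprints.append(current)

for number, tasks in enumerate(sprints, start=1):
    print(f"Sprint {number}: {tasks}")
```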

7. Track Business (Not Search) Metrics

As the content audit work wraps up and turns to implementation of the action plan, you need to make sure you’re set up to look beyond rankings and traffic.

Deeper business-aligned metrics include conversions, form submissions, and demo requests as the bridge from online to sales processes. Quality metrics and key performance indicators (KPIs) still apply as you weave in conversion rate optimization (CRO) efforts and mapping to expected aspects of the customer journey or funnel.

And, as you evolve from SEO metrics to broader visibility, third-party tools, or your own qualification and quantification work in customizing GA4 or other data capture and analysis, will be important in understanding the impact of your content auditing and update efforts.

Final Thoughts

Content audits aren’t dead. However, the way we’ve done them in the past likely does need to change. There’s no such thing as a perfect process, tool, or spreadsheet, but we can leverage solid practices that integrate our own goals, potential, and value to our target audiences.

SEO this year and beyond is about visibility, usefulness, and what we can impact across search engines and LLMs.

Remember that the right audit balances depth with being actionable. The steps I outlined, combined with your team's dedication and focus, can help you see it through to measurable success.

Featured Image: Roman Samborskyi/Shutterstock

Google Announces A New Era For Voice Search via @sejournal, @martinibuster

Google announced an update to its voice search, which changes how voice search queries are processed and then ranked. The new AI model uses speech as input for the search and ranking process, completely bypassing the stage where voice is converted to text.

The old system was called Cascade ASR, where a voice query is converted into text and then put through the normal ranking process. The problem with that method is that it’s prone to mistakes. The audio-to-text conversion process can lose some of the contextual cues, which can then introduce an error.

The new system is called Speech-to-Retrieval (S2R). It’s a neural network-based machine-learning model trained on large datasets of paired audio queries and documents. This training enables it to process spoken search queries (without converting them into text) and match them directly to relevant documents.

Dual-Encoder Model: Two Neural Networks

The system uses two neural networks:

  1. One of the neural networks, called the audio encoder, converts spoken queries into a vector-space representation of their meaning.
  2. The second network, the document encoder, represents written information in the same kind of vector format.

The two encoders learn to map spoken queries and text documents into a shared semantic space, so that related audio queries and documents end up close together according to their semantic similarity.

Audio Encoder

Speech-to-Retrieval (S2R) takes the audio of someone’s voice query and transforms it into a vector (numbers) that represents the semantic meaning of what the person is asking for.

The announcement uses the example of the famous painting The Scream by Edvard Munch. In this example, the spoken phrase “the scream painting” becomes a point in the vector space near information about Edvard Munch’s The Scream (such as the museum it’s at, etc.).

Document Encoder

The document encoder does a similar thing with text documents like web pages, turning them into their own vectors that represent what those documents are about.

During model training, both encoders learn together so that vectors for matching audio queries and documents end up near each other, while unrelated ones are far apart in the vector space.
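
Google has not published the training code, but the description above matches a standard dual-encoder contrastive objective: pull each query vector toward its matching document vector and away from the other documents in the batch. The NumPy sketch below is a generic illustration of that objective using random stand-in embeddings, not Google's actual model or data.

```python
import numpy as np

# Toy dual-encoder objective: for a batch of paired (audio query, document)
# embeddings, score every query against every document, then penalize the
# model unless each true pair is the closest match in its row
# (a standard contrastive setup). Embeddings here are random stand-ins.

rng = np.random.default_rng(0)
batch_size, dim = 4, 8

audio_vecs = rng.normal(size=(batch_size, dim))  # stand-in audio-encoder outputs
doc_vecs = rng.normal(size=(batch_size, dim))    # stand-in document-encoder outputs

def normalize(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

audio_vecs, doc_vecs = normalize(audio_vecs), normalize(doc_vecs)

similarities = audio_vecs @ doc_vecs.T  # cosine similarity, queries x documents
log_probs = similarities - np.log(np.exp(similarities).sum(axis=1, keepdims=True))
loss = -np.mean(np.diag(log_probs))     # small when each true pair wins its row

print(f"contrastive loss on random embeddings: {loss:.3f}")
```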

Rich Vector Representation

Google’s announcement says that the encoders transform the audio and text into “rich vector representations.” A rich vector representation is an embedding that encodes meaning and context from the audio and the text. It’s called “rich” because it contains the intent and context.

For S2R, this means the system doesn’t rely on keyword matching; it “understands” conceptually what the user is asking for. So even if someone says “show me Munch’s screaming face painting,” the vector representation of that query will still end up near documents about The Scream.

According to Google’s announcement:

“The key to this model is how it is trained. Using a large dataset of paired audio queries and relevant documents, the system learns to adjust the parameters of both encoders simultaneously.

The training objective ensures that the vector for an audio query is geometrically close to the vectors of its corresponding documents in the representation space. This architecture allows the model to learn something closer to the essential intent required for retrieval directly from the audio, bypassing the fragile intermediate step of transcribing every word, which is the principal weakness of the cascade design.”

Ranking Layer

S2R has a ranking process, just like regular text-based search. When someone speaks a query, the audio is first processed by the pre-trained audio encoder, which converts it into a numerical form (vector) that captures what the person means. That vector is then compared to Google’s index to find pages whose meanings are most similar to the spoken request.

For example, if someone says “the scream painting,” the model turns that phrase into a vector that represents its meaning. The system then looks through its document index and finds pages that have vectors with a close match, such as information about Edvard Munch’s The Scream.

Once those likely matches are identified, a separate ranking stage takes over. This part of the system combines the similarity scores from the first stage with hundreds of other ranking signals for relevance and quality in order to decide which pages should be ranked first.
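
As a rough illustration of that two-stage flow, the sketch below first retrieves the documents whose vectors sit closest to a query vector, then reorders the candidates by blending similarity with a stand-in quality signal. The vectors, document names, quality scores, and blend weights are invented for the example; they are not Google's index or ranking signals.

```python
import numpy as np

# Toy two-stage retrieval: (1) nearest-neighbor search over document vectors,
# (2) rerank the candidates by combining semantic similarity with another
# signal. All values below are invented for illustration.

documents = {
    "munch-the-scream-overview": {"vector": np.array([0.9, 0.1, 0.0]), "quality": 0.8},
    "scream-movie-trivia":       {"vector": np.array([0.6, 0.7, 0.1]), "quality": 0.5},
    "museum-opening-hours":      {"vector": np.array([0.2, 0.1, 0.9]), "quality": 0.7},
}

query_vector = np.array([0.85, 0.2, 0.05])  # pretend output of the audio encoder

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stage 1: retrieve the closest candidates by semantic similarity.
candidates = sorted(
    documents.items(),
    key=lambda item: cosine(query_vector, item[1]["vector"]),
    reverse=True,
)[:2]

# Stage 2: rerank by blending similarity with the other signal.
reranked = sorted(
    candidates,
    key=lambda item: 0.7 * cosine(query_vector, item[1]["vector"]) + 0.3 * item[1]["quality"],
    reverse=True,
)

print([name for name, _ in reranked])
```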

Benchmarking

Google tested the new system against Cascade ASR and against a perfect-scoring version of Cascade ASR called Cascade Groundtruth. S2R beat Cascade ASR and very nearly matched Cascade Groundtruth. Google concluded that the performance is promising but that there is room for additional improvement.

Voice Search Is Live

Although the benchmarking revealed that there is some room for improvement, Google announced that the new system is live and in use in multiple languages, calling it a new era in search. The system is presumably used in English.

Google explains:

“Voice Search is now powered by our new Speech-to-Retrieval engine, which gets answers straight from your spoken query without having to convert it to text first, resulting in a faster, more reliable search for everyone.”

Read more:

Speech-to-Retrieval (S2R): A new approach to voice search

Featured Image by Shutterstock/ViDI Studio

AI could predict who will have a heart attack

For all the modern marvels of cardiology, we struggle to predict who will have a heart attack. Many people never get screened at all. Now, startups like Bunkerhill Health, Nanox.AI, and HeartLung Technologies are applying AI algorithms to screen millions of CT scans for early signs of heart disease. This technology could be a breakthrough for public health, applying an old tool to uncover patients whose high risk for a heart attack is hiding in plain sight. But it remains unproven at scale while raising thorny questions about implementation and even how we define disease. 

Last year, an estimated 20 million Americans had chest CT scans done, after an event like a car accident or to screen for lung cancer. Frequently, they show evidence of coronary artery calcium (CAC), a marker for heart attack risk, that is buried or not mentioned in a radiology report focusing on ruling out bony injuries, life-threatening internal trauma, or cancer.

Dedicated testing for CAC remains an underutilized method of predicting heart attack risk. Over decades, plaque in heart arteries moves through its own life cycle, hardening from lipid-rich residue into calcium. Heart attacks themselves typically occur when younger, lipid-rich plaque unpredictably ruptures, kicking off a clotting cascade of inflammation that ultimately blocks the heart’s blood supply. Calcified plaque is generally stable, but finding CAC suggests that younger, more rupture-prone plaque is likely present too. 

Coronary artery calcium can often be spotted on chest CTs, and its concentration can be subjectively described. Normally, quantifying a person’s CAC score involves obtaining a heart-specific CT scan. Algorithms that calculate CAC scores from routine chest CTs, however, could massively expand access to this metric. In practice, these algorithms could then be deployed to alert patients and their doctors about abnormally high scores, encouraging them to seek further care. Today, the footprint of the startups offering AI-derived CAC scores is not large, but it is growing quickly. As their use grows, these algorithms may identify high-risk patients who are traditionally missed or who are on the margins of care. 

Historically, CAC scans were believed to have marginal benefit and were marketed to the worried well. Even today, most insurers won’t cover them. Attitudes, though, may be shifting. More expert groups are endorsing CAC scores as a way to refine cardiovascular risk estimates and persuade skeptical patients to start taking statins. 

The promise of AI-derived CAC scores is part of a broader trend toward mining troves of medical data to spot otherwise undetected disease. But while it seems promising, the practice raises plenty of questions. For example, CAC scores haven't proved useful as a blunt instrument for universal screening. A 2022 Danish study evaluating a population-based program showed no benefit in mortality rates for patients who had undergone CAC screening tests. If AI delivered this information automatically, would the calculus really shift?

And with widespread adoption, abnormal CAC scores will become common. Who follows up on these findings? “Many health systems aren’t yet set up to act on incidental calcium findings at scale,” says Nishith Khandwala, the cofounder of Bunkerhill Health. Without a standard procedure for doing so, he says, “you risk creating more work than value.” 

There’s also the question of whether these AI-generated scores would actually improve patient care. For a symptomatic patient, a CAC score of zero may offer false reassurance. For the asymptomatic patient with a high CAC score, the next steps remain uncertain. Beyond statins, it isn’t clear if these patients would benefit from starting costly cholesterol-lowering drugs such as Repatha or other PCSK9-inhibitors. It may encourage some to pursue unnecessary but costly downstream procedures that could even end up doing harm. Currently, AI-derived CAC scoring is not reimbursed as a separate service by Medicare or most insurers. The business case for this technology today, effectively, lies in these potentially perverse incentives. 

At a fundamental level, this approach could actually change how we define disease. Adam Rodman, a hospitalist and AI expert at Beth Israel Deaconess Medical Center in Boston, has observed that AI-derived CAC scores share similarities with the "incidentaloma," a term coined in the 1980s to describe unexpected findings on CT scans. In both cases, the normal pattern of diagnosis—in which doctors and patients deliberately embark on testing to figure out what's causing a specific problem—was fundamentally disrupted. But, as Rodman notes, incidentalomas were still found by humans reviewing the scans.

Now, he says, we are entering an era of “machine-based nosology,” where algorithms define diseases on their own terms. As machines make more diagnoses, they may catch things we miss. But Rodman and I began to wonder if a two-tiered diagnostic future may emerge, where “haves” pay for brand-name algorithms while “have-nots” settle for lesser alternatives. 

For patients who have no risk factors or are detached from regular medical care, an AI-derived CAC score could potentially catch problems earlier and rewrite the script. But how these scores reach people, what is done about them, and whether they can ultimately improve patient outcomes at scale remain open questions. For now—holding the pen as they toggle between patients and algorithmic outputs—clinicians still matter. 

Vishal Khetpal is a fellow in cardiovascular disease. The views expressed in this article do not represent those of his employers. 

Flowers of the future

Flowers play a key role in most landscapes, from urban to rural areas. There might be dandelions poking through the cracks in the pavement, wildflowers on the highway median, or poppies covering a hillside. We might notice the time of year they bloom and connect that to our changing climate. Perhaps we are familiar with their cycles: bud, bloom, wilt, seed. Yet flowers have much more to tell in their bright blooms: The very shape they take is formed by local and global climate conditions. 

The form of a flower is a visual display of its climate, if you know what to look for. In a dry year, its petals’ pigmentation may change. In a warm year, the flower might grow bigger. The flower’s ultraviolet-absorbing pigment increases with higher ozone levels. As the climate changes in the future, how might flowers change? 

Anthocyanins are red or indigo pigments that supply antioxidants and photoprotectants, which help a plant tolerate climate-related stresses such as droughts.
© 2021 SULLIVAN CN, KOSKI MH

An artistic research project called Plant Futures imagines how a single species of flower might evolve in response to climate change between 2023 and 2100—and invites us to reflect on the complex, long-term impacts of our warming world. The project has created one flower for every year from 2023 to 2100. The form of each one is data-driven, based on climate projections and research into how climate influences flowers’ visual attributes. 

More ultraviolet pigment protects flowers’ pollen against increasing ozone levels.
MARCO TODESCO
Under unpredictable weather conditions, the speculative flowers grow a second layer of petals. In botany, a second layer is called a “double bloom” and arises from random mutations.
COURTESY OF ANNELIE BERNER

Plant Futures began during an artist residency in Helsinki, where I worked closely with the biologist Aku Korhonen to understand how climate change affected the local ecosystem. While exploring the primeval Haltiala forest, I learned of the Circaea alpina, a tiny flower that was once rare in that area but has become more common as temperatures have risen in recent years. Yet its habitat is delicate: The plant requires shade and a moist environment, and the spruce population that provides those conditions is declining in the face of new forest pathogens. I wondered: What if the Circaea alpina could survive in spite of climate uncertainty? If the dark, shaded bogs turn into bright meadows and the wet ground dries out, how might the flower adapt in order to survive? This flower’s potential became the project’s grounding point. 

The author studying historical Circaea samples in the Luomus Botanical Collections.
COURTESY OF ANNELIE BERNER

Outside the forest, I worked with botanical experts in the Luomus Botanical Collections. I studied samples of Circaea flowers from as far back as 1906, and I researched historical climate conditions in an attempt to understand how flower size and color related to a year’s temperature and precipitation patterns. 

I researched how other flowering plants respond to changes to their climate conditions and wondered how the Circaea would need to adapt to thrive in a future world. If such changes happened, what would the Circaea look like in 2100? 

We designed the future flowers through a combination of data-driven algorithmic mapping and artistic control. I worked with the data artist Marcin Ignac from Variable Studio to create 3D flowers whose appearance was connected to climate data. Using Nodes.io, we made a 3D model of the Circaea alpina based on its current morphology and then mapped how those physical parameters might shift as the climate changes. For example, as the temperature rises and precipitation decreases in the data set, the petal color shifts toward red, reflecting how flowers protect themselves with an increase in anthocyanins. Changes in temperature, carbon dioxide levels, and precipitation rates combine to affect the flowers’ size, density of veins, UV pigments, color, and tendency toward double bloom.
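
The project's actual Nodes.io parameter mapping isn't reproduced here, but the general idea of driving morphology from climate variables can be sketched simply. The variables, ranges, and coefficients below are invented for illustration; they are not the values Plant Futures uses.

```python
# Hypothetical climate-to-morphology mapping, loosely following the
# relationships described above: warmer, drier years push petal color
# toward red; higher CO2 and temperature increase flower size.

def flower_parameters(temp_anomaly_c: float, precip_change_pct: float, co2_ppm: float) -> dict:
    """Return an illustrative petal redness (0 = white, 1 = red) and relative size."""
    redness = min(1.0, max(0.0, 0.2 * temp_anomaly_c - 0.01 * precip_change_pct))
    size = 1.0 + 0.002 * (co2_ppm - 420) + 0.05 * temp_anomaly_c
    return {"petal_redness": round(redness, 2), "relative_size": round(size, 2)}

scenarios = {2025: (0.2, 0.0, 424), 2064: (1.8, -10.0, 520), 2100: (3.0, -20.0, 600)}
for year, climate in scenarios.items():
    print(year, flower_parameters(*climate))
```
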
2025: Circaea alpina is ever so slightly larger than usual owing to a warmer summer, but it is otherwise close to the typical Circaea flower in size, color, and other attributes.
2064: We see a bigger flower with more petals, given an increase in carbon dioxide levels and temperature. The bull’s-eye pattern, composed of UV pigment, is bigger and messier because of an increase in ozone and solar radiation. A second tier of petals reflects uncertainty in the climate model.
2074: The flower becomes pinker, an antioxidative response to the stress of consecutive dry days and higher temperatures. Its size increases, primarily because of higher levels of carbon dioxide. The double bloom of petals persists as the climate model’s projections increase in uncertainty.
2100: The flower’s veins are densely packed, which could signal appropriation of a technique leaves use to improve water transport during droughts. It could also be part of a strategy to attract pollinators in the face of worsening air quality that degrades the transmission of scents.
2023—2100: Each year, the speculative flower changes. Size, color, and form shift in accordance with the increased temperature and carbon dioxide levels and the changes in precipitation patterns.
In this 10-centimeter cube of plexiglass, the future flowers are “preserved,” allowing the viewer to see them in a comparative, layered view.
COURTESY OF ANNELIE BERNER

Based in Copenhagen, Annelie Berner is a designer, researcher, teacher, and artist specializing in data visualization.

This retina implant lets people with vision loss do a crossword puzzle

Science Corporation—a competitor to Neuralink founded by the former president of Elon Musk's brain-interface venture—has leapfrogged its rival after acquiring, at a fire-sale price, a vision implant that's in advanced testing.

The implant produces a form of “artificial vision” that lets some patients read text and do crosswords, according to a report published in the New England Journal of Medicine today.

The implant is a microelectronic chip placed under the retina. Using signals from a camera mounted on a pair of glasses, the chip emits bursts of electricity in order to bypass photoreceptor cells damaged by macular degeneration, the leading cause of vision loss in elderly people.

“The magnitude of the effect is what’s notable,” says José-Alain Sahel, a University of Pittsburgh vision scientist who led testing of the system, which is called PRIMA. “There’s a patient in the UK and she is reading the pages of a regular book, which is unprecedented.”  

Until last year, the device was being developed by Pixium Vision, a French startup cofounded by Sahel, which faced bankruptcy after it couldn’t raise more cash.  

That’s when Science Corporation swept in to purchase the company’s assets for about €4 million ($4.7 million), according to court filings.

“Science was able to buy it for very cheap just when the study was coming out, so it was good timing for them,” says Sahel. “They could quickly access very advanced technology that’s closer to the market, which is good for a company to have.”

Science was founded in 2021 by Max Hodak, the first president of Neuralink, after his sudden departure from that company. Since its founding, Science has raised around $290 million, according to the venture capital database Pitchbook, and used the money to launch broad-ranging exploratory research on brain interfaces and new types of vision treatments.

“The ambition here is to build a big, standalone medical technology company that would fit in with an Apple, Samsung, or an Alphabet,” Hodak said in an interview at Science’s labs in Alameda, California in September. “The goal is to change the world in important ways … but we need to make money in order to invest in these programs.”

By acquiring the PRIMA implant program, Science effectively vaulted past years of development and testing. The company has requested approval to sell the eye chip in Europe and is in discussions with regulators in the US.

Unlike Neuralink’s implant, which records brain signals so paralyzed recipients can use their thoughts to move a computer mouse, the retina chip sends information into the brain to produce vision. Because the retina is an outgrowth of the brain, the chip qualifies as a type of brain-computer interface.

Artificial vision systems have been studied for years and one, called the Argus II, even reached the market and was installed in the eyes of about 400 people. But that product was later withdrawn after it proved to be a money-loser, according to Cortigent, the company that now owns that technology.

Thirty-eight patients in Europe received a PRIMA implant in one eye. On average, the study found, they were able to read five additional lines on a vision chart—the kind with rows of letters, each smaller than the last. Some of that improvement was due to what Sahel calls “various tricks” like using a zoom function, which allows patients to zero in on text they want to read.

The type of vision loss being treated with the new implant is called geographic atrophy, in which patients have peripheral vision but can’t make out objects directly in front of them, like words or faces. According to Prevent Blindness, an advocacy organization, this type of central vision loss affects around one in 10 people over 80.  

The implant was originally designed starting 20 years ago by Daniel Palanker, a laser expert and now a professor at Stanford University, who says his breakthrough was realizing that light beams could supply both energy and information to a chip placed under the retina. Other implants, like Argus II, use a wire, which adds complexity.

“The chip has no brains at all. It just turns light into electrical current that flows into the tissue,” says Palanker. “Patients describe the color they see as yellowish blue or sun color.”

The system works using a wearable camera that records a scene and then blasts bright infrared light into the eye, using a wavelength humans can’t see. That light hits the chip, which is covered by “what are basically tiny solar panels,” says Palanker. “We just try to replace the photoreceptors with a photo-array.”

A diagram of how a visual scene could be represented by a retinal implant.
COURTESY SCIENCE CORPORATION

The current system produces about 400 spots of vision, which lets users make out the outlines of words and objects. Palanker says a next-generation device will have five times as many "pixels" and should let people see more: "What we discovered in the trial is that even though you stimulate individual pixels, patients perceive it as continuous. The patient says 'I see a line,' 'I see a letter.'"

Palanker says it will be important to keep improving the system because “the market size depends on the quality of the vision produced.”

When Pixium teetered on insolvency, Palanker says, he helped search for a buyer, meeting with Hodak. “It was a fire sale, not a celebration,” he says. “But for me it’s a very lucky outcome, because it means the product is going forward. And the purchase price doesn’t really matter, because there’s a big investment needed to bring it to market. It’s going to cost money.”  

The PRIMA artificial vision system has a battery pack/controller and an eye-mounted camera.
COURTESY SCIENCE CORPORATION

During a visit to Science’s headquarters, Hodak described the company’s effort to redesign the system into something sleeker and more user-friendly. In the original design, in addition to the wearable camera, the patient has to carry around a bulky controller containing a battery and laser, as well as buttons to zoom in and out. 

But Science has already prototyped a version in which those electronics are squeezed into what look like an extra-large pair of sunglasses.

“The implant is great, but we’ll have new glasses on patients fairly shortly,” Hodak says. “This will substantially improve their ability to have it with them all day.” 

Other companies also want to treat blindness with brain-computer interfaces, but some think it might be better to send signals directly into the brain. This year, Neuralink has been touting plans for “Blindsight,” a project to send electrical signals directly into the brain’s visual cortex, bypassing the retina entirely. It has yet to test the approach in a person.

The Download: a promising retina implant, and how climate change affects flowers

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

This retina implant lets people with vision loss do a crossword puzzle

The news: Science Corporation—a competitor to Neuralink founded by the former president of Elon Musk’s brain-interface venture—has leapfrogged its rival after acquiring a vision implant in advanced testing for a fire-sale price. The implant produces a form of “artificial vision” that lets some patients read text and do crosswords, according to a report published in The New England Journal of Medicine today.

How it works: The implant is a microelectronic chip placed under the retina. Using signals from a camera mounted on a pair of glasses, the chip emits bursts of electricity in order to bypass photoreceptor cells damaged by macular degeneration, the leading cause of vision loss in the elderly. Read the full story.

—Antonio Regalado

How will flowers respond to climate change?

Flowers play a key role in most landscapes, from urban to rural areas. Yet flowers have much more to tell in their bright blooms: The very shape they take is formed by local and global climate conditions. 

The form of a flower is a visual display of its climate, if you know what to look for. In a dry year, its petals’ pigmentation may change. In a warm year, the flower might grow bigger. The flower’s ultraviolet-absorbing pigment increases with higher ozone levels.

Now, a new artistic project sets out to answer the question: As the climate changes in the future, how might flowers change? Read the full story.

—Annelie Berner

This story is from our forthcoming print issue, which is all about the body. If you haven’t already, subscribe now to receive future issues once they land.

2025 climate tech companies to watch: Redwood Materials and its new AI microgrids

Over the past few years, Redwood Materials has become one of the top US battery recyclers, joining forces with the likes of Volkswagen, BMW, and Toyota to process old electric-vehicle batteries and recover materials that can be used to make new ones.

Now it’s moving into reuse as well. Redwood Energy, a new branch of the company, incorporates used EV batteries into microgrids to power energy-hungry AI data centers. Read the full story.

—Peter Hall

Redwood Materials is one of our 10 climate tech companies to watch—our annual list of some of the most promising climate tech firms on the planet. Check out the rest of the list here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 AWS is recovering from a major outage 
It’s racing to get hundreds of apps and services back online. (The Verge)
+ Snapchat, Roblox and banking services are among those affected. (The Guardian)

2 OpenAI made—then retracted—a claim it had made a major math breakthrough
After math experts and rival AI firms ridiculed its poorly-worded declaration. (TechCrunch)
+ What’s next for AI and math. (MIT Technology Review)

3 The grave costs of Trump’s war on climate science
It’s affecting the accuracy of forecasting systems globally, not just in the US. (FT $)
+ Trump himself led an effort to derail plans to tax shipping pollution. (Politico $)
+ How to make clean energy progress under Trump in the states. (MIT Technology Review)

4 China claims the US is behind a cyberattack on its national time center
It says it has years’ worth of irrefutable evidence of data stealing. (Reuters)
+ US experts allegedly exploited vulnerabilities in mobile phones belonging to National Time Service Center workers. (Bloomberg $)

5 Is AI-generated art real art?
It’s a question gallery and museum curators across the world are debating. (NYT $)
+ Artisan craftmakers are happy to resist the pull of AI. (FT $)
+ This tool claims to trace how much of an AI image has been drawn from existing material. (The Guardian)
+ From slop to Sotheby’s? AI art enters a new phase. (MIT Technology Review)

6 Chipmaker Nexperia has accused its ousted CEO of spreading falsehoods
Zhang Xuezheng reportedly claimed it was operating independently in China. (Bloomberg $)

7 This whistleblower raised concerns about the safety of US data under DOGE
And says the hostile reception to his complaint led to him leaving his dream job. (WP $)
+ DOGE’s tech takeover threatens the safety and stability of our critical data. (MIT Technology Review)

8 Aid agencies have been criticized for using AI “poverty porn”
But the NGOs say its use protects the identities of real people in social media campaigns. (The Guardian)

9 EVs lose their value much faster than gas-powered cars
Which isn’t exactly an incentive for prospective first-time buyers. (Rest of World)

10 What happens to our brains when we dream 🧠
We’re learning more about the many liminal states they can slip through. (Quanta Magazine)

Quote of the day

“Hoisted by their own GPTards.”

—Meta’s chief AI scientist Yann LeCun pokes fun at OpenAI after the company walked back its claim it had made a major math breakthrough in a post on X.

One more thing

One option for electric vehicle fires? Let them burn.

Although there isn’t solid data on the frequency of EV battery fires, it’s no secret that these fires are happening.

Despite that, manufacturers offer no standardized steps on how to fight them or avoid them in the first place. What’s more, with EVs, it’s never entirely clear whether the fire is truly out.

Patrick Durham, the owner of one of a growing number of private companies helping first responders learn how to deal with lithium-ion battery safety, has a solution. He believes that the best way to manage EV fires right now is to let them burn. But such an approach not only goes against firefighters’ instincts—it’d require a significant cultural shift. Read the full story.

—Maya L. Kapoor

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ It looks as though the sumo wrestlers who visited London last week had the best time.
+ The Chicago rat hole may not have been made by a rat after all.
+ Finally, a good use for AI—to help me pick a perfectly ripe avocado 🥑
+ Keith Richards, we love you!

How to Remove a Web Page from Google

The reasons for removing a page from Google’s search results haven’t much changed since I first published this article in 2023. Examples include pages with confidential, premium, or outdated info. Yet the tools and tactics have evolved.

Here’s my updated version.

Temporary Removal

The need to remove URLs from Google is urgent when a site is (i) hacked with malware or illicit content while indexed (even ranking) or (ii) inadvertently exposes private information that the search giant then indexes.

The quickest way to hide URLs from searchers is via Google's URL removal tool in the "Indexing" section of Search Console. There, you can remove a single URL or all URLs with the same prefix.

Google processes these requests quickly in my experience, but it doesn’t permanently deindex them. It instead hides the URLs from search results for roughly six months.

Search Console's "Temporarily Remove URL" request form blocks URLs from Google Search results for about six months. You can enter a URL and choose to remove only that URL or all URLs with the same prefix.

A similar feature in Bing Webmaster Tools, called “Block URLs,” hides pages from Bing search for approximately 90 days.

"Block URLs" in Bing Webmaster Tools hides a page or directory (URL & Cache, or Cache only) from Bing search for a maximum of 90 days.

Permanent Removal

Several options remove URLs permanently from Google’s index.

Delete the page from your site

Deleting a page from your web server will permanently deindex it. After deleting, set up a 410 HTTP status code of “gone” instead of 404 “not found.” Allow a few days for Google to recrawl the site, discover the 410 code, and remove the page from its index.
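
If you want to confirm that the server actually returns a 410 after the page is deleted, a quick check with Python's requests library (one option among many HTTP clients) looks like this; the URL is a placeholder.

```python
import requests

# Confirm a deleted page returns 410 "Gone" rather than 404 "Not Found".
# Replace the placeholder URL with the page you removed.
url = "https://www.example.com/deleted-page/"

response = requests.get(url, allow_redirects=False, timeout=10)
if response.status_code == 410:
    print("Server returns 410 Gone - the stronger deindexing signal is in place.")
else:
    print(f"Server returned {response.status_code}; consider configuring 410 for removed pages.")
```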

Note that Google discourages using redirects to remove low-value pages, as the practice passes poor signals to the destination page.

As an aside, Google provides a form to remove personal info from search results.

Add the noindex tag

Search engines nearly always honor the noindex meta tag. Search bots will crawl a noindex page, but will not include it in search results.

In my experience, Google will immediately recognize a noindex meta tag once it crawls the page. Note that the tag removes the page from search results, not the site. The page remains accessible through other links, internal and external.
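
A quick way to confirm the directive is actually being served is to fetch the page and look for noindex in either the robots meta tag or the X-Robots-Tag response header. The sketch below assumes the requests library and uses a placeholder URL and a rough string check, not a full HTML parser.

```python
import requests

# Check whether a page is served with a noindex directive, either as
# <meta name="robots" content="noindex"> in the HTML or as an
# X-Robots-Tag: noindex response header. URL is a placeholder.
url = "https://www.example.com/private-page/"

response = requests.get(url, timeout=10)
html = response.text.lower()

header_noindex = "noindex" in response.headers.get("X-Robots-Tag", "").lower()
meta_noindex = 'name="robots"' in html and "noindex" in html  # rough string check

print(f"X-Robots-Tag noindex: {header_noindex}")
print(f"robots meta tag noindex: {meta_noindex}")
```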

Noindex tags will not likely remove pages from LLMs such as ChatGPT, Claude, and Perplexity, as those platforms do not always honor them or even robots.txt exclusions. Deleting pages from a site is the surefire removal tactic.

Password protect

Consider adding a password to a published page to prevent it from becoming publicly accessible. Google cannot crawl pages requiring passwords or user names.

Adding a password will not remove an indexed page. A noindex tag will, however.

Remove internal links

Remove all internal links to pages you don’t want indexed. And do not link to password-protected or deleted pages; both hurt the user experience. Always focus on human visitors — not search engines alone.

Robots.txt

Robots.txt files can prevent Google (and other bots) from crawling a page (or category). Pages blocked via robots.txt could still be indexed and ranked if included in a site map or otherwise linked. Google will not encounter a noindex tag on blocked pages since it cannot crawl them.

A robots.txt file can instruct web crawlers to ignore, for instance, login pages, personal archives, or pages resulting from unique sorts and filters. That preserves search bots' crawl time for the parts of the site you want to rank.
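
To sanity-check which URLs a robots.txt file actually blocks, Python's built-in urllib.robotparser can evaluate the live file. The domain and paths below are placeholders for illustration.

```python
from urllib import robotparser

# Evaluate a live robots.txt to see which paths a given crawler may fetch.
parser = robotparser.RobotFileParser()
parser.set_url("https://www.example.com/robots.txt")
parser.read()

for path in ["/login/", "/blog/useful-article/", "/search?sort=price"]:
    allowed = parser.can_fetch("Googlebot", f"https://www.example.com{path}")
    print(f"{path}: {'crawlable' if allowed else 'blocked'}")
```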

Wikipedia Traffic Down As AI Answers Rise via @sejournal, @MattGSouthern

The Wikimedia Foundation (WMF) reported a decline in human pageviews on Wikipedia compared with the same months last year.

Marshall Miller, Senior Director of Product, Core Experiences at Wikimedia Foundation, wrote that the organization believes the decline reflects changes in how people access information, particularly through AI search and social platforms.

What Changed In The Data

Wikimedia observed unusually high traffic around May. The traffic appeared human but investigation revealed bots designed to evade detection.

WMF updated its bot detection systems and applied the new logic to reclassify traffic from March through August.

Miller noted the revised data shows “a decrease of roughly 8% as compared to the same months in 2024.”

WMF cautions that comparisons require careful interpretation because bot detection rules changed over time.

The Role Of AI Search

Miller attributed the decline to generative AI and social platforms reshaping information discovery.

He wrote that search engines are “providing answers directly to searchers, often based on Wikipedia content.”

This creates a scenario where Wikipedia serves as source material for AI-powered search features without generating traffic to the site itself.

Wikipedia’s Role In AI Systems

The traffic decline comes as AI systems increasingly depend on Wikipedia as source material.

Research from Profound, analyzing 680 million AI citations, finds that Wikipedia accounts for 47.9% of citations among ChatGPT's top 10 most-cited sources. In Google AI Overviews, Wikipedia's share of the top 10 is 5.7%, compared with 21.0% for Reddit and 18.8% for YouTube.

WMF also reported a 50% surge in bandwidth from AI bots since January 2024. These bots scrape content primarily for training computer vision models.

Wikipedia launched Wikimedia Enterprise in 2021, offering commercial, SLA-backed data access for high-volume reusers, including search and AI companies.

Why This Matters

If Wikipedia loses traffic while serving as ChatGPT’s most-cited source, the model that sustains content creation is breaking. You can produce authoritative content that AI systems depend on and still see referral traffic decline.

The incentive structure assumes publishers benefit from creating material that powers AI answers, but Wikipedia’s data shows that assumption doesn’t hold.

Track how AI features affect your traffic and whether being cited translates to meaningful engagement.

Looking Ahead

WMF says it will continue updating bot detection systems and monitoring how generative AI and social media shape information access.

Wikipedia remains a core dataset for modern search and AI systems, even when users don’t visit the site directly. Publishers should expect similar dynamics as AI search features expand across platforms.


Featured Image: Ahyan Stock Studios/Shutterstock

Review Of AEO/GEO Tactics Leads To A Surprising SEO Insight via @sejournal, @martinibuster

GEO/AEO is criticized by SEOs who claim that it's just SEO at best and unsupported lies at worst. Are SEOs right, or are they just defending their turf? Bing recently published a guide to AI search visibility that provides a perfect opportunity to test whether recommendations for optimizing for AI answers are distinct from traditional SEO practices.

Chunking Content

Some AEO/GEO optimizers are saying that it's important to write content in chunks because that's how AI and LLMs break up a page of content: into chunks. Bing's guide to answer engine optimization, written by Krishna Madhavan, Principal Product Manager at Bing, echoes the concept of chunking.

Bing’s Madhavan writes:

“AI assistants don’t read a page top to bottom like a person would. They break content into smaller, usable pieces — a process called parsing. These modular pieces are what get ranked and assembled into answers.”

The thing that some SEOs tend to forget is that chunking content is not new. It's been around for at least five years. Google introduced its passage ranking algorithm back in 2020. The passage ranking algorithm breaks up a web page into sections to understand how the page, and each section of it, is relevant to a search query.

Google says:

“Passage ranking is an AI system we use to identify individual sections or “passages” of a web page to better understand how relevant a page is to a search.”

Google’s 2020 announcement described passage ranking in these terms:

“Very specific searches can be the hardest to get right, since sometimes the single sentence that answers your question might be buried deep in a web page. We’ve recently made a breakthrough in ranking and are now able to better understand the relevancy of specific passages. By understanding passages in addition to the relevancy of the overall page, we can find that needle-in-a-haystack information you’re looking for. This technology will improve 7 percent of search queries across all languages as we roll it out globally.”

As far as chunking is concerned, any SEO who has optimized content for Google’s Featured Snippets can attest to the importance of creating passages that directly answer questions. It’s been a fundamental part of SEO since at least 2014, when Google introduced Featured Snippets.
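
As a toy illustration of the chunking idea (not Google's passage ranking system), the sketch below splits a page's text into paragraph-level passages and scores each one against a query with simple word overlap so the most relevant passage can be surfaced on its own. Real systems use learned relevance models; the overlap scoring here is only a stand-in.

```python
# Toy passage scoring: split content into paragraph "chunks" and score each
# chunk against a query by word overlap. Illustrative only.

page_text = """Our dishwasher buying guide covers noise, capacity, and cost.

A 42 dB dishwasher is quiet enough for open-concept kitchens.

Installation usually takes about two hours with basic tools."""

query = "how quiet is a dishwasher for an open kitchen"

def overlap_score(passage: str, query: str) -> int:
    return len(set(passage.lower().split()) & set(query.lower().split()))

passages = [p.strip() for p in page_text.split("\n\n") if p.strip()]
best_passage = max(passages, key=lambda p: overlap_score(p, query))
print("Best passage:", best_passage)
```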

Titles, Descriptions, and H1s

The Bing guide to ranking in AI also states that descriptions, headings, and titles are important signals to AI systems.

I don't think I need to belabor the point that descriptions, headings, and titles are fundamental elements of SEO. So again, there is nothing here to differentiate AEO/GEO from SEO.

Lists and Tables

Bing recommends bulleted lists and tables as a way to easily communicate complex information to users and search engines. This approach to organizing data is similar to an advanced SEO method called disambiguation. Disambiguation is about making the meaning and purpose of a web page as clear as possible, to make it less ambiguous.

Making a page less ambiguous can involve semantic HTML that clearly delineates which part of a web page is the main content (MC, in the parlance of the quality rater guidelines Google provides to its third-party raters) and which parts of the page are just advertisements, navigation, a sidebar, or the footer.

Another form of disambiguation is through the proper use of HTML elements like ordered lists (OL) and the use of tables to communicate tabular data such as product comparisons or a schedule of dates and times for an event.

The use of HTML elements (like heading elements, OL, and UL) gives structure to on-page information, which is why it's called structured information. Structured information and structured data are two different things. Structured information is on the page and is seen in the browser and by crawlers. Structured data is metadata that only a bot will see.

There are studies showing that structured information helps AI agents make sense of a web page, so I have to concede that structured information is particularly helpful to AI agents in a unique way.

Question And Answer Pairs

Bing recommends Q&A’s, which are question and answer pairs that an AI can use directly. Bing’s Madhavan writes:

“Direct questions with clear answers mirror the way people search. Assistants can often lift these pairs word for word into AI-generated responses.”

This is a mix of passage ranking and the SEO practice of writing for featured snippets, where you pose a question and give the answer. It's a risky approach to create an entire page of questions and answers, but if it feels useful and helpful, then it may be worth doing.

Something to keep in mind is that Google's systems treat content lacking unique insight as being on the same level as spam. Google also considers content created specifically for search engines to be low quality.

Anyone considering writing questions and answers on a web page for the purpose of AI SEO should first consider whether it's useful for people and think deeply about the quality of the question and answer pairs. Otherwise, it's just a page of rote, made-for-search-engines content.

Be Precise With Semantic Clarity

Bing also recommends semantic clarity. This is also important for SEO. Madhavan writes:

  • “Write for intent, not just keywords. Use phrasing that directly answers the questions users ask.
  • Avoid vague language. Terms like innovative or eco mean little without specifics. Instead, anchor claims in measurable facts.
  • Add context. A product page should say “42 dB dishwasher designed for open-concept kitchens” instead of just “quiet dishwasher.”
  • Use synonyms and related terms. This reinforces meaning and helps AI connect concepts (quiet, noise level, sound rating).”

They also advise against abstract words like "next-gen" or "cutting edge" because they don't really say anything. This is a big issue with AI-generated content, which tends to use abstract words that can be removed entirely without changing the meaning of the sentence or paragraph.

Lastly, they advise against decorative symbols, which is a good tip. Decorative symbols like the arrow (→) don't really communicate anything semantically.

All of this advice is good. It’s good for SEO, good for AI, and like all the other AI SEO practices, there is nothing about it that is specific to AI.

Bing Acknowledges Traditional SEO

The funny thing about Bing’s guide to ranking better for AI is that it explicitly acknowledges that traditional SEO is what matters.

Bing’s Madhavan writes:

“Whether you call it GEO, AIO, or SEO, one thing hasn’t changed: visibility is everything. In today’s world of AI search, it’s not just about being found, it’s about being selected. And that starts with content.

…traditional SEO fundamentals still matter.”

AI Search Optimization = SEO

Google and Bing have incorporated AI into traditional search for about a decade. AI search ranking is not new, so it should not be surprising that SEO best practices align with ranking for AI answers. The same considerations also parallel how users interact with content.

Many SEOs are still stuck in the decades-old keyword optimization paradigm, and perhaps these methods of disambiguation and precision are new to them. So it may be a good thing if the broader SEO industry catches up with these concepts for optimizing content and recognizes that there is no AEO/GEO; it's still just SEO.

Featured Image by Shutterstock/Roman Samborskyi

Raptive Drops Traffic Requirement By 75% To 25,000 Views via @sejournal, @MattGSouthern

Raptive lowered its minimum traffic requirement to 25,000 monthly pageviews from 100,000.

The ad network announced that the new threshold represents a 75% reduction from the previous standard.

Raptive retired its Rise pilot program and consolidated all entry-level publishers into its Insider tier.

What Changed At Raptive

Sites generating between 25,000 and 99,999 monthly pageviews can now apply. These publishers need at least 50% of traffic from the United States, United Kingdom, Canada, Australia, or New Zealand.

Sites with 100,000 or more pageviews need only 40% traffic from those markets.

Raptive’s announcement stated:

“We’re living in a moment where AI drives inflated pageviews for low-quality websites and where algorithms can shift a site’s pageviews overnight. What truly matters—more than ever—is original, high-quality content that audiences trust.”

The Rise program launched in 2024 for sites between 50,000 and 100,000 monthly pageviews. That tier is being eliminated.

Current Insider-level publishers can now add additional sites once they reach 25,000 monthly pageviews.

Referral Program Expansion

Raptive expanded its referral program through January 31.

Publishers receive $1,000 when referring creators with sites generating 100,000 or more monthly pageviews.

For sites between 25,000 and 100,000 pageviews, the referral bonus is $250 during the limited promotion period.

Access Widening At Some Networks

Other networks have adjusted entry requirements in recent years, though changes vary.

Mediavine launched Journey in March 2024 for sites starting around 10,000 sessions. Ezoic removed pageview minimums for its Access Now monetization program. SHE Media lists an entry point around 20,000 pageviews.

These moves don’t necessarily represent an industry-wide pattern but show expanded options for smaller publishers at select networks.

Why This Matters

If you’re managing a site between 25,000 and 100,000 monthly pageviews with strong tier-one traffic, you now have access to Raptive’s managed monetization. You’ll still need to meet quality standards around original content, proper analytics setup, and advertiser compatibility.

The lower threshold acknowledges that traffic volatility from algorithm changes has made consistent pageview growth less predictable.

Looking Ahead

The new 25,000 pageview minimum takes effect immediately for new applications. Raptive continues requiring original content and proper site setup alongside the reduced traffic threshold.

Other networks may adjust their requirements as traffic patterns continue shifting, but each provider sets criteria independently.


Featured Image: Song_about_summer/Shutterstock