WordPress Delays Release Of Version 7.0 To Focus On Stability

WordPress 7.0, previously scheduled for an April 9 release, will be delayed to stabilize the Real-Time Collaboration feature and ensure that the release, a major milestone, will “target extreme stability.” Much is riding on WordPress 7.0, as it will ship with features meant to usher in the age of AI-driven content management systems.

Prioritization Of Stability

Matt Mullenweg, co-founder of WordPress, commenting in the official Making WordPress Slack workspace, said the release should step back from its current trajectory and prioritize stability, calling for a longer pre-release phase to get the real-time collaboration (RTC) feature working correctly. The delay is expected to last weeks, not days, and is described as a one-off deviation from WordPress’s planned date-driven schedule.

Mullenweg posted:

“Given the scope and status of 7.0, I think we should go back to beta releases, get the new tables right, lock in everything we want for 7.0, and then start RCs again. Date-driven is still our default, but for this milestone release we want to target extreme stability and exciting updates, especially as AI-accelerated development is increasing people’s expectations for software.

This is a one-off, I think for future we should get back on the scheduled train, with an aim for 4-a-year in 2027, to hopefully reflect our AI-enabled ability to move faster.”

Extended Release Candidate Phase Replaces Beta Reversion

To avoid technical compatibility issues, the project will remain in the release candidate phase, extending the testing period through additional RC builds as needed.

The proposal to return to beta releases was rejected because it would break PHP version comparison behavior, plugin update logic, and tooling that depends on standard version sequencing. Continuing with RC builds preserves compatibility while allowing more time for testing and fixes.

Real-Time Collaboration

The delay is largely due to the Real-Time Collaboration feature, which introduces new database tables and changes how WordPress handles editing sessions. Contributors identified risks related to performance, data handling, and interactions with existing systems.

A primary concern is that real-time editing currently disables persistent post caches during active sessions, a performance issue the team is working to resolve before the final release.

Database Design Raises Performance Concerns

A key part of the discussion focused on how to structure the database for Real-Time Collaboration. A proposed single RTC table would support both real-time editing updates and synchronization, but some contributors noted that the two workloads are fundamentally different.

Real-time collaboration generates high-frequency, bursty writes that require low latency, meaning updates must land with very little delay. Synchronization between environments, by contrast, involves slower, structured updates that may include full-table scans.

Combining both patterns in one table risks performance problems and added complexity. Contributors discussed splitting the workloads into separate tables optimized for each use case, but no decision has been made.
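To make the trade-off concrete, here is a purely hypothetical sketch of what separate tables could look like, written in WordPress’s usual $wpdb style. Every table and column name here is invented for illustration; no actual schema has been decided.

```php
<?php
// Hypothetical sketch only -- WordPress core has not settled on a schema.

// A table tuned for real-time collaboration: high-frequency, bursty,
// append-only writes, indexed for fast per-session reads.
$collab_sql = "CREATE TABLE {$wpdb->prefix}rtc_updates (
    id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
    session_id BIGINT UNSIGNED NOT NULL,
    delta LONGBLOB NOT NULL,
    created_at DATETIME(3) NOT NULL,
    PRIMARY KEY  (id),
    KEY session_created (session_id, created_at)
) " . $wpdb->get_charset_collate();

// A separate table for slower, structured synchronization between
// environments, where bulk reads and full-table scans are expected.
$sync_sql = "CREATE TABLE {$wpdb->prefix}rtc_sync_state (
    object_id BIGINT UNSIGNED NOT NULL,
    environment VARCHAR(64) NOT NULL,
    state LONGTEXT NOT NULL,
    updated_at DATETIME NOT NULL,
    PRIMARY KEY  (object_id, environment)
) " . $wpdb->get_charset_collate();
```

Separating the two would let each table carry indexes and storage settings matched to its access pattern rather than compromising on one.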

Gap In Release Candidate Testing Raises Concern

The discussion in the WordPress Slack workspace also raised concern over whether there had been enough real-world release candidate testing, since database schema changes increase the risk of failures during upgrades. The idea of using the Gutenberg plugin as a testing vehicle was rejected because its database changes could affect production sites and would require complex migration logic. Instead, the project will use an extended RC phase to increase testing exposure and gather feedback from a wider group of users.

Versioning Constraints

The proposal to delay version 7.0 raised additional questions. PHP version comparison rules and related tooling made returning to beta versions impractical. Staying within the release candidate sequence (RC1, RC2, RC3, and so on) avoids those problems while allowing continued iteration, so the project will continue with release candidates.
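As a rough illustration of the constraint (a sketch, not WordPress core code): PHP’s built-in version_compare() ranks “beta” below “RC,” so a beta published after an RC would register as a downgrade to any update logic or tooling that compares version strings.

```php
<?php
// Illustration only: PHP orders pre-release suffixes as
// dev < alpha < beta < RC < stable release.
var_dump( version_compare( '7.0-beta3', '7.0-RC1', '<' ) ); // bool(true): a later beta still sorts below RC1
var_dump( version_compare( '7.0-RC2', '7.0-RC1', '>' ) );   // bool(true): RC2 is a normal upgrade from RC1
var_dump( version_compare( '7.0-RC1', '7.0', '<' ) );       // bool(true): any RC sorts below the final release
```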

Future Release Cadence Remains Unchanged

The delay is described as a temporary exception. Matt Mullenweg said the project intends to return to a regular release schedule, with a goal of delivering roughly four releases per year by 2027 as development speeds increase with AI-assisted workflows.

Implications For Developers And Users

Developers should expect continued changes to the Real-Time Collaboration feature and its supporting database structures during the extended release candidate phase. The longer testing period provides more time to identify issues before release. For site owners and hosts, the delay shows that WordPress is prioritizing stability over schedule while introducing more complex real-time and synchronization features.

Impact Of RTC On Hosting Environments

One issue that wasn’t discussed but matters in practice is how real-time collaboration might affect web hosting providers, who will need to test whether the feature causes problems in shared hosting environments. Although RTC will ship turned off by default, the impact of customers enabling it on shared hosting is currently unknown. A spokesperson for managed WordPress hosting provider Kinsta told Search Engine Journal that the company is still testing. Because the feature is still evolving, Kinsta and other web hosts will need to keep testing the upcoming WordPress release candidates.

I think most people will agree that the decision to delay the release of WordPress 7.0 is the right call.

Introducing llms.txt to Shopify: Give AI a map to your best products 

You’ve worked hard to build your product catalog. The last thing you want is AI tools like ChatGPT or Google Gemini describing your products inaccurately to potential customers. 

AI tools don’t browse your whole store the way a search engine does. They grab what they can find, quickly, and fill in the gaps. For a store with a large catalog, that means incomplete answers, outdated information, or worse, sending shoppers to a competitor. 

The new llms.txt feature, available in Yoast SEO for Shopify, bridges that gap.

What does it actually do? 

It creates a file that tells AI tools which parts of your store matter most: your top products, your collections, your policies, and your key pages. Think of it as handing AI a well-organized store guide instead of letting it wander around on its own. 

You switch it on once. We handle the rest. 

Two ways to use it 

Let Yoast handle it automatically 

Turn it on and we’ll build and update the file each week based on your Shopify data. No decisions needed. The file automatically highlights: 

  • Your 10 most-sold products over time
  • Up to 5 of your largest collections, plus a link to your full product range 
  • Your store policies, including shipping, returns, and privacy 
  • Your homepage, latest blog posts, and most recently updated pages 
  • Any pages you’ve already marked as cornerstone content 

Or choose exactly what’s included 

If you’d rather have full control, switch to manual selection. You can hand-pick the products and pages you want to feature, and there’s a dedicated spot to add your “About us” page so AI knows the story behind your brand. 

Either way, the file updates weekly and removes deleted products automatically. 
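For a sense of what the output looks like, here is a minimal sketch of an llms.txt file following the community llms.txt convention (a Markdown title, a short summary, and sections of links). The store name and URLs below are invented, and the exact structure Yoast generates may differ:

```
# Example Ceramics Store

> Handmade ceramics and tableware, shipped worldwide.

## Products
- [Stoneware Mug](https://example-store.com/products/stoneware-mug)
- [Serving Bowl](https://example-store.com/products/serving-bowl)

## Collections
- [Tableware](https://example-store.com/collections/tableware)

## Policies
- [Shipping policy](https://example-store.com/policies/shipping-policy)
- [Return policy](https://example-store.com/policies/refund-policy)
```

The file lives at the root of the store (yourstore.com/llms.txt), which is where AI crawlers look for it.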

No technical knowledge needed

Setting this up from scratch would normally mean editing code. We’ve built it directly into your Yoast SEO for Shopify settings so any member of your team can turn it on in seconds. If you already have a redirect set up for /llms.txt, we’ll respect it and let you know, so nothing breaks. 

You decide when it’s right for your business 

We believe every merchant should have a say in how their content is seen and used as AI plays a bigger role in how people discover products online. That’s why this feature is opt-in. 

Turn on the llms.txt toggle in Yoast SEO for Shopify next time you log in to your store.

How To Identify Which LLM Is Actually Working For You [Webinar]

AI search is dominating the strategy conversation right now, and everyone is hearing the same thing from clients and directors: “What’s our AI search plan?”

The instinct is to optimize everywhere (ChatGPT, Perplexity, Gemini) and move fast. But before you reallocate budget or rewrite your GEO roadmap, there’s a more useful question to ask first:

Which LLM is actually driving conversions in your clients’ specific industry?

Join us for an upcoming expert panel webinar where we’ll dive into exactly that.

What You’ll Learn

In this webinar, Danielle Wood, Content & Creative Manager at CallRail, and Natalie Johnson, SEO & AI Visibility Expert & Founder of SweetGlow Marketing, will break down real conversion data by LLM and show how platform-level performance should shape your GEO strategy.

Specifically, you’ll walk away with:

  • Conversion data by LLM platform, so you know where high-intent traffic is actually coming from in each industry
  • A clear AI prioritization framework to stop spreading GEO effort equally and concentrate it where it converts
  • A reporting model that ties AI search activity to real business outcomes clients can see and trust

Why Attend?

You’ll finally be able to justify AI search investment; this session will give you the data and the framework to make that case and to implement the strongest, most successful AI search strategy possible.

Join us live to get your questions answered directly by the expert panel.

Inside the stealthy startup that pitched brainless human clones

After operating in secrecy for years, a startup company called R3 Bio, in Richmond, California, suddenly shared details about its work last week—saying it had raised money to create nonsentient monkey “organ sacks” as an alternative to animal testing.

In an interview with Wired, R3 listed three investors: billionaire Tim Draper, the Singapore-based fund Immortal Dragons, and life-extension investors LongGame Ventures.

But there is more to the story. And R3 doesn’t want that story told.

MIT Technology Review discovered that the stealth startup’s founder John Schloendorn also pitched a startling, medically graphic, and ethically charged vision for what he’s called “brainless clones” to serve the role of backup human bodies.

Imagine it like this: a baby version of yourself with only enough of a brain structure to be alive in case you ever need a new kidney or liver.

Or, alternatively, he has speculated, you might one day get your brain placed into a younger clone. That could be a way to gain a second lifespan through a still hypothetical procedure known as a body transplant.

The fuller context of R3’s proposals, as well as the activities of another stealth startup with related goals, has not previously been reported. Both have been kept secret by a circle of extreme life-extension proponents who fear that their plans for immortality could be derailed by clickbait headlines and public backlash.

And that’s because the idea can sound like something straight from a creepy science fiction film. One person who heard R3’s clone presentation, and spoke on the condition of anonymity, was left reeling by its implications and shaken by Schloendorn’s enthusiastic delivery. The briefing, this person said, was like a “close encounter of the third kind” with “Dr. Strangelove.”

A key inspiration for Schloendorn is a birth defect in which children are born missing most of their cortical hemispheres; he’s shown people medical scans of these kids’ nearly empty skulls as evidence that a body can live without much of a brain. 

And he’s talked about how to grow a clone. Since artificial wombs don’t exist yet, brainless bodies can’t be grown in a lab. So he’s said the first batch of brainless clones would have to be carried by women paid to do the job. In the future, though, one brainless clone could give birth to another.

Last Monday, the same day it announced itself to the world in Wired, R3 sent us a sweeping disavowal of our findings. It said Schloendorn “never made any statement regarding hypothetical ‘non-sentient human clones’ [that] would be carried by surrogates.” The most overarching of these challenges was its insistence that “any allegations of intent or conspiracy to create human clones or humans with brain damage are categorically false.”

But even Schloendorn and his cofounder, Alice Gilman, can’t seem to keep away from the topic. Just last September, the pair presented at Abundance Longevity, a $70,000-per-ticket event in Boston organized by the anti-aging promoter Peter Diamandis. Although the presentation to about 40 people was not recorded and was meant to be confidential, a copy of the agenda for the event shows that Schloendorn was there to outline his “final bid to defeat aging” in a session called “Full Body Replacement.”

According to a person who was there, both animal research and personal clones for spare organs were discussed. During the presentation, Gilman and Schloendorn even stood in front of an image of a cloning needle. Pressed on whether this was a talk about brainless clones, Gilman told us that while R3’s current business is replacing animal models, “the team reserves the right to hold hypothetical futuristic discussions.”

MIT Technology Review found no evidence that R3 has cloned anyone, or even any animal bigger than a rodent. What we did find were documents, additional meeting agendas, and other sources outlining a technical road map for what R3 called “body replacement cloning” in a 2023 letter to supporters. That road map involved improvements to the cloning process and genetic wiring diagrams for how to create animals without complete brains. 

[Image: A child with hydranencephaly, a rare condition in which most of the brain is missing. Could a human clone also be created without much of a brain as an ethical source of spare organs? Credit: Dimitri Agamanolis, M.D., via Wikipedia]

A main purpose of the fundraising, investors say, was to support efforts to try these techniques in monkeys from a base in the Caribbean. That offered a path to a nearer-term business plan for more ethical medical experiments and toxicology testing—if the company could develop what it now calls monkey “organ sacks.” However, this work would clearly inform any possible human version. 

Though he holds a PhD, Schloendorn is a biotech outsider who has published little and is best known for having once outfitted a DIY lab in his Bay Area garage. Still, his ties to the experimental fringe of longevity science have earned him a network in Silicon Valley and allies at a risk-taking US health innovation agency, ARPA-H. Together with his success at raising money from investors, this signals that the brainless-clone concept should be taken seriously by a wider community of scientists, doctors, and ethicists, some of whom expressed grave concerns. 

“It sounds crazy, in my opinion,” said Jose Cibelli, a researcher at Michigan State University, after MIT Technology Review described R3’s brainless-clone idea to him. “How do you demonstrate safety? What is safety when you’re trying to create an abnormal human?”

Twenty-five years ago, Cibelli was among the first scientists to try to clone human embryos, but he was trying to obtain matched stem cells, not make a baby. “There is no limit to human imagination and ways to make money, but there have to be boundaries,” he says. “And this is the boundary of making a human being who is not a human being.” 

“Feasibility research”

Since Dolly the sheep was born in 1996, researchers have cloned dogs, cats, camels, horses, cattle, ferrets, and other species of mammal. Injecting a cell from an existing animal into an egg creates a carbon-copy embryo that can develop, although not always without problems. Defects, deformities, and stillbirths remain common. 

Those grave risks are why we’ve never heard of a human clone, even though it’s theoretically possible to create one. 

But brainless clones flip the script. That’s because the ultimate aim is to create not a healthy person but an unconscious body that would probably need life support, like a feeding tube, to stay alive. Because this body would share the DNA of the person being copied, its organs would be a near-perfect immunological match. 

Backers of this broad concept argue that a nonsentient body would be ethically acceptable to harvest organs from. Some also believe that swapping in fresh, young body parts—known as “replacement”—is the likeliest path to life extension, since so far no drug can reverse aging. 

And then there’s the idea of a complete body transplant. “Certainly, for the cryonics patients, that sounds like something really promising,” says Anders Sandberg, a prominent Swedish transhumanist and expert in the ethics of future technologies. He notes that many people who opt to be stored in cryonic chambers after death choose the less expensive “head only” option, so “there might be a market for having an extra cloned body.”

MIT Technology Review first approached Schloendorn two years ago after learning he’d led a confidential online seminar called the Body Replacement Mini Conference, in which he presented “recent lab progress towards making replacement bodies.” 

According to a copy of the agenda, that 2023 session also included a presentation by a cloning expert, Young Gie Chung. And there was another from Jean Hébert, who was then a professor at the Albert Einstein College of Medicine and is now a program manager at ARPA-H, where he oversees a project to use stem cells to restore damaged brain tissue. Hébert popularized the so-called replacement solution to avoiding death in a 2020 book called Replacing Aging.

In an interview prior to joining the government in 2024, Hébert described an informal but “very collaborative” relationship with Schloendorn. The overall idea was that to stop aging, one of them would determine how to repair a brain, while the other would figure out how to create a body without one. “It’s a perfect match, right? Body, brain,” Hébert told MIT Technology Review at the time. 

Schloendorn, by working outside the mainstream, had the huge advantage of “not being bound by getting the next paper out, or the next grant,” Hébert said, adding, “It’s such a wonderful way of doing research. It’s just clean and pure.” R3 now appears on the ARPA-H website on a list of prospective partners for Hébert’s program.

In a LinkedIn exchange that same year, Schloendorn described his work as “feasibility research in body replacement.”

“We will try to do it in a way that produces defined societal benefits early on, and we need to be prepared to take no for an answer, if it turns out that this cannot be done safely,” Schloendorn wrote at the time. He declined an interview then, saying that before exiting stealth mode he wanted to be sure the benefits were “reasonably grounded in reality.”

That could prove challenging. While body-part replacement sounds logical, like swapping the timing belt on an old car, in reality there’s scant evidence that receiving organs from a younger twin would make you live any longer. 

A complete body transplant, meanwhile, would probably be fatal, at least with current techniques. In the latest test of the concept, published last July, Russian surgeons removed a pig’s head and then sewed it back on. The animal did live—breathing weakly and lapping water from a syringe. But because its spinal cord had been cut, it was otherwise totally paralyzed. (As yet, there’s no proven method to rejoin a severed spinal cord.) In an act of mercy, the doctors ended the pig’s life after about 12 hours. 

Even some of R3’s investors say the endeavor is a risky, low-odds project, on par with colonizing Mars. Boyang Wang, head of Immortal Dragons, has spoken at longevity conferences about body-swapping technology, referring to the chance that “when the time comes, you can transplant your brain into a new body.” Wang confirmed in a January Zoom call that he’d been referring to R3 and that he invested $500,000 in the company during a 2024 fundraising round.

But since making his investment, Wang says, he’s become less bullish. He now views whole-body transplant as “very infeasible, not even very scientific” and “far away from hope for any realistic application.” 

Still, he says, the investment in R3 fits with his philosophy of making unorthodox bets that could be breakthroughs against aging. “What can really move the needle?” he asks. “Because time is running out.”

Stealth mode

Clonal bodies sit at the extreme frontier of an advancing cluster of technologies all aimed at growing spare parts. Researchers are exploring stem cells, synthetic embryos, and blob-like organoids, and some companies are cloning genetically engineered pigs whose kidneys and hearts have already been transplanted into a few patients. Each of these methods seeks to harness development—the process by which animal bodies naturally form in the womb—to grow fully functional organs. 

There’s even a growing cadre of mainstream scientists who say nonsentient bodies could solve the organ shortage, if they could be grown through artificial means. Two Stanford University professors, calling these structures “bodyoids,” published an editorial in favor of manufacturing spare human bodies in MIT Technology Review last year. While that editorial left many details to the imagination, they called the idea “at least plausible—and possibly revolutionary.” 

“There are a lot of variations on this where they’re trying to find a socially acceptable form,” says George Church, a Harvard University professor who advises startups in the field. But Church says gestating an entire body is probably taking things too far, especially since nearly all patients on transplant lists are waiting for just a single organ, like a heart or kidney. 

“There’s almost no scenario where you need a whole body,” he says. “I just think even if it’s someday acceptable, it’s not a good place to start.” For the moment, Church says, brainless human bodies are “not very useful, in addition to being repulsive.”

That’s arguably why body replacement technology still feels risky to talk about, even among life-extension enthusiasts who are otherwise ready to inject Chinese peptides or have their bodies cryogenically frozen. “I think it’s exciting or interesting from a scientific perspective, but I think the world is not fully ready for it yet,” says Emil Kendziorra, CEO of Tomorrow Bio, a company in Berlin that stores bodies at -196 °C in the hope they can be restored to life in the future. 

“Everybody’s like, yeah, you know, cryopreservation makes total sense,” he says. “And then you talk about total body replacement. And then everybody’s like, Whoa, whoa, whoa.”

Even so, “replacement” technology has found a fervent base of support among a group of self-described “hardcore” longevity adherents who follow a philosophy called Vitalism, which holds that society should redirect resources toward achieving unlimited lifespans. The growing influence of this movement, achieved through lobbying, investment, recruiting, and public messaging, was detailed earlier this year in MIT Technology Review.

Last spring, during a meetup for this community, Kendziorra was among the attendees at an invite-only “Replacement Day” gathering that took place off the public schedule. It was where more radical ideas could be discussed freely, since to some in the Vitalist circle, replacing body parts has emerged as the most plausible, least expensive way to beat death. 

At least that was the conclusion of a road map for anti-aging technology produced by one Vitalist group, the Longevity Biotech Fellowship, which reckoned that a proof-of-concept human clone lacking a neocortex would cost $40 million to create—a tiny amount, relatively speaking. 

Its report cited the existence of two stealth companies working on cloning whole nonsentient bodies, although it took care not to name them. If these companies’ activities become public, “there will be a huge backlash—people will hate it,” the entrepreneur Kris Borer said while presenting the road map at a French resort last August. 

“There are a ton of dystopian movies and novels about this kind of stuff. That is why I didn’t talk about any of the companies working on it. They are trying to hide from public attention,” he said. “We have to have the angel investors and other people invest kind of in secret until things are ready.” 

Borer did say what he sees as the best way to go public: first, to slowly ease body replacement into society’s awareness by disclosing more limited aims, which will be palatable. “We are not going to start with Let’s clone you and give you a body. We are going to start with Let’s solve the organ shortage,” he said. “Eventually people will warm up to it, and then we can go to the more hardcore stuff.”

In an interview earlier this month, Borer declined to name the companies involved in his immortality road map, or to say if R3 is one of them. But we did identify one additional stealthy startup, this one focused on replacing a person’s internal organs, not the whole body. Called Kind Biotechnology, it is a New Hampshire–based company headed by the anti-aging researcher Justin Rebo, a sometime collaborator of Schloendorn’s.

[Image: A patent image from Kind Biotechnology (WO2025260099 via WIPO) shows a mouse pup engineered to lack anatomical features (left) next to a normal animal. The company’s goal is to grow organ “sacks” with a “complete lack of ability to feel, think, or sense.”]

According to patent applications filed by the company, Rebo’s team is working to create animals with a “complete lack of ability to feel, think, or sense the environment.” Images included in the patents show mice the company produced that lack a complete brain, and others that don’t have faces or limbs. The company made these animals by deleting genes in embryos with the gene-editing technology CRISPR, aiming to create a “sack of organs that grows mostly on its own” with only a minimal nervous system. A cartoon rendering submitted to the patent office shows what looks like a fleshy duffel bag connected to life support tubes.

In an email, Rebo said his company is working on an “ethical and scalable” way to create animal organs for experimental transplant to humans. He notes that “thousands die while waiting” for an organ. 

Some of Kind’s patent applications do cover the possibility of producing these organ sacks from human cells. Rebo says that’s more of a speculative possibility. But he does see his work as part of the “replacement” approach to longevity. For one thing, a “scalable production of young, high-quality organs” would let surgeons try transplants in more types of patients, including many with heart disease in old age who aren’t candidates for a transplant now.

“With abundant high-quality organs, replacement could become a direct form of rejuvenation by replacement of failing parts,” he says. 

And Rebo imagines that simultaneously replacing multiple internal organs (grown together in the sack) could have even broader rejuvenating effects. “Ultimately, replacing failing parts is a direct path to extending healthy human lifespan,” he says. 

Church, who agreed earlier this year to advise Kind Bio, sees this work as part of an effort to “nudge” these technologies “toward something that is more useful and more acceptable from the get-go,” he says. “And then let’s see how society responds to that—rather than jumping to the most repulsive and most useless form, which some of them seem to be aiming for.” 

“There’s one way to find out”

People who know Schloendorn describe a dynamo-like presence who is “100% dedicated” to the goal of extreme life extension. In 2006, he penned a paper in a bioethics journal outlining why the “desire to live forever” is rational, and his doctoral research at the University of Arizona was sponsored by a longevity research organization called the SENS Foundation.  

He’s also well connected. In an interview, Aubrey de Grey, the influential and controversial fundraiser and prognosticator who cofounded SENS, called Schloendorn “one of my protégés.” And around 2010, Peter Thiel reportedly invested $1.5 million in ImmunePath, a company started by Schloendorn to develop stem-cell treatments, though it soon failed. (A representative for Thiel did not respond to a request to confirm the figure.)

By 2021, Schloendorn had moved on, founding R3 Biotechnologies. He began to circulate the body replacement idea and discuss a step-by-step scheme to get there: assess techniques in the lab first, then in monkeys, and maybe eventually in humans. 

A 2023 “letter to stakeholders” signed by Schloendorn begins by saying that “body replacement cloning will require multicomponent genetic engineering on a scale that has never been attempted in primates.” Fortunately, it adds, molecular techniques for “brain knockout” are well known in mice and should also be expected to function in “birthing whole primates,” a class that includes both monkeys and humans. 

Would it work? “There’s one way to find out,” the letter says. 

Wang, the investor at Immortal Dragons, says he put money into R3 after it showed him it is possible to create mice without complete brains. “There were imperfections, but the resulting mice survived, grew up, and to me, that is a pretty strong experiment,” he says; it was evidence enough for him to fund R3’s attempt to “replicate the result in primates.” 

(In its emailed statement, R3 said the company and its founders “never produced any degree of brain alterations in any species, did not attempt to do so, did not hire another party to do so, and have no specific plans to do so in the future.” It added: “We do not work with live non-human primates.”) 

The bigger technical obstacle, though, remains the cloning. Out of 100 attempts to clone an animal, only a few typically succeed. That fact alone makes cloning a human—or a monkey—almost infeasible.

But R3 does seem to have made an effort to tackle the efficiency problem. In one document reviewed by MIT Technology Review, it claims to have implemented improvements to the basic procedure in rodents, referencing a protein, called a histone demethylase, that helps erase a cell’s genetic memory. Adding it can greatly increase the chance that the cell will form a cloned embryo after being injected into an egg in the lab.

Those molecules were used in the first successful cloning of a monkey, which occurred in 2018 in China. But it still wasn’t easy—in fact, it was a huge and costly effort to handle a crowd of monkeys in estrus and perform IVF on them. According to Michigan State’s Cibelli, monkey cloning remains nearly impossible, at least on US territory, just because it’s “unaffordable.”

Nevertheless, success in monkeys did help prove, at least biologically, that human reproductive cloning could be possible. 

The company may also have tried to tackle a second long-standing obstacle to cloning: defects in how the placenta works. Because of such problems, some cloned animals die quickly after birth.

The R3 document refers to a “birthing fix” it developed to further improve the cloning success rate. While MIT Technology Review didn’t learn what R3’s process entails, we found a reference to it on the LinkedIn page of Maitriyee Mahanta, a scientist who cosigned the 2023 letter to R3 stakeholders and is a former research assistant to Hébert. (We were unable to reach Mahanta for comment.)

Her page described her current role as “molecular lead” studying cloning, “birth rate fixing,” and cortical development using cells from nonhuman primates. Her job affiliation is given as the Longevity Escape Velocity Foundation, a nonprofit where de Grey is the president and chief science officer. But de Grey says his foundation only arranged a work visa for Mahanta as part of a partnership “with the company she actually spends her time at.”

Like several other people interviewed for this article, de Grey made a resourceful effort to avoid directly confirming the existence of R3 when we spoke, while at the same time freely discussing theoretical aspects of body cloning technology. For instance, he talked about ways to shorten the wait for your double to grow up to a size suitable for organ harvesting; a further genetic mutation could be added to cause “central precocious puberty” in the clone, he said. This condition causes a growth spurt, even pubic hair, in a toddler. 

Cloning dictators

Who would clone a body and pay to keep it alive for years, until it’s needed? The first customers for this costly technology (if it ever proves feasible) would likely be the ultra-rich or the ultra-powerful. 

Indeed, somehow the world’s top dictators seem to have gotten the memo about replacement parts. In September, a hot mic picked up a conversation between Russian president Vladimir Putin and Chinese leader Xi Jinping as they walked through Beijing with North Korean autocrat Kim Jong Un; in the exchange, the Russian speculated on life extension.  

“Biotechnology is continuously developing. Human organs can be continuously transplanted. The longer you live, the younger you become, and [you can] even achieve immortality,” Putin said through an interpreter.

“Some predict that in this century, humans will live to 150 years old,” Xi responded agreeably.

How the leaders learned of these possibilities is unknown. But scenarios involving dictators are a constant topic among body replacement enthusiasts. 

“There are companies working on this. They are in stealth—we can’t reveal too much about them—but the general concept on this is if you didn’t have any ethical qualms, you could do most of it today,” Will Harborne, the chief investment officer of LongGame Advisors, said last year, during an interview with the podcaster Julian Issa. “If you were the dictator of some country and wanted a clone of yourself, you can already go grow one. You can create a cloned embryo of yourself, you can get a surrogate to carry it to term, and you can grow [a] body until age 18 with a brain, and eventually, if you were a dictator, you could kill them and try to transplant your head on their body.”

“And now no one is suggesting you do that—it’s very unethical—but most of the technology is there,” he said. He noted that the reason for removing the cortex of a clone created for such a purpose is that “we don’t want to kill other people to live forever.” 

Harborne subsequently confirmed to MIT Technology Review that the fund invested $1 million in R3 about a year and a half ago.

In order to make the body replacement process ethical, the clone’s brain needs to be stunted so it lacks consciousness. That is where the interest in birth defects comes in. Remarkable medical scans of kids with a rare condition, hydranencephaly, show a total absence of the cerebral hemispheres. Yet if they are cared for, they may be able to live into their 20s, even though they cannot speak or engage in purposeful movement. 

The technical question, then, is how to intentionally produce such a condition in a clone. Sandberg, the futurist, says he’s visited R3’s lab, talked to Gilman, and sat through a presentation about how genetic engineering can be used to shape brain growth. Previous work has shown that by adding a toxic gene, it is possible to kill specific cell types in a growing embryo but spare others, leading to a mouse without a neocortex.

While Sandberg isn’t an expert in biotechnology, he says R3’s theory looked sensible to him. “I think it’s possible to actually prevent the development of the brain well enough that you can say ‘Yeah, there is almost certainly no consciousness here,’” Sandberg says. “Hence, there can’t be any suffering, or any individual, in a practical sense.”

“I think the overall aim—actually, it looks ethically pretty good,” he says. 

[Image: Monkeys were successfully cloned in China for the first time in 2018. Although it was a costly and difficult undertaking, the feat suggested human cloning is biologically possible. Credit: Qiang Sun and Mu-Ming Poo/Chinese Academy of Sciences via AP]

Yet it could be difficult to really determine where consciousness starts and ends. Under current medical standards, taking the organs of people with hydranencephaly isn’t allowed because they don’t meet the standard of brain death: They have a functioning brain stem. An even more serious problem is evidence that the brain stem alone produces a basic form of consciousness. If that is so, says Bjorn Merker, a neuroscientist who surveyed caretakers of more than a hundred children with hydranencephaly, a plan “to harvest organs from organisms modeled on this condition would be unethical.”

Of course, the most extreme version of the replacement dream isn’t just to take organs. It’s to take over the body entirely. Sergio Canavero, a controversial Italian surgeon who has proposed head and brain transplants, says he was approached for advice by Schloendorn and others a few years ago. “They told me they were looking at a head transplant on a two- or three-year-old,” he says. “I stopped short. How could you even conceive of that? The biomechanical compatibility is not there. You have to wait until at least 14. And I would say 16. It was very clear to me these guys are not surgeons—they are biologists.” 

Canavero says he’s not opposed to cloning bodies for transplant—he thinks it could work. “But if you want to use a clone,” he says, “it must be a nonsentient clone. Otherwise it’s murder, a homicide.”    

MIT Technology Review has not found any evidence that R3 has yet created an “organ sack,” much less a brainless human clone. And there are many reasons to believe their hypothetical future of “full body replacement” will never come to pass—that it is just a live-forever fantasy.

“There are so many barriers,” says Cibelli. It’s a long list: Human cloning is illegal in many countries, it’s unsafe, and few competent experts would want, or dare, to participate. And then there’s the inconvenient fact that for now, there’s no way to grow a brainless clone to birth, except in a woman’s body. Think about it, Cibelli says: “You’d have to convince a woman to carry a fetus that is going to be abnormal.”

Sandberg agrees that is where things could start to get tricky. “The problem here, of course,” he says, “is that the yuck factor is magnificent.”

The Download: brainless human clones and the first uterus kept alive outside a body

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Inside the stealthy startup that pitched brainless human clones 

After operating in secrecy for years, R3 Bio, a California-based startup, suddenly revealed last week that it had raised money to create nonsentient monkey “organ sacks” as an alternative to animal testing. But there is more to the story. And R3 doesn’t want that story told. 

MIT Technology Review discovered that founder John Schloendorn also pitched a startling, ethically charged vision: “brainless clones” that serve as backup human bodies. Find out all the details on the radical proposal

—Antonio Regalado 

A woman’s uterus has been kept alive outside the body for the first time 

Ten months ago, reproductive health researchers placed a freshly donated human uterus inside a new device they call “Mother.” They connected the organ to the machine’s plastic veins and arteries and pumped in modified human blood. 

The device kept the uterus alive for a day, a new feat that could lead to longer-term maintenance of wombs outside the body. Future versions of the technology could shine new light on pregnancies—and potentially even grow a human fetus. Read the full story

—Jessica Hamzelou 

The must-reads 

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology. 

1 AI data centers can significantly warm up surrounding areas  
The “heat islands” may already affect 340 million people. (New Scientist)
+ Mistral has raised $830M to build Nvidia-powered AI centers in Europe. (FT $)
+ But nobody wants a data center in their backyard. (MIT Technology Review)

2 Elon Musk reportedly joined Trump’s call with Modi about the Iran War 
It remains unclear what Musk was doing during the conversation. (NYT $)  
+ India has disputed the report. (Independent)
+ The war poses a grave threat to the EV market. (Rest of World)

3 Eli Lilly has struck a deal to bring AI-developed drugs to the market 
It’s secured a $2.75 billion drug collaboration with Insilico Medicine. (Reuters $) 
+ AI-designed compounds can kill drug-resistant bacteria. (MIT Technology Review)

4 More and more countries are curbing children’s social media access 
Austria is the latest to pursue a ban. (Engadget)
+ Indonesia has rolled out the first one in Southeast Asia. (DW)
+ UK Prime Minister Keir Starmer said he will also “have to act.” (Guardian)  

5 Tech stocks just had their worst week in nearly a year 
Thanks to a combination of the Iran war and legal disputes. (CNBC)
+ Tech insiders are split over the AI bubble. (MIT Technology Review)

6 Meta is launching new smart glasses for prescription wearers 
It plans to debut them next week. (Bloomberg $) 

7 Taiwan is probing 11 Chinese firms for illegal poaching of tech talent 
Its semiconductors are entangled in the tensions with Beijing. (Reuters)

8 Bluesky has built an AI app for customizing social media feeds 
It uses Anthropic’s Claude. (TechCrunch)

9 A psychologist is making music with his brain implant 
He believes enjoyment is a prerequisite for BCI success. (Wired $) 

10 The world’s smallest QR code could store data for centuries 
It’s smaller than bacteria. (Science Daily)

Quote of the day 

“We should be thinking about protecting young people in the digital world as opposed to protecting them from the digital world.” 

—YouTube CEO Neal Mohan gives the New York Times his take on the debate around children’s safety online. 

One More Thing 


AI’s growth needs the right interface 

You’d have to be pudding-brained to believe that chatbots are the best way to use computers. The real opportunity is a system built atop the visual interfaces we already know, but navigated through a natural mix of voice and touch. 

Crucially, this won’t just be a computer that we can use. It’ll be one we can break and remake to suit whatever uses we want. Instead of merely consuming technology like the gelatinous humans in Wall-E, we should be able to architect it to suit our own ends.

This idea is already lurching to life. Read the full story to find out how.

—Cliff Kuang 

We can still have nice things 

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.) 
 
+ These floating designs will elevate your perspective on architecture. 
+ Uğur Gallenkuş’s portraits of two worlds in one image beautifully build bridges. 
+ This is the anti-Karen that the world needs right now. 
+ If only we could all find a love as pure as this kitty clinging to its favorite toy. 

The Pentagon’s culture war tactic against Anthropic has backfired

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

Last Thursday, a California judge temporarily blocked the Pentagon from labeling Anthropic a supply chain risk and ordering government agencies to stop using its AI. It’s the latest development in the month-long feud between Anthropic and the administration. And the matter still isn’t settled: The government was given seven days to appeal, and Anthropic has a second case against the designation that has yet to be decided. Until then, the company remains persona non grata with the government. 

The stakes in the case—how much the government can punish a company for not playing ball—were apparent from the start. Anthropic drew plenty of high-profile supporters, including some unlikely bedfellows, such as former authors of President Trump’s AI policy.

But Judge Rita Lin’s 43-page opinion suggests that what is really a contract dispute never needed to reach such a frenzy. It did so because the government disregarded the existing process for how such disputes are governed and fueled the fire with social media posts from officials that would eventually contradict the positions it took in court. The Pentagon, in other words, wanted a culture war (on top of the actual war in Iran that began hours later). 

The government used Anthropic’s Claude for much of 2025 without complaint, according to court documents, while the company walked a branding tightrope as a safety-focused AI company that also won defense contracts. Defense employees accessing it through Palantir were required to accept terms of a government-specific usage policy that Anthropic cofounder Jared Kaplan said “prohibited mass surveillance of Americans and lethal autonomous warfare” (Kaplan’s declaration to the court didn’t include details of the policy). Only when the government aimed to contract with Anthropic directly did the disagreements begin. 

What drew the ire of the judge is that when these disagreements became public, they had more to do with punishment than just cutting ties with Anthropic. And they had a pattern: Tweet first, lawyer later. 

President Trump’s post on Truth Social on February 27 referenced “Leftwing nutjobs” at Anthropic and directed every federal agency to stop using the company’s AI. This was echoed soon after by Defense Secretary Pete Hegseth, who said he’d direct the Pentagon to label Anthropic a supply chain risk. 

Doing so necessitates that the secretary take a specific set of actions, which the judge found Hegseth did not complete. Letters sent to congressional committees, for example, said that less drastic steps were evaluated and deemed not possible, without providing any further details. The government also said the designation as a supply chain risk was necessary because Anthropic could implement a “kill switch,” but its lawyers later had to admit it had no evidence of that, the judge wrote.

Hegseth’s post also stated that “No contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.” But the government’s own lawyers admitted on Tuesday that the Secretary doesn’t have the power to do that, and agreed with the judge that the statement had “absolutely no legal effect at all.”

The aggressive posts also led the judge to conclude that Anthropic was on solid ground in complaining that its First Amendment rights were violated. The government, the judge wrote while citing the posts, “set out to publicly punish Anthropic for its ‘ideology’ and ‘rhetoric,’ as well as its ‘arrogance’ for being unwilling to compromise those beliefs.”

Labeling Anthropic a supply chain risk would essentially be identifying it as a “saboteur” of the government, for which the judge did not see sufficient evidence. She issued an order last Thursday halting the designation, preventing the Pentagon from enforcing it and forbidding the government from fulfilling the promises made by Hegseth and Trump. Dean Ball, who worked on AI policy for the Trump administration but wrote a brief supporting Anthropic, described the judge’s order on Thursday as “a devastating ruling for the government, finding Anthropic likely to prevail on essentially all of its theories for why the government’s actions were unlawful and unconstitutional.”

The government is expected to appeal the decision. But Anthropic’s separate case, filed in DC, makes similar allegations. It just references a different segment of the law governing supply chain risks. 

The court documents paint a pretty clear pattern. Public statements made by officials and the President did not at all align with what the law says should happen in a contract dispute like this, and the government’s lawyers have repeatedly had to construct after-the-fact justifications for officials’ social media attacks on the company.

Pentagon and White House leadership knew that pursuing the nuclear option would spark a court battle; Anthropic vowed on February 27 to fight the supply chain risk designation days before the government formally filed it on March 3. Pursuing it anyway meant senior leadership was, to say the least, distracted during the first five days of the Iran war, launching strikes while also compiling evidence that Anthropic was a saboteur to the government, all while it could have cut ties with Anthropic by simpler means. 

But even if Anthropic ultimately wins, the government has other means to shun the company from government work. Defense contractors who want to stay on good terms with the Pentagon, for example, now have little reason to work with Anthropic even if it’s not flagged as a supply chain risk. 

“I think it’s safe to say that there are mechanisms the government can use to apply some degree of pressure without breaking the law,” says Charlie Bullock, a senior research fellow at the Institute for Law and AI. “It kind of depends how invested the government is in punishing Anthropic.”

From the evidence thus far, the administration is committing top-level time and attention to winning an AI culture war. At the same time, Claude is apparently so important to its operations that even President Trump said the Pentagon needed six months to stop using it. The White House demands political loyalty and ideological alignment from top AI companies, but the case against Anthropic, at least for now, exposes the limits of its leverage.

If you have information about the military’s use of AI, you can share it securely via Signal (username jamesodonnell.22).

There are more AI health tools than ever—but how well do they work?


Earlier this month, Microsoft launched Copilot Health, a new space within its Copilot app where users will be able to connect their medical records and ask specific questions about their health. A couple of days earlier, Amazon had announced that Health AI, an LLM-based tool previously restricted to members of its One Medical service, would now be widely available. These products join the ranks of ChatGPT Health, which OpenAI released back in January, and Anthropic’s Claude, which can access user health records if granted permission. Health AI for the masses is officially a trend. 

There’s a clear demand for chatbots that provide health advice, given how hard it is for many people to access it through existing medical systems. And some research suggests that current LLMs are capable of making safe and useful recommendations. But researchers say that these tools should be more rigorously evaluated by independent experts, ideally before they are widely released. 

In a high-stakes area like health, trusting companies to evaluate their own products could prove unwise, especially if those evaluations aren’t made available for external expert review. And even if the companies are doing quality, rigorous research—which some, including OpenAI, do seem to be—they might still have blind spots that the broader research community could help to fill.

“To the extent that you always are going to need more health care, I think we should definitely be chasing every route that works,” says Andrew Bean, a doctoral candidate at the Oxford Internet Institute. “It’s entirely plausible to me that these models have reached a point where they’re actually worth rolling out.”

“But,” he adds, “the evidence base really needs to be there.”

Tipping points 

To hear developers tell it, these health products are now being released because large language models have indeed reached a point where they can effectively provide medical advice. Dominic King, the vice president of health at Microsoft AI and a former surgeon, cites AI advancement as a core reason why the company’s health team was formed, and why Copilot Health now exists. “We’ve seen this enormous progress in the capabilities of generative AI to be able to answer health questions and give good responses,” he says.

But that’s only half the story, according to King. The other key factor is demand. Shortly before Copilot Health was launched, Microsoft published a report, and an accompanying blog post, detailing how people used Copilot for health advice. The company says it receives 50 million health questions each day, and health is the most popular discussion topic on the Copilot mobile app.

Other AI companies have noticed, and responded to, this trend. “Even before our health products, we were seeing just a rapid, rapid increase in the rate of people using ChatGPT for health-related questions,” says Karan Singhal, who leads OpenAI’s Health AI team. (OpenAI and Microsoft have a long-standing partnership, and Copilot is powered by OpenAI’s models.)

It’s possible that people simply prefer posing their health problems to a nonjudgmental bot that’s available to them 24-7. But many experts interpret this pattern in light of the current state of the health-care system. “There is a reason that these tools exist and they have a position in the overall landscape,” says Girish Nadkarni, chief AI officer​ at the Mount Sinai Health System. “That’s because access to health care is hard, and it’s particularly hard for certain populations.”

The virtuous vision of consumer-facing LLM health chatbots hinges on the possibility that they could improve user health while reducing pressure on the health-care system. That might involve helping users decide whether or not they need medical attention, a task known as triage. If chatbot triage works, then patients who need emergency care might seek it out earlier than they would have otherwise, and patients with more mild concerns might feel comfortable managing their symptoms at home with the chatbot’s advice rather than unnecessarily busying emergency rooms and doctor’s offices.

But a recent, widely discussed study from Nadkarni and other researchers at Mount Sinai found that ChatGPT Health sometimes recommends too much care for mild conditions and fails to identify emergencies. Though Singhal and some other experts have suggested that its methodology might not provide a complete picture of ChatGPT Health’s capabilities, the study has surfaced concerns about how little external evaluation these tools see before being released to the public.

Most of the academic experts interviewed for this piece agreed that LLM health chatbots could have real upsides, given how little access to health care some people have. But all six of them expressed concerns that these tools are being launched without testing from independent researchers to assess whether they are safe. While some advertised uses of these tools, such as recommending exercise plans or suggesting questions that a user might ask a doctor, are relatively harmless, others carry clear risks. Triage is one; another is asking a chatbot to provide a diagnosis or a treatment plan. 

The ChatGPT Health interface includes a prominent disclaimer stating that it is not intended for diagnosis or treatment, and the announcements for Copilot Health and Amazon’s Health AI include similar warnings. But those warnings are easy to ignore. “We all know that people are going to use it for diagnosis and management,” says Adam Rodman, an internal medicine physician and researcher at Beth Israel Deaconess Medical Center and a visiting researcher at Google.

Medical testing

Companies say they are testing the chatbots to ensure that they provide safe responses the vast majority of the time. OpenAI has designed and released HealthBench, a benchmark that scores LLMs on how they respond in realistic health-related conversations—though the conversations themselves are LLM-generated. When GPT-5, which powers both ChatGPT Health and Copilot Health, was released last year, OpenAI reported the model’s HealthBench scores: It did substantially better than previous OpenAI models, though its overall performance was far from perfect. 

But evaluations like HealthBench have limitations. In a study published last month, Bean—the Oxford doctoral candidate—and his colleagues found that even if an LLM can accurately identify a medical condition from a fictional written scenario on its own, a non-expert user who is given the scenario and asked to determine the condition with LLM assistance might figure it out only a third of the time. If they lack medical expertise, users might not know which parts of a scenario—or their real-life experience—are important to include in their prompt, or they might misinterpret the information that an LLM gives them.

Bean says that this performance gap could be significant for OpenAI’s models. In the original HealthBench study, the company reported that its models performed relatively poorly in conversations that required them to seek more information from the user. If that’s the case, then users who don’t have enough medical knowledge to provide a health chatbot with the information that it needs from the get-go might get unhelpful or inaccurate advice.

Singhal, the OpenAI health lead, notes that the company’s current GPT-5 series of models, which had not yet been released when the original HealthBench study was conducted, does a much better job of soliciting additional information than its predecessors. However, OpenAI has reported that GPT-5.4, the current flagship, is actually worse at seeking context than GPT-5.2, an earlier version.

Ideally, Bean says, health chatbots would be subjected to controlled tests with human users, as they were in his study, before being released to the public. That might be a heavy lift, particularly given how fast the AI world moves and how long human studies can take. Bean’s own study used GPT-4o, which came out almost a year ago and is now outdated. 

Earlier this month, Google released a study that meets Bean’s standards. In the study, patients discussed medical concerns with the company’s Articulate Medical Intelligence Explorer (AMIE), a medical LLM chatbot that is not yet available to the public, before meeting with a human physician. Overall, AMIE’s diagnoses were just as accurate as physicians’, and none of the conversations raised major safety concerns for researchers. 

Despite the encouraging results, Google isn’t planning to release AMIE anytime soon. “While the research has advanced, there are significant limitations that must be addressed before real-world translation of systems for diagnosis and treatment, including further research into equity, fairness, and safety testing,” wrote Alan Karthikesalingam, a research scientist at Google DeepMind, in an email. Google did recently reveal that Health100, a health platform it is building in partnership with CVS, will include an AI assistant powered by its flagship Gemini models, though that tool will presumably not be intended for diagnosis or treatment.

Rodman, who led the AMIE study with Karthikesalingam, doesn’t think such extensive, multiyear studies are necessarily the right approach for chatbots like ChatGPT Health and Copilot Health. “There’s lots of reasons that the clinical trial paradigm doesn’t always work in generative AI,” he says. “And that’s where this benchmarking conversation comes in. Are there benchmarks [from] a trusted third party that we can agree are meaningful, that the labs can hold themselves to?”

The key there is “third party.” No matter how extensively companies evaluate their own products, it’s tough to trust their conclusions completely. Not only does a third-party evaluation bring impartiality, but having many third parties involved also helps protect against blind spots.

OpenAI’s Singhal says he’s strongly in favor of external evaluation. “We try our best to support the community,” he says. “Part of why we put out HealthBench was actually to give the community and other model developers an example of what a very good evaluation looks like.” 

Given how expensive it is to produce a high-quality evaluation, he says, he’s skeptical that any individual academic laboratory would be able to produce what he calls “the one evaluation to rule them all.” But he does speak highly of efforts that academic groups have made to bring preexisting and novel evaluations together into comprehensive evaluation suites—such as Stanford’s MedHELM framework, which tests models on a wide variety of medical tasks. Currently, OpenAI’s GPT-5 holds the highest MedHELM score.

Nigam Shah, a professor of medicine at Stanford University who led the MedHELM project, says it has limitations. In particular, it evaluates only individual chatbot responses, but someone seeking medical advice from a chatbot might engage it in a multi-turn, back-and-forth conversation. He says that he and some collaborators are gearing up to build an evaluation that can score those complex conversations, but that it will take time and money. “You and I have zero ability to stop these companies from releasing [health-oriented products], so they’re going to do whatever they damn please,” he says. “The only thing people like us can do is find a way to fund the benchmark.”

No one interviewed for this article argued that health LLMs need to perform perfectly on third-party evaluations in order to be released. Doctors themselves make mistakes—and for someone who has only occasional access to a doctor, a consistently accessible LLM that sometimes messes up could still be a huge improvement over the status quo, as long as its errors aren’t too grave. 

Given the current state of the evidence, however, it’s impossible to know for sure whether the available tools in fact constitute an improvement, or whether their risks outweigh their benefits.

SEO Tactics for GenAI Visibility

Traditional search engine optimization is fundamental to visibility on generative AI platforms.

Large language models query Google to research topics and find answers. Thus low-ranking or unranked pages are largely invisible to ChatGPT, Perplexity, Gemini, and others.

Here are the top SEO tactics to elevate genAI mentions and citations.

Keyword research

To date, genAI platforms provide no prompt data, so there is no definitive information on how consumers discover brands or products on those platforms.

Keyword research remains the primary source of insight into how online consumers decide what to buy. Third-party tools can organize keywords by intent, offering clues for targeting prospects at every step of their research.

Keyword gap analysis identifies the terms a site is missing to attract would-be customers.
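For illustration, a gap check is essentially a set difference over two keyword exports, optionally grouped by intent. Here is a minimal Python sketch; the keyword sets and intent trigger words are invented for the example, and real data would come from a third-party SEO tool’s export.

```python
# Minimal keyword-gap sketch: find terms a competitor ranks for that we
# don't, and bucket them by a coarse (illustrative) intent classifier.
INTENT_PATTERNS = {
    "transactional": ("buy", "price", "discount", "coupon", "cheap"),
    "comparative": ("vs", "best", "top", "review", "alternative"),
    "informational": ("how", "what", "why", "guide", "ideas"),
}

def classify_intent(keyword: str) -> str:
    """Assign a coarse intent bucket based on trigger words."""
    words = keyword.lower().split()
    for intent, triggers in INTENT_PATTERNS.items():
        if any(trigger in words for trigger in triggers):
            return intent
    return "informational"  # default when no trigger word matches

def keyword_gaps(ours: set[str], theirs: set[str]) -> dict[str, list[str]]:
    """Keywords the competitor ranks for that we don't, grouped by intent."""
    gaps: dict[str, list[str]] = {}
    for keyword in sorted(theirs - ours):
        gaps.setdefault(classify_intent(keyword), []).append(keyword)
    return gaps

# Example with invented keyword sets.
our_keywords = {"hiking boots", "waterproof hiking boots"}
competitor_keywords = {
    "hiking boots",
    "best hiking boots for winter",
    "buy hiking boots online",
}
print(keyword_gaps(our_keywords, competitor_keywords))
# {'comparative': ['best hiking boots for winter'],
#  'transactional': ['buy hiking boots online']}
```

A real classifier would use a tool’s intent labels or search-volume data, but the set-difference core stays the same.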

Prompts are longer than traditional search queries and, anecdotally, wildly unpredictable. Yet higher-level keyword optimization informs content and landing pages that cater to shoppers’ needs.

Optimized content

The best ecommerce content explains how a merchant’s products help consumers address needs and solve problems.

It may generate less traffic than a few years ago, but it remains essential for product discovery. Focusing only on “bottom-of-the-funnel” queries (a common recommendation from “GEO experts”) leads to fewer new customers.

Yes, LLMs may summarize your content and include it in an answer without referring to your company. But the content will still be part of that answer, establishing the merchant as a solution provider the LLM trusts and foretelling potential future recommendations.

Optimizing for buying journeys, then, combines keyword research to understand shoppers’ desires with relevant content that search and LLM bots can surface as solutions.

Site architecture

Horizontal site architecture (pages aren’t buried) and internal links ensure bot crawlability and long-tail ranking opportunities.

Clear architecture helps LLMs understand a business and correctly place its products in the training data.

Optimized site navigation is:

  • Structured for humans and LLM agents to find what they need quickly.
  • Usable without JavaScript and accessible with all web browsers.
  • Focused on a site’s most important sections and key benefits.
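To check the “pages aren’t buried” part, one option is a small crawl that records each internal page’s click depth from the homepage. The sketch below is an assumption-laden starting point (it uses the requests and beautifulsoup4 packages and a placeholder URL), not a production crawler, which would also respect robots.txt and rate limits.

```python
# Minimal click-depth audit: breadth-first crawl from the homepage,
# recording how many clicks each internal page is from the start.
from collections import deque
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

def click_depths(start_url: str, max_pages: int = 200) -> dict[str, int]:
    host = urlparse(start_url).netloc
    depths = {start_url: 0}
    queue = deque([start_url])
    while queue and len(depths) < max_pages:
        url = queue.popleft()
        try:
            html = requests.get(url, timeout=10).text
        except requests.RequestException:
            continue  # skip pages that fail to load
        for anchor in BeautifulSoup(html, "html.parser").find_all("a", href=True):
            link = urljoin(url, anchor["href"]).split("#")[0]
            if urlparse(link).netloc == host and link not in depths:
                depths[link] = depths[url] + 1
                queue.append(link)
    return depths

# Pages deeper than three clicks are candidates for better internal linking.
for page, depth in sorted(click_depths("https://www.example.com/").items()):
    if depth > 3:
        print(depth, page)
```

The three-click threshold is a rule of thumb, not a ranking requirement; the point is to surface outliers in the hierarchy.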

Link building

LLMs’ use of authority signals, such as backlinks, remains unclear. Nearly a year ago I speculated on Reddit that Gemini and AI Mode rely on PageRank at least indirectly. Whether that includes backlinks, however, is a mystery.

Yet backlinks, brand mentions, and co-citations are important for LLM visibility:

  • Higher organic rankings drive genAI discovery.
  • Entity associations (being mentioned/linked alongside prominent competitors) elevate rankings.
  • Consistent mentions of your business and links to your site from authoritative publications help LLMs trust it.

Such indirect LLM signals come from traditional link building: journalist outreach, being quoted as an expert, and building connections on social media.

To be sure, success for GEO is visibility rather than actual sales. But absent SEO, a site’s chances of being found by LLMs are near zero.

Google: Pages Are Getting Larger & It Still Matters via @sejournal, @MattGSouthern

Google’s Gary Illyes and Martin Splitt used a recent episode of the Search Off the Record podcast to discuss whether webpages are getting too large and what that means for both users and crawlers.

The conversation started with a simple question: are websites getting fat? Splitt immediately pushed back on the framing, arguing that website-level size is meaningless. Individual page size is where the discussion belongs.

What The Data Shows

Splitt cited the 2025 Web Almanac from HTTP Archive, which found that the median mobile homepage weighed 845 KB in 2015. By July 2025, that same median page had grown to 2,362 KB. That’s roughly a 3x increase over a decade.

Both agreed the growth was expected, given the complexity of modern web applications. But the numbers still surprised them.

Splitt noted the challenge of even defining “page weight” consistently, since different people interpret the term differently depending on whether they’re thinking about raw HTML, transferred bytes, or everything a browser needs to render a page.

How Google’s Crawl Limits Fit In

Illyes discussed a 15 MB default that applies across Google’s broader crawl infrastructure, where each URL gets its own limit, and referenced resources like CSS, JavaScript, and images are fetched separately.

That’s a different number from what appears in Google’s current Googlebot documentation. Google states that Googlebot for Google Search crawls the first 2 MB of a supported file type and the first 64 MB of a PDF.

Our previous coverage broke down the documentation update that clarified these figures earlier this year. Illyes and Splitt discussed the flexibility of these limits in a previous episode, noting that internal teams can override the defaults depending on what’s being crawled.
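For a rough self-check against these limits, you can measure the size of a page’s raw HTML response. The sketch below uses the 2 MB figure reported above for supported file types and a placeholder URL; it is an illustration, not an official tool, and the threshold should be adjusted as documented limits change.

```python
# Minimal check of raw HTML size against a crawl byte limit.
import requests

CRAWL_LIMIT_BYTES = 2 * 1024 * 1024  # 2 MB, per the documentation cited above

def html_size_report(url: str) -> None:
    response = requests.get(url, timeout=10)
    size = len(response.content)  # bytes of the HTML document only;
    # CSS, JavaScript, and images are fetched (and limited) separately
    print(f"{url}: {size / 1024:.0f} KB ({size / CRAWL_LIMIT_BYTES:.0%} of limit)")
    if size > CRAWL_LIMIT_BYTES:
        print("Content beyond the limit may not be seen by the crawler.")

html_size_report("https://www.example.com/")
```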

The Structured Data Question

One of the more interesting moments came when Illyes raised the topic of structured data and page bloat. He traced it back to a statement from Google co-founder Sergey Brin, who said early in Google’s history that machines should be able to figure out everything they need from text alone.

Illyes noted that structured data exists for machines, not users, and that adding the full range of Google’s supported structured data types to a page can add weight that visitors never see. He framed it as a tension rather than offering a clear answer on whether it’s a problem.
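One way to see that tension in practice is to measure what share of a page’s HTML sits inside JSON-LD blocks, which machines read but browsers never render. A minimal sketch, assuming the requests and beautifulsoup4 packages and a placeholder URL:

```python
# Measure how much of a page's HTML is JSON-LD structured data,
# i.e., weight that exists for machines rather than visitors.
import requests
from bs4 import BeautifulSoup

def structured_data_share(url: str) -> float:
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    jsonld_chars = sum(
        len(tag.string or "")
        for tag in soup.find_all("script", type="application/ld+json")
    )
    return jsonld_chars / len(html) if html else 0.0

share = structured_data_share("https://www.example.com/")
print(f"JSON-LD is {share:.1%} of the raw HTML")
```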

Does It Still Matter?

Splitt said yes. He acknowledged that his home internet connection is fast enough that page weight is irrelevant in his daily experience. But he said the picture changes when traveling to areas with slower connections, and noted that metered satellite internet made him rethink how much data websites transfer.

He suggested that page size growth may have outpaced improvements in median mobile connection speeds, though he said he’d need to verify that against actual data.
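The arithmetic behind that concern is simple: transfer time is roughly page size divided by connection speed, ignoring latency and caching. A short illustration using the Web Almanac’s 2,362 KB median; the connection speeds are arbitrary examples.

```python
# Rough transfer time for the median mobile homepage (2,362 KB)
# at a few illustrative connection speeds.
PAGE_KB = 2362
PAGE_MEGABITS = PAGE_KB * 8 / 1000  # ~18.9 Mb

for mbps in (1, 5, 25):
    print(f"{mbps} Mbps: {PAGE_MEGABITS / mbps:.1f} seconds")
# 1 Mbps: 18.9 seconds
# 5 Mbps: 3.8 seconds
# 25 Mbps: 0.8 seconds
```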

Illyes referenced prior studies suggesting that faster websites tend to have better retention and conversion rates, though the episode didn’t cite specific research.

Looking Ahead

Splitt said he plans to address specific techniques for reducing page size in a future episode.

Most pages are still unlikely to hit those limits, with the Web Almanac reporting a median mobile homepage size of 2,362 KB. But the broader trend of growing page weight affects both performance and accessibility for users on slower or metered connections.

New AI Jobs Index Ranks 784 Occupations By Loss Risk via @sejournal, @MattGSouthern

Jobs with the highest potential for AI-assisted productivity gains also face the highest projected job losses, according to a new index from Digital Planet at Tufts University’s Fletcher School.

The American AI Jobs Risk Index ranks 784 U.S. occupations, 530 metro areas, 50 states, and 20 industry sectors by vulnerability to AI-driven job loss.

All figures are model projections based on AI adoption scenarios, not actual layoffs or employment changes. The median scenario estimates 9.3 million jobs at risk, ranging from 2.7 million to 19.5 million depending on AI adoption speed.

Which Jobs Face The Highest Projected Risk

Writers and authors top the list of occupations at risk at 57%. Computer programmers and web and digital interface designers follow at 55% each. Editors are at 54%, and web developers at 46%.

Market research analysts and marketing specialists face a projected 35% job loss rate. Public relations specialists are at 37%. News analysts, reporters, and journalists face 35% risk.

Earlier analyses, such as the Anthropic Economic Index and Stanford’s “Canaries in the Coal Mine,” measured how accessible jobs are to AI. This analysis goes further by estimating how likely that exposure is to translate into projected job loss.

Augmentation & Loss Risk Go Together

The authors call the link between jobs that benefit from AI-driven productivity gains and jobs expected to shed workers the “augmentation-displacement link.”

When AI increases individual workers’ efficiency, companies can produce the same output with fewer employees. Entry-level and lower-seniority roles feel this first, because companies can cut back on hiring rather than resort to layoffs.

Writing, programming, web design, technical writing, and data analysis are where this pattern is most evident. Tasks in these fields are cognitive, language-intensive, and structured enough for large language models to manage.

By Industry

Average vulnerability across all industries is about 6%. Sectors with the highest projected job loss are Information (18%), Finance and Insurance (16%), and Professional, Scientific, and Technical Services (16%).

Software developers, management analysts, and market research analysts face the biggest total income losses. These three roles combine high pay with large workforces, accounting for a significant share of the projected $757 billion in total at-risk annual income.

What The Analysis Doesn’t Include

Note that job creation effects aren’t included in this version. The authors intend to add that data in future updates as they gather more evidence.

Additionally, regulatory constraints, union bargaining power, and occupational licensing requirements that could slow job losses in some sectors are not part of this analysis. The authors emphasize that their forecasts are scenario-based rather than definitive.

Why This Matters

There’s a common assumption among digital professionals that using AI to boost productivity protects their jobs. However, this data challenges that idea.

SEJ previously covered this tension in 2023 when Dr. Craig Froehle of the University of Cincinnati warned that companies not investing in employee retraining would see turnover costs double. The Tufts data puts numbers on the specific occupations where that pressure is building.

Looking Ahead

The American AI Jobs Risk Index will be updated as AI capabilities and labor market conditions evolve. The authors say future versions will try to include job creation data alongside loss estimates, providing a more complete view of AI’s overall impact on employment.

The methodology is available on the Digital Planet site, which also links to a data download page.


Featured Image: rudall30/Shutterstock