The Truth about ‘Direct’ Traffic

The “direct” traffic channel in analytics software might be mislabeled, misleading, and even detrimental.

Imagine an ecommerce sortation center. When it cannot identify the package’s origin, the center may sort it into a hypothetical “direct” bin. Similarly, Google Analytics and others sometimes assign traffic as “direct” when they cannot attribute it to a specific source.

In analytics-speak, “referrers” and “parameters” are mechanisms for determining where a site visit originated.

  • The referrer is the URL of the site a visitor came from; browsers pass it automatically in most cases.
  • A parameter attaches to the end of a URL to share tracking information, e.g., utm_source=email.
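
For example, a hypothetical tagged URL for an email campaign might look like this (the domain and values are illustrative):

```
https://example.com/sale?utm_source=email&utm_medium=newsletter&utm_campaign=spring_promo
```

An analytics platform reads those parameters on arrival and credits the visit to the email channel.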

Analytics platforms label visitors who come to a site without a referrer or parameter as “direct.”

Thus “direct” becomes a catch-all that potentially commingles actual direct inbounds with marketing-driven visits, and even Google Discover traffic, whose identifying parameters or referrers were lost.

Direct Traffic

Google Analytics typically assigns 20% to 60% of site traffic to “direct,” according to multiple industry reports.

Yet there’s anecdotal concern among prominent practitioners — including Neil Patel, Jon Henshaw, and Katie Rigby — that direct traffic reported by Google Analytics and others is mislabeled.

The challenge is how marketers think about “direct” visits. A direct visitor was once someone who typed the site’s URL into her browser. It indicated brand strength and recognition.

But the catch-all nature of today’s direct site traffic reporting can be problematic if it masks marketing effectiveness. An outreach campaign on Discord or a successful SMS campaign, for example, could get lost.

How to Check

To be sure, not all “direct” visits in Google Analytics or similar are mislabeled. To check their metrics, marketers can:

  • Compare trends. Review “direct” traffic alongside other channels. If “direct” spikes while “organic search” or “social” drops, it’s worth investigating.
  • Inspect landing page patterns. Genuine direct visitors usually land on the home page. “Direct” visitors who land on product or other interior pages could be misclassified.
  • Audit tagging. Ensure all email, social, and ad campaigns use correct UTM parameters. A missing parameter may cause the analytics platform to misclassify the visit. A quick script, sketched after this list, can flag untagged links.
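
Here is a minimal sketch of such a tagging audit in TypeScript. The campaign URLs and the list of required parameters are illustrative assumptions; adjust them to match your own tagging conventions.

```typescript
// Minimal sketch: flag campaign links missing core UTM parameters.
// The URLs below are hypothetical; swap in links exported from your
// email, social, and ad platforms.
const REQUIRED_PARAMS = ["utm_source", "utm_medium", "utm_campaign"];

const campaignUrls = [
  "https://example.com/sale?utm_source=email&utm_medium=newsletter&utm_campaign=spring_promo",
  "https://example.com/sale?utm_source=sms", // missing utm_medium and utm_campaign
];

for (const raw of campaignUrls) {
  const url = new URL(raw);
  const missing = REQUIRED_PARAMS.filter((p) => !url.searchParams.has(p));
  if (missing.length > 0) {
    console.log(`${raw} is missing: ${missing.join(", ")}`);
  }
}
```

Any link the script flags would arrive without full attribution and risks landing in the “direct” bin.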

If your site’s direct traffic looks suspicious, consider the dead, dark, and blind.

Dead traffic

While they likely account for the smallest share of “direct” site traffic, “dead” or zombie visits are non-human crawlers — AI agents, search engine tools, monitoring systems, or competitor price scrapers — undetected by analytics providers.

Fast Company explored how bot traffic can distort behavioral signals and muck up marketing. Citing Moltbook, a new vibe-coded social network built solely for AI agents, Fast Company stated, “Moltbook is a harbinger — the first real sign that a new type of internet is upon us … a ‘zombie internet’ that could have devastating consequences for advertising.”

Home page of Moltbook

Moltbook is a vibe-coded social network for AI agents.

Dark traffic

“Dark” traffic refers to legitimate visits without clear referral or parameter data. Examples include:

  • Dark social. Many social media applications and platforms, such as WhatsApp and Slack, do not pass referrers or tracking parameters.
  • Dark AI. Some AI platforms share links but do not pass referrer data when clicked.
  • Clean URLs. Some browsers and email clients, such as the Brave browser and Apple Mail, remove tracking parameters.
  • Privacy and ad-blocking software. Browser extensions can also remove parameters from links and suppress referrers.

Analytics blindness

A sortation center tracks all arriving packages but routes those without an originating address into the “direct” bin.

In contrast, “blindness” results from visits that never reach analytics software, most often due to extreme privacy protection applications. Rather than just removing parameters, some apps block JavaScript from loading altogether, preventing Google Analytics and other platforms from recording the session.
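
One hedge against analytics blindness is measuring on the server, which sees every request regardless of a visitor’s browser settings. Below is a minimal sketch in TypeScript, assuming a Node.js runtime; the in-memory counter and plain-text response are illustrative only, and a production version would persist counts and filter known bots.

```typescript
// Minimal sketch: count page views on the server, where ad blockers
// and disabled JavaScript cannot interfere. Uses Node's built-in
// http module; the port and response format are arbitrary choices.
import { createServer } from "node:http";

const pageViews = new Map<string, number>();

const server = createServer((req, res) => {
  // Strip the query string so /sale?utm_source=email counts as /sale.
  const path = (req.url ?? "/").split("?")[0];
  pageViews.set(path, (pageViews.get(path) ?? 0) + 1);

  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end(`Views for ${path}: ${pageViews.get(path)}`);
});

server.listen(3000);
```

Server-side counts will include bots that analytics scripts would filter out, so treat the two data sets as complementary rather than interchangeable.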

Attribution Gaps

Mislabeled “direct” traffic obscures the truth. Merchants engaged in community marketing and advertising, or who attract privacy-minded shoppers, should audit their “direct” visits to avoid cutting high-performing channels.

Moreover, analytics software is not the only way to measure. Alternatives include:

  • Zero-party data. Use post-purchase surveys asking, “How did you hear about us?”
  • Trackable codes or pages. Use specific coupon codes or landing pages for distinct channels.
  • Media mix modeling. Use statistical analysis rather than user-level tracking to correlate spend with revenue.
  • Identity resolution. Retention.com, Audience Bridge, and similar services can help identify anonymous traffic and match it to conversions.

RFK Jr. follows a carnivore diet. That doesn’t mean you should.

Americans have a new set of diet guidelines. Robert F. Kennedy Jr. has taken an old-fashioned food pyramid, turned it upside down, and plonked a steak and a stick of butter in prime positions.

Kennedy and his Make America Healthy Again mates have long been extolling the virtues of meat and whole-fat dairy, so it wasn’t too surprising to see those foods recommended alongside vegetables and whole grains (despite the well-established fact that too much saturated fat can be extremely bad for you).

Some influencers have taken the meat trend to extremes, following a “carnivore diet.” “The best thing you could do is eliminate out everything except fatty meat and lard,” Anthony Chaffee, an MD with almost 400,000 followers, said in an Instagram post.

And I almost choked on my broccoli when, while scrolling LinkedIn, I came across an interview with another doctor declaring that “there is zero scientific evidence to say that vegetables are required in the human diet.” That doctor, who described himself as “90% carnivore,” went on to say that all he’d eaten the previous day was a kilo of beef, and that vegetables have “anti-nutrients,” whatever they might be.

You don’t have to spend much time on social media to come across claims like this. The “traditionalist” influencer, author, and psychologist Jordan Peterson was promoting a meat-only diet as far back as 2018. A recent review of research into nutrition misinformation on social media found that diet information is shared most widely on Instagram and YouTube, and that a lot of it is nonsense. So much so that the authors describe it as a “growing public health concern.”

What’s new is that some of this misinformation comes from the people who now lead America’s federal health agencies. In January Kennedy, who leads the Department of Health and Human Services, told a USA Today reporter that he was on a carnivore diet. “I only eat meat or fermented foods,” he said. He went on to say that the diet had helped him lose “40% of [his] visceral fat within a month.”

“Government needs to stop spreading misinformation that natural and saturated fats are bad for you,” Food and Drug Administration commissioner Martin Makary argued in a recent podcast interview. The principles of “whole foods and clean meats” are “biblical,” he said. The interviewer said that Makary’s warnings about pesticides made him want to “avoid all salads and completely miss the organic section in the grocery store.”

For the record: There’s plenty of evidence that a diet high in saturated fat can increase the risk of heart disease. That’s not government misinformation. 

The carnivore doctors’ suggestion to avoid vegetables is wrong too, says Gabby Headrick, associate director of food and nutrition policy at George Washington University’s Institute for Food Safety & Nutrition Security. There’s no evidence to suggest that a meat-only diet is good for you. “All of the nutrition science to date strongly identifies a wide array of vegetables … as being very health-promoting,” she adds.

To be fair to the influencers out there, diet is a tricky thing to study. Much of the research into nutrition relies on volunteers to keep detailed and honest food diaries—something that people are generally quite bad at. And the way our bodies respond to foods might be influenced by our genetics, our microbiomes, the way we prepare or consume those foods, and who knows what else.

Still, it will come as a surprise to no one that there is plenty of what the above study calls “low-quality content” floating around on social media. So it’s worth arming ourselves with a good dose of skepticism, especially when we come across posts that mention “miracle foods” or extreme, limited diets.

The truth is that most food is neither good nor bad when eaten in moderation. Diet trends come and go, and for most people, the most reasonable advice is simply to eat a balanced diet low in sugar, salt, and saturated fat. You know—the basics. No matter what that weird upside-down food pyramid implies. To the carnivore influencers, I say: get your misinformation off my broccoli.

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

US deputy health secretary: Vaccine guidelines are still subject to change

Following publication of this story, Politico reported Jim O’Neill would be leaving his current roles within the Department of Health and Human Services.

Over the past year, Jim O’Neill has become one of the most powerful people in public health. As the US deputy health secretary, he holds two roles at the top of the country’s federal health and science agencies. He oversees a department with a budget of over a trillion dollars. And he signed the decision memorandum on the US’s deeply controversial new vaccine schedule.

He’s also a longevity enthusiast. In an exclusive interview with MIT Technology Review earlier this month, O’Neill described his plans to increase human healthspan through longevity-focused research supported by ARPA-H, a federal agency dedicated to biomedical breakthroughs. At the same time, he defended reducing the number of broadly recommended childhood vaccines, a move that has been widely criticized by experts in medicine and public health. 

In MIT Technology Review’s profile of O’Neill last year, people working in health policy and consumer advocacy said they found his libertarian views on drug regulation “worrisome” and “antithetical to basic public health.” 

He was later named acting director of the Centers for Disease Control and Prevention, putting him in charge of the nation’s public health agency.

But fellow longevity enthusiasts said they hope O’Neill will bring attention and funding to their cause: the search for treatments that might slow, prevent, or even reverse human aging. Here are some takeaways from the interview. 

Vaccine recommendations could change further

Last month, the US cut the number of vaccines recommended for children. The CDC no longer recommends vaccinations against flu, rotavirus, hepatitis A, or meningococcal disease for all children. The move was widely panned by medical groups and public health experts. Many worry it will become more difficult for children to access those vaccines. The majority of states have rejected the recommendations.

In the confirmation hearing for his role as deputy secretary of health and human services, which took place in May last year, O’Neill said he supported the CDC’s vaccine schedule. MIT Technology Review asked him if that was the case and, if so, what made him change his mind. “Researching and examining and reviewing safety data and efficacy data about vaccines is one of CDC’s obligations,” he said. “CDC gives important advice about vaccines and should always be open to new data and new ways of looking at data.”

At the beginning of December, O’Neill said, President Donald Trump “asked me to look at what other countries were doing in terms of their vaccine schedules.” He said he spoke to health ministries of other countries and consulted with scientists at the CDC and FDA. “It was suggested to me by lots of the operating divisions that the US focus its recommendations on consensus vaccines of other developed nations—in other words, the most important vaccines that are most often part of the core recommendations of other countries,” he said.

“As a result of that, we did an update to the vaccine schedule to focus on a set of vaccines that are most important for all children.” 

But some experts in public health have said that countries like Denmark and Japan, whose vaccine schedules the new US one was supposedly modeled on, are not really comparable to the US. When asked about these criticisms, O’Neill replied, “A lot of parents feel that … more than 70 vaccine doses given to young children sounds like a really high number, and some of them ask which ones are the most important. I think we helped answer that question in a way that didn’t remove anyone’s access.”

A few weeks after the vaccine recommendations were changed, Kirk Milhoan, who leads the CDC’s Advisory Committee on Immunization Practices, said that vaccinations for measles and polio—which are currently required for entry to public schools—should be optional. (Mehmet Oz, the Center for Medicare and Medicaid Services director, has more recently urged people to “take the [measles] vaccine.”)

“CDC still recommends that all children are vaccinated against diphtheria, tetanus, whooping cough, Haemophilus influenzae type b (Hib), Pneumococcal conjugate, polio, measles, mumps, rubella, and human papillomavirus (HPV), for which there is international consensus, as well as varicella (chickenpox),” he said when asked for his thoughts on this comment.

He also said that current vaccine guidelines are “still subject to new data coming in, new ways of thinking about things.” “CDC, FDA, and NIH are initiating new studies of the safety of immunizations,” he added. “We will continue to ask the Advisory Committee on Immunization Practices to review evidence and make updated recommendations with rigorous science and transparency.”

More support for longevity—but not all science

O’Neill said he wants longevity to become a priority for US health agencies. His ultimate goal, he said, is to “make the damage of aging something that’s under medical control.” It’s “the same way of thinking” as the broader Make America Healthy Again approach, he said: “‘Again’ implies restoration of health, which is what longevity research and therapy is all about.” 

O’Neill said his interest in longevity was ignited by his friend Peter Thiel, the billionaire tech entrepreneur, around 2008 to 2009. It was right around the time O’Neill was finishing up a previous role in HHS, under the Bush administration. O’Neill said Thiel told him he “should really start looking into longevity and the idea that aging damage could be reversible.” “I just got more and more excited about that idea,” he said.

When asked if he’s heard of Vitalism, a philosophical movement for “hardcore” longevity enthusiasts who, broadly, believe that death is wrong, O’Neill replied: “Yes.” 

The Vitalist declaration lists five core statements, including “Death is humanity’s core problem,” “Obviating aging is scientifically plausible,” and “I will carry the message against aging and death.” O’Neill said he agrees with all of them. “I suppose I am [a Vitalist],” he said with a smile, although he’s not a paying member of the foundation behind it.

As deputy secretary of the Department of Health and Human Services, O’Neill assumes a level of responsibility for huge and influential science and health agencies, including the National Institutes of Health (the world’s largest public funder of biomedical research) and the Food and Drug Administration (which oversees drug regulation and is globally influential) as well as the CDC.

Today, he said, he sees support for longevity science from his colleagues within HHS. “If I could describe one common theme to the senior leadership at HHS, obviously it’s to make America healthy again, and reversing aging damage is all about making people healthy again,” he said. “We are refocusing HHS on addressing and reversing chronic disease, and chronic diseases are what drive aging, broadly.”

Over the last year, thousands of NIH grants worth over $2 billion were frozen or terminated, including funds for research on cancer biology, health disparities, neuroscience, and much more. When asked whether any of that funding will be restored, he did not directly address the question, instead noting: “You’ll see a lot of funding more focused on important priorities that actually improve people’s health.”

Watch ARPA-H for news on organ replacements and more

He promised we’ll hear more from ARPA-H, the three-year-old federal agency dedicated to achieving breakthroughs in medical science and biotechnology. It was established with the official goal of promoting “high-risk, high-reward innovation for the development and translation of transformative health technologies.”

O’Neill said that “ARPA-H exists to make the impossible possible in health and medicine.” The agency has a new director—Alicia Jackson, who formerly founded and led a company focused on women’s health and longevity, took on the role in October last year.

O’Neill said he helped recruit Jackson, and that she was hired in part because of her interest in longevity, which will now become a major focus of the agency. He said he meets with her regularly, as well as with Andrew Brack and Jean Hébert, two other longevity supporters who lead departments at ARPA-H. Brack’s program focuses on finding biological markers of aging. Hébert’s aim is to find a way to replace aging brain tissue, bit by bit.

O’Neill is especially excited by that one, he said. “I would try it … Not today, but … if progress goes in a broadly good direction, I would be open to it. We’re hoping to see significant results in the next few years.”

He’s also enthused by the idea of creating all-new organs for transplantation. “Someday we want to be able to grow new organs, ideally from the patients’ own cells,” O’Neill said. An ARPA-H program will receive $170 million over five years to that end, he added. “I’m very excited about the potential of ARPA-H and Alicia and Jean and Andrew to really push things forward.”

Longevity lobbyists have a friendly ear

O’Neill said he also regularly talks to the team at the lobbying group Alliance for Longevity Initiatives (A4LI). The organization, led by Dylan Livingston, played an instrumental role in changing state law in Montana to make experimental therapies more accessible. O’Neill said he hasn’t formally worked with them but thinks that “they’re doing really good work on raising awareness, including on Capitol Hill.”

Livingston has told me that A4LI’s main goals center around increasing support for aging research (possibly via the creation of a new NIH institute entirely dedicated to the subject) and changing laws to make it easier and cheaper to develop and access potential anti-aging therapies.

O’Neill gave the impression that the first goal might be a little overambitious—the number of institutes is down to Congress, he said. “I would like to get really all of the institutes at NIH to think more carefully about how many chronic diseases are usefully thought of as pathologies of aging damage,” he said. There’ll be more federal funding for that research, he said, although he wouldn’t say more for now.

Some members of the longevity community have more radical ideas when it comes to regulation: they want to create their own jurisdictions designed to fast-track the development of longevity drugs and potentially encourage biohacking and self-experimentation. 

It’s a concept that O’Neill has expressed support for in the past. He has posted on X about his support for limiting the role of government, and in support of building “freedom cities”—a similar concept that involves creating new cities on federal land. 

Another longevity enthusiast who supports the concept is Niklas Anzinger, a German tech entrepreneur who is now based in Próspera, a private city within a Honduran “special economic zone,” where residents can make their own suggestions for medical regulations. Anzinger also helped draft Montana’s state law on accessing experimental therapies. O’Neill knows Anzinger and said he talks to him “once or twice a year.”

O’Neill has also supported the idea of seasteading—building new “startup countries” at sea. He served on the board of directors of the Seasteading Institute until March 2024.

In 2009, O’Neill told an audience at a Seasteading Institute conference that “the healthiest societies in 2030 will most likely be on the sea.” When asked if he still thinks that’s the case, he said: “It’s not quite 2030, so I think it’s too soon to say … What I would say now is: the healthiest societies are likely to be the ones that encourage innovation the most.”

We might expect more nutrition advice

When it comes to his own personal ambitions for longevity, O’Neill said, he takes a simple approach that involves minimizing sugar and ultraprocessed food, exercising and sleeping well, and supplementing with vitamin D. He also said he tries to “eat a diet that has plenty of protein and saturated fat,” echoing the new dietary guidance issued by the US Departments of Health and Human Services and Agriculture. That guidance has been criticized by nutrition scientists, who point out that it ignores decades of research into the harms of a diet high in saturated fat.

We can expect to see more nutrition-related updates from HHS, said O’Neill: “We’re doing more research, more randomized controlled trials on nutrition. Nutrition is still not a scientifically solved problem.” Saturated fats are of particular interest, he said. He and his colleagues want to identify “the healthiest fats,” he said. 

“Stay tuned.”

The myth of the high-tech heist

Making a movie is a lot like pulling off a heist. That’s what Steven Soderbergh—director of the Ocean’s franchise, among other heist-y classics—said a few years ago. You come up with a creative angle, put together a team of specialists, figure out how to beat the technological challenges, rehearse, move with Swiss-watch precision, and—if you do it right—redistribute some wealth. That could describe either the plot or the making of Ocean’s Eleven.

But conversely, pulling off a heist isn’t much like the movies. Surveillance cameras, computer-controlled alarms, knockout gas, and lasers hardly ever feature in big-ticket crime. In reality, technical countermeasures are rarely a problem, and high-tech gadgets are rarely a solution. The main barrier to entry is usually a literal barrier to entry, like a door. Thieves’ most common move is to collude with, trick, or threaten an insider. Last year a heist cost the Louvre €88 million worth of antique jewelry, and the most sophisticated technology in play was an angle grinder.

The low-tech Louvre maneuvers were in keeping with what heist research long ago concluded. In 2014 US nuclear weapons researchers at Sandia National Laboratories took a detour into this demimonde, producing a 100-page report called “The Perfect Heist: Recipes from Around the World.” The scientists were worried someone might try to steal a nuke from the US arsenal, and so they compiled information on 23 high-value robberies from 1972 to 2012 into a “Heist Methods and Characteristics Database,” a critical mass of knowledge on what worked. Thieves, they found, dedicated huge amounts of money and time to planning and practice runs—sometimes more than 100. They’d use brute force, tunneling through sewers for months (Société Générale bank heist, Nice, France, 1976), or guile, donning police costumes to fool guards (Gardner Museum, Boston, 1990). But nobody was using, say, electromagnetic pulse generators to shut down the Las Vegas electrical grid. The most successful robbers got to the valuable stuff unseen and got out fast.

French police officers stand next to a ladder used by robbers to enter the Louvre Museum
Last year a heist cost the Louvre €88 million worth of antique jewelry, and the most sophisticated technology in play was an angle grinder.
DIMITAR DILKOFF / AFP VIA GETTY IMAGES

Advance the time frame, and the situation looks much the same. Last year, Spanish researchers looking at art crimes from 1990 to 2022 found that the least technical methods are still the most successful. “High-tech technology doesn’t work so well,” says Erin L. Thompson, an art historian at John Jay College of Criminal Justice who studies art crime. Speed and practice trump complicated systems and alarms; even that Louvre robbery was, at heart, just a minutes-long smash-and-grab.

An emphasis on speed doesn’t mean heists don’t require skill—panache, even. As the old saying goes, amateurs talk strategy; professionals study logistics. Even without gadgets, heists and heist movies still revel in an engineer’s mindset. “Heist movies absolutely celebrate deep-dive nerdery—‘I’m going to know everything I can about the power grid, about this kind of stone and drill, about Chicago at night,’” says Anna Kornbluh, a professor of English at the University of Illinois at Chicago. She published a paper last October on the ways heist movies reflect an Old Hollywood approach to collective art-making, while shows about new grift, like those detailing the rise and fall of WeWork or the con artist Anna Delvey, reflect the more lone-wolf, disrupt-and-grow mindset of the streaming era. 

Her work might help explain why law-abiding citizens might cheer for the kinds of guys who’d steal a crown from the Louvre, or $100,000 worth of escargot from a farm in Champagne (as happened just a few weeks later). Heists, says Kornbluh, are anti-oligarch praxis. “Everybody wants to know how to be in a competent collective. Everybody wants there to be better logistics,” she says. “We need a better state. We need a better society. We need a better world.” Those are shared values—and as another old saying tells us, where there is value, there is crime.

The Download: an exclusive chat with Jim O’Neill, and the surprising truth about heists

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

US deputy health secretary: Vaccine guidelines are still subject to change

Over the past year, Jim O’Neill has become one of the most powerful people in public health. As the US deputy health secretary, he holds two roles at the top of the country’s federal health and science agencies. He oversees a department with a budget of over a trillion dollars. And he signed the decision memorandum on the US’s deeply controversial new vaccine schedule.

He’s also a longevity enthusiast. In an exclusive interview with MIT Technology Review earlier this month, O’Neill described his plans to increase human healthspan through longevity-focused research supported by ARPA-H, a federal agency dedicated to biomedical breakthroughs. Fellow longevity enthusiasts said they hope he will bring attention and funding to their cause.

At the same time, O’Neill defended reducing the number of broadly recommended childhood vaccines, a move that has been widely criticized by experts in medicine and public health. Read the full story.

—Jessica Hamzelou

The myth of the high-tech heist

Making a movie is a lot like pulling off a heist. That’s what Steven Soderbergh—director of the Ocean’s franchise, among other heist-y classics—said a few years ago. You come up with a creative angle, put together a team of specialists, figure out how to beat the technological challenges, rehearse, move with Swiss-watch precision, and—if you do it right—redistribute some wealth.

But conversely, pulling off a heist isn’t much like the movies. Surveillance cameras, computer-controlled alarms, knockout gas, and lasers hardly ever feature in big-ticket crime. In reality, technical countermeasures are rarely a problem, and high-tech gadgets are rarely a solution. Read the full story.

—Adam Rogers

This story is from the next print issue of MIT Technology Review magazine, which is all about crime. If you haven’t already, subscribe now to receive future issues once they land.

RFK Jr. follows a carnivore diet. That doesn’t mean you should.

Americans have a new set of diet guidelines. Robert F. Kennedy Jr. has taken an old-fashioned food pyramid, turned it upside down, and plonked a steak and a stick of butter in prime positions.

Kennedy and his Make America Healthy Again mates have long been extolling the virtues of meat and whole-fat dairy, so it wasn’t too surprising to see those foods recommended alongside vegetables and whole grains (despite the well-established fact that too much saturated fat can be extremely bad for you).

Some influencers have taken the meat trend to extremes, following a “carnivore diet.” A recent review of research into nutrition misinformation on social media found that a lot of shared diet information is nonsense. But what’s new is that some of this misinformation comes from the people who now lead America’s federal health agencies. Read the full story.

—Jessica Hamzelou

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The Trump administration has revoked a landmark climate ruling
Without it, the administration can erase the limits that restrict planet-warming emissions. (WP $)
+ Environmentalists and Democrats have vowed to fight the reversal. (Politico)
+ They’re seriously worried about how it will affect public health. (The Hill)

2 An unexplained wave of bot traffic is sweeping the web
Sites across the world are witnessing automated traffic that appears to originate from China. (Wired $)

3 Amazon’s Ring has axed its partnership with Flock
Law enforcement will no longer be able to request Ring doorbell footage from its users. (The Verge)
+ Ring’s recent TV ad for a dog-finding feature riled viewers. (WSJ $)
+ How Amazon Ring uses domestic violence to market doorbell cameras. (MIT Technology Review)

4 Americans are taking the hit for almost all of Trump’s tariffs
Consumers and companies in the US, not overseas, are shouldering 90% of levies. (Reuters)
+ Trump has long insisted that the costs of his tariffs will be borne by foreign exporters. (FT $)
+ Sweeping tariffs could threaten the US manufacturing rebound. (MIT Technology Review)

5 Meta and Snap say Australia’s social media ban hasn’t affected business
They’re still making plenty of money despite the country’s decision to ban under-16s from the platforms. (Bloomberg $)
+ Does preventing teens from going online actually do any good? (Economist $)

6 AI workers are selling their shares before their firms go public
Cashing out early used to be a major Silicon Valley taboo. (WSJ $)

7 Elon Musk posted about race almost every day last month
His fixation on a white racial majority appears to be intensifying. (The Guardian)
+ Race is a recurring theme in the Epstein emails, too. (The Atlantic $)

8 The man behind a viral warning about AI used AI to write it
But he stands behind its content. (NY Mag $)
+ How AI-generated text is poisoning the internet. (MIT Technology Review)

9 Influencers are embracing Chinese traditions ahead of the New Year 🧧
On the internet, no one knows you’re actually from Wisconsin. (NYT $)

10 Australia’s farmers are using AI to count sheep 🐑
No word on whether it’s helping them sleep easier, though. (FT $)

Quote of the day

“Ignoring warning signs will not stop the storm. It puts more Americans directly in its path.”

—Former US secretary of state John Kerry takes aim at the US government’s decision to repeal the key rule that allows it to regulate climate-heating pollution, the Guardian reports.

One more thing

The Vera C. Rubin Observatory is ready to transform our understanding of the cosmos

High atop Chile’s 2,700-meter Cerro Pachón, the air is clear and dry, leaving few clouds to block the beautiful view of the stars. It’s here that the Vera C. Rubin Observatory will soon use a car-size 3,200-megapixel digital camera—the largest ever built—to produce a new map of the entire night sky every three days.

Findings from the observatory will help tease apart fundamental mysteries like the nature of dark matter and dark energy, two phenomena that have not been directly observed but affect how objects are bound together—and pushed apart.

A quarter-­century in the making, the observatory is poised to expand our understanding of just about every corner of the universe. Read the full story.

—Adam Mann

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Why 2026 is shaping up to be the year of the pop comeback.
+ Almost everything we thought we knew about Central America’s Maya has turned out to be completely wrong.
+ The Bigfoot hunters have spoken!
+ This fun game puts you in the shoes of a distracted man trying to participate in a date while playing on a GameBoy.

ALS stole this musician’s voice. AI let him sing again.

There are tears in the audience as Patrick Darling’s song begins to play. It’s a heartfelt song written for his great-grandfather, whom he never got the chance to meet. But this performance is emotional for another reason: It’s Darling’s first time on stage with his bandmates since he lost the ability to sing two years ago.

The 32-year-old musician was diagnosed with amyotrophic lateral sclerosis (ALS) when he was 29 years old. Like other types of motor neuron disease (MND), it affects nerves that supply the body’s muscles. People with ALS eventually lose the ability to control their muscles, including those that allow them to move, speak, and breathe.

Darling’s last stage performance was over two years ago. By that point, he had already lost the ability to stand and play his instruments and was struggling to sing or speak. But recently, he was able to re-create his lost voice using an AI tool trained on snippets of old audio recordings. Another AI tool has enabled him to use this “voice clone” to compose new songs. Darling is able to make music again.

“Sadly, I have lost the ability to sing and play my instruments,” Darling said on stage at the event, which took place in London on Wednesday, using his voice clone. “Despite this, most of my time these days is spent still continuing to compose and produce my music. Doing so feels more important than ever to me now.”

Losing a voice

Darling says he’s been a musician and a composer since he was around 14 years old. “I learned to play bass guitar, acoustic guitar, piano, melodica, mandolin, and tenor banjo,” he said at the event. “My biggest love, though, was singing.”

Darling met bandmate Nick Cocking over 10 years ago, while he was still a university student, says Cocking. Darling joined Cocking’s Irish folk outfit, the Ceili House Band, shortly afterwards, and their first gig together was in April 2014. Darling, who joined the band as a singer and guitarist, “elevated the musicianship of the band,” says Cocking.

The four bandmates pose with their instruments.
Patrick Darling (second from left) with his former bandmates, including Nick Cocking (far right).
COURTESY OF NICK COCKING

But a few years ago, Cocking and his other bandmates started noticing changes in Darling. He became clumsy, says Cocking. He recalls one night when the band had to walk across the city of Cardiff in the rain: “He just kept slipping and falling, tripping on paving slabs and things like that.” 

He didn’t think too much of it at the time, but Darling’s symptoms continued to worsen. The disease affected his legs first, and in August 2023, he started needing to sit during performances. Then he started to lose the use of his hands. “Eventually he couldn’t play the guitar or the banjo anymore,” says Cocking.

By April 2024, Darling was struggling to talk and breathe at the same time, says Cocking. For that performance, the band carried Darling on stage. “He called me the day after and said he couldn’t do it anymore,” Cocking says, his voice breaking. “By June 2024, it was done.” It was the last time the band played together.

Re-creating a voice

Darling was put in touch with a speech therapist, who raised the possibility of “banking” his voice. People who are losing the ability to speak can opt to record themselves speaking and use those recordings to create speech sounds that can then be activated with typed text, whether by hand or perhaps using a device controlled by eye movements.

Some users have found these tools to be robotic-sounding. But Darling had another issue. “By that stage, my voice had already changed,” he said at the event. “It felt like we were saving the wrong voice.”

Then another speech therapist introduced him to a different technology. Richard Cave is a speech and language therapist and a researcher at University College London. He is also a consultant for ElevenLabs, an AI company that develops agents and audio, speech, video, and music tools. One of these tools can create “voice clones”—realistic mimics of real voices that can be generated from minutes, or even seconds, of a person’s recorded voice.

Last year, ElevenLabs launched an impact program with a promise to provide free licenses to these tools for people who have lost their voices to ALS or other diseases, like head and neck cancer or stroke. 

The tool is already helping some of those users. “We’re not really improving how quickly they’re able to communicate, or all of the difficulties that individuals with MND are going through physically, with eating and breathing,” says Gabi Leibowitz, a speech therapist who leads the program. “But what we are doing is giving them a way … to create again, to thrive.” Users are able to stay in their jobs longer and “continue to do the things that make them feel like human beings,” she says.

Cave worked with Darling to use the tool to re-create his lost speaking voice from older recordings.

“The first time I heard the voice, I thought it was amazing,” Darling said at the event, using the voice clone. “It sounded exactly like I had before, and you literally wouldn’t be able to tell the difference,” he said. “I will not say what the first word I made my new voice say, but I can tell you that it began with ‘f’ and ended in ‘k.’”

Patrick and bandmates with their instruments prior to his MND diagnosis

COURTESY OF PATRICK DARLING

Re-creating his singing voice wasn’t as easy. The tool typically requires around 10 minutes of clear audio to generate a clone. “I had no high-quality recordings of myself singing,” Darling said. “We had to use audio from videos on people’s phones, shot in noisy pubs, and a couple of recordings of me singing in my kitchen.” Still, those snippets were enough to create a “synthetic version of [Darling’s] singing voice,” says Cave.

In the recordings, Darling sounded a little raspy and “was a bit off” on some of the notes, says Cave. The voice clone has the same qualities. It doesn’t sound perfect, Cave says—it sounds human.

“The ElevenLabs voice that we’ve created is wonderful,” Darling said at the event. “It definitely sounds like me—[it] just kind of feels like a different version of me.”

ElevenLabs has also developed an AI music generator called Eleven Music. The tool allows users to compose tracks, using text prompts to choose the musical style. Several well-known artists have also partnered with the company to license AI clones of their voices, including the actor Michael Caine, whose voice clone is being used to narrate an upcoming ElevenLabs documentary. Last month, the company released an album of 11 tracks created using the tool. “The Liza Minnelli track is really a banger,” says Cave.

Eleven Music can generate a song in a minute, but Darling and Cave spent around six weeks fine-tuning Darling’s song. Using text prompts, any user can “create music and add lyrics in any style [they like],” says Cave. Darling likes Irish folk, but Cave has also worked with a man in Colombia who is creating Colombian folk music. (The ElevenLabs tool is currently available in 74 languages.)

Back on stage

Last month, Cocking got a call from Cave, who sent him Darling’s completed track. “I heard the first two or three words he sang, and I had to turn it off,” he says. “I was just in bits, in tears. It took me a good half a dozen times to make it to the end of the track.”

Darling and Cave were making plans to perform the track live at the ElevenLabs summit in London on Wednesday, February 11. So Cocking and bandmate Hari Ma each arranged accompanying parts to play on the mandolin and fiddle. They had a couple of weeks to rehearse before they joined Darling on stage, two years after their last performance together.

“I wheeled him out on stage, and neither of us could believe it was happening,” says Cave. “He was thrilled.” The song was played as Darling remained on stage, and Cocking and Ma played their instruments live.

Cocking and Cave say Darling plans to continue to use the tools to make music. Cocking says he hopes to perform with Darling again but acknowledges that, given the nature of ALS, it is difficult to make long-term plans.

“It’s so bittersweet,” says Cocking. “But getting up on stage and seeing Patrick there filled me with absolute joy. I know Patrick really enjoyed it as well. We’ve been talking about it … He was really, really proud.”

ELEVENLABS/AMPLIFY

Ecomm Cowboy Talks AI and Underdogs

Chris Hall is an ecommerce entrepreneur turned media operator. His new “Ecomm Cowboy” show broadcasts live Monday through Friday on X and YouTube. The mission, he says, is twofold: deliver daily news to sellers and offer companionship to those working alone.

Chris first appeared on the podcast in 2023 as the marketing head of a D2C brand. In this, our latest conversation, he addresses his goals for Ecomm Cowboy, production challenges, and, yes, the power of AI tools for one-person brands.

Our entire audio is embedded below. The transcript is edited for length and clarity.

Eric Bandholz: Who are you and what do you do?

Chris Hall: I’m the founder of Ecomm Cowboy, a startup media company broadcasting live Monday through Friday on X and YouTube. We talk about the current and future state of ecommerce so operators can survive and thrive. I launched the show about a month ago.

I stumbled into ecommerce in 2014. I created one of the first subscription coffee brands on the internet.

After that, I worked for a marketing agency and then with Bruce Bolt, the D2C athletic glove company.

Bandholz: What are your goals for Ecomm Cowboy?

Hall: I’ve contemplated the concept for years, with two missions.

First, ecommerce owners are on the bleeding edge of the ever-changing internet. We cover the top news stories, retail developments, direct-to-consumer topics, artificial intelligence — anything related to selling online.

Second, working from a laptop at home is common in the ecommerce industry, but it’s intensely lonely. For many, it’s a dreadful experience. So I hope Ecomm Cowboy is also a place where people can have a companion of sorts and interact.

Bandholz: A daily show with guests is a lot of work.

Hall: Yes, it is. We usually have one guest, but sometimes it’s two. Each show runs an hour. I hope to extend it eventually to two hours.

I prepare for three to four hours each day, covering everything that’s happened, who’s appearing, and what to discuss. Plus events occur in real time that alter the plan.

After each show, there’s editing, cutting, and posting to make the most of the content. So it’s a lot of energy and time, but I love it.

I thrive on the pressure. There’s much to do every day before noon Central time, when the show goes live.

It brings me back to my time playing football at the University of Texas, where every practice I had to be ready to battle, mentally and physically. A part of me still welcomes the challenge. I wake up excited every day because of it.

Bandholz: What’s the state of ecommerce?

Hall: AI tools are jaw-dropping. Six months ago, we were laughing at them, but no more. AI can now perform tasks such as ad creation, empowering what I call a one-person brand.

Sean Frank of Ridge, the wallet maker, calls it Ecommerce 4.0. It’s an opportunity for underdogs. One person, harnessing today’s tools, can do what took an entire team five years ago.

A good example is Kive, an AI tool that generates product specs directly within the image. A recent guest, Bart Szaniewski from Dad Gang, a D2C hat seller, described the tool. He uses the images on his Instagram feed.

Bandholz: If you can’t communicate in today’s world, you will be left behind.

Hall: That’s fair. The most adept operators are communicating (in ways I have yet to take advantage of) using AI tools that produce a voice, a video, a copywriting style.

I see two routes going forward. There’s the anti-AI bet. The best way to be anti-AI and build trust is to be live and in person. Be an actual human who’s making mistakes and producing something good enough that people will come back.

The second route is to stay at the forefront of AI technology and become expert on the tools and methods. If you can win visitors in a way that doesn’t deceive them, there’s a way to enrich yourself.

On a recent show, we touched on an app called DramaBox. It produces AI-generated TikTok-style mini dramas. Each episode is literally one minute long. I’m told the business is booming from selling access to the shows. Viewers download the app, pay, and then consume the content.

To me, it’s horrible for humanity, although I use an AI-powered video maker from ByteDance called Seedance 2.0. A number of popular videos use Seedance, such as clips featuring Ethan Hunt from Mission Impossible.

Many observers say Hollywood is obsolete, a step behind. I don’t know about that. But what I do know is that the capabilities are better than ever.

And now it’s up to us. How can we use the tools to improve what we talk about or solve a problem for our audience?

Bandholz: Where can listeners watch your show, follow you, or get in touch?

Hall: The show is “Ecomm Cowboy” on X and YouTube. I’m also on X and LinkedIn.

Google AI Shows A Site Is Offline Due To JS Content Delivery via @sejournal, @martinibuster

Google’s John Mueller offered a simple solution to a Redditor who blamed Google’s “AI” for a note in the SERPs saying that the website had been down since early 2026.

The Redditor didn’t create a post on Reddit; they just linked to their blog post blaming Google and AI. This enabled Mueller to go straight to the site, identify the cause as a JavaScript implementation issue, and then set them straight that it wasn’t Google’s fault.

Redditor Blames Google’s AI

The blog post by the Redditor blames Google, headlining the article with a computer science buzzword salad that over-complicates and (unknowingly) misstates the actual problem.

The article title is:

“Google Might Think Your Website Is Down
How Cross-page AI aggregation can introduce new liability vectors.”

That part about “cross-page AI aggregation” and “liability vectors” is eyebrow-raising because neither is an established term of art in computer science.

The “cross-page” thing is likely a reference to Google’s Query Fan-Out, where a question on Google’s AI Mode is turned into multiple queries that are then sent to Google’s Classic Search.

Regarding “liability vectors,” vectors are a real concept discussed in SEO and used in natural language processing (NLP), but “liability vector” is not among them.

The Redditor’s blog post admits that they don’t know if Google is able to detect if a site is down or not:

“I’m not aware of Google having any special capability to detect whether websites are up or down. And even if my internal service went down, Google wouldn’t be able to detect that since it’s behind a login wall.”

They also appear unaware of how RAG or Query Fan-Out works, or perhaps of how Google’s AI systems work in general. The author seems to regard it as a discovery that Google references fresh information instead of parametric knowledge (information in the LLM that was gained from training).

They write that Google’s AI answer said the website itself indicated the site had been offline since early 2026:

“…the phrasing says the website indicated rather than people indicated; though in the age of LLMs uncertainty, that distinction might not mean much anymore.

…it clearly mentions the timeframe as early 2026. Since the website didn’t exist before mid-2025, this actually suggests Google has relatively fresh information; although again, LLMs!”

A little later in the blog post the Redditor admits that they don’t know why Google is saying that the website is offline.

They explained that they implemented a shot-in-the-dark solution by removing a pop-up, guessing, incorrectly, that the pop-up was causing the issue. This highlights the importance of being certain of what is causing a problem before making changes in the hope of fixing it.

The Redditor shared that they didn’t know how Google summarizes information about a site in response to a query about it, and expressed concern that Google could scrape irrelevant information and show it as an answer.

They write:

“…we don’t know how exactly Google assembles the mix of pages it uses to generate LLM responses.

This is problematic because anything on your web pages might now influence unrelated answers.

…Google’s AI might grab any of this and present it as the answer.”

I don’t fault the author for not knowing how Google AI search works; I’m fairly certain it’s not widely known. It’s easy to get the impression that it’s an AI answering questions.

But what’s basically going on is that AI search is based on Classic Search, with AI synthesizing the content it finds online into a natural-language answer. It’s like asking someone a question: they Google it, then explain the answer from what they learned reading the web pages.

Google’s John Mueller Explains What’s Going On

Mueller responded to the person’s Reddit post in a neutral and polite manner, showing why the fault lies in the Redditor’s implementation.

Mueller explained:

“Is that your site? I’d recommend not using JS to change text on your page from “not available” to “available” and instead to just load that whole chunk from JS. That way, if a client doesn’t run your JS, it won’t get misleading information.

This is similar to how Google doesn’t recommend using JS to change a robots meta tag from “noindex” to “please consider my fine work of html markup for inclusion” (there is no “index” robots meta tag, so you can be creative).”

Mueller’s response explains that the site is relying on JavaScript to replace placeholder text that is served briefly before the page loads, which only works for visitors whose browsers actually run that script.

What happened here is that Google indexed the placeholder text: it read the originally served content with the “not available” message and treated that as the page’s content.

Mueller explained that the safer approach is to have the correct information present in the page’s base HTML from the start, so that both users and search engines receive the same content.
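
To make the distinction concrete, here is a minimal sketch of both patterns in browser-side TypeScript. The element IDs and status text are hypothetical, not taken from the Redditor’s site.

```typescript
// Risky pattern: the served HTML ships misleading placeholder text,
// e.g. <p id="status">Service not available</p>, and a script flips it
// later. Clients that never run the script index "not available."
const statusEl = document.getElementById("status");
if (statusEl) {
  statusEl.textContent = "Service available"; // too late for non-JS clients
}

// Safer pattern: the served HTML contains only an empty slot,
// e.g. <div id="status-slot"></div>, and the script injects the whole
// chunk. Non-JS clients see nothing here rather than wrong information.
const slot = document.getElementById("status-slot");
if (slot) {
  const status = document.createElement("p");
  status.textContent = "Service available";
  slot.appendChild(status);
}
```

Anything genuinely static, such as the page’s real content, belongs in the base HTML so users and crawlers receive the same information.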

Takeaways

There are multiple takeaways here that go beyond the technical issue underlying the Redditor’s problem. Top of the list is how they tried to guess their way to an answer.

They really didn’t know how Google AI search works, which introduced a series of assumptions that complicated their ability to diagnose the issue. Then they implemented a “fix” based on guessing what they thought was probably causing the issue.

Guessing at SEO problems is often justified by pointing to Google’s opacity, but sometimes the issue isn’t Google; it’s a knowledge gap in SEO itself and a signal that further testing and diagnosis is necessary.

Featured Image by Shutterstock/Kues

Google’s Search Relations Team Debates If You Still Need A Website via @sejournal, @MattGSouthern

Google’s Search Relations team was asked directly whether you still need a website in 2026. They didn’t give a one-size-fits-all answer.

The conversation stayed focused on trade-offs between owning a website and relying on platforms such as social networks or app stores.

In a new episode of the Search Off the Record podcast, Gary Illyes and Martin Splitt spent about 28 minutes exploring the question and repeatedly landed on the same conclusion: it depends.

What Was Said

Illyes and Splitt acknowledged that websites still offer distinct advantages, including data sovereignty, control over monetization, the ability to host services such as calculators or tools, and freedom from platform content moderation.

Both Googlers also emphasized situations where a website may not be necessary.

Illyes referenced a Google user study conducted in Indonesia around 2015-2016 where businesses ran entirely on social networks with no websites. He described their results as “incredible sales, incredible user journeys and retention.”

Illyes also described mobile games that, in his telling, became multi-million-dollar and in some cases “billion-dollar” businesses without a meaningful website beyond legal pages.

Illyes offered a personal example:

“I know that I have a few community groups in WhatsApp for instance because that’s where the people I want to reach are and I can reach them reliably through there. I could set up a website but I never even considered because why? To do what?”

Splitt addressed trust and presentation, saying:

“I’d rather have a nicely curated social media presence that exudes trustworthiness than a website that is not well done.”

When pressed for a definitive answer, Illyes offered the closest thing to a position, saying that if you want to make information or services available to as many people as possible, a website is probably still the way to go in 2026. But he framed it as a personal opinion, not a recommendation.

Why This Matters

Google Search is built around crawling and indexing web content, but the hosts still frame “needing a website” as a business decision that depends on your goals and audience.

Neither made a case that websites are essential for every business in 2026. Neither argued that the open web offers something irreplaceable. The strongest endorsement was that websites provide a low barrier of entry for sharing information and that the web “isn’t dead.”

This is consistent with the fragmented discovery landscape that SEJ has been covering, where user journeys now span AI chatbots, social feeds, and community platforms alongside traditional search.

Looking Ahead

The Search Off the Record podcast has historically offered behind-the-scenes perspectives from the Search Relations team that sometimes run ahead of official positions.

This episode didn’t introduce new policy or guidance. But the Search Relations team’s willingness to validate social-only business models and app-only distribution reflects how the role of websites is changing in a multi-platform discovery environment.

The question is worth sitting with. If the Search Relations team frames website ownership as situational rather than essential, the value proposition rests on the specific use case, not on the assumption that every business needs one.


Featured Image: Diki Prayogo/Shutterstock

Bing AI Citation Tracking, Hidden HTTP Homepages & Pages Fall Under Crawl Limit – SEO Pulse

Welcome to this week’s SEO Pulse: the updates cover how to track AI visibility, how a ghost page can break your site name in search results, and what new crawl data reveals about Googlebot’s file size limits.

Here’s what matters for you and your work.

Bing Webmaster Tools Adds AI Citation Dashboard

Microsoft introduced an AI Performance dashboard in Bing Webmaster Tools, giving publishers visibility into how often their content gets cited in Copilot and AI-generated answers. The feature is now in public preview.

Key Facts: The dashboard tracks total citations, average cited pages per day, page-level citation activity, and grounding queries. Grounding queries show the phrases the AI used when retrieving your content for answers.

Why This Matters

Bing is now offering a dedicated dashboard for AI citation visibility. Google includes AI Overviews and AI Mode activity in Search Console’s overall Performance reporting, but it doesn’t break out a separate report or provide citation-style URL counts. AI Overviews also assign all linked pages to a single position, which limits what you can learn about individual page performance in AI answers.

Bing’s dashboard goes further by tracking which pages get cited, how often, and what phrases triggered the citation. The missing piece is click data. The dashboard shows when your content is cited, but not whether those citations drive traffic.

Now you can confirm which pages are referenced in AI answers and identify patterns in grounding queries, but connecting AI visibility to business outcomes still requires combining this data with your own analytics.
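
For teams that want to act on this now, one option is to join the dashboard’s citation data with click data from a separate analytics export. The sketch below is a minimal, hypothetical Python example: the file names and column headers (“page,” “citations,” “clicks”) are placeholders, since neither Bing nor any analytics platform defines this exact schema. Pages cited often but clicked rarely are the gap worth investigating.

```python
import csv
from collections import defaultdict

def load_counts(path, url_col, count_col):
    """Sum a numeric column per URL from a CSV export."""
    counts = defaultdict(int)
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            counts[row[url_col]] += int(row[count_col])
    return counts

# Hypothetical exports: Bing's citation data and your analytics data.
citations = load_counts("bing_ai_citations.csv", "page", "citations")
clicks = load_counts("analytics_pages.csv", "page", "clicks")

# Pages cited often in AI answers but with few recorded clicks are
# the visibility-to-outcome gap the dashboard alone can't explain.
for page, cited in sorted(citations.items(), key=lambda kv: -kv[1]):
    print(f"{page}: cited {cited}x, clicks={clicks.get(page, 0)}")
```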

What SEO Professionals Are Saying

Wil Reynolds, founder of Seer Interactive, celebrated the feature on X and focused on the new grounding queries data:

“Bing is now giving you grounding queries in Bing Webmaster tools!! Just confirmed, now I gotta understand what we’re getting from them, what it means and how to use it.”

Koray Tuğberk GÜBÜR, founder of Holistic SEO & Digital, compared it directly to Google’s tooling on X:

“Microsoft Bing Webmaster Tools has always been more useful and efficient than Google Search Console, and once again, they’ve proven their commitment to transparency.”

Fabrice Canel, principal product manager at Microsoft Bing, framed the launch on X as a bridge between traditional and AI-driven optimization:

“Publishers can now see how their content shows up in the AI era. GEO meets SEO, power your strategy with real signals.”

The reaction across social media centered on a shared frustration. This is the data practitioners have been asking for, but it comes from Bing rather than Google. Several people expressed hope that Google and OpenAI would follow with comparable reporting.

Read our full coverage: Bing Webmaster Tools Adds AI Citation Performance Data

Hidden HTTP Homepage Can Break Your Site Name In Google

Google’s John Mueller shared a troubleshooting case on Bluesky where a leftover HTTP homepage was causing unexpected site-name and favicon problems in search results. The issue is easy to miss because Chrome can automatically upgrade HTTP requests to HTTPS, hiding the problematic page from normal browsing.

Key Facts: The site used HTTPS, but a server-default HTTP homepage was still accessible. Chrome’s auto-upgrade meant the publisher never saw the HTTP version, but Googlebot doesn’t follow Chrome’s upgrade behavior, so it was pulling content from the wrong page.

Why This Matters

This is the kind of problem you wouldn’t find in a standard site audit because your browser never shows it. If your site name or favicon in search results doesn’t match what you expect, and your HTTPS homepage looks correct, the HTTP version of your domain is worth checking.

Mueller suggested running curl from the command line to see the raw HTTP response without Chrome’s auto-upgrade. If it returns a server-default page instead of your actual homepage, that’s the source of the problem. You can also use the URL Inspection tool in Search Console with a Live Test to see what Google retrieved and rendered.
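
If you’d rather script the check than run it by hand, here is a minimal Python sketch of the same idea Mueller described with curl: fetch the plain HTTP URL without following redirects and inspect what the server actually returns. The URL is a placeholder for your own domain.

```python
# A sketch of the check Mueller described: fetch the plain HTTP
# homepage without following redirects, the way a crawler that does
# not auto-upgrade to HTTPS would see it.
import requests

resp = requests.get("http://example.com/", allow_redirects=False, timeout=10)

print("Status:", resp.status_code)
print("Redirects to:", resp.headers.get("Location", "(no redirect)"))
print(resp.text[:300])  # first 300 characters of the raw body

# A healthy setup typically 301-redirects to the HTTPS homepage.
# A 200 response showing a server-default page (a hosting placeholder
# or directory listing) is the symptom described in this case.
```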

Google’s documentation on site names specifically mentions duplicate homepages, including HTTP and HTTPS versions, and recommends using the same structured data for both. Mueller’s case shows what happens when the HTTP version serves different content from the intended HTTPS homepage.

What People Are Saying

Mueller described the case on Bluesky as “a weird one,” noting that the core problem is invisible in normal browsing:

“Chrome automatically upgrades HTTP to HTTPS so you don’t see the HTTP page. However, Googlebot sees and uses it to influence the sitename & favicon selection.”

The case highlights a broader pattern: browser features can hide what crawlers see. Examples include Chrome’s auto-upgrade, reader modes, client-side rendering, and JavaScript content. To debug site name and favicon issues, check the server response directly, not just what the browser renders.

Read our full coverage: Hidden HTTP Page Can Cause Site Name Problems In Google

New Data Shows Most Pages Fit Well Within Googlebot’s Crawl Limit

New research based on real-world webpages suggests most pages sit well below Googlebot’s 2 MB fetch cutoff. The data, analyzed by Search Engine Journal’s Roger Montti, draws on HTTP Archive measurements to put the crawl limit question into practical context.

Key Facts: HTTP Archive data suggests most pages are well below 2 MB. Google recently clarified in updated documentation that Googlebot’s limit for supported file types is 2 MB, while PDFs get a 64 MB limit.

Why This Matters

The crawl limit question has been circulating in technical SEO discussions, particularly after Google updated its Googlebot documentation earlier this month.

The new data answers the practical question that documentation alone couldn’t. Does the 2 MB limit matter for your pages? For most sites, the answer is no. Standard webpages, even content-heavy ones, rarely approach that threshold.

Where the limit could matter is on pages with extremely bloated markup, inline scripts, or embedded data that inflates HTML size beyond typical ranges.
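
If you want to verify where your own pages sit relative to the cutoff, a quick check is to measure the raw HTML payload. Below is a minimal Python sketch under stated assumptions: the URL is a placeholder, and the script measures the decoded response body, a practical proxy for what Googlebot fetches rather than an exact replication of its behavior.

```python
# Quick check of a page's raw HTML size against Googlebot's 2 MB
# fetch limit for supported file types, per Google's documentation.
import requests

LIMIT = 2 * 1024 * 1024  # 2 MB in bytes

resp = requests.get("https://example.com/", timeout=10)
size = len(resp.content)  # decoded (decompressed) body size

print(f"HTML size: {size / 1024:.1f} KB ({size / LIMIT:.1%} of the 2 MB limit)")
if size >= LIMIT:
    print("Content past 2 MB may not be fetched by Googlebot.")
```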

The broader pattern here is Google making its crawling systems more transparent. Moving documentation to a standalone crawling site, clarifying which limits apply to which crawlers, and now having real-world data to validate those limits gives a clearer picture of what Googlebot handles.

What Technical SEO Professionals Are Saying

Dave Smart, technical SEO consultant at Tame the Bots and a Google Search Central Diamond Product Expert, put the numbers in perspective in a LinkedIn post:

“Googlebot will only fetch the first 2 MB of the initial html (or other resource like CSS, JavaScript), which seems like a huge reduction from 15 MB previously reported, but honestly 2 MB is still huge.”

Smart followed up by updating his Tame the Bots fetch and render tool to simulate the cutoff. In a Bluesky post, he added a caveat about the practical risk:

“At the risk of overselling how much of a real world issue this is (it really isn’t for 99.99% of sites I’d imagine), I added functionality to cap text based files to 2 MB to simulate this.”

Google’s John Mueller endorsed the tool on Bluesky, writing:

“If you’re curious about the 2MB Googlebot HTML fetch limit, here’s a way to check.”

Mueller also shared Web Almanac data on Reddit to put the limit in context:

“The median on mobile is at 33kb, the 90-percentile is at 151kb. This means 90% of the pages out there have less than 151kb HTML.”

Roger Montti, writing for Search Engine Journal, reached a similar conclusion after reviewing the HTTP Archive data. Montti noted that the data based on real websites shows most sites are well under the limit, and called it “safe to say it’s okay to scratch off HTML size from the list of SEO things to worry about.”

Read our full coverage: New Data Shows Googlebot’s 2 MB Crawl Limit Is Enough

Theme Of The Week: The Diagnostic Gap

Each story this week points to something practitioners couldn’t see before, or were checking the wrong way.

Bing’s AI citation dashboard fills a measurement gap that has existed since AI answers started citing website content. Mueller’s HTTP homepage case reveals an invisible page that standard site audits and browser checks would miss entirely because Chrome hides it. And the Googlebot crawl limit data answers a question that documentation updates raised, but couldn’t resolve on their own.

The connecting thread isn’t that these are new problems. AI citations have been happening without measurement tools. Ghost HTTP pages have been confusing site name systems since Google introduced the feature. And crawl limits have been listed in Google’s docs for years without real-world validation. What changed this week is that each gap got a concrete diagnostic: a dashboard, a curl command, and a dataset.

The takeaway is that the tools and data for understanding how search engines interact with your content are getting more specific. The challenge is knowing where to look.

Featured Image: Accogliente Design/Shutterstock