Google URL Removal Bug Enabled Attackers To Deindex URLs via @sejournal, @martinibuster

Google recently fixed a bug that enabled anyone to anonymously use an official Google tool to remove any URL from Google Search and get away with it. The tool had the potential to be used to devastate competitor rankings by removing their URLs completely from Google’s index. Google had known about the bug since 2023 but, until now, had not taken action to fix it.

Tool Exploited For Reputation Management

A report by the Freedom of the Press Foundation recounted the case of a tech CEO who had employed numerous tactics to “censor” negative reporting by a journalist, ranging from legal action to identify the reporter’s sources to an “intimidation campaign” via the San Francisco city attorney and a DMCA takedown request.

Through it all, the reporter and the Freedom of the Press Foundation prevailed in court, and the article at the center of the actions remained online until it began getting removed from search through abuse of Google’s Remove Outdated Content tool. Restoring the web page with Google Search Console was easy, but the abuse continued, which led them to open a discussion in the Google Search Console Help Community.

The person posted a description of what was happening and asked if there was a way to block abuse of the tool. The post alleged that the attacker was choosing a word that was no longer in the original article and using it as the basis for claiming the article was outdated and should be removed from Google’s search index.

This is what the report on Google’s Help Community explained:

“We have a dozen articles that got removed this way. We can measure it by searching Google for the article, using the headline in quotes and with the site name. It shows no results returned.

Then, we go to GSC and find it has been “APPROVED” under outdated content removal. We cancel that request. Moments later, the SAME search brings up an indexed article. This is the 5th time we’ve seen this happen.”

Four Hundred Articles Deindexed

What was happening was an aggressive attack against a website, and Google apparently was unable to do anything to stop the abuse, leaving the user in a very bad position.

In a follow-up post, they explained the devastating effect of the sustained negative SEO attack:

“Every week, dozens of pages are being deindexed and we have to check the GSC every day to see if anything else got removed, and then restore that.

We’ve had over 400 articles deindexed, and all of the articles were still live and on our sites. Someone went in and submitted them through the public removal tool, and they got deindexed.”

Google Promised To Look Into It

They asked if there was a way to block the attacks, and Google’s Danny Sullivan responded:

“Thank you — and again, the pages where you see the removal happening, there’s no blocking mechanism on them.”

Danny responded to a follow-up post, saying that they would look into it:

“The tool is designed to remove links that are no longer live or snippets that are no longer reflecting live content. We’ll look into this further.”

How Google’s Tool Was Exploited

The initial report said that the negative SEO attack leveraged words that had changed within the content to file successful outdated content removal requests. But it appears that they later discovered another attack method was also being used.

Google’s Outdated Content Removal tool is case-sensitive, which means that if you submit a URL containing an uppercase letter, the crawler will go out to specifically check for the uppercase version, and if the server returns a 404 Not Found error response, Google will remove all versions of the URL.

The Freedom of the Press Foundation describes the bug as case insensitivity, but that’s not entirely accurate. The crawler’s check is case sensitive: it fetches the exact uppercase variant that was submitted and sees the 404. The flaw the attacker exploited is that, somewhere downstream, Google’s removal system treats URLs as case agnostic, so the correctly cased URL is removed along with the fabricated variant.

Incidentally, the victim could have blunted the attack by redirecting all requests for uppercase URLs to their lowercase equivalents and enforcing lowercase URLs across the entire website, so the fabricated variants would never have returned a 404.
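For illustration, here is a minimal sketch of that kind of lowercase-enforcement rule, assuming a small Flask-style site (the app, route, and slug handling here are hypothetical; on a production site the redirect would more likely live in the web server or CDN configuration):

```python
# Minimal sketch, assuming a Flask app: redirect any request whose path
# contains uppercase letters to its lowercase equivalent, so a fabricated
# uppercase variant of a real URL never returns a 404 to a crawler.
from flask import Flask, redirect, request

app = Flask(__name__)

@app.before_request
def enforce_lowercase_urls():
    path = request.path
    if path != path.lower():
        # A 301 keeps the canonical lowercase URL authoritative.
        return redirect(path.lower(), code=301)

@app.route("/<path:slug>")
def article(slug):
    # Hypothetical catch-all article handler for this example.
    return f"Article: {slug}"
```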

Here’s how the Freedom of the Press Foundation described it:

“Our article… was vanished from Google search using a novel maneuver that apparently hasn’t been publicly well documented before: a sustained and coordinated abuse of Google’s “Refresh Outdated Content” tool.

This tool is supposed to allow those who are not a site’s owner to request the removal from search results of web pages that are no longer live (returning a “404 error”), or to request an update in search of web pages that display outdated or obsolete information in returned results.

However, a malicious actor could, until recently, disappear a legitimate article by submitting a removal request for a URL that resembled the target article but led to a “404 error.” By altering the capitalization of a URL slug, a malicious actor apparently could take advantage of a case-insensitivity bug in Google’s automated system of content removal.”

Other Sites Affected By This Exploit

Google responded to the Freedom of the Press Foundation and admitted that this exploit did, in fact, affect other sites.

They are quoted as saying the issue only impacted a “tiny fraction of websites” and that the wrongly impacted sites were reinstated.

Google responded by email to note that this bug has been fixed.

Reddit Prioritizes Search, Sees 5X Growth in AI-Powered Answers via @sejournal, @MattGSouthern

Reddit is investing heavily in search, with CEO Steve Huffman announcing plans to position the platform as a destination for people seeking answers online.

In its Q2 shareholder letter, Reddit revealed that more than 70 million people now use its on-platform search each week.

Its AI-powered Reddit Answers feature is also gaining traction, reaching 6 million weekly users, up five times from the previous quarter.

Search Becomes a Strategic Priority

Reddit is now focusing on three key areas: improving the core product, growing its search presence, and expanding internationally.

As part of this shift, the company is scaling back work on its user economy initiatives.

Huffman stated:

“Reddit is one of the few platforms positioned to become a true search destination. We offer something special: a breadth of conversations and knowledge you can’t find anywhere else.”

The company plans to integrate Reddit Answers more deeply into its search experience, expand the feature to more markets, and launch marketing efforts to grow adoption globally.

Reddit Answers Gains Momentum

Reddit Answers, introduced earlier this year, uses the platform’s archive of human discussions to generate relevant responses to search queries.

It now has 6 million weekly active users and is available in the U.S., U.K., Canada, Australia, and India.

Integration with Reddit’s primary search experience is also being tested to make discovery more seamless.

Why This Matters

Reddit’s focus on search may offer new visibility opportunities. Reddit posts already rank well in Google results, and now its internal search tools are being enhanced to surface answers directly.

Reddit also emphasizes its commercial value. The company says 40% of posts demonstrate purchase intent, making it a destination for people researching products and services.

Looking Ahead

As AI-generated content becomes more widespread, Reddit is betting that human perspectives will remain valuable.

The company expects Q3 revenue between $535 million and $545 million, with deeper integration of Reddit Answers planned as it continues to build out its search capabilities.


Featured Image: PJ McDonnell/Shutterstock

Bing Recommends lastmod Tags For AI Search Indexing via @sejournal, @MattGSouthern

Bing has updated its sitemap guidance with a renewed focus on the lastmod tag, highlighting its role in AI-powered search to determine which pages need to be recrawled.

While real-time tools like IndexNow offer faster updates, Bing says accurate lastmod values help keep content discoverable, especially on frequently updated or large-scale sites.

Bing Prioritizes lastmod For Recrawling

Bing says the lastmod field in your sitemap is a top signal for AI-driven indexing. It helps determine whether a page needs to be recrawled or can be skipped.

To make it work effectively, use ISO 8601 format with both date and time (e.g. 2004-10-01T18:23:17+00:00). That level of precision helps Bing prioritize crawl activity based on actual content changes.

Avoid setting lastmod to the time your sitemap was generated, unless the page was truly updated.
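As an illustration of that format, here is a small Python sketch that writes a sitemap <url> entry with a full ISO 8601 lastmod value (the URL and timestamp below are placeholders, not part of Bing’s guidance):

```python
# Minimal sketch: emit a sitemap <url> entry whose lastmod uses ISO 8601
# with date, time, and timezone offset (e.g. 2004-10-01T18:23:17+00:00).
from datetime import datetime, timezone

def sitemap_url_entry(loc: str, last_modified: datetime) -> str:
    # last_modified should be the page's true modification time,
    # not the time the sitemap itself was generated.
    lastmod = last_modified.astimezone(timezone.utc).isoformat(timespec="seconds")
    return (
        "  <url>\n"
        f"    <loc>{loc}</loc>\n"
        f"    <lastmod>{lastmod}</lastmod>\n"
        "  </url>"
    )

# Placeholder page and modification time for illustration.
print(sitemap_url_entry(
    "https://example.com/products/blue-widget",
    datetime(2004, 10, 1, 18, 23, 17, tzinfo=timezone.utc),
))
```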

Bing also confirmed that changefreq and priority tags are ignored and no longer affect crawling or ranking.

Submission & Verification Tips

Bing recommends submitting your sitemap in one of two ways:

  • Reference it in your robots.txt file
  • Submit it via Bing Webmaster Tools

Once a sitemap is submitted, Bing fetches it immediately and rechecks it daily.

You can verify whether it’s working by checking the submission status, last read date, and any processing errors in Bing Webmaster Tools.

Combine With IndexNow For Better Coverage

To increase the chances of timely indexing, Bing suggests combining sitemaps with IndexNow.

While sitemaps give Bing a full picture of your site, IndexNow allows real-time URL-level updates—useful when content changes frequently.

The Bing team states:

“By combining sitemaps for comprehensive site coverage with IndexNow for fast, URL-level submission, you provide the strongest foundation for keeping your content fresh, discoverable, and visible.”
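As a rough sketch of what that combination can look like, the snippet below pushes freshly changed URLs to the IndexNow endpoint while the sitemap continues to describe the full site (the host, key, and URLs are placeholders; the key file must already be published at the key location, per the IndexNow documentation):

```python
# Minimal sketch: submit recently changed URLs via IndexNow while the
# sitemap (with accurate lastmod values) covers the whole site.
import json
import urllib.request

def submit_to_indexnow(host: str, key: str, urls: list[str]) -> int:
    payload = {
        "host": host,
        "key": key,
        "keyLocation": f"https://{host}/{key}.txt",  # assumes key file at site root
        "urlList": urls,
    }
    req = urllib.request.Request(
        "https://api.indexnow.org/indexnow",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json; charset=utf-8"},
        method="POST",
    )
    with urllib.request.urlopen(req) as response:
        return response.status  # 200/202 indicates the submission was accepted

# Example (placeholders):
# submit_to_indexnow("example.com", "your-indexnow-key",
#                    ["https://example.com/updated-page"])
```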

Sitemaps at Massive Scale

If you manage a large website, Bing’s sitemap capacity limits are worth your attention:

  • Up to 50,000 URLs per sitemap
  • 50,000 sitemaps per index file
  • 2.5 billion URLs per index
  • Multiple index files support indexing up to 2.5 trillion URLs

That makes the standard sitemap protocol scalable enough even for enterprise-level ecommerce or publishing platforms.

Fabrice Canel and Krishna Madhavan of Microsoft AI, Bing, noted that using these limits to their full extent helps ensure content remains discoverable in AI search.

Why This Matters

As search becomes more AI-driven, accurate crawl signals matter more.

Bing’s reliance on sitemaps, especially the lastmod field, shows that basic technical SEO practices still matter, even as AI reshapes how content is surfaced.

For large sites, Bing’s support for trillions of URLs offers scalability. For everyone else, the message is simpler: keep your sitemaps clean, accurate, and updated in real-time. This gives your content the best shot at visibility in AI search.


Featured Image: PJ McDonnell/Shutterstock

Industry Pioneer Reveals Why SEO Isn’t Working & What To Refocus On via @sejournal, @theshelleywalsh

Bill Hunt is a true pioneer in the industry, with more than 25 years of experience working on the websites of some of the largest multinationals. Having built two large digital/search agencies, one of which was acquired by Ogilvy, Bill has now moved into consulting focused on repositioning search to leverage marketing for shareholder growth.

His approach is not myopic, surface-level SEO; as an enterprise specialist, he looks at what users actually want from their online experience. He connects the dots between search visibility, user experience, and business value for real results.

Bill is currently writing a series for Search Engine Journal about connecting search visibility to business value, and I spoke to him for IMHO to find out why he thinks SEO is currently not working.

“SEOs are creatures of habit. To succeed now, we need to unlearn and relearn how discovery actually works.”

The Real Problems Aren’t What You Think

I started out by asking Bill why SEO isn’t working, and his key message was not that SEO is broken, but that there is paralysis, distraction from AI hype, and a neglect of the fundamentals:

“I think there are three key problems right now. One is paralysis. We see that clients put search on pause, especially organic search, because they just don’t know what to do.

The second is the distraction with all the hype around the AI thing.

I mean, there’s a different acronym every day. So, which do we do? Are we chasing answers? Are we doing LLM index files or whatever craziness comes out?

And then the third is that there’s such a distraction from all this that a lot of the fundamentals aren’t being covered. And I think that’s where the problem is.”

Bill emphasized that the impact varies significantly by business type. Information-based businesses have been significantly affected because AI now directly answers queries that previously drove traffic to their sites. However, many other businesses might not be negatively impacted if they understand what’s actually changed.

Three Fundamental Shifts To Pay Attention To

Bill went on to talk about how three core changes have reshaped search, and understanding them is crucial for adaptation:

  • Intent understanding has evolved: Everything is about what users searched for and what they are hoping to see.
  • Friction must be removed: Platforms reward the path of least resistance.
  • Monetization is leading the way: It’s not just about being helpful, but also about being profitable.

Bill used an example from his work with Absolut Vodka.

“When I was working with Absolut Vodka, we had a drink site that was really just an awareness driver, and every month we sat down and we looked at Google’s search results and said, ‘If we were Google, what would we be changing around drinks or recipes or things like that?’

And so, by looking at the results, we could see, little by little, [that] somebody [was] looking for yellow cocktails. What should Google present?”

Rather than just optimizing for rankings, his team studied Google’s interface changes and adapted their visual content accordingly.

“We started focusing on the drink, bringing it front and center, amplifying the colors, the ingredients, and more and more people clicked.

We were generating millions and millions of visits because every step that Google was making to create a different user experience, we were trying to accommodate it.”

Bill believes that the idea of intent is still crucial. Considering how users just want to get to an answer, we must think about how they discover information and how we then present information to them.

“I think that’s really it in a nutshell. All of this change has paralyzed us and distracted us, and we need to recenter and refocus.

And that’s really a key part of what this series [at SEJ] is about: How do we refocus? How do we rethink this, both from a strategic point of view, from a shareholder value standpoint, and from a simple workflow standpoint?”

AI Tools Reward Consensus, Not Originality

In a recent LinkedIn post, Bill stated that AI tools don’t reward originality; they reward consensus.

As generative AI becomes embedded into how users explore and consume information, Bill warned against assuming that originality is enough to get discovered.

“AI systems synthesize consensus. If you’re saying something radically different, you won’t show up unless you connect it to what people already know.”

So, I asked Bill if you are creating this original content, how do you teach the systems to see you?

Bill’s advice is that to succeed in AI search environments, businesses need to:

  • Link new ideas to familiar terms.
  • Reflect user language and legacy concepts.
  • Be explicit in bridging the gap between old and new methods.

Otherwise, you risk being invisible to LLMs and answer engines that rely on summarizing well-established viewpoints.

“If you’re stating that you’re radically different, you’re not going to be shown because you’re radically different. So, you have to connect, and this is what I put in that article. You need to connect back to the consensus idea.

If you’re saying you’ve got a new way to cut bread, you have to talk about the old way to cut bread and connect it to a more efficient or easier way to do it.”

Is Your Product Even Discoverable?

The most practical insight from our conversation centered on how people can discover your brand or your product.

Historically, keyword research has been focused on connecting to searches that have existing search volume. But, if somebody doesn’t know a product exists to solve a problem, how would they search for it?

“I used to tell companies, if somebody doesn’t know a product exists to solve a problem, how would they search for it?

They would use the problem or symptoms of the problem. If they know a product exists but don’t know you exist, how would they search for it?”

Bill recommended that you run searches for problems related to your product and see if you show up. Search as if you know the solution exists, but not your brand.

And if you don’t surface, ask yourself why not?

“Take the symptoms people have, go into any tool you want, Google, Perplexity, ChatGPT, Gemini, and search and see if you come up.

If you don’t come up, the very next question you should ask is, ‘Why isn’t this product or this company in your result set?’ That’s probably the single most illuminating thing a senior executive can do…

When it tells you that you don’t have the answer, your very next step is, ‘How do we then create the answer, and then how do we get it into these?’”

This kind of query-path analysis is more revealing than traditional keyword research because it aligns with how people actually search, especially in AI environments that interpret broader queries.

Moving Forward: Back To Basics

Despite all the AI disruption, Bill recommends a return to fundamental principles. Companies need to ensure they’re indexable, crawlable, and seen as authorities in their space: the same core elements that have always mattered for search visibility.

“Who got cited? Who was number one? And Larry and Sergey said, ‘Well, if they’re cited most frequently as a source for a question, shouldn’t they be?’”

The key difference is that these fundamentals now operate in an AI-enhanced environment where understanding user intent and creating relevant, engaging content matter more than ever.

And if you want to find answers, ask the tools; they can tell you everything you need to know.

“I would tell everybody to go do that query and do the follow-up saying why aren’t we there? And you’d be surprised how efficient these tools are at telling you what you need to do to close that gap.”

Rather than panicking about AI destroying SEO, organizations should focus on understanding what’s actually changed and adapting their strategies accordingly.

The fundamentals remain solid; they just need to be applied in new ways.

You can watch the full interview with Bill Hunt below:

Don’t miss the new series that Bill is currently writing for SEJ about connecting the dots between search visibility, user experience, and business value; it will not only help CMOs but also help search marketers get buy-in from CMOs.

Thank you to Bill Hunt for offering his insights and being my guest on IMHO.


Featured Image: Shelley Walsh/Search Engine Journal

Should I Still Invest In SEO? (Yes, But Not In The Old Way) via @sejournal, @TaylorDanRW

How users interact with the internet has changed, and the experience will soon be unrecognizable compared to the internet we’ve grown comfortable with.

With Google integrating AI-powered features into Search, and the rise of third-party large language models (LLMs), it’s a different search experience.

Over the past few months, many CMOs I’ve spoken with, as well as business founders, have been asking the same questions around continuing investment in different marketing channels, including continuing investment in SEO.

I was fortunate to attend and speak at Google’s Search Central Live in Bangkok last week, and during the opening keynote, there was one snippet that has stood out for me that goes a long way to answering this question:

Traffic patterns may fluctuate: Long-held traffic patterns are likely to fluctuate, creating new opportunities for all sites. Past success on Search may not guarantee future success.

Should I Still Invest In SEO?

SEO is one of the few marketing channels that compound over time and investment.

Paid campaigns stop the moment you pause spending, but a strong organic program can keep driving traffic, leads, and sales long after it’s been implemented.

Often, it performs better over time, depending on how your competitors react.

That compounding effect is what separates SEO from most other digital investments. Every decent piece of content, every technical fix, every solid backlink adds to a base that grows stronger the more you invest.

SEO isn’t dead. It’s evolving.

That means fast, mobile-first websites, content demonstrating expertise and experience, clean internal links, and a solid structure; content that plays well with AI summaries and result variations, and more than anything, seeing SEO as part of your brand presence, not just a traffic lever.

Why Some Brands Are Pulling Back

There’s a rising anxiety in the air, caused by a number of unknowns and changes in our data, such as the great decoupling we’re witnessing.

Some CMOs are questioning whether SEO and content are still worth the effort.

There are a few reasons for this, namely that the SERP has changed dramatically. AI Overviews and expanded result features push the traditional organic links further down the page.

Some brands see less return from the same level of effort, and the result is frustration and, in some cases, panic.

At the same time, reporting is harder and attribution is messier.

It’s not always easy to show exactly where SEO contributes, especially when its influence spans across discovery, consideration, and conversion, which can make it a target when budgets begin to tighten.

Some teams are also misreading the signals, but in reality (in my opinion), we’re using the wrong measurement techniques, and measuring the new Search ecosystem by the standards of the old.

They assume that if fewer people click, fewer people are engaging, but visibility itself is valuable. Just because someone doesn’t click today doesn’t mean they won’t take action tomorrow.

In my opinion, pulling back now is the wrong move. Organic search remains the biggest visibility lever on the web, and when you stop investing in content, you’re choosing to disappear.

In an AI-first search world, visibility starts before the click.

The brands that stay active will be the ones users see and remember. This is no longer just about blue links and the last click; it’s about brand recognition and building visibility across the multiple faces of the modern Search ecosystem.

Content’s Evolving Role In SEO

Top-of-funnel traffic might not be what it once was, but it’s still powerful.

Being visible in an AI Overview or response to a generic query still influences perception. It can lead to brand searches, direct visits, or conversions later down the line.

I don’t think the metric is how many people see your result. It’s how many go on to take meaningful action. SEO now runs across the funnel, and across formats. It’s not just 10 links on a page anymore.

Content has to work harder. A single piece might need to satisfy different intents, answer multiple questions, or show up in several places, from featured snippets and videos to product results and AI-generated outputs.

SEO And AI

AI-powered search is splitting discovery across more surfaces. It’s not just Google anymore. It’s ChatGPT, Perplexity, Gemini, and others.

To stay visible in that world, you still need content. In fact, content is the price of admission. If you’re not producing it, you’re not part of the conversation.

SEO now includes shaping how AI systems understand your brand. If you’re not contributing to the information ecosystem, someone else is deciding your narrative.

Strategic SEO Investment

Smart SEO means:

  • Durable content that keeps working.
  • Authority-building through links, mentions, and structure.
  • A balance between fast wins and long-term gains.
  • Understanding and answering layered queries that do more than just inform – they convert.

For bigger businesses with multiple brands or sites, there’s an extra edge. Google and AI models understand entity relationships.

Coordinated content can strengthen authority across brands, especially in a world where AI pulls from consensus.

So, Should You Still Invest In SEO?

If you’re asking whether SEO still works, the answer is yes, but not in the old way.

It’s not just a traffic source; it’s becoming your visibility layer across traditional Search, Google’s AI features, and many LLMs.

It’s fast becoming a lever for reputation and brand visibility, and a strategic asset as well as a marketing channel.

The real question is whether you can afford not to invest.

Paid traffic dries up the second you stop paying. Organic builds on itself. It’s one of the few channels that gives you more tomorrow for what you do today.

As AI changes how search looks and works, SEO stays relevant because it supports every layer of digital presence. It creates a base you own, not rent.

The brands that win next are the ones that stay active. The ones that keep showing up, even when the rules shift.

Content isn’t just about clicks. It’s about influence. It’s about being there when people are asking the big questions, wherever they’re asking them.

In a shifting landscape, SEO gives you something stable. A long-term play that doesn’t vanish when your budget runs out. For businesses planning beyond the quarter, it’s still one of the smartest bets you can make.


Featured Image: G-Stock Studio/Shutterstock

Vulnerability Uncovered In Wix Vibe Coding Platform via @sejournal, @martinibuster

Cloud security company Wiz discovered a critical flaw in Wix’s Base44 vibe coding platform that enabled attackers to bypass authentication and gain access to private enterprise applications. The relative simplicity of finding what should have been a secret app ID number, and using it to gain access, made the vulnerability a serious concern.

Exposed Sensitive Identification Number

An apparently randomly generated identification number, called an app_id, was embedded in public-facing paths such as the application URL and the manifest.json file. Attackers could use that data to generate a verified account, even when user registration was disabled. This bypassed the platform’s access controls, including Single Sign-On (SSO), which many organizations use for enterprise security.

The Wiz security report notes how easy it was to find the sensitive app_id numbers:

“When we navigate to any application developed on top of Base44, the app_id is immediately visible in the URI and manifest.json file path, all applications have their app_ids value hardcoded in their manifest path: manifests/{app_id}/manifest.json.”
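To illustrate how little effort the exposure required, the sketch below simply pulls an app_id out of the public manifest path format quoted above (the URL is a made-up placeholder; this shows the exposure only, not Base44’s actual API):

```python
# Illustration only: the app_id sat in a predictable, public path of the
# form manifests/{app_id}/manifest.json (path format from the Wiz report).
import re

def extract_app_id(manifest_url: str) -> str | None:
    match = re.search(r"manifests/([^/]+)/manifest\.json", manifest_url)
    return match.group(1) if match else None

# Placeholder URL for illustration.
print(extract_app_id("https://app.example.com/manifests/abc123/manifest.json"))
# -> abc123
```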

Creating A Rogue Account Was Relatively Trivial

The vulnerability did not require privileged access or deep technical expertise. Once an attacker identified a valid app_id, they could use tools like the open source Swagger-UI to register a new account, receive a one-time password (OTP) via email, and verify the account without restriction.

From there, logging in through the application’s SSO flow granted full access to internal systems, despite the original access being restricted to specific users or teams. This process exposed a serious flaw in the platform’s assumption that the app_id would not be tampered with or reused externally.

Authentication Flaw Risked Exposure of Sensitive Data

Many of the affected apps were built on the popular Base44 vibe coding platform for internal use, supporting operations such as HR, chatbots, and knowledge bases, and they contained personally identifiable information (PII). The exploit enabled attackers to bypass identity controls and access private enterprise applications, potentially exposing sensitive data.

Wix Fixes Flaw Within 24 Hours

Wiz discovered the flaw through a methodical examination of public information for potential weak points, which eventually led to the exposed app_id numbers and, from there, to a working method for gaining access to accounts. The researchers then contacted Wix, which immediately fixed the issue.

According to the report published by the security company, there is no evidence that the flaw was exploited, and the vulnerability has been fully addressed.

Threat To Entire Ecosystems

The Wiz security report noted that the practice of vibe coding is proceeding at a rapid pace, without enough time to address potential security issues, expressing the opinion that it creates “systemic risks” not just to individual apps but to “entire ecosystems.”

Why Did This Security Incident Happen?

Wix States It Is Proactive On Security

The report included a statement from Wix asserting that it is proactive about security:

“We continue to invest heavily in strengthening the security of all products and potential vulnerabilities are proactively managed. We remain committed to protecting our users and their data.”

Security Company Says Discovery Of Flaw Was Simple

The report by Wiz describes the discovery as a relatively simple matter, explaining that they used “straightforward reconnaissance techniques,” including “passive and active discovery of subdomains,” which are widely accessible methods.

The security report explained that exploiting the flaw was simple:

“What made this vulnerability particularly concerning was its simplicity – requiring only basic API knowledge to exploit. This low barrier to entry meant that attackers could systematically compromise multiple applications across the platform with minimal technical sophistication.”

The report itself raises a question: if discovering the issue was “straightforward” and exploiting it had a “low barrier to entry,” how is it that Wix’s proactive security measures did not catch it?

  • If they had used a third-party security testing company, why hadn’t they discovered the publicly available app_id numbers?
  • The manifest.json exposure is trivial to detect. Why hadn’t that been flagged by a security audit?

The contradiction between a simple discovery/exploit process and Wix’s claimed proactive security posture raises a reasonable doubt about the thoroughness or effectiveness of their proactive measures.

Takeaways:

  • Simple Discovery and Exploitation:
    The vulnerability could be found and exploited using basic tools and publicly available information, with no need for advanced skills or insider access.
  • Bypassing Enterprise Controls:
    Attackers could gain full access to internal apps despite controls like disabled registration and SSO-based identity restrictions.
  • Systemic Risk from Vibe Coding:
    Wiz warns that fast-paced vibe coding platforms may introduce widespread security risks across application ecosystems.
  • Discrepancy Between Claims and Reality:
    The ease of exploitation contrasts with Wix’s claims of proactive security, prompting questions about the thoroughness of their security audits.

Wiz discovered that Wix’s Base44 vibe coding platform contained a critical vulnerability that could have enabled attackers to bypass authentication and access internal enterprise applications. The security company that discovered the flaw expressed the opinion that the incident highlights the risks of insufficient security considerations, which can put entire ecosystems at risk.

Read the original report:

Wiz Research Uncovers Critical Vulnerability in AI Vibe Coding platform Base44 Allowing Unauthorized Access to Private Applications

Featured Image by Shutterstock/mailcaroline

The Download: a 30-year-old baby, and OpenAI’s push into colleges

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Exclusive: A record-breaking baby has been born from an embryo that’s over 30 years old

A baby boy has just set a new record as the “oldest baby.” Thaddeus Daniel Pierce, who arrived on July 26, developed from an embryo that had been in storage for 30 and a half years.

Lindsey and her husband, Tim Pierce, who live in London, Ohio, “adopted” the embryo from Linda Archerd, who had it created in 1994. The couple, aged 35 and 34, respectively, had been trying for a baby for seven years. Read more about their remarkable story.

—Jessica Hamzelou

OpenAI is launching a version of ChatGPT for college students

OpenAI is launching Study Mode, a version of ChatGPT for college students that it promises will act less like a lookup tool and more like a friendly, always-available tutor. 

The chatbot begins by asking what the student wants to know and then attempts to build an exchange, where the pair work methodically toward the answer together. OpenAI says the tool was built after consulting with pedagogy experts from over 40 institutions.

But there’s an ambitious vision behind Study Mode: It’s part of a wider push by OpenAI to get AI more deeply embedded into classrooms when the new academic year starts in September. Read the full story.

—James O’Donnell

MIT Technology Review Narrated: Are we ready to hand AI agents the keys?

In recent months, a new class of agents has arrived on the scene: ones built using large language models. Any action that can be captured by text—from playing a video game using written commands to running a social media account—is potentially within the purview of this type of system.

LLM agents don’t have much of a track record yet, but to hear CEOs tell it, they will transform the economy—and soon. Despite that, like chatbot LLMs, agents can be chaotic and unpredictable.

This is our latest story to be turned into a MIT Technology Review Narrated podcast, which we publish each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The first tsunami waves have reached the US West Coast
But early damage from the powerful Russian earthquake has been thankfully limited. (WP $)
+ It’ll take some time before we can be confident there’s no danger, though. (WSJ $)
+ These underwater cables can improve tsunami detection. (MIT Technology Review)

2 Google has signed the EU code of practice
Despite criticisms from the US that it stands to stifle growth. (FT $)
+ Europe and America are taking very different paths. (The Register)

3 NASA is launching a new Earth-observing satellite today
It’ll keep a watch over precursors to earthquakes, landslides and volcanoes. (BBC)
+ Its data will be turned into maps to help scientists better respond. (NYT $)

4 US antibiotics research is likely to suffer without federal funding
It plays a critical role in antibiotic discovery. (Undark)
+ How bacteria-fighting viruses could go mainstream. (MIT Technology Review)

5 Russia is building its own new web
And at its heart is VK Co, a social network controlled by its government. (Bloomberg $)
+ How Russia killed its tech industry. (MIT Technology Review)

6 How Anthropic became so good at coding
Everyone else in Silicon Valley is dying to know. (Insider $)
+ The second wave of AI coding is here. (MIT Technology Review)

7 Demand for Vietnam’s chips is booming
It’s reaping the benefits of the world looking for alternatives to China’s products. (Rest of World)
+ Things aren’t looking great for AI chipmaker Groq. (The Information $)

8 Yelp has started making its own AI restaurant videos
And users can’t opt out of having their photos used in them. (The Verge)

9 Are memes the new comics?
If comics didn’t have a plot, that is. (Ars Technica)
+ Generative AI is reshaping South Korea’s webcomics industry. (MIT Technology Review)

10 Starbucks is abandoning launching stores that only accept mobile orders 📱☕
The vibes are off, apparently. (WSJ $)

Quote of the day

“Any lawyer unaware that using generative AI platforms to do legal research is playing with fire is living in a cloud.”

—Judge Michael Slade criticizes a lawyer who used AI-generated citations in a legal case, PC Gamer reports.

One more thing

The return of pneumatic tubes

Pneumatic tubes were once touted as something that would revolutionize the world. In science fiction, they were envisioned as a fundamental part of the future—even in dystopias like George Orwell’s 1984, where they help to deliver orders for the main character, Winston Smith, in his job rewriting history to fit the ruling party’s changing narrative.

In real life, the tubes were expected to transform several industries in the late 19th century through the mid-20th. For a while, the United States took up the systems with gusto.

But by the mid to late 20th century, use of the technology had largely fallen by the wayside, and pneumatic tube technology became virtually obsolete. Except in hospitals. Read the full story.

—Vanessa Armstrong

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ This sweet baby pudu fawn is just too cute for words.
+ There are some great picks in this list of the 100 best podcasts (and some shocking omissions).
+ The infamous gigantic Home Depot skeleton is getting a voice!
+ If you’re never not thinking about the Roman empire, here’s what happened after it all came crashing down.

Roundtables: Why It’s So Hard to Make Welfare AI Fair

Amsterdam tried using algorithms to fairly assess welfare applicants, but bias still crept in. Why did Amsterdam fail? And more important, can this ever be done right? Hear from MIT Technology Review editor Amanda Silverman, investigative reporter Eileen Guo, and Lighthouse Reports investigative reporter Gabriel Geiger as they explore if algorithms can ever be fair.

Speakers: Eileen Guo, features & investigations reporter; Amanda Silverman, features & investigations editor; and Gabriel Geiger, investigative reporter at Lighthouse Reports

Recorded on July 30, 2025

An EPA rule change threatens to gut US climate regulations

This story is part of MIT Technology Review’s “America Undone” series, examining how the foundations of US success in science and innovation are currently under threat. You can read the rest here.

The mechanism that allows the US federal government to regulate climate change is on the chopping block.

On Tuesday, US Environmental Protection Agency administrator Lee Zeldin announced that the agency is taking aim at the endangerment finding, a 2009 rule that’s essentially the tentpole supporting federal greenhouse-gas regulations.

This might sound like an obscure legal situation, but it’s a really big deal for climate policy in the US. So buckle up, and let’s look at what this rule says now, what the proposed change looks like, and what it all means.

To set the stage, we have to go back to the Clean Air Act of 1970, the law that essentially gave the EPA the power to regulate air pollution. (Stick with me—I promise I’ll keep this short and not get too into the legal weeds.)

There were some pollutants explicitly called out in this law and its amendments, including lead and sulfur dioxide. But it also required the EPA to regulate new pollutants that were found to be harmful. In the late 1990s and early 2000s, environmental groups and states started asking for the agency to include greenhouse-gas pollution.

In 2007, the Supreme Court ruled that greenhouse gases qualify as air pollutants under the Clean Air Act, and that the EPA should study whether they’re a danger to public health. In 2009, the incoming Obama administration looked at the science and ruled that greenhouse gases pose a threat to public health because they cause climate change. That’s the endangerment finding, and it’s what allows the agency to pass rules to regulate greenhouse gases.  

The original case and argument were specifically about vehicles and the emissions from tailpipes, but this finding was eventually used to allow the agency to set rules around power plants and factories, too. It essentially underpins climate regulations in the US.

Fast-forward to today, and the Trump administration wants to reverse the endangerment finding. In a proposed rule released on Tuesday, the EPA argues that the Clean Air Act does not, in fact, authorize the agency to set emissions standards to address global climate change. Zeldin, in an appearance on the conservative politics and humor podcast Ruthless that preceded the official announcement, called the proposal the “largest deregulatory action in the history of America.”

The administration was already moving to undermine the climate regulations that rely on this rule. But this move directly targets a “fundamental building block of EPA’s climate policy,” says Deborah Sivas, an environmental-law professor at Stanford University.

The proposed rule will go up for public comment, and the agency will then take that feedback and come up with a final version. It’ll almost certainly get hit with legal challenges and will likely wind up in front of the Supreme Court.

One note here is that the EPA makes a mostly legal argument in the proposed rule reversal rather than focusing on going after the science of climate change, says Madison Condon, an associate law professor at Boston University. That could make it easier for the Supreme Court to eventually uphold it, she says, though this whole process is going to take a while. 

If the endangerment finding goes down, it would have wide-reaching ripple effects. “We could find ourselves in a couple years with no legal tools to try and address climate change,” Sivas says.

To take a step back for a moment, it’s wild that we’ve ended up in this place where a single rule is so central to regulating emissions. US climate policy is held up by duct tape and a dream. Congress could have, at some point, passed a law that more directly allows the EPA to regulate greenhouse-gas emissions (the last time we got close was a 2009 bill that passed the House but never made it to the Senate). But here we are.

This move isn’t a surprise, exactly. The Trump administration has made it very clear that it is going after climate policy in every way that it can. But what’s most striking to me is that we’re not operating in a shared reality anymore when it comes to this subject. 

While top officials tend to acknowledge that climate change is real, there’s often a “but” followed by talking points from climate denial’s list of greatest hits. (One of the more ridiculous examples is the statement that carbon dioxide is good, actually, because it helps plants.) 

Climate change is real, and it’s a threat. And the US has emitted more greenhouse gases into the atmosphere than any other country in the world. It shouldn’t be controversial to expect the government to be doing something about it. 

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

The AI Hype Index: The White House’s war on “woke AI”

Separating AI reality from hyped-up fiction isn’t always easy. That’s why we’ve created the AI Hype Index—a simple, at-a-glance summary of everything you need to know about the state of the industry.

The Trump administration recently declared war on so-called “woke AI,” issuing an executive order aimed at preventing companies whose models exhibit a liberal bias from landing federal contracts. Simultaneously, the Pentagon inked a deal with Elon Musk’s xAI just days after its chatbot, Grok, spouted harmful antisemitic stereotypes on X, while the White House has partnered with an anti-DEI nonprofit to create AI slop videos of the Founding Fathers. What comes next is anyone’s guess.