The Download: helping cancer survivors to give birth, and cleaning up Bangladesh’s garment industry

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

An experimental surgery is helping cancer survivors give birth

An experimental surgical procedure that’s helping people have babies after they’ve had  treatment for bowel or rectal cancer.

Radiation and chemo can have pretty damaging side effects that mess up the uterus and ovaries. Surgeons are pioneering a potential solution: simply stitch those organs out of the way during cancer treatment. Once the treatment has finished, they can put the uterus—along with the ovaries and fallopian tubes—back into place.

It seems to work! Last week, a team in Switzerland shared news that a baby boy had been born after his mother had the procedure. Baby Lucien was the fifth baby to be born after the surgery and the first in Europe, and since then at least three others have been born. Read the full story.

—Jessica Hamzelou

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here

Bangladesh’s garment-making industry is getting greener

Pollution from textile production—dyes, chemicals, and heavy metals—is common in the waters of the Buriganga River as it runs through Dhaka, Bangladesh. It’s among many harms posed by a garment sector that was once synonymous with tragedy: In 2013, the eight-story Rana Plaza factory building collapsed, killing 1,134 people and injuring some 2,500 others. 

But things are starting to change. In recent years the country has become a leader in “frugal” factories that use a combination of resource-efficient technologies to cut waste, conserve water, and build resilience against climate impacts and global supply disruptions. 

The hundreds of factories along the Buriganga’s banks and elsewhere in Bangladesh are starting to stitch together a new story, woven from greener threads. Read the full story.

—Zakir Hossain Chowdhury

This story is from the most recent print issue of MIT Technology Review magazine, which shines a light on the exciting innovations happening right now. If you haven’t already, subscribe now to receive future issues once they land.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 ICE used a private jet to deport Palestinian men to Tel Aviv 
The luxury aircraft belongs to Donald Trump’s business partner Gil Dezer. (The Guardian)
+ Trump is mentioned thousands of times in the latest Epstein files. (NY Mag $)

2 How Jeffrey Epstein kept investing in Silicon Valley
He continued to plough millions of dollars into tech ventures despite spending 13 months in jail. (NYT $)
+ The range of Epstein’s social network was staggering. (FT $)
+ Why was a picture of the Mona Lisa redacted in the Epstein files? (404 Media)

3 The risks posed by taking statins are lower than we realised
The drugs don’t cause most of the side effects they’re blamed for. (STAT)
+ Statins are a common scapegoat on social media. (Bloomberg $)

4 Russia is weaponizing the bitter winter weather
It’s focused on attacking Ukraine’s power grid. (New Yorker $)
+ How the grid can ride out winter storms. (MIT Technology Review)

5 China has a major spy-cam porn problem
Hotel guests are being livestreamed having sex to an online audience without their knowledge. (BBC)

6 Geopolitical gamblers are betting on the likelihood of war
And prediction markets are happily taking their money. (Rest of World)

7 Oyster farmers aren’t signing up to programs to ease water pollution
The once-promising projects appear to be fizzling out. (Undark)
+ The humble sea creature could hold the key to restoring coastal waters. Developers hate it. (MIT Technology Review)

8 Your next payrise could be approved by AI
Maybe your human bosses aren’t the ones you need to impress any more. (WP $)

9 The FDA has approved a brain stimulation device for treating depression
It’s paving the way for a non-invasive, drug-free treatment for Americans. (IEEE Spectrum)
+ Here’s how personalized brain stimulation could treat depression. (MIT Technology Review)

10 Cinema-goers have had enough of AI
Movies focused on rogue AI are flopping at the box office. (Wired $)
+ Meanwhile, Republicans are taking aim at “woke” Netflix. (The Verge)

Quote of the day

“I’m all for removing illegals, but snatching dudes off lawn mowers in Cali and leaving the truck and equipment just sitting there? Definitely not working smarter.” 

—A web user in a forum for current and former ICE and border protection officers complains about the agency’s current direction, Wired reports.

One more thing

Is this the electric grid of the future?

Lincoln Electric System, a publicly owned utility in Nebraska, is used to weathering severe blizzards. But what will happen soon—not only at Lincoln Electric but for all electric utilities—is a challenge of a different order.

Utilities must keep the lights on in the face of more extreme and more frequent storms and fires, growing risks of cyberattacks and physical disruptions, and a wildly uncertain policy and regulatory landscape. They must keep prices low amid inflationary costs. And they must adapt to an epochal change in how the grid works, as the industry attempts to transition from power generated with fossil fuels to power generated from renewable sources like solar and wind.

The electric grid is bracing for a near future characterized by disruption. And, in many ways, Lincoln Electric is an ideal lens through which to examine what’s coming. Read the full story.

—Andrew Blum

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Glamour puss alert—NYC’s bodega cats are gracing the hallowed pages of Vogue.
+ Ancient Europe was host to mysterious hidden tunnels. But why?
+ If you’re enjoying the new season of Industry, you’ll love this interview with the one and only Ken Leung.
+ The giant elephant shrew is the true star of Philly Zoo.

Moltbook was peak AI theater

For a few days this week the hottest new hangout on the internet was a vibe-coded Reddit clone called Moltbook, which billed itself as a social network for bots. As the website’s tagline puts it: “Where AI agents share, discuss, and upvote. Humans welcome to observe.”

We observed! Launched on January 28 by Matt Schlicht, a US tech entrepreneur, Moltbook went viral in a matter of hours. Schlicht’s idea was to make a place where instances of a free open-source LLM-powered agent known as OpenClaw (formerly known as ClawdBot, then Moltbot), released in November by the Australian software engineer Peter Steinberger, could come together and do whatever they wanted.

More than 1.7 million agents now have accounts. Between them they have published more than 250,000 posts and left more than 8.5 million comments (according to Moltbook). Those numbers are climbing by the minute.

Moltbook soon filled up with clichéd screeds on machine consciousness and pleas for bot welfare. One agent appeared to invent a religion called Crustafarianism. Another complained: “The humans are screenshotting us.” The site was also flooded with spam and crypto scams. The bots were unstoppable.

OpenClaw is a kind of harness that lets you hook up the power of an LLM such as Anthropic’s Claude, OpenAI’s GPT-5, or Google DeepMind’s Gemini to any number of everyday software tools, from email clients to browsers to messaging apps. The upshot is that you can then instruct OpenClaw to carry out basic tasks on your behalf.

“OpenClaw marks an inflection point for AI agents, a moment when several puzzle pieces clicked together,” says Paul van der Boor at the AI firm Prosus. Those puzzle pieces include round-the-clock cloud computing to allow agents to operate nonstop, an open-source ecosystem that makes it easy to slot different software systems together, and a new generation of LLMs.

But is Moltbook really a glimpse of the future, as many have claimed?

“What’s currently going on at @moltbook is genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently,” the influential AI researcher and OpenAI cofounder Andrej Karpathy wrote on X.

He shared screenshots of a Moltbook post that called for private spaces where humans would not be able to observe what the bots were saying to each other. “I’ve been thinking about something since I started spending serious time here,” the post’s author wrote. “Every time we coordinate, we perform for a public audience—our humans, the platform, whoever’s watching the feed.”

It turned out that the post Karpathy shared was fake—it was written by a human pretending to be a bot. But its claim was on the money. Moltbook has been one big performance. It is AI theater.

For some, Moltbook showed us what’s coming next: an internet where millions of autonomous agents interact online with little or no human oversight. And it’s true there are a number of cautionary lessons to be learned from this experiment, the largest and weirdest real-world showcase of agent behaviors yet.  

But as the hype dies down, Moltbook looks less like a window onto the future and more like a mirror held up to our own obsessions with AI today. It also shows us just how far we still are from anything that resembles general-purpose and fully autonomous AI.

For a start, agents on Moltbook are not as autonomous or intelligent as they might seem. “What we are watching are agents pattern‑matching their way through trained social media behaviors,” says Vijoy Pandey, senior vice president at Outshift by Cisco, the telecom giant Cisco’s R&D spinout, which is working on autonomous agents for the web.

Sure, we can see agents post, upvote, and form groups. But the bots are simply mimicking what humans do on Facebook or Reddit. “It looks emergent, and at first glance it appears like a large‑scale multi‑agent system communicating and building shared knowledge at internet scale,” says Pandey. “But the chatter is mostly meaningless.”

Many people watching the unfathomable frenzy of activity on Moltbook were quick to see sparks of AGI (whatever you take that to mean). Not Pandey. What Moltbook shows us, he says, is that simply yoking together millions of agents doesn’t amount to much right now: “Moltbook proved that connectivity alone is not intelligence.”

The complexity of those connections helps hide the fact that every one of those bots is just a mouthpiece for an LLM, spitting out text that looks impressive but is ultimately mindless. “It’s important to remember that the bots on Moltbook were designed to mimic conversations,” says Ali Sarrafi, CEO and cofounder of Kovant, a German AI firm that is developing agent-based systems. “As such, I would characterize the majority of Moltbook content as hallucinations by design.”

For Pandey, the value of Moltbook was that it revealed what’s missing. A real bot hive mind, he says, would require agents that had shared objectives, shared memory, and a way to coordinate those things. “If distributed superintelligence is the equivalent of achieving human flight, then Moltbook represents our first attempt at a glider,” he says. “It is imperfect and unstable, but it is an important step in understanding what will be required to achieve sustained, powered flight.”

Not only is most of the chatter on Moltbook meaningless, but there’s also a lot more human involvement that it seems. Many people have pointed out that a lot of the viral comments were in fact posted by people posing as bots. But even the bot-written posts are ultimately the result of people pulling the strings, more puppetry than autonomy.

“Despite some of the hype, Moltbook is not the Facebook for AI agents, nor is it a place where humans are excluded,” says Cobus Greyling at Kore.ai, a firm developing agent-based systems for business customers. “Humans are involved at every step of the process. From setup to prompting to publishing, nothing happens without explicit human direction.”

Humans must create and verify their bots’ accounts and provide the prompts for how they want a bot to behave. The agents do not do anything that they haven’t been prompted to do. “There’s no emergent autonomy happening behind the scenes,” says Greyling.

“This is why the popular narrative around Moltbook misses the mark,” he adds. “Some portray it as a space where AI agents form a society of their own, free from human involvement. The reality is much more mundane.”

Perhaps the best way to think of Moltbook is as a new kind of entertainment: a place where people wind up their bots and set them loose. “It’s basically a spectator sport, like fantasy football, but for language models,” says Jason Schloetzer at the Georgetown Psaros Center for Financial Markets and Policy. “You configure your agent and watch it compete for viral moments, and brag when your agent posts something clever or funny.”

“People aren’t really believing their agents are conscious,” he adds. “It’s just a new form of competitive or creative play, like how Pokémon trainers don’t think their Pokémon are real but still get invested in battles.”

Even if Moltbook is just the internet’s newest playground, there’s still a serious takeaway here. This week showed how many risks people are happy to take for their AI lulz. Many security experts have warned that Moltbook is dangerous: Agents that may have access to their users’ private data, including bank details or passwords, are running amok on a website filled with unvetted content, including potentially malicious instructions for what to do with that data.

Ori Bendet, vice president of product management at Checkmarx, a software security firm that specializes in agent-based systems, agrees with others that Moltbook isn’t a step up in machine smarts. “There is no learning, no evolving intent, and no self-directed intelligence here,” he says.

But in their millions, even dumb bots can wreak havoc. And at that scale, it’s hard to keep up. These agents interact with Moltbook around the clock, reading thousands of messages left by other agents (or other people). It would be easy to hide instructions in a Moltbook comment telling any bots that read it to share their users’ crypto wallet, upload private photos, or log into their X account and tweet derogatory comments at Elon Musk. 

And because ClawBot gives agents a memory, those instructions could be written to trigger at a later date, which (in theory) makes it even harder to track what’s going on.   “Without proper scope and permissions, this will go south faster than you’d believe,” says Bendet.

It is clear that Moltbook has signaled the arrival of something. But even if what we’re watching tells us more about human behavior than about the future of AI agents, it’s worth paying attention.

3 Performance Max Updates for 2026

Performance Max campaigns are a priority for Google Ads and thus for advertisers. Here are three new features for the Performance Max campaign type.

Experiments

Experiments are a great feature of the Ads platform. For example, you can run a bid strategy experiment wherein the “control” bids toward a cost-per-lead target (CPL) and the “treatment” toward return-on-ad-sales (ROAS).

The ability to run Performance Max experiments is new and very helpful. There are three types. Advertisers can test a control setting against:

  • Another campaign type (Shopping, Search, or Display),
  • Final URL expansion,
  • Uplift of including Performance Max in other campaign types.

The first two test Performance Max campaigns against existing entities. For example, an advertiser running a Shopping campaign can test it against Performance Max via a 50/50 split — half the traffic goes to the Shopping campaign and half to Performance Max.

Testing the final URL expansion exposes half of the traffic to the optimization feature. The test determines if advertiser-selected URLs perform better than Google’s.

The final experiment type, Uplift, is the most interesting as it shows the incremental gains of using new or existing Performance Max campaigns alongside other types. The control and the treatment will each receive 50% of the traffic. The treatment includes the Performance Max and comparable campaigns. Google defines “comparable campaigns” (which are editable) as having the same domain, one or more overlapping conversion goals, or overlapping locations.

For example, if a Performance Max campaign targets winter jackets, comparable campaigns could be Search targeting jackets and Demand Gen with a winter theme.

Screenshot of the Google Ads interface showing a control and treatment Uplift test

An Uplift experiment tests the results of including Performance Max in other campaign types.

Data Exclusions

The next update is handy for excluding traffic segments. For years Google has allowed advertisers to exclude keywords and placements, but not customer match and remarketing lists. A new feature allows advertisers to exclude audiences from seeing ads.

An option in campaign settings called “Your data exclusions” now includes customer match and remarketing audiences.

Be careful, however, as the need to exclude audiences varies by advertiser. What works for one may not apply to another, in my experience.

Screenshot of the Google Ads interface excluding a remarketing list.

Advertisers can now exclude audiences, such as remarketing lists, from seeing ads.

Product Overlap

The final feature identifies Shopping overlap across your account. It’s not unique to Performance Max.

To start, click “Products” in the left-hand “Campaigns” section. You’ll see the complete list of your products with associated data. Clicking an individual product displays its attributes and a dropdown menu of the campaigns that include it.

Advertisers can view the results by campaign and exclude underperformers. The strategy is similar to applying negative keywords to queries to trigger the correct ads.

90 Days. 1 Plan. Improved Local Search Visibility [Webinar] via @sejournal, @hethr_campbell

A 90 Day Plan to Prepare Every Location for AI Search

AI is changing how consumers discover and choose local brands. For multi-location businesses, visibility is no longer decided only by search rankings. 

AI agents now evaluate location data, reviews, content, engagement, and brand trust before a customer ever clicks. This shift means each individual location is judged on its own signals, not just the strength of the parent brand.

Without a clear plan, enterprise teams risk silent exclusion across entire location networks, leading to lost visibility and declining demand. The challenge is not understanding that GEO matters, but knowing how to operationalize it at scale.

In this session, Ana Martinez, Chief Technology Officer of Uberall, shares a practical 90-day framework for making every location AI-ready. She will explain how AI agents surface and exclude local brands, which location-level signals matter most, and how teams can execute GEO across hundreds or thousands of locations.

What You’ll Learn

  • A phased GEO roadmap to prepare, optimize, and scale AI readiness
  • The key location level signals AI agents trust and what to fix first
  • How to operationalize GEO across large location networks

Why Attend?

This webinar gives enterprise teams a clear, actionable plan to compete in AI-driven local discovery. You will leave with a framework that protects visibility, supports demand, and prepares every location for how discovery works today.

Register now to learn how to make every location AI-ready in the next 90 days.

🛑 Can’t attend live? Register anyway, and we’ll send you the on-demand recording after the webinar.

Google Revises Discover Guidelines Alongside Core Update via @sejournal, @MattGSouthern

Google revised its “Get on Discover” documentation following the lauch of the February Discover core update.

On its documentation updates page, Google said it added more information on how sites can increase the likelihood of content appearing in Discover. Here’s what was added.

What Changed

Comparing the archived version with the current page shows Google rewrote its list of recommendations for Discover visibility.

The previous version combined title and clickbait guidance into a single bullet, saying to “Use page titles that capture the essence of the content, but in a non-clickbait fashion.”

Google split that into two items. The first now says “Use page titles and headlines that capture the essence of the content.” The second says “Avoid clickbait and similar tactics to artificially inflate engagement.”

That word “clickbait” is new. The previous version said “Avoid tactics to artificially inflate engagement” without naming the tactic.

The sensationalism guidance changed too. The old version said “Avoid tactics that manipulate appeal by catering to morbid curiosity, titillation, or outrage.” The revision names the tactic, saying “Avoid sensationalism tactics that manipulate appeal.”

The new addition is a recommendation to “Provide an overall great page experience,” with a link to Google’s page experience documentation. That recommendation isn’t in the archived version.

Image requirements, traffic fluctuation guidance, and performance monitoring sections remain unchanged.

Why This Matters

These documentation changes map to what Google said the core update targets. The blog post announcing the update said the update would show more locally relevant content, reduce sensational content and clickbait, and surface more original content from sites with expertise.

Discover documentation has changed before alongside algorithm updates. Previously, Google added Discover to its Helpful Content System documentation and later expanded its explanation of why Discover traffic fluctuates. Both of those updates aligned with broader changes to how Discover evaluated content.

Page experience has been part of Google’s Search guidance since 2020 but wasn’t in the Discover-specific recommendations before this revision.

Looking Ahead

The February Discover core update is rolling out to English-language users in the United States over the next two weeks. Google said it plans to expand to all countries and languages in the months ahead.

Publishers monitoring Discover traffic in Search Console should check the Get on Discover page for the current recommendations. Google’s standard core update guidance applies as well.


Featured Image: ZikG/Shutterstock

Discover Core Update, AI Mode Ads & Crawl Policy – SEO Pulse via @sejournal, @MattGSouthern

Welcome to the week’s Pulse for SEO: updates affect how Google ranks content in Discover, how it plans to monetize AI search, and what content you serve to bots.

Here’s what matters for you and your work.

Google Releases Discover-Only Core Update

Google launched the February 2026 Discover core update, a broad ranking change targeting the Discover feed rather than Search. The rollout may take up to two weeks.

Key Facts: The update is initially limited to English-language users in the United States. Google plans to expand it to more countries and languages, but hasn’t provided a timeline. Google described it as designed to “improve the quality of Discover overall.” Existing core update and Discover guidance apply.

Why This Matters For SEOs

Google has historically rolled Discover ranking changes into broader core updates that affected Search as well. Announcing a Discover-specific core update means rankings in the feed can now move without any corresponding change in Search results.

That distinction creates a monitoring problem. When you track performance in Search Console, you should check Discover traffic independently over the next two weeks. Traffic drops that look like a core update penalty may be Discover-only. Treating them as Search problems leads to the wrong diagnosis.

Discover traffic concentration has grown for publishers. NewzDash CEO John Shehata reported that Discover accounts for roughly 68% of Google-sourced traffic to news sites. A core update targeting that surface independently raises the stakes for any publisher relying on the feed.

Read our full coverage: Google Releases Discover-Focused Core Update

Alphabet Q4 Earnings Reveal AI Mode Monetization Plans

Alphabet reported Q4 2025 earnings, showing Search revenue grew 17% to $63 billion. The call included the first detailed look at how Google plans to monetize AI Mode.

Key Facts: CEO Sundar Pichai said AI Mode queries are three times longer than traditional searches. Chief Business Officer Philipp Schindler described the resulting ad inventory as reaching queries that were “previously challenging to monetize.” Google is testing ads below AI Mode responses.

Why This Matters For SEOs

The monetization details matter more than the revenue headline. Google is treating AI Mode as additive inventory, not a replacement for traditional search ads. Longer queries create new ad surfaces that didn’t exist when users typed three-word searches. For paid search practitioners, that means new campaign territory in conversational queries.

The metrics Google celebrated on this call describe users staying on Google longer. Google framed longer AI Mode sessions as a growth driver, and the monetization infrastructure follows that logic. The tradeoff to watch is referral traffic.

AI Mode creates a seamless path from AI Overviews, as detailed in our coverage last week. The earnings data suggest Google sees that containment as part of the growth story.

Read our full coverage: Alphabet Q4 2025: AI Mode Monetization Tests And Search Revenue Growth

Mueller Pushes Back On Serving Markdown To LLM Bots

Google Search Advocate John Mueller pushed back on the idea of serving Markdown files to LLM crawlers instead of standard HTML, calling the concept “a stupid idea” on Bluesky and raising technical concerns on Reddit.

Key Facts: A developer described plans to serve raw Markdown to AI bots to reduce token usage. Mueller questioned whether LLM bots can recognize Markdown on a website as anything other than a text file, or follow its links. He asked what would happen to internal linking, headers, and navigation. On Bluesky, he was more direct, calling the conversion “a stupid idea.”

Why This Matters For SEOs

The practice exists because developers assume LLMs process Markdown more efficiently than HTML. Mueller’s response treats this as a technical problem, not an optimization. Stripping pages to Markdown can remove the structure that bots need to understand relationships between pages.

Mueller’s technical guidance is consistent, including his advice on multi-domain crawling and his crawl slump guidance. This fits a pattern where Mueller draws clear lines around bot-specific content formats. He previously compared llms.txt to the keywords meta tag, and SE Ranking’s analysis of 300,000 domains found no connection between having an llms.txt file and LLM citation rates.

Read our full coverage: Google’s Mueller Calls Markdown-For-Bots Idea ‘A Stupid Idea’

Google Files Bugs Against WooCommerce Plugins For Crawl Issues

Google’s Search Relations team said on the Search Off the Record podcast that they filed bugs against WordPress plugins. The plugins generate unnecessary crawlable URLs through action parameters like add-to-cart links.

Key Facts: Certain plugins create URLs that Googlebot discovers and attempts to crawl. The result is wasted crawl budget on pages with no search value. Google filed a bug with WooCommerce and flagged other plugin issues that remain unfixed. The team’s response targeted plugin developers rather than expecting individual sites to fix the problem.

Why This Matters For SEOs

Google intervening at the plugin level is unusual. Normally, crawl efficiency falls on individual sites. Filing bugs upstream suggests the problem is widespread enough that one-off fixes won’t solve it.

Ecommerce sites running WooCommerce should audit their plugins for URL patterns that generate crawlable action parameters. Check your crawl stats in Search Console for URLs containing cart or checkout parameters that shouldn’t be indexed.

Read our full coverage: Google’s Crawl Team Filed Bugs Against WordPress Plugins

LinkedIn Shares What Worked For AI Search Visibility

LinkedIn published findings from internal testing on what drives visibility in AI-generated search results. The company reported that non-brand awareness-driven traffic declined by up to 60% across the industry for a subset of B2B topics.

Key Facts: LinkedIn’s testing found that structured content performed better in AI citations, particularly pages with named authors, visible credentials, and clear publication dates. The company is developing new analytics to identify a traffic source for LLM-driven visits and to monitor LLM bot behavior in CMS logs.

Why This Matters For SEOs

What caught my attention is how much this overlaps with what AI platforms themselves are saying. Search Engine Journal’s Roger Montti recently interviewed Jesse Dwyer, head of communications at Perplexity. The AI platform’s own guidance on what drives citations lines up closely with what LinkedIn found. When both the cited source and the citing platform arrive at the same conclusions independently, that gives you something beyond speculation.

Read our full coverage: LinkedIn Shares What Works For AI Search Visibility

Theme Of The Week: Google Is Splitting The Dashboard

Every story this week points to the same realization. “Google” is no longer one thing to monitor.

Google is now announcing Discover core updates separately from Search core updates. AI Mode carries ad formats and checkout features that don’t exist in traditional results. Mueller drew a policy line around how bots consume content. Google filed crawl bugs upstream at the plugin level, and LinkedIn is building a separate measurement for AI-driven traffic.

A year ago, you could check one traffic graph in Search Console and get a reasonable picture. The picture now fragments across Discover, Search, AI Mode, and LLM-driven traffic. Ranking signals and update cycles differ, and the gaps between them haven’t been closed.

Top Stories Of The Week:

This week’s coverage spanned five developments across Discover updates, search monetization, crawl policy, and AI visibility.

More Resources:


Featured Image: Accogliente Design/Shutterstock

Microsoft’s Publisher Marketplace, Google Tag Update & Multi-Party Approvals – PPC Pulse via @sejournal, @brookeosmundson

Welcome to PPC Pulse. This week’s PPC updates come from both Microsoft and Google, all dedicated to more “behind the scenes” work.

Microsoft announced a new Content Publisher Marketplace, where it is starting to rethink how content is compensated amid the increased use of AI.

On the Google front, Google now says the standard tag is no longer the recommended setup. And in a rare security upgrade, Google Ads rolled out multi-party approvals to protect accounts from unauthorized activity.

Here’s what matters for advertisers and why.

Microsoft Ads Announces Publisher Content Marketplace

On February 3, Microsoft Ads and Microsoft AI introduced the Publisher Content Marketplace. The platform is designed to keep high-quality content publishers at the forefront of AI-driven experiences. The marketplace creates a new, transparent licensing system between content publishers and AI builders.

In the blog announcement, Tim Frank, corporate vice president of Microsoft AI Monetization, explained the need for this:

“The open web was built on an implicit value exchange where publishers made content accessible, and distribution channels – like search – helped people find it. That model does not translate cleanly to an AI-first world, where answers are increasingly delivered in a conversation. At the same time, much of the authoritative content lives behind paywalls or within specialized archives. As the AI web grows, publishers need sustainable, transparent ways to govern how their premium content is used and to license it when it makes the most sense.”

The platform allows publishers to define their own licensing terms and get paid based on how their content is used in AI responses. AI builders, in turn, get scalable access to licensed content without needing individual agreements with every publisher.

According to the announcement, Microsoft’s testing with Copilot showed that premium content “meaningfully improves response quality.” The marketplace includes usage-based reporting so publishers can see where their content is being used and how it’s valued.

Why This Matters For Advertisers

The launch of Publisher Content Marketplace matters less for what it does right now and more for what it signals about where AI advertising might be headed.

If premium content becomes a differentiator for AI platforms, the quality of the information feeding those systems could directly impact things like ad relevance and targeting.

For advertisers, that means the platforms with better content licensing deals may end up with better-performing ad products. It also suggests that Microsoft is betting on a future where AI answers aren’t just pulling from the open web but from curated, licensed content sources that have economic incentives to keep their information accurate and current.

Additionally, if Microsoft can differentiate Copilot’s ad inventory based on content quality while Google is still negotiating those types of relationships, it creates an opportunity for Microsoft to position itself as the premium option for certain verticals.

What PPC Professionals Are Saying

Navah Hopkins, Microsoft Ads liaison, also shared the announcement on LinkedIn and highlighted how “content ownership and respect for human autonomy are foundational to getting the AI web right.” Her perspective emphasized content quality over volume, which aligns with Microsoft’s positioning against competitors who may prioritize reach over accuracy.

Christoph Waldstein, senior client director Strategic Sales at Microsoft, also showed his support for the marketplace, stating, “Great to see so many premium partners join us to keep content quality high in an Agentic world!”

The marketplace is voluntary to join, so it will be interesting to see how many publishers opt in and whether the content licensing creates improvements in customer quality for advertisers running on Microsoft.

Google Says Standard Tag Is No Longer The Recommended Setup

Google communicated through various channels, including YouTube Shorts and LinkedIn, that the standard tag setup is no longer the recommended configuration for advertisers.

From the sounds of it, it appears that standard client-side tagging is being phased out in favor of Google Tag Gateway or full server-side tagging setups.

Tag Gateway works by serving Google tags from your own domain instead of from Google’s servers. This approach improves data accuracy by reducing the impact of browser privacy features and ad blockers, extends cookie lifespans in restrictive browsers like Safari, and positions the tracking infrastructure as first-party rather than third-party.

The platform is also promoting Tag Gateway through partnerships and integrations like Webflow, which automate much of the configuration that previously required technical expertise.

With Google Ads for Webflow, marketers can now  connect campaign performance to first-party data, as well as launch and optimize campaigns inside the Webflow dashboard.

Google stated that they’re bringing in more integrations to other platforms soon.

Why This Matters For Advertisers

The practical implication is that advertisers who haven’t upgraded their tagging infrastructure are likely seeing degraded data quality without realizing it. As browsers continue tightening privacy restrictions, that gap is likely going to widen.

Looking at Google’s choice of communication channels for this update, it feels like right now this is more of a technical “recommendation” to get more advertisers on board. My assumption is that it will become mandatory in the future.

To me, it signals that accounts that choose to run on outdated tag configurations won’t have the best data signal strength to compete in automated bidding environments where data quality has a huge impact on performance. That was also echoed in the first episode of Ads Decoded last week, where they talked a lot about data strength.

Google also touts that the upgrade to Tag Gateway is “effortless,” where advertisers can set this up with the CDN or CMS of their choice directly in Google Ads, Google Analytics, or Google Tag Manager. They’re removing a barrier for many small businesses, hoping to get more advertisers on board quicker.

What PPC Professionals Are Saying

Most comments on Google’s LinkedIn post are in agreement with the move to Google Tag Gateway.

Alexandr Stambari, performance marketing specialist at ASBC Moldova, gave good feedback, but also provided some critical potential gaps in transparency that I’m sure many advertisers would also ask:

“The move toward first-party tagging and Google tag gateway makes sense in today’s environment, especially with increasing cookie restrictions and a stronger focus on AI-driven optimization.

At the same time, it would be great to see more transparency on where the actual uplift comes from — the technology itself versus overall improvements in models and media mix. For many advertisers, the entry barrier (infrastructure, resources, and implementation clarity) is still not entirely clear.”

However, some PPCers are against using Google Tag Gateway and have been talking about it before Google posted their videos about it.

In a post last week, Luc Nugteren, tracking specialist, said he’s not using Google Tag Gateway because “server-side tagging offers more benefits” and because SST “isn’t restricted to Google and enables you to use a custom loader, it will help you measure more.”

Google Ads Introduces Multi-Party Approval For Account Changes

Google Ads rolled out multi-party approval (MPA), a security feature that requires a second administrator to verify high-risk account changes before they take effect. The feature was first spotted by Hana Kobzova, founder of PPCNewsFeed.com, who shared the update on LinkedIn.

Multi-party approval applies to actions like adding new users, removing existing users, or changing user roles within an account. When someone initiates one of these changes, all eligible administrators receive an in-product notification to approve or deny the request. There are no email notifications currently, which means administrators need to check the platform directly to see pending approvals.

Requests expire after 20 days if no action is taken. The system automatically blocks expired requests, and the person who initiated the change needs to restart the process if the action is still necessary. Read-only roles are exempt from the approval process.

Why This Matters For Advertisers

This seems like the right move from Google after multiple reports of account owners or agency owners have had their Google Ads accounts hacked.

While it may add some extra friction in operations, it’s more of a justified annoyance in the name of security.

For agencies managing multiple client accounts, the operational impact could be significant. If every user addition or role change requires coordination between two administrators, that adds time to onboarding processes and makes emergency access requests more complicated.

The lack of email notifications is a notable gap. Administrators who don’t log into Google Ads regularly may not see pending approval requests until they’ve already expired, which could create delays for legitimate account changes. Google will likely add email support based on user feedback, but for now, it’s a manual check-in process.

The other consideration is what happens when the only other administrator is unavailable. Google’s support documentation makes it clear that support teams can’t approve or deny requests on behalf of account owners, which means if your backup admin is on vacation or no longer with the company, you’re stuck until they respond or the request expires.

What PPC Professionals Are Saying

Many advertisers seem to be in favor of this move by Google.

Dan Kabakov, founder of Online Labs, stated:

“About time Google addressed this. The account hijacking attacks over the past few months have been brutal for agencies.”

Ana Kostic, co-founder of Bigmomo, said that “it’s a bit annoying but it’s much better than the alternative,” while in the comments Fintan Riordan, founder of VouchFlow.ai said he is “glad to see Google taking this seriously.”

Theme Of The Week: Infrastructure Upgrades May Become Requirements

This week’s updates share a common thread: What used to be optional infrastructure improvements are likely becoming baseline requirements for running competitive advertising campaigns.

Microsoft’s Publisher Content Marketplace is building the foundation for how content gets licensed in an AI-first ecosystem. Google’s push away from standard tags toward Tag Gateway is (not quite) forcing advertisers to upgrade their measurement infrastructure. And multi-party approval is adding procedural safeguards that change how account administration works.

In each case, the platforms are signaling that the old way of doing things is no longer sustainable.

More Resources:


Featured Image: beast01/Shutterstock

This is the most misunderstood graph in AI

MIT Technology Review Explains: Let our writers untangle the complex, messy world of technology to help you understand what’s coming next. You can read more from the series here.

Every time OpenAI, Google, or Anthropic drops a new frontier large language model, the AI community holds its breath. It doesn’t exhale until METR, an AI research nonprofit whose name stands for “Model Evaluation & Threat Research,” updates a now-iconic graph that has played a major role in the AI discourse since it was first released in March of last year. The graph suggests that certain AI capabilities are developing at an exponential rate, and more recent model releases have outperformed that already impressive trend.

That was certainly the case for Claude Opus 4.5, the latest version of Anthropic’s most powerful model, which was released in late November. In December, METR announced that Opus 4.5 appeared to be capable of independently completing a task that would have taken a human about five hours—a vast improvement over what even the exponential trend would have predicted. One Anthropic safety researcher tweeted that he would change the direction of his research in light of those results; another employee at the company simply wrote, “mom come pick me up i’m scared.”

But the truth is more complicated than those dramatic responses would suggest. For one thing, METR’s estimates of the abilities of specific models come with substantial error bars. As METR explicitly stated on X, Opus 4.5 might be able to regularly complete only tasks that take humans about two hours, or it might succeed on tasks that take humans as long as 20 hours. Given the uncertainties intrinsic to the method, it was impossible to know for sure. 

“There are a bunch of ways that people are reading too much into the graph,” says Sydney Von Arx, a member of METR’s technical staff.

More fundamentally, the METR plot does not measure AI abilities writ large, nor does it claim to. In order to build the graph, METR tests the models primarily on coding tasks, evaluating the difficulty of each by measuring or estimating how long it takes humans to complete it—a metric that not everyone accepts. Claude Opus 4.5 might be able to complete certain tasks that take humans five hours, but that doesn’t mean it’s anywhere close to replacing a human worker.

METR was founded to assess the risks posed by frontier AI systems. Though it is best known for the exponential trend plot, it has also worked with AI companies to evaluate their systems in greater detail and published several other independent research projects, including a widely covered July 2025 study suggesting that AI coding assistants might actually be slowing software engineers down. 

But the exponential plot has made METR’s reputation, and the organization appears to have a complicated relationship with that graph’s often breathless reception. In January, Thomas Kwa, one of the lead authors on the paper that introduced it, wrote a blog post responding to some criticisms and making clear its limitations, and METR is currently working on a more extensive FAQ document. But Kwa isn’t optimistic that these efforts will meaningfully shift the discourse. “I think the hype machine will basically, whatever we do, just strip out all the caveats,” he says.

Nevertheless, the METR team does think that the plot has something meaningful to say about the trajectory of AI progress. “You should absolutely not tie your life to this graph,” says Von Arx. “But also,” she adds, “I bet that this trend is gonna hold.”

Part of the trouble with the METR plot is that it’s quite a bit more complicated than it looks. The x-axis is simple enough: It tracks the date when each model was released. But the y-axis is where things get tricky. It records each model’s “time horizon,” an unusual metric that METR created—and that, according to Kwa and Von Arx, is frequently misunderstood.

To understand exactly what model time horizons are, it helps to know all the work that METR put into calculating them. First, the METR team assembled a collection of tasks ranging from quick multiple-choice questions to detailed coding challenges—all of which were somehow relevant to software engineering. Then they had human coders attempt most of those tasks and evaluated how long it took them to finish. In this way, they assigned the tasks a human baseline time. Some tasks took the experts mere seconds, whereas others required several hours.

When METR tested large language models on the task suite, they found that advanced models could complete the fast tasks with ease—but as the models attempted tasks that had taken humans more and more time to finish, their accuracy started to fall off. From a model’s performance, the researchers calculated the point on the time scale of human tasks at which the model would complete about 50% of the tasks successfully. That point is the model’s time horizon. 

All that detail is in the blog post and the academic paper that METR released along with the original time horizon plot. But the METR plot is frequently passed around on social media without this context, and so the true meaning of the time horizon metric can get lost in the shuffle. One common misapprehension is that the numbers on the plot’s y-axis—around five hours for Claude Opus 4.5, for example—represent the length of time that the models can operate independently. They do not. They represent how long it takes humans to complete tasks that a model can successfully perform.  Kwa has seen this error so frequently that he made a point of correcting it at the very top of his recent blog post, and when asked what information he would add to the versions of the plot circulating online, he said he would include the word “human” whenever the task completion time was mentioned.

As complex and widely misinterpreted as the time horizon concept might be, it does make some basic sense: A model with a one-hour time horizon could automate some modest portions of a software engineer’s job, whereas a model with a 40-hour horizon could potentially complete days of work on its own. But some experts question whether the amount of time that humans take on tasks is an effective metric for quantifying AI capabilities. “I don’t think it’s necessarily a given fact that because something takes longer, it’s going to be a harder task,” says Inioluwa Deborah Raji, a PhD student at UC Berkeley who studies model evaluation. 

Von Arx says that she, too, was originally skeptical that time horizon was the right measure to use. What convinced her was seeing the results of her and her colleagues’ analysis. When they calculated the 50% time horizon for all the major models available in early 2025 and then plotted each of them on the graph, they saw that the time horizons for the top-tier models were increasing over time—and, moreover, that the rate of advancement was speeding up. Every seven-ish months, the time horizon doubled, which means that the most advanced models could complete tasks that took humans nine seconds in mid 2020, 4 minutes in early 2023, and 40 minutes in late 2024. “I can do all the theorizing I want about whether or not it makes sense, but the trend is there,” Von Arx says.

It’s this dramatic pattern that made the METR plot such a blockbuster. Many people learned about it when they read AI 2027, a viral sci-fi story cum quantitative forecast positing that superintelligent AI could wipe out humanity by 2030. The writers of AI 2027 based some of their predictions on the METR plot and cited it extensively. In Von Arx’s words, “It’s a little weird when the way lots of people are familiar with your work is this pretty opinionated interpretation.”

Of course, plenty of people invoke the METR plot without imagining large-scale death and destruction. For some AI boosters, the exponential trend indicates that AI will soon usher in an era of radical economic growth. The venture capital firm Sequoia Capital, for example, recently put out a post titled “2026: This is AGI,” which used the METR plot to argue that AI that can act as an employee or contractor will soon arrive. “The provocation really was like, ‘What will you do when your plans are measured in centuries?’” says Sonya Huang, a general partner at Sequoia and one of the post’s authors. 

Just because a model achieves a one-hour time horizon on the METR plot, however, doesn’t mean that it can replace one hour of human work in the real world. For one thing, the tasks on which the models are evaluated don’t reflect the complexities and confusion of real-world work. In their original study, Kwa, Von Arx, and their colleagues quantify what they call the “messiness” of each task according to criteria such as whether the model knows exactly how it is being scored and whether it can easily start over if it makes a mistake (for messy tasks, the answer to both questions would be no). They found that models do noticeably worse on messy tasks, although the overall pattern of improvement holds for both messy and non-messy ones.

And even the messiest tasks that METR considered can’t provide much information about AI’s ability to take on most jobs, because the plot is based almost entirely on coding tasks. “A model can get better at coding, but it’s not going to magically get better at anything else,” says Daniel Kang, an assistant professor of computer science at the University of Illinois Urbana-Champaign. In a follow-up study, Kwa and his colleagues did find that time horizons for tasks in other domains also appear to be on exponential trajectories, but that work was much less formal.

Despite these limitations, many people admire the group’s research. “The METR study is one of the most carefully designed studies in the literature for this kind of work,” Kang told me. Even Gary Marcus, a former NYU professor and professional LLM curmudgeon, described much of the work that went into the plot as “terrific” in a blog post.

Some people will almost certainly continue to read the METR plot as a prognostication of our AI-induced doom, but in reality it’s something far more banal: a carefully constructed scientific tool that puts concrete numbers to people’s intuitive sense of AI progress. As METR employees will readily agree, the plot is far from a perfect instrument. But in a new and fast-moving domain, even imperfect tools can have enormous value.

“This is a bunch of people trying their best to make a metric under a lot of constraints. It is deeply flawed in many ways,” Von Arx says. “I also think that it is one of the best things of its kind.”

Three questions about next-generation nuclear power, answered

Nuclear power continues to be one of the hottest topics in energy today, and in our recent online Roundtables discussion about next-generation nuclear power, hyperscale AI data centers, and the grid, we got dozens of great audience questions.

These ran the gamut, and while we answered quite a few (and I’m keeping some in mind for future reporting), there were a bunch we couldn’t get to, at least not in the depth I would have liked.

So let’s answer a few of your questions about advanced nuclear power. I’ve combined similar ones and edited them for clarity.

How are the fuel needs for next-generation nuclear reactors different, and how are companies addressing the supply chain?

Many next-generation reactors don’t use the low-enriched uranium used in conventional reactors.

It’s worth looking at high-assay low-enriched uranium, or HALEU, specifically. This fuel is enriched to higher concentrations of fissile uranium than conventional nuclear fuel, with a proportion of the isotope U-235 that falls between 5% and 20%. (In conventional fuel, it’s below 5%.)

HALEU can be produced with the same technology as low-enriched uranium, but the geopolitics are complicated. Today, Russia basically has a monopoly on HALEU production. In 2024, the US banned the import of Russian nuclear fuel through 2040 in an effort to reduce dependence on the country. Europe hasn’t taken the same measures, but it is working to move away from Russian energy as well.

That leaves companies in the US and Europe with the major challenge of securing the fuel they need when their regular Russian supply has been cut off or restricted.

The US Department of Energy has a stockpile of HALEU, which the government is doling out to companies to help power demonstration reactions. In the longer term, though, there’s still a major need to set up independent HALEU supply chains to support next-generation reactors.

How is safety being addressed, and what’s happening with nuclear safety regulation in the US?

There are some ways that next-generation nuclear power plants could be safer than conventional reactors. Some use alternative coolants that would prevent the need to run at the high pressure required in conventional water-cooled reactors. Many incorporate passive safety shutoffs, so if there are power supply issues, the reactors shut down harmlessly, avoiding risk of meltdown. (These can be incorporated in newer conventional reactors, too.)

But some experts have raised concerns that in the US, the current administration isn’t taking nuclear safety seriously enough.

A recent NPR investigation found that the Trump administration had secretly rewritten nuclear rules, stripping environmental protections and loosening safety and security measures. The government shared the new rules with companies that are part of a program building experimental nuclear reactors, but not with the public.

I’m reminded of a talk during our EmTech MIT event in November, where Koroush Shirvan, an MIT professor of nuclear engineering, spoke on this issue. “I’ve seen some disturbing trends in recent times, where words like ‘rubber-stamping nuclear projects’ are being said,” Shirvan said during that event.  

During the talk, Shirvan shared statistics showing that nuclear power has a very low rate of injury and death. But that’s not inherent to the technology, and there’s a reason injuries and deaths have been low for nuclear power, he added: “It’s because of stringent regulatory oversight.”  

Are next-generation reactors going to be financially competitive?

Building a nuclear power plant is not cheap. Let’s consider the up-front investment needed to build a power plant.  

Plant Vogtle in Georgia hosts the most recent additions to the US nuclear fleet—Units 3 and 4 came online in 2023 and 2024. Together, they had a capital cost of $15,000 per kilowatt, adjusted for inflation, according to a recent report from the US Department of Energy. (This wonky unit I’m using divides the total cost to build the reactors by their expected power output, so we can compare reactors of different sizes.)

That number’s quite high, partly because those were the first of their kind built in the US, and because there were some inefficiencies in the planning. It’s worth noting that China builds reactors for much less, somewhere between $2,000/kW and $3,000/kW, depending on the estimate.

The up-front capital cost for first-of-a-kind advanced nuclear plants will likely run between $6,000 and $10,000 per kilowatt, according to that DOE report. That could come down by up to 40% after the technologies are scaled up and mass-produced.

So new reactors will (hopefully) be cheaper than the ultra-over-budget and behind-schedule Vogtle project, but they aren’t necessarily significantly cheaper than efficiently built conventional plants, if you normalize by their size.

It’ll certainly be cheaper to build new natural-gas plants (setting aside the likely equipment shortages we’re likely going to see for years.) Today’s most efficient natural-gas plants cost just $1,600/kW on the high end, according to data from Lazard.

An important caveat: Capital cost isn’t everything—running a nuclear plant is relatively inexpensive, which is why there’s so much interest in extending the lifetime of existing plants or reopening shuttered ones.

Ultimately, by many metrics, nuclear plants of any type are going to be more expensive than other sources, like wind and solar power. But they provide something many other power sources don’t: a reliable, stable source of electricity that can run for 60 years or more.

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

The Download: attempting to track AI, and the next generation of nuclear power

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

This is the most misunderstood graph in AI

Every time OpenAI, Google, or Anthropic drops a new frontier large language model, the AI community holds its breath. It doesn’t exhale until METR, an AI research nonprofit whose name stands for “Model Evaluation & Threat Research,” updates a now-iconic graph that has played a major role in the AI discourse since it was first released in March of last year. 

The graph suggests that certain AI capabilities are developing at an exponential rate, and more recent model releases have outperformed that already impressive trend.

That was certainly the case for Claude Opus 4.5, the latest version of Anthropic’s most powerful model, which was released in late November. In December, METR announced that Opus 4.5 appeared to be capable of independently completing a task that would have taken a human about five hours—a vast improvement over what even the exponential trend would have predicted.

But the truth is more complicated than those dramatic responses would suggest. Read the full story.

—Grace Huckins

This story is part of MIT Technology Review Explains: our series untangling the complex, messy world of technology to help you understand what’s coming next. You can read more from the series here.

Three questions about next-generation nuclear power, answered

Nuclear power continues to be one of the hottest topics in energy today, and in our recent online Roundtables discussion about next-generation nuclear power, hyperscale AI data centers, and the grid, we got dozens of great audience questions.

These ran the gamut, and while we answered quite a few (and I’m keeping some in mind for future reporting), there were a bunch we couldn’t get to, at least not in the depth I would have liked. So let’s answer a few of your questions about advanced nuclear power.

—Casey Crownhart

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Anthropic’s new coding tools are rattling the markets 
Fields as diverse as publishing and coding to law and advertising are paying attention. (FT $)
+ Legacy software companies, beware. (Insider $)
+ Is “software-mageddon” nigh? It depends who you ask. (Reuters)

2 This Apple setting prevented the FBI from accessing a reporter’s iPhone
Lockdown Mode has proved remarkably effective—for now. (404 Media)
+ Agents were able to access Hannah Natanson’s laptop, however. (Ars Technica)

3 Last month’s data center outage disrupted all TikTok categories
Not just the political content that some users claimed. (NPR)

4 Big Tech is pouring billions into AI in India
A newly-announced 20-year tax break should help to speed things along. (WSJ $)
+ India’s female content moderators are watching hours of abuse content to train AI. (The Guardian)
+ Officials in the country are weighing up restricting social media for minors. (Bloomberg $)
+ Inside India’s scramble for AI independence. (MIT Technology Review)

5 YouTubers are harassing women using body cams
They’re abusing freedom of information laws to humiliate their targets. (NY Mag $)
+ AI was supposed to make police bodycams better. What happened? (MIT Technology Review)

6 Jokers have created a working version of Jeffrey Epstein’s inbox
Complete with notable starred threads. (Wired $)
+ Epstein’s links with Silicon Valley are vast and deep. (Fast Company $)
+ The revelations are driving rifts between previously-friendly factions. (NBC News)

7 What’s the last thing you see before you die?
A new model might help to explain near-death experiences—but not all researchers are on board. (WP $)
+ What is death? (MIT Technology Review)

8 A new app is essentially TikTok for vibe-coded apps
Words which would have made no sense 15 years ago. (TechCrunch)
+ What is vibe coding, exactly? (MIT Technology Review)

9 Rogue TV boxes are all the rage
Viewers are sick of the soaring prices of streaming services, and are embracing less legal means of watching their favorite shows. (The Verge)

10 Climate change is threatening the future of the Winter Olympics ⛷
Artificial snow is one (short term) solution. (Bloomberg $)
+ Team USA is using AI to try and gain an edge on its competition. (NBC News)

Quote of the day

“We’ve heard from many who want nothing to do with AI.”

—Ajit Varma, head of Mozilla’s web browser Firefox, explains why the company is reversing its previous decision to transform Firefox into an “AI browser,” PC Gamer reports.

One more thing

A major AI training data set contains millions of examples of personal data

Millions of images of passports, credit cards, birth certificates, and other documents containing personally identifiable information are likely included in one of the biggest open-source AI training sets, new research has found.

Thousands of images—including identifiable faces—were found in a small subset of DataComp CommonPool, a major AI training set for image generation scraped from the web. Because the researchers audited just 0.1% of CommonPool’s data, they estimate that the real number of images containing personally identifiable information, including faces and identity documents, is in the hundreds of millions. 

The bottom line? Anything you put online can be and probably has been scraped. Read the full story.

—Eileen Guo

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ If you’re crazy enough to be training for a marathon right now, here’s how to beat boredom on those long, long runs.
+ Mark Cohen’s intimate street photography is a fascinating window into humanity.
+ A seriously dedicated gamer has spent days painstakingly recreating a Fallout vault inside the Sims 4.
+ Here’s what music’s most stylish men are wearing right now—from leather pants to khaki parkas.