The Download: chatbots for health, and US fights over AI regulation

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

“Dr. Google” had its issues. Can ChatGPT Health do better?  

For the past two decades, there’s been a clear first step for anyone who starts experiencing new medical symptoms: Look them up online. The practice was so common that it gained the pejorative moniker “Dr. Google.” But times are changing, and many medical-information seekers are now using LLMs. According to OpenAI, 230 million people ask ChatGPT health-related queries each week.  

That’s the context around the launch of OpenAI’s new ChatGPT Health product, which debuted earlier this month. The big question: can the obvious risks of using AI for health-related queries be mitigated enough for tools like this to be a net benefit? Read the full story.

—Grace Huckins

America’s coming war over AI regulation  

In the final weeks of 2025, the battle over regulating artificial intelligence in the US reached a boiling point. On December 11, after Congress failed twice to pass a law banning state AI laws, President Donald Trump signed a sweeping executive order seeking to handcuff states from regulating the booming industry.  

Instead, he vowed to work with Congress to establish a “minimally burdensome” national AI policy. The move marked a victory for tech titans, who have been marshaling multimillion-dollar war chests to oppose AI regulations, arguing that a patchwork of state laws would stifle innovation.

In 2026, the battleground will shift to the courts. While some states might back down from passing AI laws, others will charge ahead. Read our story about what’s on the horizon.

—Michelle Kim

This story is from MIT Technology Review’s What’s Next series of stories that look across industries, trends, and technologies to give you a first look at the future. You can read the rest of them here.  

Measles is surging in the US. Wastewater tracking could help.

This week marked a rather unpleasant anniversary: It’s a year since Texas reported a case of measles—the start of a significant outbreak that ended up spreading across multiple states. Since the start of January 2025, there have been over 2,500 confirmed cases of measles in the US. Three people have died. 

As vaccination rates drop and outbreaks continue, scientists have been experimenting with new ways to quickly identify new cases and prevent the disease from spreading. And they are starting to see some success with wastewater surveillance. Read the full story.

—Jessica Hamzelou 

This story is from The Checkup, our weekly newsletter giving you the inside track on all things health and biotech. Sign up to receive it in your inbox every Thursday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The US is dismantling itself
A foreign enemy could not invent a better chain of events to wreck America’s standing in the world. (Wired $)  
+ We need to talk about whether Donald Trump might be losing it.  (New Yorker $)

2 Big Tech is taking on more debt to fund its AI aspirations
And the bubble just keeps growing. (WP $)
+ Forget unicorns. 2026 is shaping up to be the year of the “hectocorn.” (The Guardian)
+ Everyone in tech agrees we’re in a bubble. They just can’t agree on what happens when it pops. (MIT Technology Review)

3 DOGE accessed even more personal data than we thought 
Even now, the Trump administration still can’t say how much data is at risk, or what it was used for. (NPR)

4 TikTok has finalized a deal to create a new US entity 
Ending years of uncertainty about its fate in America. (CNN)
+ Why China is the big winner out of all of this. (FT $)

5 The US is now officially out of the World Health Organization 
And it’s leaving behind nearly $300 million in unpaid bills. (Ars Technica)
+ The US withdrawal from the WHO will hurt us all. (MIT Technology Review)

6 AI-powered disinformation swarms pose a threat to democracy
A would-be autocrat could use them to persuade populations to accept cancelled elections or overturned results. (The Guardian)
+ The era of AI persuasion in elections is about to begin. (MIT Technology Review)

7 We’re about to start seeing more robots everywhere
But exactly what they’ll look like remains up for debate. (Vox $)
+ Chinese companies are starting to dominate entire sectors of AI and robotics. (MIT Technology Review)

8 Some people seem to be especially vulnerable to loneliness
If you’re ‘other-directed’, you could particularly benefit from less screentime. (New Scientist $)

9 This academic lost two years of work with a single click
TL;DR: Don’t rely on ChatGPT to store your data. (Nature)

10 How animals develop a sense of direction 🦇🧭
Their ‘internal compass’ seems to be informed by landmarks that help them form a mental map. (Quanta $)

Quote of the day

“The rate at which AI is progressing, I think we have AI that is smarter than any human this year, and no later than next year.”

—Elon Musk simply cannot resist the urge to make wild predictions at Davos, Wired reports. 

One more thing


Africa fights rising hunger by looking to foods of the past

After falling steadily for decades, the prevalence of global hunger is now on the rise—nowhere more so than in sub-Saharan Africa. 

Africa’s indigenous crops are often more nutritious and better suited to the hot and dry conditions that are becoming more prevalent, yet many have been neglected by science, which means they tend to be more vulnerable to diseases and pests and yield well below their theoretical potential.

Now the question is whether researchers, governments, and farmers can work together in a way that gets these crops onto plates and provides Africans from all walks of life with the energy and nutrition that they need to thrive, whatever climate change throws their way. Read the full story.

—Jonathan W. Rosen

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ The only thing I fancy dry this January is a martini. Here’s how to make one.
+ If you absolutely adore the Bic crystal pen, you might want this lamp.
+ Cozy up with a nice long book this winter. ($)
+ Want to eat healthier? Slow down and tune out food ‘noise’. ($)

M&A Advisor on Ecommerce Valuations

Frank Kosarek is the co-founder of BizPort, a mergers-and-acquisitions marketplace launched in November 2025. Before that, he was head of acquisitions for a large ecommerce aggregator.

He says buyers of ecommerce businesses today focus on discretionary earnings, not revenue, and seek recurring sales, such as subscriptions.

He addressed those items, the state of ecommerce M&A, and more in our recent conversation.

The full audio of our conversation is embedded below. The transcript is edited for length and clarity.

Eric Bandholz: Who are you, and what do you do?

Frank Kosarek: I’m the co-founder of BizPort, a marketplace that helps founders exit their companies. I lead BizPort’s ecommerce division, connecting buyers and sellers. Before BizPort, I was the head of mergers and acquisitions at OpenStore, an aggregator in Miami, where I acquired about 50 Shopify brands. That experience exposed me to ecommerce transactions and what founders should and shouldn’t do when preparing to sell their businesses.

One of the most important concepts in exits is the seller’s discretionary earnings. It’s the foundation of most ecommerce valuations. SDE starts with a company’s annual net income (what’s on the tax return), then adds back the owner’s salary and benefits, and any one-time or non-recurring expenses.

For example, if a business earns $250,000 in net income, the founder pays herself $100,000, has $40,000 in benefits, and incurs a one-time $10,000 legal expense, the SDE would be about $400,000. That number is then multiplied by a valuation multiple, typically 2x to 2.5x for most ecommerce brands, and up to 5x for category leaders.

The best advice for founders is to track SDE monthly. Know your true net income and add-backs. It gives you a clear picture of growth and future valuation.
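The SDE math Kosarek walks through above can be sketched as a quick calculation. This is an illustrative sketch, not BizPort’s methodology; the function name is hypothetical, and the figures and 2x-to-2.5x multiple range come straight from his example:

```python
def seller_discretionary_earnings(net_income, owner_salary, owner_benefits, one_time_expenses):
    """SDE = annual net income plus owner salary/benefit add-backs plus one-time expenses."""
    return net_income + owner_salary + owner_benefits + one_time_expenses

# Kosarek's example: $250k net income, $100k owner salary,
# $40k benefits, and a one-time $10k legal expense.
sde = seller_discretionary_earnings(250_000, 100_000, 40_000, 10_000)
print(sde)  # 400000

# Typical valuation range for most ecommerce brands, per the interview: 2x to 2.5x SDE.
low, high = 2.0 * sde, 2.5 * sde
print(low, high)  # 800000.0 1000000.0
```

Tracking these inputs monthly, as Kosarek suggests, means the valuation estimate updates continuously rather than being reconstructed at exit time.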

Eric Bandholz: What’s the demand for ecommerce acquisitions?

Frank Kosarek: Ecommerce experienced extreme acceleration in 2020. We saw years of growth compressed into about 12 months as Covid reshaped consumer behavior. During that period, valuation multiples increased, and many ecommerce businesses launched that probably shouldn’t have. Some lacked product-market fit or a dependable, repeat customer base.

What’s changed since then is buyer behavior. Aggregators, in particular, have pulled back or refined their strategies. As a result, sellers can no longer assume there’s an easy, quick exit waiting for them. Acquirers are more selective and more disciplined about what they buy.

Companies that exit at top multiples tend to resemble subscription businesses. A one-time purchase product, such as a kids’ tricycle, doesn’t create much long-term value if the customer never returns. Compare that to categories such as skincare or supplements, where consumers can subscribe and reorder. Buyers focus heavily on lifetime value and how much revenue they can generate from a customer after paying to acquire them.

That’s why brands without repeat or subscription-driven revenue often see leaner valuations, while strong subscription-heavy brands can still command multiples closer to 5x SDE.

Eric Bandholz: What’s the minimum revenue level to sell an ecommerce business?

Frank Kosarek: At BizPort, we generally look for brands doing at least $1 million in annual revenue before getting involved. At that level, ecommerce margins usually provide enough cash flow to underwrite a transaction, whether through a loan, capital injection, or both. That’s typically the minimum size where an acquisition becomes feasible.

When annual revenue reaches $30 million, potential buyers include private equity firms or larger strategic buyers. Those acquirers are more likely to evaluate businesses using revenue multiples instead of earnings multiples. There isn’t a hard line, but it’s an important distinction for founders to be aware of as their brands scale.

Eric Bandholz: How do founders separate personal attachment from fair market value?

Frank Kosarek: M&A for small ecommerce brands is much more art than science. There’s no one-size-fits-all deal structure. Most ecommerce founders have very high expectations for their company’s value, often thinking in large multiples of revenue.

That’s understandable because building a brand from the ground up requires a huge amount of work, much of which doesn’t show up on an income statement. That effort is intangible, and outside buyers can’t fully appreciate it from financials alone. Plus, many founders don’t realize that ecommerce businesses are typically valued on a multiple of discretionary earnings, not top-line revenue. That often leads to a reality check.

Eric Bandholz: How often do earn-outs fail?

Frank Kosarek: Some sellers want a complete exit with no ongoing involvement, and buyers generally understand that. Still, a smart buyer will usually negotiate a transition period, often three to six months, to help transfer operations and institutional knowledge. Additional support can turn into a short-term consulting agreement in which sellers receive a fixed monthly fee. In that case, sellers no longer have equity or performance-based upside; they’re simply helping with continuity.

I’ve seen situations where sellers and buyers clash operationally or strategically. When that happens, earn-outs often suffer. Sellers miss targets and don’t receive additional payouts, and buyers struggle because the transition doesn’t go smoothly.

Bandholz: What can stop a deal or hurt valuation?

Kosarek: One major piece of advice for sellers is to sell when your numbers are strong. Don’t wait until performance starts to decline or the market turns against you. Be open to exploratory conversations, especially after a banner year. Waiting until the curve crashes makes exits much harder.

Another common mistake is overspending on marketing to inflate top-line revenue. For smaller ecommerce brands, valuation is typically based on profit, not revenue. Pumping the top line at the expense of the bottom usually doesn’t earn a premium.

Another red flag is a lack of operational structure. Buyers don’t want to walk into a business and have to build everything from scratch. They want to see systems and processes in place. That includes working with a third-party logistics provider for fulfillment and returns, clear ownership of marketing functions, and documented processes.

Buyers’ confidence in the deal increases when they can quickly understand how the company operates and distributes work.

Bandholz: Where can people follow you, reach out to you?

Kosarek: Our site is Biz-port.com. You can find me on LinkedIn.

SEO Pulse: Google’s AI Mode Gets Personal, AI Bots Blocked, Domains Matter in Search

Welcome to the week’s SEO Pulse. This week’s updates affect how AI Mode personalizes answers, which AI bots can access your site, and why your domain choice still matters for search visibility.

Here’s what matters for you and your work.

Google Connects Gmail And Photos To AI Mode

Google is rolling out Personal Intelligence, a feature that connects Gmail and Google Photos to AI Mode in Search, delivering personalized responses based on users’ own data.

Key facts: The feature is available to Google AI Pro and AI Ultra subscribers who opt in. It launches as a Labs experiment for eligible users in the U.S. Google says it doesn’t train on users’ Gmail inbox or Photos library.

Why This Matters

This is the personal context feature Google promised at I/O but delayed until now. We covered the delay in December when Nick Fox, Google’s SVP of Knowledge and Information, said the feature was “still to come” with no public timeline.

For the 75 million daily active users Fox reported in AI Mode, this could reduce how much context you need to type to get tailored responses. Google’s examples include trip recommendations that factor in hotel bookings from Gmail and past travel photos, or coat suggestions that account for preferred brands and upcoming travel weather.

The SEO effects depend on how this changes query patterns. If users rely on Google pulling context from their email and photos instead of typing it, queries may get shorter and more ambiguous. That makes it harder to target long-tail searches with explicit intent signals.

What People Are Saying

The early social reaction is framing this as Google pushing AI Mode from “ask and answer” into “already knows your context.” Robby Stein, VP of Product at Google Search, positioned it as a more personal search experience driven by opt-in data connections.

On LinkedIn, the discussion quickly moved to trust and privacy tradeoffs. Michele Curtis, a content marketing specialist, framed personalization as something that only works when trust comes first.

Curtis wrote:

“Personalization only works when trust is architected before intelligence.”

Syed Shabih Haider, founder of Fluxxy AI, raised security concerns about connecting multiple apps.

Haider wrote:

“Personal Intelligence.. yeah the features/benefits look amazing.. but cant help but wonder about the data security. Once all apps are connected, the risk for breach becomes extremely high..”

Read our full coverage: Google Launches Personal Intelligence In AI Mode

AI Training Bots Lose Access While Search Bots Expand

Hostinger analyzed 66 billion bot requests across more than 5 million websites and found AI crawlers are following two different paths. Training bots are losing access as more sites block them. Search and assistant bots are expanding their reach.

Key facts: Hostinger reports an average coverage of 55.67% for both GPTBot and OAI-SearchBot, but their trajectories differ. GPTBot, which collects training data, fell from 84% to 12% over the measurement period. OAI-SearchBot, which powers ChatGPT search, reached the same average without a comparable decline. Googlebot maintained 72% coverage. Apple’s bot reached 24.33%.

Why This Matters

The data confirms what we’ve tracked through multiple studies over the past year. BuzzStream found 79% of top news publishers block at least one training bot. Cloudflare’s Year in Review showed GPTBot, ClaudeBot, and CCBot had the highest number of full disallow directives. The Hostinger data puts numbers on the access gap between training and search crawlers.

The distinction matters because these bots serve different purposes. Training bots collect data to build models, while search bots retrieve content in real time when users ask questions. Blocking training bots opts you out of future model updates, and blocking search bots means you won’t appear when AI tools try to cite sources.

As a best practice, check your server logs to see what’s hitting your site, then make blocking decisions based on your goals.
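A log audit like the one suggested above can be done with a few lines of scripting. This is a sketch that assumes the common Nginx/Apache “combined” log format, where the user agent is the final quoted field; the function names are illustrative and the regex will need adjusting for other log formats:

```python
import re
from collections import Counter

# In the "combined" log format, the user agent is the last quoted field,
# preceded by the quoted referer. Adjust this pattern for your log format.
UA_PATTERN = re.compile(r'"[^"]*" "(?P<ua>[^"]*)"\s*$')

# Crawler names mentioned in the studies above.
AI_BOTS = ("GPTBot", "OAI-SearchBot", "ClaudeBot", "CCBot", "Googlebot")

def count_bot_hits(lines):
    """Tally requests per known crawler by substring match on the user agent."""
    hits = Counter()
    for line in lines:
        match = UA_PATTERN.search(line)
        if not match:
            continue
        ua = match.group("ua")
        for bot in AI_BOTS:
            if bot in ua:
                hits[bot] += 1
    return hits

# Two fabricated example log lines for demonstration.
sample = [
    '1.2.3.4 - - [20/Jan/2026:10:00:00 +0000] "GET / HTTP/1.1" 200 512 "-" '
    '"Mozilla/5.0 (compatible; GPTBot/1.0; +https://openai.com/gptbot)"',
    '5.6.7.8 - - [20/Jan/2026:10:00:01 +0000] "GET /a HTTP/1.1" 200 512 "-" '
    '"Mozilla/5.0 (compatible; OAI-SearchBot/1.0; +https://openai.com/searchbot)"',
]
print(count_bot_hits(sample))
```

In practice you would stream the real access log file through `count_bot_hits` and compare the tallies against your robots.txt policy; user-agent strings can be spoofed, so verifying crawler IP ranges is a further step.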

What People Are Saying

On the practical SEO side, the most consistent advice is to separate “training” from “search and retrieval” in your robots.txt decisions where you can. Aleyda Solís previously summarized the idea as blocking GPTBot while still allowing OAI-SearchBot, so your content can be surfaced in ChatGPT-style search experiences without being used for model training.

Solís wrote:

“disallow the ‘GPTbot’ user-agent but allow ‘OAI-SearchBot’”
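Solís’s split maps directly onto a short robots.txt fragment. This is a sketch: GPTBot and OAI-SearchBot are OpenAI’s documented user agents at the time of writing, but verify current names and behavior against OpenAI’s crawler documentation before deploying.

```
# Opt out of training-data collection
User-agent: GPTBot
Disallow: /

# Still allow the crawler that powers ChatGPT search citations
User-agent: OAI-SearchBot
Allow: /
```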

At the same time, developers and site operators keep emphasizing the cost side of bot traffic. In one r/webdev discussion, a commenter said AI bots made up 95% of requests before blocking and rate limiting.

A commenter in r/webdev wrote:

“95% of the requests to one of our websites was AI bots before I started blocking and rate limiting them”

Read our full coverage: OpenAI Search Crawler Passes 55% Coverage In Hostinger Study

Mueller: Free Subdomain Hosting Makes SEO Harder

Google’s John Mueller warned that free subdomain hosting services create SEO challenges even when publishers do everything else right. The advice came in response to a Reddit post from a publisher whose site shows up in Google but doesn’t appear in normal search results.

Key facts: The publisher uses Digitalplat Domains, a free subdomain service on the Public Suffix List. Mueller explained that free subdomain services attract spam and low-effort content, making it harder for search engines to assess individual site quality. He recommended building direct traffic through promotion and community engagement rather than expecting search visibility first.

Why This Matters

Mueller’s guidance fits a pattern we’ve covered over the years. Google’s Gary Illyes previously warned against cheap TLDs for the same reason. When a domain extension becomes overrun by spam, search engines may struggle to identify legitimate sites among the noise.

Free subdomain hosting creates a specific version of this problem. While the Public Suffix List is meant to treat these subdomains as separate registrable units, the neighborhood signal can still matter. If most subdomains on a host contain spam, Google’s systems have to work harder to find yours.

This affects anyone considering free hosting as a way to test an idea before buying a real domain. The test environment itself becomes part of the evaluation. As Mueller wrote, “Being visible in popular search results is not the first step to becoming a useful & popular web presence.”

For anyone advising clients or building new projects, the domain investment is part of the SEO foundation. Starting on a free subdomain may save money upfront, but it adds friction to visibility that a proper domain avoids.

What SEO Professionals Are Saying

Most of the social sharing here is treating Mueller’s “neighborhood” analogy as the headline takeaway. In the original Reddit exchange, he said publishing on free subdomain hosts can mean opening up shop among “problematic flatmates,” which makes it harder for search systems to understand your site’s value in context.

Mueller wrote:

“opening up shop on a site that’s filled with … potentially problematic ‘flatmates’.”

On LinkedIn, the story is being recirculated as a broader reminder that “cheap or free” hosting decisions can quietly cap performance even when everything else looks right. Fernando Paez V, a digital marketing specialist, called it out as a visibility issue tied to spam-heavy environments.

Paez V wrote:

“free subdomain hosting services … attract spam and make it more difficult for legitimate sites to gain visibility”

Read our full coverage: Google’s Mueller: Free Subdomain Hosting Makes SEO Harder

Theme Of The Week: Access Is The New Advantage

This week’s stories share a common element. Access, whether to personal data, to websites via bots, or to fair evaluation by choosing the right domain, shapes outcomes before any optimization happens.

Personal Intelligence gives AI Mode access to your email and photos, changing what kinds of queries even need to happen. The Hostinger data shows search bots gaining access while training bots get locked out. Mueller’s subdomain warning reminds us that domain choice determines whether Google’s systems give your content a fair evaluation at all.

The common thread is that visibility increasingly depends on what you allow in and where you build. Blocking the wrong bots can reduce your chances of being surfaced or cited in AI tools. Building on a spam-heavy domain puts you at a disadvantage before you write a word. And Google’s AI features now have access to personal context that publishers can’t access or observe.

For practitioners, this means access decisions, both yours and the platforms’, shape results more than incremental optimization gains. Review your crawler permissions and domain choices, and watch how personal context in AI Mode changes the queries you’re trying to rank for.


User Data Is Important In Google Search, Per Liz Reid’s DOJ Filing

I found some interesting things in the latest document in the DOJ vs. Google trial. Google has appealed the ruling that says they need to give proprietary information to competitors.

Image Credit: Marie Haynes

Key Takeaways:

  • Google has been ordered to share information with competitors as a remedy for operating an illegal monopoly. Google does not want to give away its extensive user-side data.
  • Google’s data on page quality and freshness is proprietary. They don’t want to give it away.
  • Pages that are indexed are marked up with annotations, including signals that identify spam pages.
  • If spammers got hold of those spam signals, it would make stopping spam difficult.
  • User data is important to Google’s Glue system that stores info on every query searched, what the user saw, and how they interacted with the search results.
  • User data is important for training RankEmbed BERT – one of the deep learning systems behind Search.

OK, let’s get into the interesting stuff!

Google Has Proprietary Page Quality And Freshness Signals

This really isn’t a surprise. I did find it interesting that freshness signals are at the heart of Google’s proprietary secrets.

Image Credit: Marie Haynes

Again, here’s more on the importance of Google’s proprietary freshness signals:

Image Credit: Marie Haynes

Pages That Are Crawled Are Marked Up With ‘Proprietary Page Understanding Annotations’

Every page in Google’s index is marked up with annotations to help it understand the page. These include signals to identify spam and duplicate pages. I’ve written before about how every page in the index has a spam score.

Image Credit: Marie Haynes

Spam Scores Could Be Used To Reverse Engineer Ranking Systems

Google doesn’t want to share information with its competitors on these scores.

Image Credit: Marie Haynes

If the spam scores get out, it could lead to more spamming and more difficulty for Google in fighting spam.

Image Credit: Marie Haynes

Google Builds The Index Using These Marked-Up Pages

The pages Google has annotated with page-understanding signals are organized based on how frequently Google expects the content to be accessed and how fresh it needs to be.

Image Credit: Marie Haynes

Only A Fraction Of Pages Make It Into Google’s Index

Google argues that giving competitors a list of indexed URLs will enable them to “forgo crawling and analyzing the larger web, and to instead focus their efforts on crawling only the fraction of pages Google has included in its index.” Building this index costs Google extensive time and money. They don’t want to give that away for free.

Image Credit: Marie Haynes

The Role Of User Data In Google’s Ranking Systems

This is the most interesting part. I feel that we do not pay enough attention to Google’s use of user data. (Stay tuned to my YouTube channel: I’ll soon release a very interesting video with my thoughts on why user-side data is so important – likely the MOST important factor in Google’s ranking systems.)

User Data Is Used To Build GLUE And RankEmbed Models

Google Glue is a huge table of user activity. It collects the text of the queries searched, the user’s language, location and device type, and information on what appeared on the SERP, what the user clicked on or hovered over, how long they stayed on a SERP, and more.

RankEmbed BERT is even more interesting. RankEmbed BERT is one of the deep learning systems that underpins Search. In the Pandu Nayak testimony, we learned that RankEmbed BERT is used in reranking the results returned by traditional ranking systems. RankEmbed BERT is trained on click and query data from actual users.

The AI systems behind Search are continually learning to present searchers with more satisfying results. Google looks at what people click on and whether they return to the SERPs. Google also runs live experiments that look at what searchers choose to click on and stay on. Those actions help train RankEmbed BERT, which is further fine-tuned by ratings from the quality raters. I will be publishing more on this soon. The take-home point I want to hammer home is that user satisfaction is by far the most important thing we should be optimizing for!

From the Liz Reid document we are analyzing today, we can see that user data is used to train, build, and operate RankEmbed models.

Image Credit: Marie Haynes

Once again, we learn that the user data that is used to train these models includes query, location, time of search, and how the user interacted with what was displayed to them.

Image Credit: Marie Haynes

This is talking about the actions users take from within the Google Search results. What I really want to know is how much of a role Chrome data plays. Does Google look at whether people are engaging with your pages, filling out your forms, making your recipes, and more? I think they do. The judgment summary of this trial hints that Chrome data is used in the ranking systems, but not a lot of detail is shared.

Image Credit: Marie Haynes

Google Says That If Someone Had The Glue And RankEmbed User Data, They Could Train An LLM With It

This user data is the key to Google’s success.

Image Credit: Marie Haynes

It’s worthwhile reading the whole declaration from Liz Reid.


This post was originally published on Marie Haynes Consulting.



PPC Pulse: Google’s Podcast Launch, Demand Gen, ChatGPT Ads

Welcome to this week’s PPC Pulse. The big news this week centers on platform evolution: how advertisers get information, where ads show up, and what formats are gaining traction.

OpenAI announced it’s testing ads inside ChatGPT for the first time. Google launched a new podcast to help advertisers navigate platform changes. And Demand Gen added features designed to make video campaigns more actionable for commerce and travel advertisers.

Here’s what matters for advertisers and why.

Google Ads Launches “Ads Decoded” Podcast

Google is officially launching an ad-focused podcast, “Ads Decoded.” It’s aimed at helping advertisers better understand platform updates and AI-powered features.

In a LinkedIn announcement, Google said that Ginny Marvin, Google Ads Liaison, will host the podcast. The first episode launches on Monday, Jan. 26, 2026.

In the announcement, Marvin stated:

“The response to our pilot episodes proved that there is a hunger for a different kind of conversation – one that moves past the headlines and announcements and into the mechanics and nuances of how things actually work.”

Throughout the podcast series, Marvin will bring in Google product managers and platform experts to discuss new features, answer community questions, and provide their unique insights on how updates work in practice.

The original pilot episode featured product managers discussing AI Max for Search campaigns and Performance Max (PMax) channel performance reporting.

Why This Matters For Advertisers

Google Ads has no shortage of product updates, but this podcast signals a shift in how those updates are communicated.

Instead of relying solely on blog posts, help center articles, and occasional webinars, Google is creating a recurring channel designed with PPC marketers in mind to better explain new features and why they matter.

For advertisers trying to keep up with the velocity of platform updates, this should be extremely useful. Product managers have the chance to explain more technical details that don’t always make it into official announcements from Google.

Hearing context directly from the team building these features adds clarity that marketers need.

The podcast also gives Google a way to address confusion or pushback on updates in real time, rather than waiting for feedback to bubble up through support channels or community forums.

For advertisers who prefer audio formats or need to stay current on platform updates without constantly checking multiple sources, “Ads Decoded” offers a centralized option worth adding to your lineup.

What PPC Professionals Are Saying

The feedback from advertisers on LinkedIn is all positive, with a handful of marketers offering their enthusiasm and encouragement to Marvin.

Jonathan Milanes, founder of Proverve, said this is “long awaited,” and many others, including Tony Adam, founder and CEO of Visible Factors, can’t wait to tune in.

Ben Luong, director at Copperchunk Ltd, asked:

“Is there a way to ask questions or where do you get the questions from to answer?”

Marvin replied that marketers can drop their questions along the way, and said she will also try to “surface questions and answers that may be buried.”

Further reading: 25 Years Of Google Ads: Was It Better Then Or Now?

Demand Gen Adds New Features

Also this week, Google announced that several new features for Demand Gen campaigns are now live. The capabilities were previewed at the Google Marketing Live 2025 event back in May.

The updates include Shoppable CTV, attributed brand searches, and travel feeds. The features are designed to help advertisers reach new customers while being able to measure their impact more effectively.

  • Shoppable CTV: Users can now browse and purchase products directly while watching YouTube ads on connected TV screens. According to Google’s data, Demand Gen campaigns that include TV screens drive an average of 7% additional conversions at the same ROI.
  • Attributed Branded Searches: This feature is now available for Demand Gen. It’s meant to show the volume of your campaign’s branded searches on Google and/or YouTube to help quantify the impact of upper-funnel campaigns.
  • Travel Feeds: Advertisers can now connect their Hotel Center feed in Demand Gen campaigns to build dynamic video ads. The videos can feature hotel pricing, ratings, and availability.

Google cited LG Electronics as an example of Demand Gen’s effectiveness, noting that the company achieved a 24% higher conversion rate than its paid social campaigns, while reaching high-value customers at a 91% lower CPA.

Why This Matters For Advertisers

The long-awaited Demand Gen updates make this campaign type more actionable for commerce and travel advertisers, especially those who have been testing it but want more control over creative and measurement.

Shoppable CTV can help address one of the biggest challenges with connected TV advertising: measuring direct response. If viewers can browse and purchase without leaving the screen, that removes a layer of friction and makes TV inventory more accountable.

Attributed brand searches can help advertisers justify upper-funnel spend by showing how campaigns influenced search behavior, not just immediate last-click conversions. This is especially important for teams that need to prove incremental impact to stakeholders who are more accustomed to last-click attribution.

Travel feeds bring dynamic creative to video advertising in a way that mirrors how Shopping campaigns work for retail. Instead of generic hotel ads that can get lost in the noise, advertisers can now surface pricing and availability based on what users are actually searching for.

What PPC Professionals Are Saying

While advertisers are excited about these updates, there was some justified constructive feedback as well.

Jyll Saskin Gales, Google Ads Coach at Inside Google Ads, responded to Google:

“Please make Attributed Branded Searches more widely available! It’s by request via Google rep only right now, and it will be such a helpful metric to justify increased YouTube & Demand Gen investment.”

Alexandru Stambari, performance marketing specialist at ASBC Moldova, agreed that this is the right direction for Demand Gen, but “the real impact of Demand Gen still heavily depends on data quality, attribution, and feed setup.”

Further reading: Demand Gen Vs. Lead Gen: What Every CMO Needs To Know

ChatGPT To Begin Testing Ads In The US

OpenAI officially announced last Friday that it will begin testing ads in ChatGPT for Free and Go tier users in the coming weeks. This marks the first time ads will appear inside the ChatGPT experience.

Ads will appear at the bottom of responses, only when there’s a relevant sponsored product or service tied to the active conversation. They’ll be clearly labeled, visually separated from organic answers, and dismissible. Users can see why a particular ad is shown and turn off ad personalization entirely.

OpenAI was also explicit about where ads won’t appear:

  • No ads for users under 18.
  • No ads near sensitive or regulated topics (like health, mental health, or politics).

According to the release, conversations won’t be shared with advertisers, and user data won’t be sold. OpenAI also emphasized that advertising won’t influence ChatGPT’s responses.

Read our full coverage: ChatGPT To Begin Testing Ads In The United States

Why This Matters For Advertisers

For the first time in a while, we’re watching the birth of a completely new ad environment.

The context of these ads is completely different in ChatGPT versus someone searching on Google or Bing.

For example, when someone asks ChatGPT for dinner recipes or travel recommendations, they’re likely in decision mode rather than research mode. The query itself is further down the funnel than most search queries. Typically, they’re looking for a solution they can act on.

If ads show up in that moment with strict relevance guardrails and zero ability to influence the answer itself, this resembles something more like a recommendation engine than a traditional search ad. The intent signal is there, but the buying mechanism doesn’t exist yet.

While this is something advertisers can’t plan for yet, what they should actually pay attention to is the framework OpenAI is setting.

They’re not opening this up to everyone. They’re not letting advertisers target by conversation history. They’re explicitly saying ads won’t change answers. If they stick to that, it means the only way in is through genuine relevance to what someone is already trying to do.

What PPC Professionals Are Saying

There’s been no shortage of comments and opinions from PPC marketers surrounding this topic.

A mix of excitement and scrutiny seemed to be the theme of users’ comments.

A highly active LinkedIn post from Adriaan Dekker, co-founder of The PPC Talent Network, drew 798 likes, 78 reposts, and 51 comments. A recap of the reactions is summarized below.

Ofer Miller, performance marketing team lead at TestGorilla, stated:

“This is interesting, but I’m more interested in seeing their targeting methods and audience building tools: keywords? Topics? Demographics? Also I’d argue that it’ll start more as a B2C tool as the majority of companies and professionals who use GPT (if they’re using it, many are in Claude/Perplexity) will have a paid account, so no B2B relevancy.”

Some practitioners, like Joseph Williams, performance lead at ZIGGY, called this “exciting times for paid advertising,” and Alex R., platform & services director at Vibetrace, seemed excited for “new opportunities to make money.”

Aaron Levy, evangelist at Optmyzr, offered a different perspective, noting that Google hasn’t yet announced ads in Gemini. His opinion is that the tech “just isn’t there yet and ads will feel intrusive.” He continued:

“It would be foolish of us to dismiss Google for not being a first mover, while we as advertisers often lament them releasing products too early.”

Theme Of The Week: Platforms Are Adapting To New Behaviors

This week’s updates show platforms responding to shifts in how people discover products and consume information – both for marketers and consumers.

Google’s new ad-focused podcast gives advertisers a more in-depth way to keep up with platform changes. Demand Gen’s newly available features make video campaigns more measurable and adapt to how consumers research and buy. Lastly, ChatGPT is testing whether ads can exist inside a conversational interface without breaking trust.

In each case, the platforms are adapting to behaviors that are already happening. People are using AI for product research. Advertisers are struggling to stay current on platform updates. Video is becoming more shoppable.

Featured Image: beast01/Shutterstock

BuddyPress WordPress Vulnerability May Impact Up To 100,000 Sites

A newly disclosed security vulnerability affects BuddyPress, a WordPress plugin installed on over 100,000 websites. The vulnerability, given a threat level rating of 7.3 (high), enables unauthenticated attackers to execute arbitrary shortcodes.

BuddyPress WordPress Plugin

The BuddyPress plugin enables WordPress sites to create community features such as user profiles, activity streams, private messaging, and groups. It is commonly used on membership sites and online communities.

BuddyPress has a good track record on security: only one vulnerability was reported in all of 2025, a relatively mild medium-severity issue rated 5.3 on a scale of 1 to 10.

Unauthenticated Arbitrary Shortcode Execution

The vulnerability can be exploited by unauthenticated attackers. An attacker does not need a WordPress account or any level of user access to trigger the issue.

The BuddyPress plugin is vulnerable to arbitrary shortcode execution in all versions up to and including 14.3.3. That means that an attacker can execute shortcodes on the website. Shortcodes are used by WordPress to add dynamic functionality to pages and posts. Because the plugin does not properly validate input before executing shortcodes, attackers can cause the site to run shortcodes they are not authorized to use.

The vulnerability is caused by missing validation before user-supplied input is passed to the do_shortcode function.
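BuddyPress itself is written in PHP, but the underlying fix pattern (checking user-supplied input against an explicit allowlist before handing it to a generic shortcode executor) can be sketched in Python. Everything below is a hypothetical illustration, not BuddyPress or WordPress code: the shortcode names and handlers are invented for the example.

```python
# Illustrative sketch (not BuddyPress/PHP code): only execute shortcodes
# that appear on an explicit allowlist, instead of passing raw user input
# to a do_shortcode-style executor. All names here are hypothetical.

ALLOWED_SHORTCODES = {
    "member_count": lambda: "1,234 members",
    "group_list": lambda: "Group A, Group B",
}

def execute_shortcode(user_input: str) -> str:
    """Run a shortcode only if it is explicitly allowlisted."""
    # e.g. "[member_count]" -> "member_count"
    name = user_input.strip("[]")
    handler = ALLOWED_SHORTCODES.get(name)
    if handler is None:
        # Reject anything not on the allowlist rather than executing it.
        raise ValueError(f"shortcode not permitted: {name!r}")
    return handler()
```

The vulnerable pattern is the opposite: executing whatever shortcode the request names. The patched behavior amounts to validating the value first, as in the lookup above.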

Wordfence described the issue:

“The BuddyPress plugin for WordPress is vulnerable to arbitrary shortcode execution in all versions up to, and including, 14.3.3. This is due to the software allowing users to execute an action that does not properly validate a value before running do_shortcode. This makes it possible for unauthenticated attackers to execute arbitrary shortcodes.”

This means an attacker can trigger a shortcode, which will carry out whatever action it is designed to run; in the worst case, that could expose restricted site features or functionality. Depending on the shortcodes available on a site, this can enable attackers to access sensitive information, modify site content, or interact with other plugins in unintended ways.

The vulnerability does not depend on special server settings or optional configurations. Any site running a vulnerable version of the plugin is affected.

The issue was patched in BuddyPress version 14.3.4. Users of the plugin should update to version 14.3.4 or newer to fix the vulnerability.

Featured Image by Shutterstock/Login

Yann LeCun’s new venture is a contrarian bet against large language models  

Yann LeCun is a Turing Award recipient and a top AI researcher, but he has long been a contrarian figure in the tech world. He believes that the industry’s current obsession with large language models is wrong-headed and will ultimately fail to solve many pressing problems. 

Instead, he thinks we should be betting on world models—a different type of AI that accurately reflects the dynamics of the real world. He is also a staunch advocate for open-source AI and criticizes the closed approach of frontier labs like OpenAI and Anthropic. 

Perhaps it’s no surprise, then, that he recently left Meta, where he had served as chief scientist for FAIR (Fundamental AI Research), the company’s influential research lab that he founded. Meta has struggled to gain much traction with its open-source AI model Llama and has seen internal shake-ups, including its controversial multibillion-dollar investment in Scale AI. 

LeCun sat down with MIT Technology Review in an exclusive online interview from his Paris apartment to discuss his new venture, life after Meta, the future of artificial intelligence, and why he thinks the industry is chasing the wrong ideas. 

Both the questions and answers below have been edited for clarity and brevity.

You’ve just announced a new company, Advanced Machine Intelligence (AMI). Tell me about the big ideas behind it.

It is going to be a global company, but headquartered in Paris. You pronounce it “ami”—it means “friend” in French. I am excited. There is a very high concentration of talent in Europe, but it is not always given a proper environment to flourish. And there is certainly a huge demand from the industry and governments for a credible frontier AI company that is neither Chinese nor American. I think that is going to be to our advantage.

So an ambitious alternative to the US-China binary we currently have. What made you want to pursue that third path?

Well, there are sovereignty issues for a lot of countries, and they want some control over AI. What I’m advocating is that AI is going to become a platform, and most platforms tend to become open-source. Unfortunately, that’s not really the direction the American industry is taking. Right? As the competition increases, they feel like they have to be secretive. I think that is a strategic mistake.

It’s certainly true for OpenAI, which went from very open to very closed, and Anthropic has always been closed. Google was sort of a little open. And then Meta, we’ll see. My sense is that it’s not going in a positive direction at this moment.

Simultaneously, China has completely embraced this open approach. So all leading open-source AI platforms are Chinese, and the result is that academia and startups outside of the US have basically embraced Chinese models. There’s nothing wrong with that—you know, Chinese models are good. Chinese engineers and scientists are great. But if there is a future in which our entire information diet is mediated by AI assistants, and the choice is either English-speaking models produced by proprietary companies closely aligned with the US or Chinese models, which may be open-source but need to be fine-tuned so that they answer questions about Tiananmen Square in 1989—you know, it’s not a very pleasant and engaging future. 

They [the future models] should be able to be fine-tuned by anyone and produce a very high diversity of AI assistants, with different linguistic abilities, value systems, political biases, and centers of interest. You need a high diversity of assistants for the same reason that you need a high diversity of press. 

That is certainly a compelling pitch. How are investors responding to that idea so far?

They really like it. A lot of venture capitalists are very much in favor of this idea of open-source, because they know that a lot of small startups really rely on open-source models. They don’t have the means to train their own model, and it’s kind of dangerous for them strategically to embrace a proprietary model.

You recently left Meta. What’s your view on the company and Mark Zuckerberg’s leadership? There’s a perception that Meta has fumbled its AI advantage.

I think FAIR [LeCun’s lab at Meta] was extremely successful in the research part. Where Meta was less successful is in picking up on that research and pushing it into practical technology and products. Mark made some choices that he thought were the best for the company. I may not have agreed with all of them. For example, the robotics group at FAIR was let go, which I think was a strategic mistake. But I’m not the director of FAIR. People make decisions rationally, and there’s no reason to be upset.

So, no bad blood? Could Meta be a future client for AMI?

Meta might be our first client! We’ll see. The work we are doing is not in direct competition. Our focus on world models for the physical world is very different from their focus on generative AI and LLMs.

You were working on AI long before LLMs became a mainstream approach. But since ChatGPT broke out, LLMs have become almost synonymous with AI.

Yes, and we are going to change that. The public face of AI, perhaps, is mostly LLMs and chatbots of various types. But the latest ones of those are not pure LLMs. They are LLM plus a lot of things, like perception systems and code that solves particular problems. So we are going to see LLMs as kind of the orchestrator in systems, a little bit.

Beyond LLMs, there is a lot of AI behind the scenes that runs a big chunk of our society. There are driver-assistance systems in cars, algorithms that speed up MRI imaging, algorithms that drive social media—that’s all AI. 

You have been vocal in arguing that LLMs can only get us so far. Do you think LLMs are overhyped these days? Can you summarize to our readers why you believe that LLMs are not enough?

There is a sense in which they have not been overhyped, which is that they are extremely useful to a lot of people, particularly if you write text, do research, or write code. LLMs manipulate language really well. But people have had this illusion, or delusion, that it is a matter of time until we can scale them up to having human-level intelligence, and that is simply false.

The truly difficult part is understanding the real world. This is the Moravec Paradox (a phenomenon observed by the computer scientist Hans Moravec in 1988): What’s easy for us, like perception and navigation, is hard for computers, and vice versa. LLMs are limited to the discrete world of text. They can’t truly reason or plan, because they lack a model of the world. They can’t predict the consequences of their actions. This is why we don’t have a domestic robot that is as agile as a house cat, or a truly autonomous car.

We are going to have AI systems that have humanlike and human-level intelligence, but they’re not going to be built on LLMs, and it’s not going to happen next year or two years from now. It’s going to take a while. There are major conceptual breakthroughs that have to happen before we have AI systems that have human-level intelligence. And that is what I’ve been working on. And this company, AMI Labs, is focusing on the next generation.

And your solution is world models and JEPA architecture (JEPA, or “joint embedding predictive architecture,” is a learning framework that trains AI models to understand the world, created by LeCun while he was at Meta). What’s the elevator pitch?

The world is unpredictable. If you try to build a generative model that predicts every detail of the future, it will fail.  JEPA is not generative AI. It is a system that learns to represent videos really well. The key is to learn an abstract representation of the world and make predictions in that abstract space, ignoring the details you can’t predict. That’s what JEPA does. It learns the underlying rules of the world from observation, like a baby learning about gravity. This is the foundation for common sense, and it’s the key to building truly intelligent systems that can reason and plan in the real world. The most exciting work so far on this is coming from academia, not the big industrial labs stuck in the LLM world.

The lack of non-text data has been a problem in taking AI systems further in understanding the physical world. JEPA is trained on videos. What other kinds of data will you be using?

Our systems will be trained on video, audio, and sensor data of all kinds—not just text. We are working with various modalities, from the position of a robot arm to lidar data to audio. I’m also involved in a project using JEPA to model complex physical and clinical phenomena. 

What are some of the concrete, real-world applications you envision for world models?

The applications are vast. Think about complex industrial processes where you have thousands of sensors, like in a jet engine, a steel mill, or a chemical factory. There is no technique right now to build a complete, holistic model of these systems. A world model could learn this from the sensor data and predict how the system will behave. Or think of smart glasses that can watch what you’re doing, identify your actions, and then predict what you’re going to do next to assist you. This is what will finally make agentic systems reliable. An agentic system that is supposed to take actions in the world cannot work reliably unless it has a world model to predict the consequences of its actions. Without it, the system will inevitably make mistakes. This is the key to unlocking everything from truly useful domestic robots to Level 5 autonomous driving.

Humanoid robots are all the rage recently, especially ones built by companies from China. What’s your take?

There are all these brute-force ways to get around the limitations of learning systems, which require inordinate amounts of training data to do anything. So the secret of all the companies getting robots to do kung fu or dance is that the routines are all planned in advance. But frankly, nobody—absolutely nobody—knows how to make those robots smart enough to be useful. Take my word for it. 


You need an enormous amount of tele-operation training data for every single task, and when the environment changes a little bit, it doesn’t generalize very well. What this tells us is we are missing something very big. The reason why a 17-year-old can learn to drive in 20 hours is because they already know a lot about how the world behaves. If we want a generally useful domestic robot, we need systems to have a kind of good understanding of the physical world. That’s not going to happen until we have good world models and planning.

There’s a growing sentiment that it’s becoming harder to do foundational AI research in academia because of the massive computing resources required. Do you think the most important innovations will now come from industry?

No. LLMs are now technology development, not research. It’s true that it’s very difficult for academics to play an important role there because of the requirements for computation, data access, and engineering support. But it’s a product now. It’s not something academia should even be interested in. It’s like speech recognition in the early 2010s—it was a solved problem, and the progress was in the hands of industry. 

What academia should be working on is long-term objectives that go beyond the capabilities of current systems. That’s why I tell people in universities: Don’t work on LLMs. There is no point. You’re not going to be able to rival what’s going on in industry. Work on something else. Invent new techniques. The breakthroughs are not going to come from scaling up LLMs. The most exciting work on world models is coming from academia, not the big industrial labs. The whole idea of using attention circuits in neural nets came out of the University of Montreal. That research paper started the whole revolution. Now that the big companies are closing up, the breakthroughs are going to slow down. Academia needs access to computing resources, but they should be focused on the next big thing, not on refining the last one.

You wear many hats: professor, researcher, educator, public thinker … Now you just took on a new one. What is that going to look like for you?

I am going to be the executive chairman of the company, and Alex LeBrun [a former colleague from Meta AI] will be the CEO. It’s going to be LeCun and LeBrun—it’s nice if you pronounce it the French way.

I am going to keep my position at NYU. I teach one class per year, and I have PhD students and postdocs, so I am going to stay based in New York. But I go to Paris pretty often because of my lab. 

Does that mean that you won’t be very hands-on?

Well, there’s two ways to be hands-on. One is to manage people day to day, and another is to actually get your hands dirty in research projects, right? 

I can do management, but I don’t like doing it. This is not my mission in life. It’s really to make science and technology progress as far as we can, inspire other people to work on things that are interesting, and then contribute to those things. So that has been my role at Meta for the last seven years. I founded FAIR and led it for four to five years. I kind of hated being a director. I am not good at this career management thing. I’m much more of a visionary and a scientist.

What makes Alex LeBrun the right fit?

Alex is a serial entrepreneur; he’s built three successful AI companies. The first he sold to Microsoft; the second to Facebook, where he was head of the engineering division of FAIR in Paris. He then left to create Nabla, a very successful company in the health-care space. When I offered him the chance to join me in this effort, he accepted almost immediately. He has the experience to build the company, allowing me to focus on science and technology. 

You’re headquartered in Paris. Where else do you plan to have offices?

We are a global company. There’s going to be an office in North America.

New York, hopefully?

New York is great. That’s where I am, right? And it’s not Silicon Valley. Silicon Valley is a bit of a monoculture.

What about Asia? I’m guessing Singapore, too?

Probably, yeah. I’ll let you guess. 

And how are you attracting talent?

We don’t have any issue recruiting. There are a lot of people in the AI research community who think the future of AI is in world models. Those people, regardless of pay package, will be motivated to come work for us because they believe in the technological future we are building. We’ve already recruited people from places like OpenAI, Google DeepMind, and xAI.

I heard that Saining Xie, a prominent researcher from NYU and Google DeepMind, might be joining you as chief scientist. Any comments?

Saining is a brilliant researcher. I have a lot of admiration for him. I hired him twice already. I hired him at FAIR, and I convinced my colleagues at NYU that we should hire him there. Let’s just say I have a lot of respect for him.

When will you be ready to share more details about AMI Labs, like financial backing or other core members?

Soon—in February, maybe. I’ll let you know.

Why 2026 is a hot year for lithium

In 2026, I’m going to be closely watching the price of lithium.

If you’re not in the habit of obsessively tracking commodity markets, I certainly don’t blame you. (Though the news lately definitely makes the case that minerals can have major implications for global politics and the economy.)

But lithium is worthy of a close look right now.

The metal is crucial for lithium-ion batteries used in phones and laptops, electric vehicles, and large-scale energy storage arrays on the grid. Prices have been on quite the roller coaster over the last few years, and they’re ticking up again after a low period. What happens next could have big implications for mining and battery technology.

Before we look ahead, let’s take a quick trip down memory lane. In 2020, global EV sales started to really take off, driving up demand for the lithium used in their batteries. Because of that growing demand and a limited supply, prices shot up dramatically, with lithium carbonate going from under $10 per kilogram to a high of roughly $70 per kilogram in just two years.

And the tech world took notice. During those high points, there was a ton of interest in developing alternative batteries that didn’t rely on lithium. I was writing about sodium-based batteries, iron-air batteries, and even experimental ones that were made with plastic.

Researchers and startups were also hunting for alternative ways to get lithium, including battery recycling and processing methods like direct lithium extraction (more on this in a moment).

But soon, prices crashed back down to earth. We saw lower-than-expected demand for EVs in the US, and developers ramped up mining and processing to meet demand. Through late 2024 and 2025, lithium carbonate was back around $10 a kilogram again. Avoiding lithium or finding new ways to get it suddenly looked a lot less crucial.

That brings us to today: lithium prices are ticking up again. So far, it’s nowhere close to the dramatic rise we saw a few years ago, but analysts are watching closely. Strong EV growth in China is playing a major role—EVs still make up about 75% of battery demand today. But growth in stationary storage, batteries for the grid, is also contributing to rising demand for lithium in both China and the US.

Higher prices could create new opportunities. The possibilities include alternative battery chemistries, specifically sodium-ion batteries, says Evelina Stoikou, head of battery technologies and supply chains at BloombergNEF. (I’ll note here that we recently named sodium-ion batteries to our 2026 list of 10 Breakthrough Technologies.)

It’s not just batteries, though. Another industry that could see big changes from a lithium price swing: extraction.

Today, most lithium is mined from rocks, largely in Australia, before being shipped to China for processing. There’s a growing effort to process the mineral in other places, though, as countries try to create their own lithium supply chains. Tesla recently confirmed that it’s started production at its lithium refinery in Texas, which broke ground in 2023. We could see more investment in processing plants outside China if prices continue to climb.

This could also be a key year for direct lithium extraction, as Katie Brigham wrote in a recent story for Heatmap. That technology uses chemical or electrochemical processes to extract lithium from brine (salty water that’s usually sourced from salt lakes or underground reservoirs), quickly and cheaply. Companies including Lilac Solutions, Standard Lithium, and Rio Tinto are all making plans or starting construction on commercial facilities this year in the US and Argentina. 

If there’s anything I’ve learned about following batteries and minerals over the past few years, it’s that predicting the future is impossible. But if you’re looking for tea leaves to read, lithium prices deserve a look. 

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

The Download: Yann LeCun’s new venture, and lithium’s on the rise

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Yann LeCun’s new venture is a contrarian bet against large language models    

Yann LeCun is a Turing Award recipient and a top AI researcher, but he has long been a contrarian figure in the tech world. He believes that the industry’s current obsession with large language models is wrong-headed and will ultimately fail to solve many pressing problems.  

Instead, he thinks we should be betting on world models—a different type of AI that accurately reflects the dynamics of the real world. Perhaps it’s no surprise, then, that he recently left Meta, where he had served as chief scientist for FAIR (Fundamental AI Research), the company’s influential research lab that he founded. 

LeCun sat down with MIT Technology Review in an exclusive online interview from his Paris apartment to discuss his new venture, life after Meta, the future of artificial intelligence, and why he thinks the industry is chasing the wrong ideas. Read the full interview

—Caiwei Chen

Why 2026 is a hot year for lithium

—Casey Crownhart

In 2026, I’m going to be closely watching the price of lithium.

If you’re not in the habit of obsessively tracking commodity markets, I certainly don’t blame you. (Though the news lately definitely makes the case that minerals can have major implications for global politics and the economy.)

But lithium is worthy of a close look right now. The metal is crucial for lithium-ion batteries used in phones and laptops, electric vehicles, and large-scale energy storage arrays on the grid. 

Prices have been on quite the roller coaster over the last few years, and they’re ticking up again. What happens next could have big implications for mining and battery technology. Read the full story

This story first appeared in The Spark, our newsletter all about the tech we can use to combat the climate crisis. Sign up to receive it in your inbox every Wednesday.  

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Trump has climbed down from his plan for the US to take Greenland 
To the relief of many across Europe. (BBC)
+ Trump says he’s agreed a deal to access Greenland’s rare earths. Experts say that’s ‘bonkers.’ (CNN)
+ European leaders are feeling flummoxed about what’s going on. (FT $)

2 Apple is reportedly developing a wearable AI pin
It’s still in the very early stages—but this could be a huge deal if it makes it to launch. (The Information $)
+ It’s also planning to revamp Siri and turn it into an AI chatbot. (Bloomberg $)
+ Are we ready to trust AI with our bodies? (MIT Technology Review)

3 CEOs say AI saves people time. Their employees disagree.
Many even say that it’s currently dragging down their productivity. (WSJ $)
+ The AI boom will increase US carbon emissions—but it doesn’t have to. (Wired $)
+ Let’s also not forget that large language models remain a security nightmare. (IEEE Spectrum)

4 This chart shows how measles cases are exploding in America
They’ve hit a 30-year high, with the US on track to lose its ‘elimination status.’ (Axios $)
+ Things are poised to get even worse this year. (Wired $)

5 Your first humanoid robot coworker will almost definitely be Chinese
But will it be truly useful? That’s the even bigger question. (Wired $)
+ Nvidia CEO Jensen Huang says Europe could do more to compete in robotics and AI. (CNBC)

6 Bezos’ Blue Origin is about to compete with Starlink
It plans to send the first ‘TeraWave’ satellites into space next year. (Reuters $)
+ On the ground in Ukraine’s largest Starlink repair shop. (MIT Technology Review)

7 Trump’s family made $1.4 billion off crypto last year 
Move along, no conflicts of interest to see here. (Bloomberg $)

8 Comic-Con has banned AI art
After an artist-led backlash last week. (404 Media)
Hundreds of creatives are warning against an AI future built on ‘theft on a grand scale’. (The Verge $)

9 What it’s like living without a smartphone for a month
Potentially blissful for you, but probably a bit annoying for everyone else. (The Guardian)
+ Why teens with ADHD are particularly vulnerable to the perils of social media. (Nature)

10 Elon Musk is feuding with a budget airline 
The airline is winning, in case you wondered. (WP $)

Quote of the day

“I wouldn’t edit anything about Donald Trump, because the man makes me insane.”

—Wikipedia founder Jimmy Wales tells Wired why he’s steering clear of the US President’s page.   

One more thing

BOB O’CONNOR

How electricity could help tackle a surprising climate villain

Cement hides in plain sight—it’s used to build everything from roads and buildings to dams and basement floors. But it’s also a climate threat. Cement production accounts for more than 7% of global carbon dioxide emissions—more than sectors like aviation, shipping, or landfills.

One solution to this climate catastrophe might be coursing through the pipes at Sublime Systems. The startup is developing an entirely new way to make cement. Instead of heating crushed-up rocks in lava-hot kilns, Sublime’s technology zaps them in water with electricity, kicking off chemical reactions that form the main ingredients in its cement.

But it faces huge challenges: competing with established industry players, and persuading builders to use its materials in the first place. Read the full story.

—Casey Crownhart

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Earth may be a garbage fire, but space is beautiful
+ Do you know how to tie your shoelaces up properly? Are you sure?!
+ I defy British readers not to feel a pang of nostalgia at these crisp packets.
+ Going to bed around the same time every night seems to be a habit worth adopting. ($)

Dispatch from Davos: hot air, big egos and cold flexes

This story first appeared in The Debrief, our subscriber-only newsletter about the biggest news in tech by Mat Honan, Editor in Chief. Subscribe to read the next edition as soon as it lands.

It’s supposed to be frigid in Davos this time of year. Part of the charm is seeing the world’s elite tromp through the streets in respectable suits and snow boots. But this year it’s positively balmy, with highs in the mid-30s Fahrenheit, or a little over 1°C. The conditions when I flew out of New York were colder, and definitely snowier. I’m told this is due to something called a föhn, a warm, dry wind that’s been blowing across the Alps. 

I’m no meteorologist, but it’s true that there is a lot of hot air here. 

On Wednesday, President Donald Trump arrived in Davos to address the assembly, and held forth for more than 90 minutes, weaving his way through remarks about the economy, Greenland, windmills, Switzerland, Rolexes, Venezuela, and drug prices. It was a talk lousy with gripes, grievances and outright falsehoods. 

One small example: Trump made a big deal of claiming that China, despite being the world leader in manufacturing windmill componentry, doesn’t actually use them for energy generation itself. In fact, it is the world leader in generation, as well. 

I did not get to watch this spectacle from the room itself. Sad! 

By the time I got to the Congress Hall where the address was taking place, there was already a massive scrum of people jostling to get in. 

I had just wrapped up moderating a panel on “the intelligent co-worker,” i.e., AI agents in the workplace. I was really excited for this one, as the speakers represented a diverse cross-section of the AI ecosystem. Christoph Schweizer, CEO of BCG, had the macro strategic view; Enrique Lores, CEO of HP, could speak to both hardware and large enterprises; Workera CEO Kian Katanforoosh had the inside view on workforce training and transformation; Manjul Shah, CEO of Hippocratic AI, addressed working in the high-stakes field of healthcare; and Kate Kallot, CEO of Amini AI, gave perspective on the Global South, and Africa in particular. 

Interestingly, most of the panel shied away from using the term co-worker, and some even rejected the term agent. But the view they painted was definitely one of humans working alongside AI and augmenting what’s possible. Shah, for example, talked about having agents call 16,000 people in Texas during a heat wave to perform a health and safety check. It was a great discussion. You can watch the whole thing here. 

But by the time it let out, the push of people outside the Congress Hall was already too thick for me to get in. In fact, I couldn’t even get into a nearby overflow room. I did make it into a third overflow room, but getting in meant navigating my way through a mass of people, packed so tightly together that it reminded me of being at a Turnstile concert. 

The speech blew way past its allotted time, and I had to step out early to get to yet another discussion. Walking through the halls while Trump spoke was a truly surreal experience. He had truly captured the attention of the gathered global elite. I don’t think I saw a single person not staring at a laptop, phone, or iPad, all watching the same video. 

Trump is speaking again on Thursday in a previously unscheduled address to announce his Board of Peace. As is (I heard) Elon Musk. So it’s shaping up to be another big day for elite attention capture. 

I should say, though, there are elites, and then there are elites. And there are all sorts of ways of sorting out who is who. Your badge color is one of them. I have a white participant badge, because I was moderating panels. This gets you in pretty much anywhere and is therefore its own sort of status symbol. Where you are staying is another. I’m in Klosters, a neighboring town that’s a 40-minute train ride away from the Congress Centre. Not so elite. 

There are more subtle ways of status sorting, too. Yesterday I learned that when people ask if this is your first time at Davos, it’s sometimes meant as a way of trying to figure out how important you are. If you’re any kind of big deal, you’ve probably been coming for years. 

But the best one I’ve yet encountered happened when I made small talk with the woman sitting next to me as I changed back into my snow boots. It turned out that, like me, she lived in California–at least part-time. “But I don’t think I’ll stay there much longer,” she said, “due to the new tax law.” This was just an ice-cold flex. 

Because California’s newly proposed tax legislation? It only targets billionaires. 

Welcome to Davos.