PPC Pulse: Google’s Podcast Launch, Demand Gen, ChatGPT Ads via @sejournal, @brookeosmundson

Welcome to this week’s PPC Pulse. The big news this week centers on platform evolution: how advertisers get information, where ads show up, and what formats are gaining traction.

OpenAI announced it’s testing ads inside ChatGPT for the first time. Google launched a new podcast to help advertisers navigate platform changes. And Demand Gen added features designed to make video campaigns more actionable for commerce and travel advertisers.

Here’s what matters for advertisers and why.

Google Ads Launches “Ads Decoded” Podcast

Google is officially launching an ad-focused podcast, “Ads Decoded.” It’s aimed at helping advertisers better understand platform updates and AI-powered features.

As announced on LinkedIn, Ginny Marvin, Google Ads Liaison, will host the podcast. The first episode launches on Monday, Jan. 26, 2026.

In the announcement, Marvin stated:

“The response to our pilot episodes proved that there is a hunger for a different kind of conversation – one that moves past the headlines and announcements and into the mechanics and nuances of how things actually work.”

Throughout the podcast series, Marvin will bring in Google product managers and platform experts to discuss new features, answer community questions, and provide their unique insights on how updates work in practice.

The original pilot episode featured product managers discussing AI Max for Search campaigns and Performance Max (PMax) channel performance reporting.

Why This Matters For Advertisers

Google Ads has no shortage of product updates, but this podcast signals a shift in how those updates are communicated.

Instead of relying solely on blog posts, help center articles, and occasional webinars, Google is creating a recurring channel designed specifically with PPC marketers in mind, to better explain these features and why they matter.

For advertisers trying to keep up with the velocity of platform updates, this should be extremely useful. Product managers have the chance to explain more technical details that don’t always make it into official announcements from Google.

Hearing context directly from the team building these features adds clarity that marketers need.

The podcast also gives Google a way to address confusion or pushback on updates in real time, rather than waiting for feedback to bubble up through support channels or community forums.

For advertisers who prefer audio formats or need to stay current on platform updates without constantly checking multiple sources, “Ads Decoded” offers a centralized option worth adding to your lineup.

What PPC Professionals Are Saying

The feedback from advertisers on LinkedIn has been overwhelmingly positive, with a handful of marketers offering their enthusiasm and encouragement to Marvin.

Jonathan Milanes, founder of Proverve, said this is “long awaited,” and many others, including Tony Adam, founder and CEO of Visible Factors, can’t wait to tune in.

Ben Luong, director at Copperchunk Ltd, asked:

“Is there a way to ask questions or where do you get the questions from to answer?”

Marvin replied that marketers can drop their questions along the way, and will also try to “surface questions and answers that may be buried.”

Further reading: 25 Years Of Google Ads: Was It Better Then Or Now?

Demand Gen Adds New Features

Also this week, Google announced that several new features for Demand Gen campaigns are now live. These capabilities were previously previewed at Google Marketing Live 2025 back in May.

The updates include Shoppable CTV, attributed brand searches, and travel feeds. The features are designed to help advertisers reach new customers while being able to measure their impact more effectively.

  • Shoppable CTV: Users can now browse and purchase products directly while watching YouTube ads on connected TV screens. According to Google’s data, Demand Gen campaigns that include TV screens drive an average of 7% additional conversions at the same ROI.
  • Attributed Branded Searches: This feature is now available for Demand Gen. It’s meant to show the volume of your campaign’s branded searches on Google and/or YouTube to help quantify the impact of upper-funnel campaigns.
  • Travel Feeds: Advertisers can now connect their Hotel Center feed in Demand Gen campaigns to build dynamic video ads. The videos can feature hotel pricing, ratings, and availability.

Google cited LG Electronics as an example of Demand Gen’s effectiveness, noting that the company achieved a 24% higher conversion rate than its paid social campaigns, while reaching high-value customers at a 91% lower CPA.

Why This Matters For Advertisers

The long-awaited Demand Gen updates make this campaign type more actionable for commerce and travel advertisers, especially those who have been testing it but want more control over creative and measurement.

Shoppable CTV can help address one of the biggest challenges with connected TV advertising: measuring direct response. If viewers can browse and purchase without leaving the screen, that removes a layer of friction and makes TV inventory more accountable.

Attributed brand searches can help advertisers justify upper-funnel spend by showing how campaigns influenced search behavior, not just immediate last-click conversions. This is especially important for teams that need to prove incremental impact to stakeholders who are more accustomed to last-click attribution.

Travel feeds bring dynamic creative to video advertising in a way that mirrors how Shopping campaigns work for retail. Instead of generic hotel ads that can get lost in the noise, advertisers can now surface pricing and availability based on what users are actually searching for.

What PPC Professionals Are Saying

While advertisers are excited about these updates, there was some justified constructive feedback as well.

Jyll Saskin Gales, Google Ads Coach at Inside Google Ads, responded to Google:

“Please make Attributed Branded Searches more widely available! It’s by request via Google rep only right now, and it will be such a helpful metric to justify increased YouTube & Demand Gen investment.”

Alexandru Stambari, performance marketing specialist at ASBC Moldova, agreed that this is the right direction for Demand Gen, but “the real impact of Demand Gen still heavily depends on data quality, attribution, and feed setup.”

Further reading: Demand Gen Vs. Lead Gen: What Every CMO Needs To Know

ChatGPT To Begin Testing Ads In The US

In an official announcement last Friday, OpenAI confirmed it will begin testing ads in ChatGPT for Free and Go tier users in the coming weeks. This marks the first time ads will appear inside the ChatGPT experience.

Ads will appear at the bottom of responses, only when there’s a relevant sponsored product or service tied to the active conversation. They’ll be clearly labeled, visually separated from organic answers, and dismissible. Users can see why a particular ad is shown and turn off ad personalization entirely.

OpenAI was also explicit about where ads won’t appear:

  • No ads for users under 18.
  • No ads near sensitive or regulated topics (like health, mental health, or politics).

According to the release, conversations won’t be shared with advertisers, and user data won’t be sold. OpenAI also emphasized that advertising won’t influence ChatGPT’s responses.

Read our full coverage: ChatGPT To Begin Testing Ads In The United States

Why This Matters For Advertisers

For the first time in a while, we’re watching the birth of a completely new ad environment.

The context of these ads in ChatGPT is completely different from someone searching on Google or Bing.

For example, when someone asks ChatGPT for dinner recipes or travel recommendations, they’re likely in decision mode rather than simple research mode. The query itself is further down the funnel than most search queries. Typically, they’re looking for a solution they can act on.

If ads show up in that moment with strict relevance guardrails and zero ability to influence the answer itself, this resembles something more like a recommendation engine than a traditional search ad. The intent signal is there, but the buying mechanism doesn’t exist yet.

While this is something advertisers can’t plan for yet, what they should actually pay attention to is the framework OpenAI is setting.

They’re not opening this up to everyone. They’re not letting advertisers target by conversation history. They’re explicitly saying ads won’t change answers. If they stick to that, it means the only way in is through genuine relevance to what someone is already trying to do.

What PPC Professionals Are Saying

There’s been no shortage of comments and opinions from PPC marketers surrounding this topic.

A mix of excitement and scrutiny seemed to be the theme of users’ comments.

A highly active LinkedIn post from Adriaan Dekker, co-founder of The PPC Talent Network, drew 798 likes, 78 reposts, and 51 comments. A recap of the reactions is summarized below.

Ofer Miller, performance marketing team lead at TestGorilla, stated:

“This is interesting, but I’m more interested in seeing their targeting methods and audience building tools: keywords? Topics? Demographics? Also I’d argue that it’ll start more as a B2C tool as the majority of companies and professionals who use GPT (if they’re using it, many are in Claude/Perplexity) will have a paid account, so no B2B relevancy.”

Some practitioners, like Joseph Williams, performance lead at ZIGGY, called this “exciting times for paid advertising,” and Alex R., platform & services director at Vibetrace, seemed excited for “new opportunities to make money.”

Aaron Levy, evangelist at Optmyzr, shared his perspective on the fact that Google hasn’t yet announced ads in Gemini. His opinion is that the tech “just isn’t there yet and ads will feel intrusive.” He continued:

“It would be foolish of us to dismiss Google for not being a first mover, while we as advertisers often lament them releasing products too early.”

Theme Of The Week: Platforms Are Adapting To New Behaviors

This week’s updates show platforms responding to shifts in how people discover products and consume information – both for marketers and consumers.

Google’s new ad-focused podcast will help advertisers keep up with platform changes in a unique way, with more in-depth information. Demand Gen’s newly available features make video campaigns more measurable and adapt to how consumers research and buy. Lastly, ChatGPT is testing whether ads can exist inside a conversational interface without breaking trust.

In each case, the platforms are adapting to behaviors that are already happening. People are using AI for product research. Advertisers are struggling to stay current on platform updates. Video is becoming more shoppable.

Featured Image: beast01/Shutterstock

BuddyPress WordPress Vulnerability May Impact Up To 100,000 Sites via @sejournal, @martinibuster

A newly disclosed security vulnerability affects the BuddyPress plugin, a WordPress plugin installed on over 100,000 websites. The vulnerability, given a threat level rating of 7.3 (high), enables unauthenticated attackers to execute arbitrary shortcodes.

BuddyPress WordPress Plugin

The BuddyPress plugin enables WordPress sites to create community features such as user profiles, activity streams, private messaging, and groups. It is commonly used on membership sites and online communities and is installed on more than 100,000 WordPress websites.

BuddyPress has a good track record with regard to vulnerabilities. There was only one vulnerability reported for the entire year of 2025, which was a relatively mild medium threat vulnerability, ranked at a 5.3 threat level on a scale of 1-10.

Unauthenticated Arbitrary Shortcode Execution

The vulnerability can be exploited by unauthenticated attackers. An attacker does not need a WordPress account or any level of user access to trigger the issue.

The BuddyPress plugin is vulnerable to arbitrary shortcode execution in all versions up to and including 14.3.3. That means that an attacker can execute shortcodes on the website. Shortcodes are used by WordPress to add dynamic functionality to pages and posts. Because the plugin does not properly validate input before executing shortcodes, attackers can cause the site to run shortcodes they are not authorized to use.

The vulnerability is caused by missing validation before user-supplied input is passed to the do_shortcode function.

Wordfence described the issue:

“The BuddyPress plugin for WordPress is vulnerable to arbitrary shortcode execution in all versions up to, and including, 14.3.3. This is due to the software allowing users to execute an action that does not properly validate a value before running do_shortcode. This makes it possible for unauthenticated attackers to execute arbitrary shortcodes.”

This means that attackers can trigger a shortcode which in turn will carry out whatever action it is supposed to run, which in the worst case scenario could expose restricted site features or functionality. Depending on the shortcodes available on a site, this can enable attackers to access sensitive information, modify site content, or interact with other plugins in unintended ways.
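The fix Wordfence describes amounts to an allowlist check before any shortcode handler runs: user-supplied input should never reach the execution step unless the requested shortcode is explicitly permitted in that context. Here is a minimal, hypothetical sketch of that pattern in Python (BuddyPress itself is PHP, and the names below are illustrative, not the plugin's actual code):

```python
# Hypothetical sketch of allowlist validation before shortcode execution.
# ALLOWED_SHORTCODES and the handlers dict are illustrative stand-ins for
# whatever a given page context legitimately permits.

ALLOWED_SHORTCODES = {"gallery", "caption"}  # shortcodes this context may run

def run_shortcode(tag: str, handlers: dict) -> str:
    """Execute a shortcode handler only if the tag is explicitly allowed."""
    if tag not in ALLOWED_SHORTCODES:
        # Reject anything not on the allowlist instead of executing it blindly.
        raise PermissionError(f"Shortcode [{tag}] is not permitted here")
    return handlers[tag]()

handlers = {
    "gallery": lambda: "<div class='gallery'>...</div>",
    "user_export": lambda: "sensitive data",  # must never run unauthenticated
}

print(run_shortcode("gallery", handlers))       # allowed: handler executes
try:
    run_shortcode("user_export", handlers)      # blocked by the allowlist
except PermissionError as exc:
    print(exc)
```

The vulnerable behavior is the opposite of this sketch: the input is passed straight to the execution step (WordPress's do_shortcode) with no such check, so any shortcode registered on the site becomes reachable by an unauthenticated attacker.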

The vulnerability does not depend on special server settings or optional configurations. Any site running a vulnerable version of the plugin is affected.

The issue was patched in BuddyPress version 14.3.4. Users of the plugin should update to version 14.3.4 or newer to fix the vulnerability.

Featured Image by Shutterstock/Login

Yann LeCun’s new venture is a contrarian bet against large language models  

Yann LeCun is a Turing Award recipient and a top AI researcher, but he has long been a contrarian figure in the tech world. He believes that the industry’s current obsession with large language models is wrong-headed and will ultimately fail to solve many pressing problems. 

Instead, he thinks we should be betting on world models—a different type of AI that accurately reflects the dynamics of the real world. He is also a staunch advocate for open-source AI and criticizes the closed approach of frontier labs like OpenAI and Anthropic. 

Perhaps it’s no surprise, then, that he recently left Meta, where he had served as chief scientist for FAIR (Fundamental AI Research), the company’s influential research lab that he founded. Meta has struggled to gain much traction with its open-source AI model Llama and has seen internal shake-ups, including the controversial acquisition of ScaleAI. 

LeCun sat down with MIT Technology Review in an exclusive online interview from his Paris apartment to discuss his new venture, life after Meta, the future of artificial intelligence, and why he thinks the industry is chasing the wrong ideas. 

Both the questions and answers below have been edited for clarity and brevity.

You’ve just announced a new company, Advanced Machine Intelligence (AMI).  Tell me about the big ideas behind it.

It is going to be a global company, but headquartered in Paris. You pronounce it “ami”—it means “friend” in French. I am excited. There is a very high concentration of talent in Europe, but it is not always given a proper environment to flourish. And there is certainly a huge demand from the industry and governments for a credible frontier AI company that is neither Chinese nor American. I think that is going to be to our advantage.

So an ambitious alternative to the US-China binary we currently have. What made you want to pursue that third path?

Well, there are sovereignty issues for a lot of countries, and they want some control over AI. What I’m advocating is that AI is going to become a platform, and most platforms tend to become open-source. Unfortunately, that’s not really the direction the American industry is taking. Right? As the competition increases, they feel like they have to be secretive. I think that is a strategic mistake.

It’s certainly true for OpenAI, which went from very open to very closed, and Anthropic has always been closed. Google was sort of a little open. And then Meta, we’ll see. My sense is that it’s not going in a positive direction at this moment.

Simultaneously, China has completely embraced this open approach. So all leading open-source AI platforms are Chinese, and the result is that academia and startups, outside of the US, have basically embraced Chinese models. There’s nothing wrong with that—you know, Chinese models are good. Chinese engineers and scientists are great. But you know, if there is a future in which all of our information diet is being mediated by AI assistants, and the choice is either English-speaking models produced by proprietary companies always close to the US or Chinese models which may be open-source but need to be fine-tuned so that they answer questions about Tiananmen Square in 1989—you know, it’s not a very pleasant and engaging future.

They [the future models] should be able to be fine-tuned by anyone and produce a very high diversity of AI assistants, with different linguistic abilities and value systems and political biases and centers of interest. You need a high diversity of assistants for the same reason that you need a high diversity of press.

That is certainly a compelling pitch. How are investors buying that idea so far?

They really like it. A lot of venture capitalists are very much in favor of this idea of open-source, because they know for a lot of small startups, they really rely on open-source models. They don’t have the means to train their own model, and it’s kind of dangerous for them strategically to embrace a proprietary model.

You recently left Meta. What’s your view on the company and Mark Zuckerberg’s leadership? There’s a perception that Meta has fumbled its AI advantage.

I think FAIR [LeCun’s lab at Meta] was extremely successful in the research part. Where Meta was less successful is in picking up on that research and pushing it into practical technology and products. Mark made some choices that he thought were the best for the company. I may not have agreed with all of them. For example, the robotics group at FAIR was let go, which I think was a strategic mistake. But I’m not the director of FAIR. People make decisions rationally, and there’s no reason to be upset.

So, no bad blood? Could Meta be a future client for AMI?

Meta might be our first client! We’ll see. The work we are doing is not in direct competition. Our focus on world models for the physical world is very different from their focus on generative AI and LLMs.

You were working on AI long before LLMs became a mainstream approach. But since ChatGPT broke out, LLMs have become almost synonymous with AI.

Yes, and we are going to change that. The public face of AI, perhaps, is mostly LLMs and chatbots of various types. But the latest ones of those are not pure LLMs. They are LLM plus a lot of things, like perception systems and code that solves particular problems. So we are going to see LLMs as kind of the orchestrator in systems, a little bit.

Beyond LLMs, there is a lot of AI behind the scenes that runs a big chunk of our society. There are driver-assistance programs in cars, quick-turnaround MRI imaging, algorithms that drive social media—that’s all AI.

You have been vocal in arguing that LLMs can only get us so far. Do you think LLMs are overhyped these days? Can you summarize to our readers why you believe that LLMs are not enough?

There is a sense in which they have not been overhyped, which is that they are extremely useful to a lot of people, particularly if you write text, do research, or write code. LLMs manipulate language really well. But people have had this illusion, or delusion, that it is a matter of time until we can scale them up to having human-level intelligence, and that is simply false.

The truly difficult part is understanding the real world. This is the Moravec Paradox (a phenomenon observed by the computer scientist Hans Moravec in 1988): What’s easy for us, like perception and navigation, is hard for computers, and vice versa. LLMs are limited to the discrete world of text. They can’t truly reason or plan, because they lack a model of the world. They can’t predict the consequences of their actions. This is why we don’t have a domestic robot that is as agile as a house cat, or a truly autonomous car.

We are going to have AI systems that have humanlike and human-level intelligence, but they’re not going to be built on LLMs, and it’s not going to happen next year or two years from now. It’s going to take a while. There are major conceptual breakthroughs that have to happen before we have AI systems that have human-level intelligence. And that is what I’ve been working on. And this company, AMI Labs, is focusing on the next generation.

And your solution is world models and JEPA architecture (JEPA, or “joint embedding predictive architecture,” is a learning framework that trains AI models to understand the world, created by LeCun while he was at Meta). What’s the elevator pitch?

The world is unpredictable. If you try to build a generative model that predicts every detail of the future, it will fail. JEPA is not generative AI. It is a system that learns to represent videos really well. The key is to learn an abstract representation of the world and make predictions in that abstract space, ignoring the details you can’t predict. That’s what JEPA does. It learns the underlying rules of the world from observation, like a baby learning about gravity. This is the foundation for common sense, and it’s the key to building truly intelligent systems that can reason and plan in the real world. The most exciting work so far on this is coming from academia, not the big industrial labs stuck in the LLM world.

The lack of non-text data has been a problem in taking AI systems further in understanding the physical world. JEPA is trained on videos. What other kinds of data will you be using?

Our systems will be trained on video, audio, and sensor data of all kinds—not just text. We are working with various modalities, from the position of a robot arm to lidar data to audio. I’m also involved in a project using JEPA to model complex physical and clinical phenomena. 

What are some of the concrete, real-world applications you envision for world models?

The applications are vast. Think about complex industrial processes where you have thousands of sensors, like in a jet engine, a steel mill, or a chemical factory. There is no technique right now to build a complete, holistic model of these systems. A world model could learn this from the sensor data and predict how the system will behave. Or think of smart glasses that can watch what you’re doing, identify your actions, and then predict what you’re going to do next to assist you. This is what will finally make agentic systems reliable. An agentic system that is supposed to take actions in the world cannot work reliably unless it has a world model to predict the consequences of its actions. Without it, the system will inevitably make mistakes. This is the key to unlocking everything from truly useful domestic robots to Level 5 autonomous driving.

Humanoid robots are all the rage recently, especially ones built by companies from China. What’s your take?

There are all these brute-force ways to get around the limitations of learning systems, which require inordinate amounts of training data to do anything. So the secret of all the companies getting robots to do kung fu or dance is that the moves are all planned in advance. But frankly, nobody—absolutely nobody—knows how to make those robots smart enough to be useful. Take my word for it.


You need an enormous amount of tele-operation training data for every single task, and when the environment changes a little bit, it doesn’t generalize very well. What this tells us is we are missing something very big. The reason why a 17-year-old can learn to drive in 20 hours is because they already know a lot about how the world behaves. If we want a generally useful domestic robot, we need systems to have a kind of good understanding of the physical world. That’s not going to happen until we have good world models and planning.

There’s a growing sentiment that it’s becoming harder to do foundational AI research in academia because of the massive computing resources required. Do you think the most important innovations will now come from industry?

No. LLMs are now technology development, not research. It’s true that it’s very difficult for academics to play an important role there because of the requirements for computation, data access, and engineering support. But it’s a product now. It’s not something academia should even be interested in. It’s like speech recognition in the early 2010s—it was a solved problem, and the progress was in the hands of industry. 

What academia should be working on is long-term objectives that go beyond the capabilities of current systems. That’s why I tell people in universities: Don’t work on LLMs. There is no point. You’re not going to be able to rival what’s going on in industry. Work on something else. Invent new techniques. The breakthroughs are not going to come from scaling up LLMs. The most exciting work on world models is coming from academia, not the big industrial labs. The whole idea of using attention circuits in neural nets came out of the University of Montreal. That research paper started the whole revolution. Now that the big companies are closing up, the breakthroughs are going to slow down. Academia needs access to computing resources, but they should be focused on the next big thing, not on refining the last one.

You wear many hats: professor, researcher, educator, public thinker … Now you just took on a new one. What is that going to look like for you?

I am going to be the executive chairman of the company, and Alex LeBrun [a former colleague from Meta AI] will be the CEO. It’s going to be LeCun and LeBrun—it’s nice if you pronounce it the French way.

I am going to keep my position at NYU. I teach one class per year, and I have PhD students and postdocs, so I am going to stay based in New York. But I go to Paris pretty often because of my lab.

Does that mean that you won’t be very hands-on?

Well, there’s two ways to be hands-on. One is to manage people day to day, and another is to actually get your hands dirty in research projects, right? 

I can do management, but I don’t like doing it. This is not my mission in life. It’s really to make science and technology progress as far as we can, inspire other people to work on things that are interesting, and then contribute to those things. So that has been my role at Meta for the last seven years. I founded FAIR and led it for four to five years. I kind of hated being a director. I am not good at this career management thing. I’m much more visionary and a scientist.

What makes Alex LeBrun the right fit?

Alex is a serial entrepreneur; he’s built three successful AI companies. The first he sold to Microsoft; the second to Facebook, where he was head of the engineering division of FAIR in Paris. He then left to create Nabla, a very successful company in the health-care space. When I offered him the chance to join me in this effort, he accepted almost immediately. He has the experience to build the company, allowing me to focus on science and technology. 

You’re headquartered in Paris. Where else do you plan to have offices?

We are a global company. There’s going to be an office in North America.

New York, hopefully?

New York is great. That’s where I am, right? And it’s not Silicon Valley. Silicon Valley is a bit of a monoculture.

What about Asia? I’m guessing Singapore, too?

Probably, yeah. I’ll let you guess. 

And how are you attracting talent?

We don’t have any issue recruiting. There are a lot of people in the AI research community who think the future of AI is in world models. Those people, regardless of pay package, will be motivated to come work for us because they believe in the technological future we are building. We’ve already recruited people from places like OpenAI, Google DeepMind, and xAI.

I heard that Saining Xie, a prominent researcher from NYU and Google DeepMind, might be joining you as chief scientist. Any comments?

Saining is a brilliant researcher. I have a lot of admiration for him. I hired him twice already. I hired him at FAIR, and I convinced my colleagues at NYU that we should hire him there. Let’s just say I have a lot of respect for him.

When will you be ready to share more details about AMI Labs, like financial backing or other core members?

Soon—in February, maybe. I’ll let you know.

Why 2026 is a hot year for lithium

In 2026, I’m going to be closely watching the price of lithium.

If you’re not in the habit of obsessively tracking commodity markets, I certainly don’t blame you. (Though the news lately definitely makes the case that minerals can have major implications for global politics and the economy.)

But lithium is worthy of a close look right now.

The metal is crucial for lithium-ion batteries used in phones and laptops, electric vehicles, and large-scale energy storage arrays on the grid. Prices have been on quite the roller coaster over the last few years, and they’re ticking up again after a low period. What happens next could have big implications for mining and battery technology.

Before we look ahead, let’s take a quick trip down memory lane. In 2020, global EV sales started to really take off, driving up demand for the lithium used in their batteries. Because of that growing demand and a limited supply, prices shot up dramatically, with lithium carbonate going from under $10 per kilogram to a high of roughly $70 per kilogram in just two years.

And the tech world took notice. During those high points, there was a ton of interest in developing alternative batteries that didn’t rely on lithium. I was writing about sodium-based batteries, iron-air batteries, and even experimental ones that were made with plastic.

Researchers and startups were also hunting for alternative ways to get lithium, including battery recycling and processing methods like direct lithium extraction (more on this in a moment).

But soon, prices crashed back down to earth. We saw lower-than-expected demand for EVs in the US, and developers ramped up mining and processing to meet demand. Through late 2024 and 2025, lithium carbonate was back around $10 a kilogram again. Avoiding lithium or finding new ways to get it suddenly looked a lot less crucial.

That brings us to today: lithium prices are ticking up again. So far, it’s nowhere close to the dramatic rise we saw a few years ago, but analysts are watching closely. Strong EV growth in China is playing a major role—EVs still make up about 75% of battery demand today. But growth in stationary storage, batteries for the grid, is also contributing to rising demand for lithium in both China and the US.

Higher prices could create new opportunities. The possibilities include alternative battery chemistries, specifically sodium-ion batteries, says Evelina Stoikou, head of battery technologies and supply chains at BloombergNEF. (I’ll note here that we recently named sodium-ion batteries to our 2026 list of 10 Breakthrough Technologies.)

It’s not just batteries, though. Another industry that could see big changes from a lithium price swing: extraction.

Today, most lithium is mined from rocks, largely in Australia, before being shipped to China for processing. There’s a growing effort to process the mineral in other places, though, as countries try to create their own lithium supply chains. Tesla recently confirmed that it’s started production at its lithium refinery in Texas, which broke ground in 2023. We could see more investment in processing plants outside China if prices continue to climb.

This could also be a key year for direct lithium extraction, as Katie Brigham wrote in a recent story for Heatmap. That technology uses chemical or electrochemical processes to extract lithium from brine (salty water that’s usually sourced from salt lakes or underground reservoirs), quickly and cheaply. Companies including Lilac Solutions, Standard Lithium, and Rio Tinto are all making plans or starting construction on commercial facilities this year in the US and Argentina. 

If there’s anything I’ve learned about following batteries and minerals over the past few years, it’s that predicting the future is impossible. But if you’re looking for tea leaves to read, lithium prices deserve a look. 

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

The Download: Yann LeCun’s new venture, and lithium’s on the rise

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Yann LeCun’s new venture is a contrarian bet against large language models    

Yann LeCun is a Turing Award recipient and a top AI researcher, but he has long been a contrarian figure in the tech world. He believes that the industry’s current obsession with large language models is wrong-headed and will ultimately fail to solve many pressing problems.  

Instead, he thinks we should be betting on world models—a different type of AI that accurately reflects the dynamics of the real world. Perhaps it’s no surprise, then, that he recently left Meta, where he had served as chief scientist for FAIR (Fundamental AI Research), the company’s influential research lab that he founded. 

LeCun sat down with MIT Technology Review in an exclusive online interview from his Paris apartment to discuss his new venture, life after Meta, the future of artificial intelligence, and why he thinks the industry is chasing the wrong ideas. Read the full interview

—Caiwei Chen

Why 2026 is a hot year for lithium

—Casey Crownhart

In 2026, I’m going to be closely watching the price of lithium.

If you’re not in the habit of obsessively tracking commodity markets, I certainly don’t blame you. (Though the news lately definitely makes the case that minerals can have major implications for global politics and the economy.)

But lithium is worthy of a close look right now. The metal is crucial for lithium-ion batteries used in phones and laptops, electric vehicles, and large-scale energy storage arrays on the grid. 

Prices have been on quite the roller coaster over the last few years, and they’re ticking up again. What happens next could have big implications for mining and battery technology. Read the full story.

This story first appeared in The Spark, our newsletter all about the tech we can use to combat the climate crisis. Sign up to receive it in your inbox every Wednesday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Trump has climbed down from his plan for the US to take Greenland 
To the relief of many across Europe. (BBC)
+ Trump says he’s agreed a deal to access Greenland’s rare earths. Experts say that’s ‘bonkers.’ (CNN)
+ European leaders are feeling flummoxed about what’s going on. (FT $)

2 Apple is reportedly developing a wearable AI pin
It’s still in the very early stages—but this could be a huge deal if it makes it to launch. (The Information $)
+ It’s also planning to revamp Siri and turn it into an AI chatbot. (Bloomberg $)
+ Are we ready to trust AI with our bodies? (MIT Technology Review)

3 CEOs say AI saves people time. Their employees disagree.
Many even say that it’s currently dragging down their productivity. (WSJ $)
+ The AI boom will increase US carbon emissions—but it doesn’t have to. (Wired $)
+ Let’s also not forget that large language models remain a security nightmare. (IEEE Spectrum)

4 This chart shows how measles cases are exploding in America
They’ve hit a 30-year high, with the US on track to lose its ‘elimination status.’ (Axios $)
Things are poised to get even worse this year. (Wired $)

5 Your first humanoid robot coworker will almost definitely be Chinese
But will it be truly useful? That’s the even bigger question. (Wired $)
+ Nvidia CEO Jensen Huang says Europe could do more to compete in robotics and AI. (CNBC)

6 Bezos’ Blue Origin is about to compete with Starlink
It plans to send the first ‘TeraWave’ satellites into space next year. (Reuters $)
On the ground in Ukraine’s largest Starlink repair shop. (MIT Technology Review)

7 Trump’s family made $1.4 billion off crypto last year 
Move along, no conflicts of interest to see here. (Bloomberg $)

8 Comic-Con has banned AI art
After an artist-led backlash last week. (404 Media)
Hundreds of creatives are warning against an AI future built on ‘theft on a grand scale’. (The Verge $)

9 What it’s like living without a smartphone for a month
Potentially blissful for you, but probably a bit annoying for everyone else. (The Guardian)
+ Why teens with ADHD are particularly vulnerable to the perils of social media. (Nature)

10 Elon Musk is feuding with a budget airline 
The airline is winning, in case you wondered. (WP $)

Quote of the day

“I wouldn’t edit anything about Donald Trump, because the man makes me insane.”

—Wikipedia founder Jimmy Wales tells Wired why he’s steering clear of the US President’s page.   

One more thing

How electricity could help tackle a surprising climate villain

Cement hides in plain sight—it’s used to build everything from roads and buildings to dams and basement floors. But it’s also a climate threat. Cement production accounts for more than 7% of global carbon dioxide emissions—more than sectors like aviation, shipping, or landfills.

One solution to this climate catastrophe might be coursing through the pipes at Sublime Systems. The startup is developing an entirely new way to make cement. Instead of heating crushed-up rocks in lava-hot kilns, Sublime’s technology zaps them in water with electricity, kicking off chemical reactions that form the main ingredients in its cement.

But it faces huge challenges: competing with established industry players, and persuading builders to use its materials in the first place. Read the full story.

—Casey Crownhart

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Earth may be a garbage fire, but space is beautiful
+ Do you know how to tie your shoelaces up properly? Are you sure?!
+ I defy British readers not to feel a pang of nostalgia at these crisp packets.
+ Going to bed around the same time every night seems to be a habit worth adopting. ($)

Dispatch from Davos: hot air, big egos and cold flexes

This story first appeared in The Debrief, our subscriber-only newsletter about the biggest news in tech by Mat Honan, Editor in Chief. Subscribe to read the next edition as soon as it lands.

It’s supposed to be frigid in Davos this time of year. Part of the charm is seeing the world’s elite tromp through the streets in respectable suits and snow boots. But this year it’s positively balmy, with highs in the mid-30s Fahrenheit, or a little over 1 °C. The current conditions when I flew out of New York were colder, and definitely snowier. I’m told this is due to something called a föhn, a dry, warm wind that’s been blowing across the Alps.

I’m no meteorologist, but it’s true that there is a lot of hot air here. 

On Wednesday, President Donald Trump arrived in Davos to address the assembly, and held forth for more than 90 minutes, weaving his way through remarks about the economy, Greenland, windmills, Switzerland, Rolexes, Venezuela, and drug prices. It was a talk lousy with gripes, grievances and outright falsehoods. 

One small example: Trump made a big deal of claiming that China, despite being the world leader in manufacturing windmill componentry, doesn’t actually use them for energy generation itself. In fact, it is the world leader in generation, as well. 

I did not get to watch this spectacle from the room itself. Sad! 

By the time I got to the Congress Hall where the address was taking place, there was already a massive scrum of people jostling to get in. 

I had just wrapped up moderating a panel on “the intelligent co-worker,” i.e., AI agents in the workplace. I was really excited for this one because the speakers represented a diverse cross-section of the AI ecosystem. Christoph Schweizer, CEO of BCG, had the macro strategic view; Enrique Lores, HP’s CEO, could speak to both hardware and large enterprises; Workera CEO Kian Katanforoosh had the inside view on workforce training and transformation; Manjul Shah, CEO of Hippocratic AI, addressed working in the high-stakes field of healthcare; and Kate Kallot, CEO of Amini AI, gave perspective on the global south, and Africa in particular. 

Interestingly, most of the panel shied away from using the term co-worker, and some even rejected the term agent. But the view they painted was definitely one of humans working alongside AI and augmenting what’s possible. Shah, for example, talked about having agents call 16,000 people in Texas during a heat wave to perform a health and safety check. It was a great discussion. You can watch the whole thing here.

But by the time it let out, the push of people outside the Congress Hall was already too thick for me to get in. In fact I couldn’t even get into a nearby overflow room. I did make it into a third overflow room, but getting in meant navigating my way through a mass of people, so jammed in tight together that it reminded me of being at a Turnstile concert. 

The speech blew way past its allotted time, and I had to step out early to get to yet another discussion. Walking through the halls while Trump spoke was a truly surreal experience. He had captured the attention of the gathered global elite. I don’t think I saw a single person not staring at a laptop, or phone, or iPad, all watching the same video. 

Trump is speaking again on Thursday in a previously unscheduled address to announce his Board of Peace. As is (I heard) Elon Musk. So it’s shaping up to be another big day for elite attention capture. 

I should say, though, there are elites, and then there are elites. And there are all sorts of ways of sorting out who is who. Your badge color is one of them. I have a white participant badge, because I was moderating panels. This gets you in pretty much anywhere and therefore is its own sort of status symbol. Where you are staying is another. I’m in Klosters, a neighboring town that’s a 40 minute train ride away from the Congress Centre. Not so elite. 

There are more subtle ways of status sorting, too. Yesterday I learned that when people ask if this is your first time at Davos, it’s sometimes meant as a way of trying to figure out how important you are. If you’re any kind of big deal, you’ve probably been coming for years. 

But the best one I’ve yet encountered happened when I made small talk with the woman sitting next to me as I changed back into my snow boots. It turned out that, like me, she lived in California–at least part time. “But I don’t think I’ll stay there much longer,” she said, “due to the new tax law.” This was just an ice cold flex. 

Because California’s newly proposed tax legislation? It only targets billionaires. 

Welcome to Davos.

“Dr. Google” had its issues. Can ChatGPT Health do better?

    For the past two decades, there’s been a clear first step for anyone who starts experiencing new medical symptoms: Look them up online. The practice was so common that it gained the pejorative moniker “Dr. Google.” But times are changing, and many medical-information seekers are now using LLMs. According to OpenAI, 230 million people ask ChatGPT health-related queries each week. 

    That’s the context around the launch of OpenAI’s new ChatGPT Health product, which debuted earlier this month. It landed at an inauspicious time: Two days earlier, the news website SFGate had broken the story of Sam Nelson, a teenager who died of an overdose last year after extensive conversations with ChatGPT about how best to combine various drugs. In the wake of both pieces of news, multiple journalists questioned the wisdom of relying for medical advice on a tool that could cause such extreme harm.

    Though ChatGPT Health lives in a separate sidebar tab from the rest of ChatGPT, it isn’t a new model. It’s more like a wrapper that provides one of OpenAI’s preexisting models with guidance and tools it can use to provide health advice—including some that allow it to access a user’s electronic medical records and fitness app data, if granted permission. There’s no doubt that ChatGPT and other large language models can make medical mistakes, and OpenAI emphasizes that ChatGPT Health is intended as an additional support, rather than a replacement for one’s doctor. But when doctors are unavailable or unable to help, people will turn to alternatives. 

    Some doctors see LLMs as a boon for medical literacy. The average patient might struggle to navigate the vast landscape of online medical information—and, in particular, to distinguish high-quality sources from polished but factually dubious websites—but LLMs can do that job for them, at least in theory. Treating patients who had searched for their symptoms on Google required “a lot of attacking patient anxiety [and] reducing misinformation,” says Marc Succi, an associate professor at Harvard Medical School and a practicing radiologist. But now, he says, “you see patients with a college education, a high school education, asking questions at the level of something an early med student might ask.”

    The release of ChatGPT Health, and Anthropic’s subsequent announcement of new health integrations for Claude, indicate that the AI giants are increasingly willing to acknowledge and encourage health-related uses of their models. Such uses certainly come with risks, given LLMs’ well-documented tendencies to agree with users and make up information rather than admit ignorance. 

    But those risks also have to be weighed against potential benefits. There’s an analogy here to autonomous vehicles: When policymakers consider whether to allow Waymo in their city, the key metric is not whether its cars are ever involved in accidents but whether they cause less harm than the status quo of relying on human drivers. If Dr. ChatGPT is an improvement over Dr. Google—and early evidence suggests it may be—it could potentially lessen the enormous burden of medical misinformation and unnecessary health anxiety that the internet has created.

    Pinning down the effectiveness of a chatbot such as ChatGPT or Claude for consumer health, however, is tricky. “It’s exceedingly difficult to evaluate an open-ended chatbot,” says Danielle Bitterman, the clinical lead for data science and AI at the Mass General Brigham health-care system. Large language models score well on medical licensing examinations, but those exams use multiple-choice questions that don’t reflect how people use chatbots to look up medical information.

    Sirisha Rambhatla, an assistant professor of management science and engineering at the University of Waterloo, attempted to close that gap by evaluating how GPT-4o responded to licensing exam questions when it did not have access to a list of possible answers. Medical experts who evaluated the responses scored only about half of them as entirely correct. But multiple-choice exam questions are designed to be tricky enough that the answer options don’t give them entirely away, and they’re still a pretty distant approximation for the sort of thing that a user would type into ChatGPT.

    A different study, which tested GPT-4o on more realistic prompts submitted by human volunteers, found that it answered medical questions correctly about 85% of the time. When I spoke with Amulya Yadav, an associate professor at Pennsylvania State University who runs the Responsible AI for Social Emancipation Lab and led the study, he made it clear that he wasn’t personally a fan of patient-facing medical LLMs. But he freely admits that, technically speaking, they seem up to the task—after all, he says, human doctors misdiagnose patients 10% to 15% of the time. “If I look at it dispassionately, it seems that the world is gonna change, whether I like it or not,” he says.

    For people seeking medical information online, Yadav says, LLMs do seem to be a better choice than Google. Succi, the radiologist, also concluded that LLMs can be a better alternative to web search when he compared GPT-4’s responses to questions about common chronic medical conditions with the information presented in Google’s knowledge panel, the information box that sometimes appears on the right side of the search results.

    Since Yadav’s and Succi’s studies appeared online, in the first half of 2025, OpenAI has released multiple new versions of GPT, and it’s reasonable to expect that GPT-5.2 would perform even better than its predecessors. But the studies do have important limitations: They focus on straightforward, factual questions, and they examine only brief interactions between users and chatbots or web search tools. Some of the weaknesses of LLMs—most notably their sycophancy and tendency to hallucinate—might be more likely to rear their heads in more extensive conversations and with people who are dealing with more complex problems. Reeva Lederman, a professor at the University of Melbourne who studies technology and health, notes that patients who don’t like the diagnosis or treatment recommendations that they receive from a doctor might seek out another opinion from an LLM—and the LLM, if it’s sycophantic, might encourage them to reject their doctor’s advice.

    Some studies have found that LLMs will hallucinate and exhibit sycophancy in response to health-related prompts. For example, one study showed that GPT-4 and GPT-4o will happily accept and run with incorrect drug information included in a user’s question. In another, GPT-4o frequently concocted definitions for fake syndromes and lab tests mentioned in the user’s prompt. Given the abundance of medically dubious diagnoses and treatments floating around the internet, these patterns of LLM behavior could contribute to the spread of medical misinformation, particularly if people see LLMs as trustworthy.

    OpenAI has reported that the GPT-5 series of models is markedly less sycophantic and prone to hallucination than its predecessors, so the results of these studies might not apply to ChatGPT Health. The company also evaluated the model that powers ChatGPT Health on its responses to health-specific questions, using its publicly available HealthBench benchmark. HealthBench rewards models that express uncertainty when appropriate, recommend that users seek medical attention when necessary, and refrain from causing users unnecessary stress by telling them their condition is more serious than it truly is. It’s reasonable to assume that the model underlying ChatGPT Health exhibited those behaviors in testing, though Bitterman notes that some of the prompts in HealthBench were generated by LLMs, not users, which could limit how well the benchmark translates to the real world.

    An LLM that avoids alarmism seems like a clear improvement over systems that have people convincing themselves they have cancer after a few minutes of browsing. And as large language models, and the products built around them, continue to develop, whatever advantage Dr. ChatGPT has over Dr. Google will likely grow. The introduction of ChatGPT Health is certainly a move in that direction: By looking through your medical records, ChatGPT can potentially gain far more context about your specific health situation than could be included in any Google search, although numerous experts have cautioned against giving ChatGPT that access for privacy reasons.

    Even if ChatGPT Health and other new tools do represent a meaningful improvement over Google searches, they could still conceivably have a negative effect on health overall. Much as automated vehicles, even if they are safer than human-driven cars, might still prove a net negative if they encourage people to use public transit less, LLMs could undermine users’ health if they induce people to rely on the internet instead of human doctors, even if they do increase the quality of health information available online.

    Lederman says that this outcome is plausible. In her research, she has found that members of online communities centered on health tend to put their trust in users who express themselves well, regardless of the validity of the information they are sharing. Because ChatGPT communicates like an articulate person, some people might trust it too much, potentially to the exclusion of their doctor. But LLMs are certainly no replacement for a human doctor—at least not yet.

    Marketing to Humans and Machines

    Agentic shopping presents ecommerce marketers with a familiar problem in a new form.

    The promise is simple enough. AI agents act on behalf of shoppers to search, compare, select, and even purchase products. These agents will use a shopper’s preferences — stated and inferred — rather than browsing products from digital shelves.

    McKinsey & Company describes it this way: “Companies have spent decades refining consumer journeys, fine-tuning every click, scroll, and tap. But in the era of agentic commerce, the consumer no longer travels alone. Their digital proxies now navigate the commerce ecosystem.”

    2 Targets

    In effect, this means ecommerce marketers have two targets: a human and a machine.

    It’s a familiar scenario. Marketers seeking organic traffic have long sought shoppers and appeased machines, e.g., search engines.

    An online pet supply company wants Google to place its dripless water bowls at the top of search results and humans to click the listing.

    In much the same way, this retailer now wants an AI shopping agent to offer that dripless bowl when a consumer asks a genAI platform how to keep a Doberman puppy from sloshing water all over the kitchen.

    This two-prong approach paints a helpful picture, as many ecommerce businesses wonder how they will drive sales when chatbots do most of the shopping.

    Marketing to Machines

    For merchants, the most important component — shopping agents — will likely come via platforms.

    Few ecommerce businesses will integrate their catalogs directly into every LLM or shopping agent. Instead, commerce platforms and marketplaces will be the conduits. Merchants will publish structured product data once and let those intermediaries distribute it into agentic ecosystems.

    This is already happening. Shopify, for example, is building an agentic shopping infrastructure that allows agents to tap merchant catalogs and build carts.

    Marketplaces will play a similar role. Amazon and Walmart already serve as product discovery engines and have no incentive to surrender that position.

    A recent dispute between Amazon and Perplexity over agentic shopping tools underscores how aggressively marketplaces may defend their infrastructure and customer relationships.

    The implication for ecommerce marketers is practical. Marketing to machines will be a lot of structured data work. Product feeds, catalog hygiene, and API-ready commerce systems will become part of the visibility strategy, much as technical search engine optimization was necessary when Google dominated.
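As a concrete illustration of that structured data work, a schema.org Product record in JSON-LD is the kind of machine-readable listing merchants already publish for search engines, and the same discipline carries over to agent-facing feeds. This is a hypothetical sketch: the product name, SKU, brand, and price below are invented for the example, and real feeds would follow the specific schema a given platform or marketplace requires.

```json
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Dripless Water Bowl",
  "sku": "DWB-100",
  "description": "Slosh-resistant water bowl for large-breed puppies.",
  "brand": { "@type": "Brand", "name": "ExamplePetCo" },
  "offers": {
    "@type": "Offer",
    "price": "24.99",
    "priceCurrency": "USD",
    "availability": "https://schema.org/InStock"
  }
}
```

The point is less the exact fields than the habit: a clean, complete, consistently structured record is what lets an intermediary platform hand a product to a shopping agent at all.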

    Marketing to People

    In agentic commerce, the first tactic is influencing the AI. The second is influencing the person typing the prompt.

    AI agents select products based on users’ stated needs and inferred preferences. Merchants, then, have a clear objective: Shape what shoppers want, how they describe it, and which brands or shops they trust before asking.

    This, too, is not new. It resembles brand demand in Google search results. A shopper will get one set of results from typing “best dog bowl” and another for “best dripless dog bowl Chewy.”

    In agentic commerce, brand-building and preference-setting become even more valuable because they guide the shopper’s intent. And that intent, in turn, influences the agent.

    Here’s how merchants exert that influence.

    Advertising. Social and video ads foster familiarity, define product categories, and introduce specific terminology.

    In time, that language becomes prompt phrasing. A merchant may not control the AI’s model, but it can control whether its product name, differentiator, or problem statement becomes part of a shopper’s vocabulary.

    Content marketing. Buying guides, comparisons, and problem-solving articles seed the concepts that shoppers recall later in prompts.

    Personalized lifecycle marketing and email marketing may become even more critical because it represents an owned audience and an opportunity to identify shopper preferences.

    Merchant systems, including AI, can use purchase history, browsing signals, and customer data to anticipate needs and recommend actions. The better a merchant is at retention, the more likely it is to influence the prompt, or, for that matter, bypass it altogether.

    Personalized lifecycle marketing emphasizes individuals, according to Matthew Fanelli, chief revenue officer at Digital Remedy. Shoppers, Fanelli said, are like snowflakes: beautiful and unique in their own ways.

    Influencer marketing is another prompt-shaper. Fanelli described it as a third prong, driven by peer behavior and social proof. “What is my peer group doing? What are they buying? How do I get in with them?” he said.

    Fanelli expects a trifecta of forces to reshape ecommerce: more choice, shorter attention spans, and more connected devices. “That’s when you start to get agents,” he said. For marketers, the response is not panic but discipline. Create demand from humans and structure data for machines.

    TikTok US Deal Closes After Years Of Regulatory Uncertainty via @sejournal, @MattGSouthern

    A White House official said the US and China have finalized a deal to spin off TikTok’s US business to a consortium led by Oracle and Silver Lake, Fox Business reported Thursday. CNN reported the joint venture has been formally established and announced its leadership team.

    The closing comes ahead of a January 23 deadline created by Trump’s September executive order, which set a 120-day enforcement pause on the divest-or-ban law.

    What’s New

    TikTok said Adam Presser, previously the company’s head of operations and trust and safety, will be CEO. Will Farrell, who led privacy and security for the effort, will serve as Chief Security Officer.

    TikTok CEO Shou Chew outlined the ownership structure in a December internal memo to employees after signing binding agreements with investors.

    Under the new ownership structure, ByteDance retains just under 20% of the US business. Oracle, Silver Lake, and MGX, an Abu Dhabi-based AI investment firm, will each hold 15% stakes. Other investors in the consortium include Susquehanna, Dragoneer, and DFO, Michael Dell’s family office.

    A new seven-member board of directors with an American majority will govern the entity. The board will oversee data protection, content moderation, and algorithm security for US operations.

    Vice President JD Vance said in September the deal would value TikTok’s US operations at roughly $14 billion, though the final amount ByteDance received remains unclear.

    The algorithm question remains murky in public reporting. TikTok’s recommendation algorithm has been the central point of contention between the US and Chinese governments throughout the negotiations. The September executive order described US oversight of the technology, including requirements for algorithm retraining and monitoring, but specific implementation terms have not been publicly disclosed.

    Background

    The deal closes a chapter that spans two presidential administrations and multiple reversal points.

    President Biden signed a law in 2024 requiring ByteDance to divest TikTok’s US business or face a ban. The Supreme Court upheld that law in 2025. TikTok briefly went dark two days later before President Trump, on his first day in office, signed an executive order keeping the app running while his administration negotiated a sale.

    The current deal structure emerged from a framework announced in September, when the White House outlined terms that would create a US entity with majority American ownership while allowing ByteDance to maintain a minority stake.

    Why This Matters

    This should end more than five years of regulatory uncertainty for the 170 million Americans the White House says use TikTok and the businesses that depend on the platform for marketing and commerce.

    We first covered the TikTok ban timeline when the original executive order gave ByteDance 45 days to sell in August 2020. Then it was a potential Oracle deal that looked promising before falling apart. The pattern repeated through multiple administrations, executive orders, and court cases.

    For marketers who built strategies around TikTok, the resolution removes a persistent source of planning uncertainty. TikTok Shop, creator partnerships, and advertising campaigns can proceed without the backdrop of a potential shutdown.

    The ownership structure also creates a new dynamic. Oracle, which already provides data and computing services for TikTok’s US operations through Project Texas, now holds an equity stake and board-level oversight. That deeper integration could affect how the platform handles data practices and content policies going forward.

    Looking Ahead

    TikTok’s US operations will function as an independent entity responsible for data protection, algorithm security, and content moderation.

    TikTok has told employees that users and advertisers should see no immediate changes to the platform experience. Chew’s December memo indicated Americans would continue using TikTok as before and advertisers would maintain access to global audiences, according to multiple outlets that reviewed the document.

    The deal removes a sticking point in US-China relations at a time when tensions remain elevated on trade and technology issues. Whether this model becomes a template for other Chinese-owned platforms operating in the US remains to be seen.

    10Web WordPress Photo Gallery Plugin Vulnerability via @sejournal, @martinibuster

    A security advisory was published about a vulnerability in the Photo Gallery by 10Web plugin, which has over 200,000 installations. The vulnerability affects how the plugin handles image comments, exposing some sites to unauthorized data modification by unauthenticated attackers (meaning attackers do not need to register with the site).

    The Photo Gallery by 10Web plugin is used by WordPress sites to create and display image galleries, slideshows, and albums in a variety of layouts. It is used by photography sites, portfolios, and businesses that rely on visual content.

    About The Vulnerability

    The flaw can be exploited by unauthenticated visitors, meaning anyone can trigger the issue without logging in. This significantly increases exposure because there is no barrier to entry, such as registering with the website or attaining a higher permission level.

    It is important to note that image comments, where the vulnerability exists, are only available in the Pro version of the plugin. Sites that do not use the comments feature are not affected by this specific issue.

    What Went Wrong

    The vulnerability is caused by a missing capability check in the plugin’s delete_comment() function.

    The plugin does not verify whether a request to delete an image comment comes from someone who is allowed to perform that action. Normally, WordPress plugins are expected to confirm that a user has the appropriate permissions before modifying site content. That check is missing from this plugin.

    Because the plugin fails to perform this verification, it accepts deletion requests even when they come from unauthenticated users.
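
    The pattern described above can be illustrated with a short sketch. The actual plugin is written in PHP, where WordPress code would typically guard an action with a call such as current_user_can(); the Python example below is purely illustrative of what a missing capability check looks like and how the fix is structured. All function, capability, and data names here are hypothetical, not taken from the plugin.

```python
# In-memory stand-in for stored image comments (hypothetical data).
comments = {1: "Great shot!", 2: "Love the lighting."}

def delete_comment_vulnerable(comment_id, user=None):
    """The flaw: deletes without checking who is asking.
    Any visitor, logged in or not, can remove a comment."""
    return comments.pop(comment_id, None)

def delete_comment_fixed(comment_id, user=None):
    """The fix pattern: refuse the request unless the caller
    holds the required capability (name is illustrative)."""
    if user is None or "moderate_comments" not in user.get("capabilities", []):
        raise PermissionError("insufficient capability")
    return comments.pop(comment_id, None)
```

    The vulnerable version accepts a deletion request from anyone, which is the behavior Wordfence describes; the fixed version rejects the request before touching the data unless the caller has been verified.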

    What Attackers Can Do

    An attacker can delete arbitrary image comments from a site. The vulnerability has a severity rating of 5.3, a medium threat level. It does not enable a full website takeover or any other server compromise, but it does allow unauthorized deletion of image comments. For sites that rely on image comments for engagement, moderation history, or user interaction, this can result in data loss and disruption.

    The official Wordfence advisory explains the vulnerability:

    “The Photo Gallery by 10Web – Mobile-Friendly Image Gallery plugin for WordPress is vulnerable to unauthorized modification of data due to a missing capability check on the delete_comment() function in all versions up to, and including, 1.8.36. This makes it possible for unauthenticated attackers to delete arbitrary image comments. Note: comments functionality is only available in the Pro version of the plugin.”

    Which Versions Can Be Exploited

    The vulnerability affects all versions of the plugin up to and including version 1.8.36. The issue is tied specifically to the comment deletion functionality. Since image comments are only available in the Pro version of the plugin, exploitation is limited to sites running the Pro version with comments enabled.

    No special server configuration or user interaction is required beyond the plugin being active and vulnerable.

    What Site Owners Should Do

    A patch is available. Site owners should update the Photo Gallery by 10Web plugin to version 1.8.37 or later, which includes a security fix addressing this issue. If updating is not possible, disabling the plugin or the comments feature will prevent exploitation until the site can be patched.

    Updating the plugin is the only direct fix for this vulnerability.

    Featured Image by Shutterstock/Roman Samborskyi