When Advertising Shifts To Prompts, What Should Advertisers Do? via @sejournal, @siliconvallaeys

When I last wrote about Google AI Mode, my focus was on the big differentiators: conversational prompts, memory-driven personalization, and the crucial pivot from keywords to context.

As we see with the Q2 ad platform financial results below, this shift is rapidly reshaping performance advertising. While AI Mode means Google has to rethink how it makes money, it forces us advertisers to rethink something even more fundamental: our entire strategy.

In the article about AI Mode, I laid out how prompts are different from keywords, why “synthetic keywords” are really just a temporary band-aid, and how fewer clicks might just challenge the age-old cost-per-click (CPC) revenue model.

This follow-up is about what these changes truly mean for us as advertisers, and why holding onto that keyword-era mindset could cost us our competitive edge.

The Great Rewiring Of Search

The biggest shift since we first got keyword-targeted online advertising is now in full swing. People aren’t searching with relatively concise keywords anymore, the ones we optimized based on how Google used to weigh certain words in a query.

Large language models (LLMs) have pretty much removed the shackles from the search bar. Now, users can fire off prompts with hundreds of words, and add even more context.

Think about the 400,000-token context window of GPT-5, which translates to roughly 300,000 words. Thankfully, most people don’t need that much space to explain what they want, but they are speaking in full sentences now, stutters and all.

Google’s internal documentation on ads in AI Mode notes that early testers of AI Mode are asking queries that are two to three times as long as traditional Google searches.

And thanks to LLMs’ multi-modal capabilities, users are searching with images (Google reports 20 billion Lens searches per month), drawing sketches, and even sending video. They’re finding what they need in entirely new ways.

Increasingly, users aren’t just looking for a list of what might be relevant. They expect a guided answer from the AI, one that summarizes options based on their personal preferences. People are asking AI to help them decide, not just to find.

And that fundamental change in user behavior is now reshaping the very platforms where these searches happen, starting with Google.

The Impact On Google As The Main Ads Platform

All of this definitely poses a threat to Google’s primary revenue stream. But as I mentioned in a LinkedIn post, the traffic didn’t vanish; it just moved.

Users didn’t ditch Google; they simply stopped using it the way they did when keywords were king. Plus, we’re seeing new players emerge, and search itself has fragmented.

This creates a fresh challenge for us advertisers: How do we design campaigns that actually perform when intent originates in these wildly new ways?

What Q2 Earnings Reports Told Us About AI In Search

The Q2 earnings calls were packed with GenAI details. Some of the most jaw-dropping figures involved the expected infrastructure investments.

Microsoft announced plans to spend an eye-watering $30 billion on capital expenditures in the coming quarter, and Alphabet estimated an $85 billion budget for the next year. I guess we’ll all be clicking a lot of ads to help pay for that. So, where will those ads come from when keywords are slowly being replaced by prompts?

Google shared some numbers to illustrate the scale of this shift. AI Overviews already reach 2 billion users a month. AI Mode itself is up to 100 million. The real question is, how is AI actually enabling better ads, and thus improving monetization?

Google reports:

  • Over 90 Performance Max improvements in the past year drove 10%+ more conversions and value.
  • Google’s AI Max for Search campaigns show a 27% lift in conversions or value over exact or phrase matches.

Microsoft Ads tells a similar story. In Q2 2025, it reported:

  • $13 billion in AI-related ad revenue.
  • Copilot-powered ads drove 2.3 times more conversions than traditional formats.
  • Users were 53% more likely to convert within 30 minutes.

So, what’s an advertiser to do with all this?

What Advertisers Should Do

As I shared recently in a conversation with Kasim Aslam, these ecosystems are becoming intent originators. That old “search bar” is now a conversation, a screenshot, or even a voice command.

If your campaigns are still relying on waiting for someone to type a query, you’re showing up to the party late. Smart advertisers don’t just respond to intent; they predict it and position for it.

But how? Well, take a look at the Google products that are driving results for advertisers: They’re the newest AI-first offerings. Performance Max, for example, is keywordless advertising driven by feeds, creative, and audiences.

Another vital step for adapting to this shift is AI Max, which I’d call the least restrictive form of keyword advertising.

It blends elements of Dynamic Search Ads (DSAs), automatically created assets, and super broad keywords. This allows your ads to show up no matter how people search, even if they’re using those sprawling, multi-part prompts.

Sure, advertisers can still use today’s best practices, like reviewing search term reports and automatically created assets, then adding negatives or exclusions for the irrelevant ones. But let’s be honest, that’s a short-term, old-model approach.

As AI gains memory and contextual understanding, ads will be shown based on scenarios and user intent that isn’t even explicitly expressed.

Relying solely on negatives won’t cut it. The future demands that advertisers focus on getting involved earlier in the decision-making process and making sure the AI has all the right information to advocate for their brand.

Keywords Aren’t The Lever They Once Were

In the AI Mode era, prompts aren’t just simple queries; they’re rich, multi-turn conversations packed with context.

As I outlined in my last article, these interactions can pull in past sessions, images, and deeply personal preferences. No keyword list in the world can capture that level of nuance.

Tinuiti’s Q2 benchmark report shows Performance Max accounts for 59% of Shopping ad spend and delivers 18% higher click-through rates. This is a clear illustration that the platform is taking control of targeting.

And when structured feeds plus dynamic creative drive a 27% lift in conversions according to Google data, it’s because the creative itself is doing the targeting.

Those journeys happen out of sight, which is the biggest threat to advertisers whose strategies aren’t evolving.

The Real Danger: Invisible Decisions

One of my key takeaways from the AI Mode discussion was the risk of “zero-click” journeys. If the assistant delivers what a user needs inside the conversation, your brand might never get a visit.

According to Adobe Analytics, AI-powered referrals to U.S. retail sites grew 1,200% between July 2024 and February 2025. Traffic from these sources now doubles every 60 days.

These users:

  • Visit 12% more pages per session.
  • Bounce 23% less often.
  • Spend 45% more time browsing (especially in travel and finance verticals).

Even more importantly, 53% of users say they plan to rely on AI tools for shopping going forward.

In short, users are starting their journeys before they reach a traditional search engine, and they’re more engaged when they do. And winning in this environment means rethinking our levers for influence.

Why This Is An Opportunity, Not A Death Sentence

As I argued before, platforms aren’t killing keyword advertising; they’re evolving it. The advertisers winning now are leaning into the new levers:

Signals Over Keywords

  • Use customer relationship management (CRM) data to build high-intent audience lists.
  • Layer first-party data into automated campaign types through conversion value adjustments, audiences, or budget settings.
  • Optimize your product feed with rich attributes so AI has more to work with and knows exactly which products to recommend.
  • Ensure feed hygiene so LLMs have the most current data about your offers.
  • Enhance your website with more data for the LLMs to work with, like data tables, and schema.

Creative As Targeting

  • Build modular ad assets that AI can assemble dynamically: multiple headlines, descriptions, and images tailored to different audiences.
  • Test variations that align with different stages of the buying journey so you’re likely to show in more contextual scenarios across the entire consumer journey, not only at the end.

Measurement Beyond Clicks

  • Regularly evaluate the new metrics in Google Ads for AI Max and Performance Max. Changes roll out frequently, enabling smarter optimizations.
  • Track feed impression share by enabling these extra columns in Google Ads.
  • Monitor how often your products are surfaced in AI-driven recommendations, as with the recently updated AI Max report for “search terms and landing pages from AI Max.”
  • Focus your measurement on how well users are able to complete tasks, not just clicks.

The future isn’t about bidding on a query. It’s about supplying the AI with the best “raw ingredients” so you win the recommendation at the exact moment of decision.

That mindset shift is the real competitive advantage in the AI-first era.

The Bottom Line

My previous AI Mode post was about the mechanics of the shift. This one is about the mindset change required to survive it.

Keywords aren’t vanishing, but their role is shrinking fast. In an AI-driven, context-first search landscape, the brands that thrive will stop obsessing over what the user types and start shaping what the AI recommends.

If you can win that moment, you won’t just get found. You’ll get chosen.


When AI gets your brand wrong: Real examples and how to fix it

We’ve all asked a chatbot about a company’s services and seen it respond inaccurately, right? These errors aren’t just annoying; they can seriously hurt a business. AI misrepresentation is real: LLMs can serve users outdated information, or a virtual assistant might give false information in your name. Your brand could be at stake. Find out how AI misrepresents brands and what you can do to prevent it.

How does AI misrepresentation work?

AI misrepresentation occurs when chatbots and large language models distort a brand’s message or identity. This could happen when these AI systems find and use outdated or incomplete data. As a result, they show incorrect information, which leads to errors and confusion.

It’s not hard to imagine a virtual assistant providing incorrect product details because it was trained on old data. It might seem like a minor issue, but incidents like this can quickly lead to reputation issues.

Many factors lead to these inaccuracies. The most important one is outdated information: AI systems use data that might not reflect the latest changes in a business’s offerings or policies. When systems return that old data to potential customers, it creates a serious disconnect between the two and frustrates customers.

It’s not just outdated data; a lack of structured data on sites also plays a role. Search engines and AI systems favor clear, easy-to-find, and understandable information about brands. Without solid data, an AI might misrepresent brands or fail to keep up with changes. Schema markup is one option to help systems understand content and ensure a brand is properly represented.
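As an illustration only (the brand name, URLs, and social profiles below are hypothetical), a minimal schema.org Organization snippet can be built and serialized from Python, then embedded in a page so crawlers and LLMs get structured facts about the brand:

```python
import json

# Hypothetical example values: replace with your brand's real details.
org = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Brand",
    "url": "https://www.example.com",
    "sameAs": [
        "https://www.linkedin.com/company/example-brand",
        "https://x.com/examplebrand",
    ],
    "description": "Example Brand sells widgets and support plans.",
}

# Embed the result in a page inside <script type="application/ld+json">…</script>.
json_ld = json.dumps(org, indent=2)
```

Keeping fields like `description` and `sameAs` current is exactly the kind of feed hygiene that reduces the odds of an AI working from stale facts.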

Next up is consistency in branding. If your brand messaging is all over the place, this could confuse AI systems. The clearer you are, the better. Inconsistent messaging confuses AI and your customers, so it’s important to be consistent with your brand message on various platforms and outlets.

Different AI brand challenges

There are various ways AI failures can impact brands. AI tools and large language models collect information from sources and present it to build a representation of your brand. That means they can misrepresent your brand when the information they use is outdated or plain wrong. These errors can lead to a real disconnect between reality and what users see in the LLMs. It could also be that your brand doesn’t appear in AI search engines or LLMs for the terms where you need to appear.

It would hurt the ASICS brand if it weren’t mentioned in results like this

At the other end, chatbots and virtual assistants talk to users directly. This is a different risk. If a chatbot gives inaccurate answers, this could lead to serious issues with users and the outside world. Since chatbots interact directly with users, inaccurate responses can quickly damage trust and harm a brand’s reputation.

Real-world examples

AI misrepresenting brands is not some far-off theory; it is having an impact now. We’ve collected some real-world cases that show brands being affected by AI errors.

All of these cases show how various types of AI technology, from chatbots to LLMs, can misrepresent and thus hurt brands. The stakes can be high, ranging from misleading customers to ruining reputations. It’s good to read these examples to get a sense of how widespread these issues are. It might help you avoid similar mistakes and set up better strategies to manage your brand.

You read stories like this every week

Case 1: Air Canada’s chatbot dilemma

  • Case summary: Air Canada faced a significant issue when its AI chatbot misinformed a customer regarding bereavement fare policies. The chatbot, intended to streamline customer service, instead created confusion by providing outdated information.
  • Consequences: This erroneous advice led to the customer taking action against the airline, and a tribunal eventually ruled that Air Canada was liable for negligent misrepresentation. This case emphasized the importance of maintaining accurate, up-to-date databases for AI systems to draw upon, illustrating a major AI error in alignment between marketing and customer service that could be costly in terms of both reputation and finances.
  • Sources: Read more in Lexology and CMSWire.

Case 2: Meta & Character.AI’s deceptive AI therapists

  • Case summary: In Texas, AI chatbots, including those accessible via Meta and Character.AI, were marketed as competent therapists or psychologists, offering generic advice to children. This situation arose from AI errors in marketing and implementation.
  • Consequences: Authorities investigated the practice because they were concerned about privacy breaches and the ethical implications of promoting such sensitive services without proper oversight. The case highlights how AI can overpromise and underdeliver, causing legal challenges and reputational damage.
  • Sources: Details of the investigation can be found in The Times.

Case 3: FTC’s action on deceptive AI claims

  • Case summary: An online business was found to have falsely claimed its AI tools could enable users to earn substantial income, leading to significant financial deception.
  • Consequences: The fraudulent claims defrauded consumers by at least $25 million. This prompted legal action by the FTC and served as a stark example of how deceptive AI marketing practices can have severe legal and financial repercussions.
  • Sources: The full press release from the FTC can be found here.

Case 4: Unauthorized AI chatbots mimicking real people

  • Case summary: Character.AI faced criticism for deploying AI chatbots that mimicked real people, including deceased individuals, without consent.
  • Consequences: These actions caused emotional distress and sparked ethical debates regarding privacy violations and the boundaries of AI-driven mimicry.
  • Sources: More on this issue is covered in Wired.

Case 5: LLMs generating misleading financial predictions

  • Case summary: Large Language Models (LLMs) have occasionally produced misleading financial predictions, influencing potentially harmful investment decisions.
  • Consequences: Such errors highlight the importance of critical evaluation of AI-generated content in financial contexts, where inaccurate predictions can have wide-reaching economic impacts.
  • Sources: Find further discussion on these issues in the Promptfoo blog.

Case 6: Cursor’s AI customer support glitch

  • Case summary: Cursor, an AI-driven coding assistant by Anysphere, encountered issues when its customer support AI gave incorrect information. Users were logged out unexpectedly, and the AI incorrectly claimed it was due to a new login policy that didn’t exist. This is one of those famous hallucinations by AI.
  • Consequences: The misleading response led to cancellations and user unrest. The company’s co-founder admitted to the error on Reddit, citing a glitch. This case highlights the risks of excessive dependence on AI for customer support, stressing the need for human oversight and transparent communication.
  • Sources: For more details, see the Fortune article.

All of these cases show what AI misrepresentation can do to your brand. There is a real need to properly manage and monitor AI systems. Each example shows that it can have a big impact, from huge financial loss to spoiled reputations. Stories like these show how important it is to monitor what AI says about your brand and what it does in your name.

How to correct AI misrepresentation

It’s not easy to fix complex issues with your brand being misrepresented by AI chatbots or LLMs. If a chatbot tells a customer to do something nasty, you could be in big trouble. Legal protection should be a given, of course. Other than that, try these tips:

Use AI brand monitoring tools

Find and start using tools that monitor your brand in AI and LLMs. These tools can help you study how AI describes your brand across various platforms. They can identify inconsistencies and offer suggestions for corrections, so your brand message remains consistent and accurate at all times.

One example is Yoast SEO AI Brand Insights, which is a great tool for monitoring brand mentions in AI search engines and large language models like ChatGPT. Enter your brand name, and it will automatically run an audit. After that, you’ll get information on brand sentiment, keyword usage, and competitor performance. Yoast’s AI Visibility Score combines mentions, citations, sentiment, and rankings to form a reliable overview of your brand’s visibility in AI.

Optimize content for LLMs

Optimize your content for inclusion in LLMs. Performing well in search engines is not a guarantee that you will also perform well in large language models. Make sure that your content is easy to read and accessible for AI bots. Build up your citations and mentions online. We’ve collected more tips on how to optimize for LLMs, including using the proposed llms.txt standard.

Get professional help

If nothing else, get professional help. Like we said, if you are dealing with complex brand issues or widespread misrepresentation, you should consult with professionals. Brand consultants and SEO experts can help fix misrepresentations and strengthen your brand’s online presence. Your legal team should also be kept in the loop.

Use SEO monitoring tools

Last but not least, don’t forget to use SEO monitoring tools. It goes without saying, but you should be using SEO tools like Moz, Semrush, or Ahrefs to track how well your brand is performing in search results. These tools provide analytics on your brand’s visibility and can help identify areas where AI might need better information or where structured data might enhance search performance.

Businesses of all types should actively manage how their brand is represented in AI systems. Carefully implementing these strategies helps minimize the risks of misrepresentation. In addition, it keeps a brand’s online presence consistent and helps build a more reliable reputation online and offline.

Conclusion to AI misrepresentation

AI misrepresentation is a real challenge for brands and businesses. It could harm your reputation and lead to serious financial and legal consequences. We’ve discussed a number of options brands have to fix how they appear in AI search engines and LLMs. Brands should start by proactively monitoring how they are represented in AI.

For one, that means regularly auditing your content to prevent errors from appearing in AI. Also, you should use tools like brand monitor platforms to manage and improve how your brand appears. If something goes wrong or you need instant help, consult with a specialist or outside experts. Last but not least, always make sure that your structured data is correct and aligns with the latest changes your brand has made.

Taking these steps reduces the risks of misrepresentation and enhances your brand’s overall visibility and trustworthiness. AI is moving ever more into our lives, so it’s important to ensure your brand is represented accurately and authentically.

Keep a close eye on your brand. Use the strategies we’ve discussed to protect it from AI misrepresentation. This will ensure that your message comes across loud and clear.

Google-Criteo Deal Unlocks Retail Media

Google is about to give agencies and advertisers access to prime retail media placements on ecommerce sites such as Best Buy, Costco, and Target.

The deal, announced on September 10, 2025, connects Google’s Search Ads 360 (SA360) platform with a network of more than 200 enterprise-level retailers via Criteo’s advertising and commerce platform.

“With Criteo’s expansive network of retailer partners, we’re helping advertisers connect with customers at a critical moment in their shopping journey,” said Bill Reardon, general manager, enterprise platform at Google, via a press release.

Digital Retail Media

Retail media is exploding. Adtelligent estimated that worldwide retail media spend would reach $145 to $165 billion in 2025, up from $59 billion in 2019.

In the United States, the market is valued at more than $60 billion and is growing at roughly 20% per year, according to various sources.

Advertisers use digital retail media primarily to promote products sold on the given retailer’s website. Thus, many retail media advertisers are actually the store’s suppliers, boosting sales via the retail channel.

Digital retail media has taken off, in part, because it relies on first-party customer data and does not require intrusive cookies or complex privacy protocols.

Walled Gardens

A few “walled gardens,” i.e., closed platforms or ecosystems operated by a single company, dominate the market. Of these, Amazon is far and away the leader.

In Q2 2025, Amazon’s ad revenue reached $15.69 billion, a 23% year-over-year increase, hitting a record 9.36% of the company’s total revenue and marking it as the fastest-growing segment.

Amazon has leveraged its massive ecommerce marketplace and best-in-class shopping data. Together, these create a self-reinforcing advertising flywheel that drives conversions for advertisers and revenue for Amazon.

Good Data

Amazon’s retail media flywheel works because the company controls the entire process, from initial customer acquisition to final purchase, collecting all the behavioral data along the way. It has first-party data that is real, recent, and relevant.

Compared to platforms relying on third-party data, Amazon and nearly any walled-garden advertising solution will be much more effective at targeting shoppers and producing sales.

These closed ecosystems also help with measurement. Since the ads are running and converting in a walled garden, advertising attribution is easy.

Enter Google

While Amazon is the undisputed leader in digital retail media, Google is the king of digital advertising generally. The company generated approximately $71.3 billion in advertising revenue during Q2 2025, representing a 10.4% year-over-year increase.

Some $54 billion of that Q2 revenue was specific to search advertising. A significant portion of that revenue passes through the company’s SA360 platform. That demand will now connect to Criteo and its retail media network.

This deal is a significant shift for the market. In the past, Google’s retail-related advertising products, such as AI-assisted PMax or Shopping Ads, have focused on driving traffic to retail websites. The idea was that someone would query on Google search, see an ad for a relevant product, and go to the retailer to buy it.

With Criteo’s help, Google can now offer a more complete way to advertise to consumers. Its platform not only guides advertisers to retail sites, but also shoppers on those sites to specific sponsored products.

Google vs. Amazon

In a sense, the Google and Criteo deal targets Amazon’s dominance in digital retail media and walled gardens generally.

Google gains a foothold in the digital retail media market, providing SA360 advertisers with the opportunity to extend search campaigns into high-intent shopping environments, all with unified measurement and attribution.

Brands that had been using retail media now have an alternative. Rather than concentrating advertising spend in a few dominant ecosystems, the Google-Criteo integration opens access to a broader range of retail ad placements.

It opens access to some of those walled gardens and, as several pundits have put it, fosters competition and “democratizes” retail media.

Antitrust

While Criteo and Google had been in discussions for some time, the deal’s announcement came just days after Google survived what could have been a devastating antitrust ruling. It was also less than two weeks before a second remedy hearing.

In August 2024, a U.S. District Court found that Google maintained an illegal monopoly over the general online search and text advertising markets.

Leading up to the September 2025 remedy hearing, some observers thought that Google would be required to divest some of its products, such as the Chrome browser.

No Breakup

Instead, the court, on September 2, 2025, ruled that Google change its behavior, including:

  • Refraining from entering into exclusive search engine contracts,
  • Sharing some search data with qualified competitors.

In a separate April 2025 case, a federal judge found that Google illegally monopolized key segments of the digital advertising market. The remedy hearing for this case is scheduled for September 22, 2025.

Google’s “democratizing” deal with Criteo could be an indication to the courts that the company aims to encourage competition.

How do AI models generate videos?

MIT Technology Review Explains: Let our writers untangle the complex, messy world of technology to help you understand what’s coming next. You can read more from the series here.

It’s been a big year for video generation. In the last nine months, OpenAI made Sora public, Google DeepMind launched Veo 3, and the video startup Runway launched Gen-4. All can produce video clips that are (almost) impossible to distinguish from actual filmed footage or CGI animation. This year also saw Netflix debut an AI visual effect in its show The Eternaut, the first time video generation has been used to make mass-market TV.

Sure, the clips you see in demo reels are cherry-picked to showcase a company’s models at the top of their game. But with the technology in the hands of more users than ever before—Sora and Veo 3 are available in the ChatGPT and Gemini apps for paying subscribers—even the most casual filmmaker can now knock out something remarkable. 

The downside is that creators are competing with AI slop, and social media feeds are filling up with faked news footage. Video generation also uses up a huge amount of energy, many times more than text or image generation. 

With AI-generated videos everywhere, let’s take a moment to talk about the tech that makes them work.

How do you generate a video?

Let’s assume you’re a casual user. There are now a range of high-end tools that allow pro video makers to insert video generation models into their workflows. But most people will use this technology in an app or via a website. You know the drill: “Hey, Gemini, make me a video of a unicorn eating spaghetti. Now make its horn take off like a rocket.” What you get back will be hit or miss, and you’ll typically need to ask the model to take another pass or 10 before you get more or less what you wanted. 

So what’s going on under the hood? Why is it hit or miss—and why does it take so much energy? The latest wave of video generation models are what’s known as latent diffusion transformers. Yes, that’s quite a mouthful. Let’s unpack each part in turn, starting with diffusion. 

What’s a diffusion model?

Imagine taking an image and adding a random spattering of pixels to it. Take that pixel-spattered image and spatter it again and then again. Do that enough times and you will have turned the initial image into a random mess of pixels, like static on an old TV set. 

A diffusion model is a neural network trained to reverse that process, turning random static into images. During training, it gets shown millions of images in various stages of pixelation. It learns how those images change each time new pixels are thrown at them and, thus, how to undo those changes. 

The upshot is that when you ask a diffusion model to generate an image, it will start off with a random mess of pixels and step by step turn that mess into an image that is more or less similar to images in its training set. 
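The two directions of the process can be sketched in a toy NumPy example. This is a deliberately simplified illustration, not a real diffusion model: `predict_noise` stands in for the trained neural network, and we pretend it predicts the noise perfectly.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(image, num_steps=10, noise_scale=0.3):
    # Forward process: spatter random noise onto the image, step by step,
    # until it looks like static on an old TV set.
    noisy = image.copy()
    for _ in range(num_steps):
        noisy = noisy + rng.normal(0.0, noise_scale, size=image.shape)
    return noisy

def denoise_step(noisy, predict_noise):
    # Reverse process: subtract the noise a trained network would predict.
    # `predict_noise` is a stand-in for that network.
    return noisy - predict_noise(noisy)

image = np.zeros((8, 8))              # a toy 8x8 "image"
noisy = add_noise(image)
# With a hypothetical perfect noise predictor, one reverse step recovers it:
recovered = denoise_step(noisy, lambda x: x - image)
```

A real model learns `predict_noise` from millions of examples and runs many small reverse steps rather than one perfect jump, which is part of why generation is so compute-hungry.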

But you don’t want any image—you want the image you specified, typically with a text prompt. And so the diffusion model is paired with a second model—such as a large language model (LLM) trained to match images with text descriptions—that guides each step of the cleanup process, pushing the diffusion model toward images that the large language model considers a good match to the prompt. 
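One widely used mechanism for this kind of steering (an assumption here; the article doesn’t say which method any specific product uses) is classifier-free guidance: the model predicts the noise twice, once with and once without the text prompt, and exaggerates the difference between the two predictions. A toy sketch, where the two predictor functions stand in for real networks:

```python
def guided_noise_estimate(noisy, predict_uncond, predict_cond, guidance_scale=5.0):
    # Blend the unconditional and text-conditioned noise predictions,
    # pushing each denoising step toward outputs that match the prompt.
    eps_u = predict_uncond(noisy)
    eps_c = predict_cond(noisy)
    return eps_u + guidance_scale * (eps_c - eps_u)

# With toy predictors, a scale of 5 amplifies the conditional signal:
estimate = guided_noise_estimate(0.0, lambda x: 0.0, lambda x: 1.0)
```

A guidance scale of 1.0 would just use the conditional prediction; larger scales trade diversity for closer adherence to the prompt.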

An aside: This LLM isn’t pulling the links between text and images out of thin air. Most text-to-image and text-to-video models today are trained on large data sets that contain billions of pairings of text and images or text and video scraped from the internet (a practice many creators are very unhappy about). This means that what you get from such models is a distillation of the world as it’s represented online, distorted by prejudice (and pornography).

It’s easiest to imagine diffusion models working with images. But the technique can be used with many kinds of data, including audio and video. To generate movie clips, a diffusion model must clean up sequences of images—the consecutive frames of a video—instead of just one image. 

What’s a latent diffusion model? 

All this takes a huge amount of compute (read: energy). That’s why most diffusion models used for video generation use a technique called latent diffusion. Instead of processing raw data—the millions of pixels in each video frame—the model works in what’s known as a latent space, in which the video frames (and text prompt) are compressed into a mathematical code that captures just the essential features of the data and throws out the rest. 

A similar thing happens whenever you stream a video over the internet: A video is sent from a server to your screen in a compressed format to make it get to you faster, and when it arrives, your computer or TV will convert it back into a watchable video. 

And so the final step is to decompress what the latent diffusion process has come up with. Once the compressed frames of random static have been turned into the compressed frames of a video that the LLM guide considers a good match for the user’s prompt, the compressed video gets converted into something you can watch.  

With latent diffusion, the diffusion process works more or less the way it would for an image. The difference is that the pixelated video frames are now mathematical encodings of those frames rather than the frames themselves. This makes latent diffusion far more efficient than a typical diffusion model. (Even so, video generation still uses more energy than image or text generation. There’s just an eye-popping amount of computation involved.) 
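The compression idea above can be sketched with a toy “encoder” and “decoder.” Block averaging here stands in for the learned autoencoder a real latent diffusion model uses; the point is only the size of the space the diffusion process has to work in.

```python
import numpy as np

def encode(frame, factor=4):
    # Toy encoder: block-average to shrink the frame, standing in for the
    # learned compressor that maps frames into latent space.
    h, w = frame.shape
    return frame.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def decode(latent, factor=4):
    # Toy decoder: upsample by repeating each latent value, standing in
    # for the learned decompressor that turns latents back into pixels.
    return latent.repeat(factor, axis=0).repeat(factor, axis=1)

frame = np.ones((64, 64))   # one 64x64 video frame: 4,096 values
latent = encode(frame)      # 16x16 latent: 256 values, 16x less to diffuse
restored = decode(latent)   # decompressed back into something viewable
```

Diffusing over 256 latent values per frame instead of 4,096 pixels is where the efficiency gain comes from; a real encoder keeps far richer features than a block average, but the arithmetic of the saving is the same.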

What’s a latent diffusion transformer?

Still with me? There’s one more piece to the puzzle—and that’s how to make sure the diffusion process produces a sequence of frames that are consistent, maintaining objects and lighting and so on from one frame to the next. OpenAI did this with Sora by combining its diffusion model with another kind of model called a transformer. This has now become standard in generative video. 

Transformers are great at processing long sequences of data, like words. That has made them the special sauce inside large language models such as OpenAI’s GPT-5 and Google DeepMind’s Gemini, which can generate long sequences of words that make sense, maintaining consistency across many dozens of sentences. 

But videos are not made of words. Instead, videos get cut into chunks that can be treated as if they were. The approach that OpenAI came up with was to dice videos up across both space and time. “It’s like if you were to have a stack of all the video frames and you cut little cubes from it,” says Tim Brooks, a lead researcher on Sora.
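
Brooks' cube analogy maps neatly onto an array operation. Here is a minimal Python sketch; the video dimensions and cube sizes are made up for illustration.

```python
import numpy as np

# A toy "video": 8 frames of 32x32 grayscale pixels.
video = np.arange(8 * 32 * 32, dtype=np.float32).reshape(8, 32, 32)

# Dice it across both space and time into cubes of 4 consecutive
# frames x an 8x8 pixel patch: the kind of spacetime chunk a
# transformer can treat the way it treats words.
t, h, w = 4, 8, 8
cubes = (
    video.reshape(8 // t, t, 32 // h, h, 32 // w, w)
         .transpose(0, 2, 4, 1, 3, 5)   # group the cube indices together
         .reshape(-1, t, h, w)
)

print(cubes.shape)  # 32 cubes, each a 4-frame 8x8 patch
```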

A selection of videos generated with Veo 3 and Midjourney. The clips have been enhanced in postproduction with Topaz, an AI video-editing tool. Credit: VaigueMan

Using transformers alongside diffusion models brings several advantages. Because they are designed to process sequences of data, transformers also help the diffusion model maintain consistency across frames as it generates them. This makes it possible to produce videos in which objects don’t pop in and out of existence, for example. 

And because the videos are diced up, their size and orientation do not matter. This means that the latest wave of video generation models can be trained on a wide range of example videos, from short vertical clips shot with a phone to wide-screen cinematic films. The greater variety of training data has made video generation far better than it was just two years ago. It also means that video generation models can now be asked to produce videos in a variety of formats. 

What about the audio? 

A big advance with Veo 3 is that it generates video with audio, from lip-synched dialogue to sound effects to background noise. That’s a first for video generation models. As Google DeepMind CEO Demis Hassabis put it at this year’s Google I/O: “We’re emerging from the silent era of video generation.” 

The challenge was to find a way to line up video and audio data so that the diffusion process would work on both at the same time. Google DeepMind’s breakthrough was a new way to compress audio and video into a single piece of data inside the diffusion model. When Veo 3 generates a video, its diffusion model produces audio and video together in a lockstep process, ensuring that the sound and images are synched.  

You said that diffusion models can generate different kinds of data. Is this how LLMs work too? 

No—or at least not yet. Diffusion models are most often used to generate images, video, and audio. Large language models—which generate text (including computer code)—are built using transformers. But the lines are blurring. We’ve seen how transformers are now being combined with diffusion models to generate videos. And this summer Google DeepMind revealed that it was building an experimental large language model that used a diffusion model instead of a transformer to generate text. 

Here’s where things start to get confusing: Though video generation (which uses diffusion models) consumes a lot of energy, diffusion models themselves are in fact more efficient than transformers. Thus, by using a diffusion model instead of a transformer to generate text, Google DeepMind’s new LLM could be a lot more efficient than existing LLMs. Expect to see more from diffusion models in the near future!

The Download: America’s gun crisis, and how AI video models work

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

We can’t “make American children healthy again” without tackling the gun crisis

This week, the Trump administration released a strategy for improving the health and well-being of American children. The report was titled—you guessed it—Make Our Children Healthy Again. It suggests American children should be eating more healthily. And they should be getting more exercise.

But there’s a glaring omission. The leading cause of death for American children and teenagers isn’t ultraprocessed food or exposure to some chemical. It’s gun violence. 

This week’s news of yet more high-profile shootings at schools in the US throws this disconnect into even sharper relief. Experts believe it is time to treat gun violence in the US as what it is: a public health crisis. Read the full story.

—Jessica Hamzelou

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

How do AI models generate videos?

It’s been a big year for video generation. In the last nine months OpenAI made Sora public, Google DeepMind launched Veo 3, and the video startup Runway launched Gen-4. All can produce video clips that are (almost) impossible to distinguish from actual filmed footage or CGI animation.

The downside is that creators are competing with AI slop, and social media feeds are filling up with faked news footage. Video generation also uses up a huge amount of energy, many times more than text or image generation.

With AI-generated videos everywhere, let’s take a moment to talk about the tech that makes them work. Read the full story.

—Will Douglas Heaven

This article is part of MIT Technology Review Explains, our series untangling the complex, messy world of technology to help you understand what’s coming next. You can read more from the series here.

Meet our 2025 Innovator of the Year: Sneha Goenka

Up to a quarter of children entering intensive care have undiagnosed genetic conditions. To be treated properly, they must first get diagnoses—which means having their genomes sequenced. This process typically takes up to seven weeks. Sadly, that’s often too slow to save a critically ill child.

Hospitals may soon have a faster option, thanks to a groundbreaking system built in part by Sneha Goenka, an assistant professor of electrical and computer engineering at Princeton—and MIT Technology Review’s 2025 Innovator of the Year. Read all about Goenka and her work in this profile.

—Helen Thomson

As well as our Innovator of the Year, Goenka is one of the biotech honorees on our 35 Innovators Under 35 list for 2025. Meet the rest of our biotech and materials science innovators, and the full list here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 OpenAI and Microsoft have agreed to a revised deal
But haven’t actually revealed any details of said deal. (Axios)
+ The news comes as OpenAI keeps pursuing its for-profit pivot. (Ars Technica)
+ The world’s largest startup is going to need more paying users soon. (WSJ $)

2 A child has died from a measles complication in Los Angeles
They had contracted the virus before they were old enough to be vaccinated. (Ars Technica)
+ Infants are best protected by community immunity. (LA Times $)
+ They’d originally recovered from measles before developing the condition. (CNN)
+ Why childhood vaccines are a public health success story. (MIT Technology Review)

3 Ukrainian drone attacks triggered internet blackouts in Russia
The Kremlin cut internet access in a bid to thwart the mobile-guided drones. (FT $)
+ The UK is poised to mass-produce drones to aid Ukraine. (Sky News)
+ On the ground in Ukraine’s largest Starlink repair shop. (MIT Technology Review)

4 Demis Hassabis says AI may slash drug discovery time to under a year
Or perhaps even faster. (Bloomberg $)
+ But there’s good reason to be skeptical of that claim. (FT $)
+ An AI-driven “factory of drugs” claims to have hit a big milestone. (MIT Technology Review)

5 How chatbots alter how we think
We shouldn’t outsource our critical thinking to them. (Undark)
+ AI companies have stopped warning you that their chatbots aren’t doctors. (MIT Technology Review)

6 Fraudsters are threatening small businesses with one-star reviews
Online reviews can make or break fledgling enterprises, and scammers know it. (NYT $)

7 Why humanoid robots aren’t taking off any time soon
The industry has a major hype problem. (IEEE Spectrum)
+ Chinese tech giant Ant Group showed off its own humanoid machine. (The Verge)
+ Why the humanoid workforce is running late. (MIT Technology Review)

8 Encyclopedia Britannica and Merriam-Webster are suing Perplexity
In yet another case of alleged copyright infringement. (Reuters)
+ What comes next for AI copyright lawsuits? (MIT Technology Review)

9 Where we’re most likely to find extraterrestrial life in the next decade
Warning: Hollywood may have given us unrealistic expectations. (BBC)

10 Want to build a trillion-dollar company?
Then kiss your social life goodbye. (WSJ $)

Quote of the day

“Nooooo I’m going to have to use my brain again and write 100% of my code like a caveman from December 2024.”

—A Hacker News commenter jokes about a service outage that left Anthropic users unable to access its AI coding tools, Ars Technica reports.

One more thing


What Africa needs to do to become a major AI player

Africa is still early in the process of adopting AI technologies. But researchers say the continent is uniquely hospitable to it for several reasons, including a relatively young and increasingly well-educated population, a rapidly growing ecosystem of AI startups, and lots of potential consumers.

However, ambitious efforts to develop AI tools that answer the needs of Africans face numerous hurdles. Read our story to learn what they are, and how they could be overcome.

—Abdullahi Tsanni

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ The fascinating, unexpected origins of everyone’s favorite pastime—karaoke.
+ Why the Twilight juggernaut just refuses to die.
+ If you’re among the mass of excited Hollow Knight fans, here are a few tips for getting through the early stages of the new Silksong game.
+ A sloe gin bramble pie sounds like the perfect way to welcome fall.

36 Ways to Revive an Ecommerce Business

Listeners and readers of “Ecommerce Conversations” know I occasionally depart from interviews to share my experiences owning and operating Beardbrand, the direct-to-consumer brand I launched a decade ago. To date, I’ve addressed hiring, branding, profit-building, priority-setting, exiting, overcoming setbacks, and top business models.

This too is a solo episode, addressing the entrepreneurial doldrums, when a business is seemingly stuck at no growth, or worse. Certainly that’s been the story of Beardbrand over the past couple of years.

So here are 36 ideas to jolt a company forward. Think of this as a checklist for tackling new projects, cutting costs, or simply resetting your focus.

My entire audio dialog is embedded below. The transcript is condensed and edited for clarity.

Operations

Build a framework. Implement a clear operating framework, such as EOS — Entrepreneurial Operating System — to guide meetings, goal-setting, and accountability. It keeps everyone aligned and focused.

Define culture. Clarify why your company exists, who you serve, and how. If you haven’t done these things, you can feel lost very quickly. Boundaries create focus, and focus strengthens customer relationships.

Define mission and core values. Create a memorable mission and concise core values for your team to live by. At Beardbrand, our values are freedom, hunger, and trust — balanced and reinforced through interviews, reviews, and everyday recognition.

Outsource when necessary. Regularly assess what to keep in-house and what to outsource. A smaller, focused team provides flexibility and freedom, while trusted external partners handle the rest.

Improve manufacturing. Continuously evaluate suppliers and get multiple quotes. Choose partners that meet quality, timing, and minimum order quantities, and stay ready for changes in pricing or management.

Implement better shipping. Re-quote carriers such as FedEx, UPS, USPS, and DHL to maintain competitive rates. Review box sizes, packaging, and 3PL processes to optimize costs, minimize errors, and enhance the customer experience.

Mitigating Risk

ADA compliance. Keep your site accessible and up to date with the requirements of the Americans with Disabilities Act to protect customers with disabilities and avoid lawsuits. Maintain a clear process for regular audits to defend your business when necessary.

Terms and privacy. Have a lawyer review your terms and privacy policy instead of relying on boilerplate text. Ensure compliance with privacy laws, including the E.U.’s General Data Protection Regulation and state-specific regulations in jurisdictions such as California and Virginia. Use pop-ups to obtain consent and avoid tracking visitors who decline.

Insurance. Verify that your coverage aligns with revenue and risk. Shop multiple providers each year to confirm you’re getting the best rate and protection.

Pay off debt. Run lean. Keep debt as low as possible so you can scale down if times get tough and borrow only when necessary.

Trademarks. Register your brand name and unique product names. Regularly search for copycats and address violations promptly, ideally with a polite initial approach before escalating to legal action.

Copyrighted photography. Use only images and text you own or license. AI art isn’t always immune to claims, and some creators are aggressive about enforcing their rights. Remind your team that using unlicensed assets carries real legal risk.

Product claims. Avoid guaranteed-result language. You can say a product “helps” or “improves appearance,” but words like “cures” or “heals” trigger regulatory oversight.

Two-factor authentication. Enable 2FA for every employee and account to guard against phishing and unauthorized logins.

Secure email. Set up DMARC, SPF, and DKIM to prevent spoofing or impersonation of your domain.
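
As a rough sketch, these are the kinds of DNS TXT records involved. Here, example.com, the DKIM selector, and the Google include are placeholders; your email provider supplies the real values.

```text
; SPF: which servers may send mail for your domain.
example.com.         IN TXT "v=spf1 include:_spf.google.com ~all"
; DMARC: what receivers should do with mail that fails SPF/DKIM checks.
_dmarc.example.com.  IN TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"
; DKIM: the selector and public key come from your email provider.
selector1._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=<public-key-from-provider>"
```

Start DMARC with a monitoring-only policy (p=none) and review the reports before tightening it to quarantine or reject.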

Unused apps. Remove apps you no longer use. Old, unsupported apps can become back doors for hackers or leak your data.

Unused subscriptions. Audit credit cards and recurring charges. Cancel forgotten subscriptions and consider issuing new cards yearly to keep hidden costs and risks low.

Marketing

New customer strategy. Explore new channels. Are you on Amazon, Walmart, Etsy, and eBay? Know where your prospects are shopping, especially if you’re product-focused versus brand-focused.

In-person channels. Consider B2B and niche retailers, from independent pharmacies to mass-market stores. Smaller markets or events, such as marathons or trade shows, can offer stable, untapped opportunities.

Expand your reach. Google and Meta are common, but don’t forget TikTok, Snapchat, Pinterest, X, and YouTube. Other platforms and plugins can amplify reach.

Direct mail. Use direct mail for customers who have unsubscribed from emails. It’s another owned channel to reach potential buyers.

SMS. Similar to email, SMS is a direct and effective means of communication.

Advertising on other platforms. Market on email newsletters, websites, or programmatic TV if your budget allows. Even magazines can offer last-minute ad opportunities.

SEO/GEO. Search engine optimization may feel outdated, but it still matters, and it now drives generative engine optimization. Ensure your site adheres to solid SEO fundamentals, establish a strong public relations presence, and remain active on Reddit, which feeds AI crawlers. Keep your brand visible as user search behavior shifts.

Influencer marketing. Work with micro or mega influencers. Utilize TikTok Shops or user-generated content to expand reach and create authentic content.

Organic social. Build your brand with organic social content. Use it to increase awareness, create authenticity, and enhance your ads.

Global markets. Expand internationally only after significant sales. Start with English-speaking countries, then Europe or China. Consider regulatory and operational costs.

AOV. Increase average order value with bundling, price testing, and shipping thresholds. Promotions and quiet price adjustments can drive higher orders.

Post-purchase upsells. Offer complementary products immediately after purchase to increase revenue per customer.

Category expansion. Launch related products that pair with existing items to encourage multiple purchases.

A/B testing. Optimize and test website layout, marketing copy, promotions, pricing, and more to increase conversion rates.

Repeat orders. Encourage repeat purchases, especially for consumables. For slower-turnover items, target niche buyers, such as developers or bulk purchasers.

Loyalty programs. Be cautious with formal programs — they can backfire. Consider offering informal rewards for milestones, such as gifts after multiple orders.

Post-purchase flow. Ensure emails and communications reach customers, and use small surprises to delight them and create loyalty.

Surprise and delight. Over-deliver on promises. Include small gifts, such as planners or notes, to enhance the customer experience, especially for higher-end products.

Subscriptions. Optimize subscription offerings to keep customers engaged and revenue flowing.

3 Different Ways To Do Bulk Updates On WordPress

One of the strengths of WordPress is its extensibility. You can run everything from e-shops and booking systems to massive WordPress multisites from one instance of WordPress.

Another is that it’s built on a database and a robust PHP-based programming language, which means that running bulk updates on a site is remarkably straightforward.

In this post, I’m going to present three different ways to bulk update WordPress.

A quick word of caution before starting to look at this: Things like misaligned fields or plugin conflicts could result in unintended results, so if you’re doing any large-scale updates, be sure to back up beforehand.

Also, for the content updates, it’s worth running a small test. Ten or so posts as a tester is a good way to start, before running it through the entire site.

1. How To Bulk Update Content On A WordPress Website

Simple Changes To Existing Content

If you want to make simple changes to existing content, such as bulk-changing the author, status, or taxonomies on a number of posts, you can use WordPress’ built-in bulk editing component.

From the edit posts/pages page, you can tick individual posts and pages and select “Edit.”

From there, you can set all posts’ categories, tags, statuses, and other information quickly and easily. Once done, click the “Update” button.

The WordPress bulk category and tag editor. Screenshot from WordPress, August 2025.

Please note: This will replace all categories, but tags will be added. This is probably the most common way of editing content, and one you may already know about!

Importing And Exporting Content

Let’s say you want to bulk add WordPress content on a WordPress website.

Most commonly, you want to import a set of blog posts, or you have a list of products in a spreadsheet that you want to import into a system like WooCommerce. The best approach depends on where you’re importing from.

If you’re moving content between WordPress sites, the best approach is the default WordPress Importer plugin. It reads the WXR (.xml) files that WordPress exports and can optionally download and import file attachments.

If you are using WooCommerce, then the best course of action would be to use the default WooCommerce product importer.

It’s pretty robust: it takes a standard CSV file and lets you map its columns to WooCommerce product fields, which is a bit more work.

A screenshot of the WooCommerce Importer’s field-mapping page. Screenshot from WordPress, August 2025.

Should you be importing content from a non-standard source (a CSV, XML, or Excel file, or a Google Sheet), a great plugin to use is WP All Import. It can map fields to any post type and even run custom PHP during import.

It’s a freemium plugin, with premium add-ons covering integrations with ACF, Yoast, and WooCommerce. The full power of WP All Import is a blog post in itself, but I can share a common usage.

Say you wish to update all blog posts with new standardized title tags. You can use the companion plugin WP All Export to export all post data.

Then, within Excel or Google Sheets, you can change individual values, and then use WP All Import to import the blog posts back in.
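
If the spreadsheet step itself gets repetitive, it can be scripted. Here is a minimal Python sketch of that edit-in-the-middle step; the column names and the title format are hypothetical, so match them to the columns your WP All Export file actually contains.

```python
import csv
import io

# A stand-in for the exported file; in practice you'd open the CSV
# that WP All Export produced. Column names here are hypothetical.
exported = io.StringIO(
    "ID,Title\n"
    "101,my first post\n"
    "102,another update\n"
)

rows = list(csv.DictReader(exported))

# Standardize every title: title-case it and append the site name.
for row in rows:
    row["Title"] = row["Title"].title() + " | Example Store"

out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=["ID", "Title"])
writer.writeheader()
writer.writerows(rows)

print(out.getvalue())  # this is what you'd feed back into WP All Import
```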

2. How To Handle Bulk Plugin Updates On A WordPress Website

Of course, one of the most common behind-the-scenes tasks with WordPress blogs is making sure that plugins are all up to date.

Keeping plugins up to date is a crucial task in keeping your site secure and running smoothly. Thankfully, if you only have one site, it’s very easy to do a bulk update.

Log in to your WordPress site as an administrator, and under Dashboard, there’s a heading entitled “Updates.” Click it to take you to the updates screen.

The WordPress plugin updates screen, showing 18 plugins that need updating. Screenshot from WordPress, August 2025.

Scroll down a bit, and towards the bottom you should see a list of plugins that need an update. As with bulk editing, there is a checkbox next to each one.

Select the checkboxes for the plugins you wish to update (and, in all reality, you’d want to select all of them).

Click “Update Plugins,” and then all plugins will be brought up to date!

Your site is unlikely to break, even with a large number of updates. However, in the extremely unlikely event the site does break after updating a bunch of plugins, there are ways to recover, which you can read about in the article “How Do You Resolve A Plugin Conflict”: check the log files to identify the culprit, then deactivate that plugin via FTP.

Alternatively, here are a few other techniques to do bulk updates successfully:

  • Update in small batches (e.g. split by functionality, or by letter). Update, reload key pages, then move on.
  • Back up and test on staging before production.
  • If you use a maintenance dashboard like ManageWP, run Safe Updates (it creates a restore point, runs the updates, visually compares pages, and rolls back if something looks wrong).
  • WordPress Command Line Interface (WP-CLI) lets you preview or update plugins individually:
    • Preview: wp plugin update wordpress-seo --dry-run (wordpress-seo is the slug for Yoast SEO)
    • Update one: wp plugin update wordpress-seo
    • Update all (use with care): wp plugin update --all

3. How To Handle Bulk Plugin Updates On Multiple WordPress Websites

That’s all fine for one WordPress site. However, if you are managing multiple WordPress sites, then it can be a bit time-consuming to handle plugin updates on multiple WordPress sites.

Thankfully, I covered this in a previous article, “How to manage multiple websites on WordPress.”

In that article, I shared a number of WordPress maintenance dashboard services that exist, which will allow you to log in and update multiple WordPress sites from one singular location.

Each of the most popular platforms has premium offerings that vary in cost and features, but they all offer plugin and theme updates for free.

I use ManageWP. Once you connect your site, you should see a dashboard with the number of plugin updates needed across your sites. Simply click “Update All” to update all plugins on all installations, or tick the checkboxes and click “Update” to update individual plugins.

The ManageWP dashboard. Screenshot from ManageWP, August 2025.

You can also filter by sites and severity of updates within ManageWP. There is a premium option to do a “safe” update, which will allow you to run an update, check the site, and roll back if anything breaks.

There’s a good selection of ways to carry out bulk updates within WordPress. There are also command-line tools like WP-CLI (mentioned above) for building scripts to run on sites. However, that is worth an article in itself.

To bulk update all plugins in WP CLI, you can use this command:

wp plugin update --all

This will update all plugins on an individual site, and you can expand it into a script that runs across multiple sites.
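
As a starting point, here is a minimal Python sketch of such a script. It only builds and prints the WP-CLI commands (the @clientA-style site aliases are hypothetical; they would be defined in your wp-cli.yml), so you can preview everything before wiring it up to subprocess.run.

```python
import shlex

# Hypothetical list of sites; each entry is a WP-CLI alias
# (the @alias feature) configured in your wp-cli.yml.
sites = ["@clientA", "@clientB", "@clientC"]

def update_command(site, dry_run=False):
    """Build the WP-CLI command to update all plugins on one site."""
    cmd = ["wp", site, "plugin", "update", "--all"]
    if dry_run:
        cmd.append("--dry-run")  # preview which plugins would change
    return cmd

# Preview first; swap print for subprocess.run(cmd) to run for real.
for site in sites:
    print(shlex.join(update_command(site, dry_run=True)))
```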

WP-CLI is extremely powerful, and agencies managing multiple websites really should be using it.

Wrapping Up: Bulk Updates For A Smooth-Running WordPress Site

WordPress makes it straightforward to handle bulk updates, whether you’re tweaking content, importing products, or keeping plugins in check.

Across the built-in tools and available plugins, there’s a solution for just about every scenario. The key is to test changes in small batches and always keep a backup handy.

With a little prep, you can save hours of manual work and keep your site (or sites) running smoothly and efficiently.

More Resources:


Featured Image: GaudiLab/Shutterstock

Texas banned lab-grown meat. What’s next for the industry?

Last week, a legal battle over lab-grown meat kicked off in Texas. On September 1, a two-year ban on the technology went into effect across the state; the following day, two companies filed a lawsuit against state officials.

The two companies, Wildtype Foods and Upside Foods, are part of a growing industry that aims to bring new types of food to people’s plates. These products, often called cultivated meat by the industry, take live animal cells and grow them in the lab to make food products without the need to slaughter animals.

Texas joins six other US states and the country of Italy in banning these products. These legal challenges are adding barriers to an industry that’s still in its infancy and already faces plenty of challenges before it can reach consumers in a meaningful way.

The agriculture sector makes up a hefty chunk of global greenhouse-gas emissions, with livestock alone accounting for somewhere between 10% and 20% of climate pollution. Alternative meat products, including those grown in a lab, could help cut the greenhouse gases from agriculture.

The industry is still in its early days, though. In the US, just a handful of companies can legally sell products including cultivated chicken, pork fat, and salmon. Australia, Singapore, and Israel also allow a few companies to sell within their borders.

Upside Foods, which makes cultivated chicken, was one of the first to receive the legal go-ahead to sell its products in the US, in 2022. Wildtype Foods, one of the latest additions to the US market, was able to start selling its cultivated salmon in June.

Upside, Wildtype, and other cultivated-meat companies are still working to scale up production. Products are generally available at pop-up events or on special menus at high-end restaurants. (I visited San Francisco to try Upside’s cultivated chicken at a Michelin-starred restaurant a few years ago.)

Until recently, the only place you could reliably find lab-grown meat in Texas was a sushi restaurant in Austin. Otoko featured Wildtype’s cultivated salmon on a special tasting menu starting in July. (The chef told local publication Culture Map Austin that the cultivated fish tastes like wild salmon, and it was included in a dish with grilled yellowtail to showcase it side-by-side with another type of fish.)

The as-yet-limited reach of lab-grown meat didn’t stop state officials from moving to ban the technology, effective from now until September 2027.

The office of state senator Charles Perry, the author of the bill, didn’t respond to requests for comment. Neither did the Texas and Southwestern Cattle Raisers Association, whose president, Carl Ray Polk Jr., testified in support of the bill in a March committee hearing.

“The introduction of lab-grown meat could disrupt traditional livestock markets, affecting rural communities and family farms,” Perry said during the meeting.

In an interview with the Texas Tribune, Polk said the two-year moratorium would help the industry put checks and balances in place before the products could be sold. He also expressed concern about how clearly cultivated-meat companies will be labeling their products.

“The purpose of these bans is to try to kill the cultivated-meat industry before it gets off the ground,” said Myra Pasek, general counsel of Upside Foods, via email. The company is working to scale up its manufacturing and get the product on the market, she says, “but that can’t happen if we’re not allowed to compete in the marketplace.”

Others in the industry have similar worries. “Moratoriums on sale like this not only deny Texans new choices and economic growth, but they also send chilling signals to researchers and entrepreneurs across the country,” said Pepin Andrew Tuma, the vice president of policy and government relations for the Good Food Institute, a nonprofit think tank focused on alternative proteins, in a statement. (The group isn’t involved in the lawsuit.) 

One day after the moratorium took effect on September 1, Wildtype Foods and Upside Foods filed a lawsuit challenging the ban, naming Jennifer Shuford, commissioner of the Texas Department of State Health Services, among other state officials.

A lawsuit wasn’t necessarily part of the scale-up plan. “This was really a last resort for us,” says Justin Kolbeck, cofounder and CEO of Wildtype.

Growing cells to make meat in the lab isn’t easy—some companies have spent a decade or more trying to make significant amounts of a product that people want to eat. These legal battles certainly aren’t going to help. 

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

The Download: Trump’s impact on science, and meet our climate and energy honorees

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

How Trump’s policies are affecting early-career scientists—in their own words

Every year MIT Technology Review celebrates accomplished young scientists, entrepreneurs, and inventors from around the world in our Innovators Under 35 list. We’ve just published the 2025 edition. This year, though, the context is different: The US scientific community is under attack.

Since Donald Trump took office in January, his administration has fired top government scientists, targeted universities and academia, and made substantial funding cuts to the country’s science and technology infrastructure.

We asked our six most recent cohorts about both positive and negative impacts of the administration’s new policies. Their responses provide a glimpse into the complexities of building labs, companies, and careers in today’s political climate. Read the full story.

—Eileen Guo & Amy Nordrum

This story is part of MIT Technology Review’s “America Undone” series, examining how the foundations of US success in science and innovation are currently under threat. You can read the rest here.

This Ethiopian entrepreneur is reinventing ammonia production

In the small town in Ethiopia where he grew up, Iwnetim Abate’s family had electricity, but it was unreliable. So, for several days each week when they were without power, Abate would finish his homework by candlelight.

Growing up without the access to electricity that many people take for granted shaped the way Abate thinks about energy issues. Today, the 32-year-old is an assistant professor at MIT in the department of materials science and engineering.

Part of his research focuses on sodium-ion batteries, which could be cheaper than the lithium-based ones that typically power electric vehicles and grid installations. He’s also pursuing a new research path, examining how to harness the heat and pressure under the Earth’s surface to make ammonia, a chemical used in fertilizer and as a green fuel. Read the full story.

—Casey Crownhart

Abate is one of the climate and energy honorees on our 35 Innovators Under 35 list for 2025. Meet the rest of our climate and energy innovators here, and the full list—including our innovator of the year—here.

Texas banned lab-grown meat. What’s next for the industry?

Last week, a legal battle over lab-grown meat kicked off in Texas. On September 1, a two-year ban on the technology went into effect across the state; the following day, two companies filed a lawsuit against state officials.

The two companies, Wildtype Foods and Upside Foods, are part of a growing industry that aims to bring new types of food to people’s plates. These products, often called cultivated meat by the industry, take live animal cells and grow them in the lab to make food products without the need to slaughter animals.

Texas joins six other US states and the country of Italy in banning these products—adding barriers to an industry that’s still in its infancy, and already faces plenty of challenges before it can reach consumers in a meaningful way. Read the full story.

—Casey Crownhart

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Videos of Charlie Kirk’s shooting are everywhere on social media
They demonstrate just how poorly equipped platforms are to stop the spread of violent material. (NYT $)
+ Why social media can’t get on top of its graphic video problem. (NY Mag $)
+ Here’s how platforms say they’ll treat the videos. (The Verge)
+ Far-right communities reacted to Kirk’s murder by calling for more violence. (Wired $)

2 NASA has uncovered the clearest sign of life on Mars to date
Some unusual rocks may have been formed by ancient microbes. (WP $)
+ Scientists are very excited by the possibility they were created by living organisms. (New Scientist $)

3 A California bill to regulate AI companion chatbots is close to passing
If passed, California would become the first US state to make chatbot operators legally accountable. (TechCrunch)
+ Wall Street is only now starting to worry about “AI psychosis.” (Insider $)
+ AI companions are the final stage of digital addiction, and lawmakers are taking aim. (MIT Technology Review)

4 Larry Ellison briefly overtook Elon Musk as the world’s richest person
His firm Oracle reported far better-than-expected results. (The Guardian)
+ Oracle is riding high on a surge of demand for its data centers. (BBC)
+ But its continued success will depend on its ability to deliver promised hardware. (FT $)

5 The ousted CDC director is set to testify before the US Senate
RFK Jr repeatedly called Susan Monarez a liar during a hearing last week. (Ars Technica)
+ The backlash to Kennedy’s actions is intensifying. (NY Mag $)

6 A new system can pinpoint the best spot to hit an asteroid
That could make destroying them a whole lot safer, in theory. (New Scientist $)
+ Meet the researchers testing the “Armageddon” approach to asteroid defense. (MIT Technology Review)

7 Saudi Arabia is building some of the world’s biggest solar farms ☀
It needs plenty more electricity for its new resorts and data centers. (WSJ $)
+ AI is changing the grid. Could it help more than it harms? (MIT Technology Review)

8 CRISPR could help to combat diabetes
Scientists successfully implanted insulin-producing edited cells into a man’s pancreas. (Wired $)
+ A US court just put ownership of CRISPR back in play. (MIT Technology Review)

9 How to save oyster reefs 🦪
Conservation projects are helping to rebuild destroyed populations. (Knowable Magazine)
+ How the humble sea creature could hold the key to restoring coastal waters. (MIT Technology Review)

10 Bluesky is not as fun as it should be
It fosters a culture of reactionary scolding that’s driving some users back to X. (New Yorker $)

Quote of the day

“For the love of God and Charlie’s family, just stop.”

—A poster on X begs fellow social media users to stop sharing images and videos of conservative activist Charlie Kirk’s murder online, the Associated Press reports.

One more thing

This giant microwave may change the future of war

Imagine: China deploys hundreds of thousands of autonomous drones in the air, on the sea, and under the water—all armed with explosive warheads or small missiles. These machines descend in a swarm toward military installations on Taiwan and nearby US bases, and over the course of a few hours, a single robotic blitzkrieg overwhelms the US Pacific force before it can even begin to fight back.

The proliferation of cheap drones means just about any group with the wherewithal to assemble and launch a swarm could wreak havoc, no expensive jets or massive missile installations required.

The US armed forces are now hunting for a solution—and they want it fast. One promising answer is microwaves: high-powered electronic devices that push out kilowatts of power to zap the circuits of a drone as if it were the tinfoil you forgot to take off your leftovers when you heated them up. Read the full story.

—Sam Dean

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ They’ve finally done it—the Stephen King novel they claimed was impossible to adapt is coming to the big screen.
+ Do you have more zucchinis than you know what to do with? This tasty bread is one solution.
+ How The Penguin’s production designers transformed NYC into spooky, dirty Gotham.
+ This fascinating website shows you what today’s date looks like on dozens of different calendars and clocks.

Partnering with generative AI in the finance function

Generative AI has the potential to transform the finance function. By taking on some of the more mundane tasks that can occupy a lot of time, generative AI tools can help free up capacity for more high-value strategic work. For chief financial officers, this could mean spending more time and energy on proactively advising the business on financial strategy as organizations around the world continue to weather ongoing geopolitical and financial uncertainty.

CFOs can use large language models (LLMs) and generative AI tools to support everyday tasks like generating quarterly reports, communicating with investors, and formulating strategic summaries, says Andrew W. Lo, Charles E. and Susan T. Harris professor and director of the Laboratory for Financial Engineering at the MIT Sloan School of Management. “LLMs can’t replace the CFO by any means, but they can take a lot of the drudgery out of the role by providing first drafts of documents that summarize key issues and outline strategic priorities.”

Generative AI is also showing promise in functions like treasury, with use cases including cash, revenue, and liquidity forecasting and management, as well as automating contracts and investment analysis. However, the mathematical limitations of LLMs mean challenges remain in applying generative AI directly to forecasting. Regardless, Deloitte’s analysis of its 2024 State of Generative AI in the Enterprise survey found that one-fifth (19%) of finance organizations have already adopted generative AI in the finance function.
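One common way teams work around those mathematical limitations is to keep the forecasting arithmetic deterministic and use the LLM only to draft narrative around precomputed figures. The sketch below illustrates that division of labor; the `draft_summary` helper and its stubbed model call are hypothetical stand-ins, not any specific vendor API:

```python
# Sketch: deterministic math for the numbers, an LLM only for narrative.

def moving_average_forecast(cash_flows, window=3):
    """Forecast next-period cash flow as the mean of the last `window` periods."""
    recent = cash_flows[-window:]
    return sum(recent) / len(recent)

def draft_summary(forecast, llm=None):
    """Assemble a prompt containing a precomputed figure, so the model
    never does the arithmetic. `llm` is a hypothetical callable
    (prompt -> text); a trivial stub is used here for illustration."""
    prompt = (
        f"Draft a one-paragraph treasury update. "
        f"Next-quarter cash flow forecast: ${forecast:,.1f}M. "
        f"Do not recompute or alter this figure."
    )
    llm = llm or (lambda p: f"[draft based on: {p}]")
    return llm(prompt)

flows = [12.0, 14.5, 13.0, 15.5]  # quarterly cash flows, in $M
forecast = moving_average_forecast(flows)
print(draft_summary(forecast))
```

In practice the stub would be replaced by a real model call, but the pattern is the same: the figure the CFO signs off on comes from a spreadsheet-grade calculation, and the LLM only handles the drudgery of wording.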

Despite returns on generative AI investments in finance functions falling 8 points below expectations so far for surveyed organizations (see Figure 1), some finance departments appear to be moving ahead with investments. Deloitte’s fourth-quarter 2024 North American CFO Signals survey found that 46% of responding CFOs expect deployment of or spend on generative AI in finance to increase in the next 12 months (see Figure 2). Respondents cite the technology’s potential to help control costs through self-service and automation, and to free up workers for higher-level, higher-productivity tasks, as some of its top benefits.

“Companies have used AI on the customer-facing side of the house for a long time, but in finance, employees are still creating documents and presentations and emailing them around,” says Robyn Peters, principal in finance transformation at Deloitte Consulting LLP. “Largely, the human-centric experience that customers expect from brands in retail, transportation, and hospitality hasn’t been pulled through to the finance organization. And there’s no reason we cannot do that—and, in fact, AI makes it a lot easier to do.”

If CFOs think they can just sit by for the next five years and watch how AI evolves, they may lose out to more nimble competitors that are actively experimenting in the space. Future finance professionals are growing up using generative AI tools too. CFOs should consider reimagining what it looks like to be a successful finance professional, in collaboration with AI.

Download the report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. AI tools that may have been used were limited to secondary production processes that passed thorough human review.