Anthropic Agrees To $1.5B Settlement Over Pirated Books via @sejournal, @MattGSouthern

Anthropic agreed to a proposed $1.5 billion settlement in Bartz v. Anthropic over claims it downloaded pirated books to help train Claude.

If approved, plaintiffs’ counsel says it would be the largest U.S. copyright recovery to date. A preliminary approval hearing is set for today.

In June, Judge William Alsup held that training on lawfully obtained books can qualify as fair use, while copying and storing millions of pirated books is infringement. That order set the stage for settlement talks.

Settlement Details

The deal would pay about $3,000 per eligible title, with an estimated class size of roughly 500,000 books. Plaintiffs allege Anthropic pulled at least 7 million copies from piracy sites Library Genesis and Pirate Library Mirror.

Justin Nelson, counsel for the authors, said:

“As best as we can tell, it’s the largest copyright recovery ever.”

How Payouts Would Work

According to the Authors Guild’s summary, the fund is paid in four tranches after court approvals: $300M soon after preliminary approval, $300M after final approval, then $450M at 12 months and $450M at 24 months, with interest accruing in escrow.
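
As a quick arithmetic check, the tranches described above can be summed against the headline figures (a trivial sketch; figures as reported, escrow interest ignored):

```python
# Settlement tranches from the Authors Guild's summary, in millions of USD.
tranches = {
    "after preliminary approval": 300,
    "after final approval": 300,
    "at 12 months": 450,
    "at 24 months": 450,
}
print(sum(tranches.values()))  # 1500 -> the full $1.5B fund

# Cross-check against the per-title estimate: ~$3,000 across ~500,000 works.
print(3_000 * 500_000)  # 1500000000 -> also $1.5B
```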

A final “Works List” is due October 10, which will drive a searchable database for claimants.

The Guild notes the agreement requires destruction of pirated copies and resolves only past conduct.

Why This Matters

If you rely on AI tools in content workflows, provenance now matters more. Expect more licensing deals and clearer disclosures from vendors about training data sources.

For publishers and creators, the per-work payout sets a reference point that may strengthen negotiating leverage in future licensing talks.

Looking Ahead

The judge will consider preliminary approval today. If granted, the notice process begins this fall and payments to rightsholders would follow final approval and claims processing, funded on the installment schedule above.


Featured Image: Tigarto/Shutterstock

Google Publishes Exact Gemini Usage Limits Across All Tiers via @sejournal, @MattGSouthern

Google has published exact usage limits for Gemini Apps across the free tier and paid Google AI plans, replacing earlier vague language with concrete numbers marketers can plan around.

The Help Center update covers daily caps for prompts, images, Deep Research, video generation, and context windows, and notes that you’ll see in-product notices when you’re close to a limit.

What’s New

Until recently, Google’s documentation used general phrasing about “limited access” without specifying amounts.

The Help Center page now lists per-tier allowances for Gemini 2.5 Pro prompts, image generation, Deep Research, and more. It also clarifies that practical caps can vary with prompt complexity, file sizes, and conversation length, and that limits may change over time.

Google’s Help Center states:

“Gemini Apps has usage limits designed to ensure an optimal experience for everyone… we may at times have to cap the number of prompts, conversations, and generated assets that you can have within a specific timeframe.”

Free vs. Paid Tiers

On the free experience, you can use Gemini 2.5 Pro for up to five prompts per day.

The page lists general access to 2.5 Flash and includes:

  • 100 images per day
  • 20 Audio Overviews per day
  • Five Deep Research reports per month (using 2.5 Flash).

Because overall app limits still apply, actual throughput depends on how long and complex your prompts are and how many files you attach.

Google AI Pro increases ceilings to:

  • 100 prompts per day on Gemini 2.5 Pro
  • 1,000 images per day
  • 20 Deep Research reports per day (using 2.5 Pro).

Google AI Ultra raises those to:

  • 500 prompts per day
  • 200 Deep Research reports per day
  • Deep Think, with 10 prompts per day at a 192,000-token context window for more complex reasoning tasks.

Context Windows and Advanced Features

Context windows differ by tier. The free tier lists a 32,000-token context size, while Pro and Ultra show 1 million tokens, which helps when you need longer conversations or want to process large documents in one go.

Ultra’s Deep Think is separate from the 1M context and is capped at 192k tokens for its 10 daily prompts.

Video generation is currently in preview with model-specific limits. Pro shows up to three videos per day with Veo 3 Fast (preview), while Ultra lists up to five videos per day with Veo 3 (preview).

Google indicates some features receive priority or early access on paid plans.

Availability and Requirements

The Gemini app in Google AI Pro and Ultra is available in 150+ countries and territories for users 18 or older.

Upgrades are tied to select Google One paid plans for personal accounts, which consolidate billing with other premium Google services.

Why This Matters

Clear ceilings make it easier to scope deliverables and budgets.

If you produce a steady stream of social or ad creative, the image caps and prompt totals are practical planning inputs.

Teams doing competitive analysis or longer-form research can evaluate whether the free tier’s five Deep Research reports per month cover occasional needs or if Pro’s daily allotment, Ultra’s higher limit, and Deep Think are a better fit for heavier workloads.
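
For capacity planning, the published caps can be treated as data. The sketch below is a hypothetical planning aid (the tier keys and `days_needed` helper are my own, not any Google API; only the numeric caps come from the figures above), with the free tier's five Deep Research reports per month approximated as a daily rate:

```python
# Daily caps per tier, using the numbers reported above. The free tier's
# monthly Deep Research allowance is spread evenly across 30 days.
DAILY_CAPS = {
    "free":  {"pro_prompts": 5,   "images": 100,   "deep_research": 5 / 30},
    "pro":   {"pro_prompts": 100, "images": 1_000, "deep_research": 20},
    "ultra": {"pro_prompts": 500, "deep_research": 200},
}

def days_needed(tier: str, workload: dict) -> float:
    """Rough days required for a workload, bounded by the tightest relevant cap."""
    caps = DAILY_CAPS[tier]
    return max(workload[item] / caps[item] for item in workload if item in caps)

# Example: 300 images plus 40 Deep Research reports on Google AI Pro.
print(days_needed("pro", {"images": 300, "deep_research": 40}))  # 2.0
```

Real throughput will be lower than these ceilings imply, since Google notes that prompt complexity, file sizes, and conversation length also count against overall app limits.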

The documentation also emphasizes that caps can vary with usage patterns, so it’s worth watching the in-app limit warnings on busy days.

Looking Ahead

Google notes that limits may evolve. If your workflows depend on specific daily counts or large context windows, it’s sensible to review the Help Center page periodically and adjust plans as features move from preview to general availability.


Featured Image: Evolf/Shutterstock

Google’s Antitrust Ruling: What The Remedies Really Mean For Search, SEO, And AI Assistants via @sejournal, @gregjarboe

When Judge Amit P. Mehta issued his long-awaited remedies decision in the Google search antitrust case, the industry exhaled a collective sigh of relief. There would be no breakup of Google, no forced divestiture of Chrome or Android, and no user-facing “choice screen” like the one that reshaped Microsoft’s browser market two decades ago. But make no mistake – this ruling rewrites the playbook for search distribution, data access, and competitive strategy over the next six years.

This article dives into what led to the decision, what it actually requires, and – most importantly – what it means for SEO, PPC, publishers, and the emerging generation of AI-driven search assistants.

What Led To The Decision

The Department of Justice and a coalition of states sued Google in 2020, alleging that the company used exclusionary contracts and massive payments to cement its dominance in search. In August 2024, Judge Mehta ruled that Google had indeed violated antitrust law, writing, “Google is a monopolist, and it has acted as one to maintain its monopoly.” The question then became: what remedies would actually restore competition?

The DOJ and states pushed for sweeping measures – including a breakup of Google’s Chrome browser or Android operating system, and mandatory choice screens on devices. Google countered that such steps would harm consumers and innovation. By the time remedies hearings wrapped, generative AI had exploded into the mainstream, shifting the court’s sense of what competition in search could look like.

What The Court Decided

Judge Mehta’s ruling, issued September 2, 2025, imposed a mix of behavioral remedies:

  • Exclusive contracts banned. Google can no longer strike deals that make it the sole default search engine on browsers, phones, or carriers. That means Apple, Samsung, Mozilla, and mobile carriers can now entertain offers from rivals like Microsoft Bing or newer AI entrants.
  • Payments still allowed. Crucially, the court did not ban Google from paying for placement. Judge Mehta explained that removing payments altogether would “impose substantial harms on distribution partners.” In other words, the checks will keep flowing – but without exclusivity.
  • Index and data sharing. Google must share portions of its search index and some user interaction data with “qualified competitors” on commercial terms. Ads data, however, is excluded. This creates a potential on-ramp for challengers, but it doesn’t hand them the secret sauce of Google’s ranking systems.
  • No breakup, no choice screen. Calls to divest Chrome or Android were rejected as overreach. Similarly, the court declined to mandate a consumer-facing choice screen. Change will come instead through contracts and UX decisions by distribution partners.
  • Six-year oversight. Remedies will be overseen by a technical committee for six years. A revised judgment is due September 10, with remedies taking effect roughly 60 days after final entry.

As Judge Mehta put it, “Courts must… craft remedies with a healthy dose of humility,” noting that generative AI has already “changed the course of this case.”

How The Market Reacted

Investors immediately signaled relief. Alphabet shares jumped ~8% after hours, while Apple gained ~4%. The lack of a breakup, and the preservation of lucrative search placement payments, reassured Wall Street that Google’s search empire was not being dismantled overnight.

But beneath the relief lies a new strategic reality: Google’s moat of exclusivity has been replaced with a marketplace for defaults.

Strategic Insights: Beyond The Headlines

Most coverage of the decision has focused on what didn’t happen – the absence of a breakup or a choice screen. But the deeper story is how distribution, data, and AI will interact under the new rules.

1. Defaults Move From Moat To Marketplace

Under the old model, Google’s exclusive deals ensured it was the default on Safari, Android, and beyond. Now, partners can take money from multiple providers. That turns the default position into a marketplace, not a moat.

Apple, in particular, gains leverage. Court records revealed that Google paid Apple $20 billion in 2022 to remain Safari’s default search engine, and that Google’s default-placement payments totaled $26.3 billion in 2021 – a figure spread across multiple partners, though Apple was likely the largest recipient. Without exclusivity, Apple can entertain bids from Microsoft, OpenAI, or others – potentially extracting even more money by selling multiple placements or rotating defaults.

We may see new UX experiments: rotating search tiles, auction-based setup flows, or AI assistant shortcuts integrated into operating systems. Distribution partners like Samsung or Mozilla could pilot “multi-home defaults,” where Google, Bing, and an AI engine all coexist in visible slots.

2. Data Access Opens An On-Ramp For Challengers

Index-sharing and limited interaction data access lower barriers for rivals. Crawling the web is expensive; licensing Google’s index could accelerate challengers like Bing, Perplexity, or OpenAI’s rumored search product.

But it’s not full parity. Without ads data and ranking signals, competitors must still differentiate on product experience. Think faster answers, vertical specialization, or superior AI integration. As I like to put it: Index access gives challengers legs, not lungs.

Much depends on how “qualified competitor” is defined. A narrow definition could limit access to a token few; a broad one could empower a new wave of vertical and AI-driven search entrants.

3. AI Is Already Shifting The Game

The court acknowledged that generative AI reshaped its view of competition. Assistants like Copilot, Gemini, or Perplexity are increasingly acting as intent routers – answering directly, citing sources, or routing users to transactions without a traditional SERP.

That means the battle for distribution may shift from browsers and search bars to AI copilots embedded in operating systems, apps, and devices. If users increasingly ask their assistant instead of typing a query, exclusivity deals matter less than who owns the assistant.

For SEO and SEM professionals, this accelerates the shift toward zero-click answers, assistant-ready content, and schema that supports citations.

4. Financial Dynamics: Relief Today, Pressure Tomorrow

Yes, investors cheered. But over time, Google could face rising traffic acquisition costs (TAC) as Apple, Samsung, and carriers auction off default positions. Defending its distribution may get more expensive, eating into margins.

At the same time, without a choice screen, search market share is likely to shift gradually, not collapse. Expect Google’s U.S. query share to remain in the high 80s in the near term, with only single-digit erosion as rivals experiment with new models.

5. Knock-On Effects: The Ad-Tech Case Looms

Don’t overlook the second front: the DOJ’s separate antitrust case against Google’s ad-tech stack, now moving toward remedies hearings in Virginia. If that case results in structural changes – say, forcing Google to separate its publisher ad server from its exchange – it could reshape how search ads are bought, measured, and monetized.

For publishers, both cases matter. If rivals gain traction with AI-driven assistants, referral traffic could diversify – but also become more volatile, depending on how assistants handle citations and click-throughs.

What Happens Next

  • September 10, 2025: DOJ and Google file a revised judgment.
  • ~60 days later: Remedies begin taking effect.
  • Six years: Oversight period, with ongoing compliance monitoring.

Key Questions To Watch:

  • How will Apple implement non-exclusive search defaults in Safari?
  • Who qualifies as a “competitor” for index/data access, and on what terms?
  • Will rivals like Microsoft, Perplexity, or OpenAI buy into distribution slots aggressively?
  • How will AI assistants evolve as distribution front doors?

What This Means For SEO And PPC

This ruling isn’t just about contracts in Silicon Valley – it has practical consequences for marketers everywhere.

  • Distribution volatility planning. SEM teams should budget for a world where Safari queries become more contestable. Test Bing Ads, Copilot Ads, and assistant placements.
  • Assistant-ready content. Optimize for concise, cite-worthy answers with schema markup. Publish FAQs, data tables, and source-friendly content that large language models (LLMs) like to quote.
  • Syndication hedge. If new index-sharing programs emerge, explore partnerships with vertical search startups. Early pilots could deliver traffic streams outside the Google ecosystem.
  • Attribution resilience. As assistants mediate more traffic, referral strings will get messy. Double down on UTM governance, server-side tracking, and marketing mix models to parse signal from noise.
  • Creative testing. Build two-tier content: a punchy, fact-dense abstract that assistants can lift, and a deeper explainer for human readers.
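
For the "assistant-ready content" point above, one concrete tactic is FAQ structured data. A minimal sketch (schema.org's FAQPage type is real; the question and answer text are illustrative placeholders, and the markup is generated in Python purely for convenience):

```python
import json

# Minimal schema.org FAQPage markup of the kind assistants and search engines
# can cite. The question and answer below are hypothetical examples.
faq = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "Did the antitrust ruling break up Google?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "No. The court banned exclusive default-search contracts "
                        "but allowed Google to keep paying for non-exclusive placement.",
            },
        }
    ],
}

# Emit JSON-LD ready to embed in a <script type="application/ld+json"> tag.
print(json.dumps(faq, indent=2))
```

Keeping each answer short and self-contained in the markup mirrors the "punchy, fact-dense abstract" tactic in the creative testing bullet.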

Market Scenarios

  • Base Case (Most Likely): Google retains high-80s market share. TAC costs rise gradually. AI assistants siphon a modest share of informational queries by 2027. Impact: margin pressure more than market share loss.
  • Upside for Rivals: If index access is broad and AI assistants nail UX, Bing, Perplexity, and others could win five to 10 points combined in specific verticals. Impact: SEM arbitrage opportunities emerge, and SEO adapts to answer-first surfaces.
  • Regulatory Cascade: If the ad-tech remedies impose structural changes, Google’s measurement edge narrows, and OEMs test choice-like UX voluntarily. Impact: more fragmentation, more testing for marketers.

Final Takeaway

Judge Mehta summed up the challenge well: “Courts must craft remedies with a healthy dose of humility.” The ruling doesn’t topple Google, but it does force the search giant to compete on more open terms. Exclusivity is gone; auctions and assistants are in.

For marketers, the message is clear: Don’t wait for regulators to rebalance the playing field. Diversify now – across engines, assistants, and ad formats. Optimize for answerability as much as for rankings. And be ready: The real competition for search traffic is just beginning.


Featured Image: beast01/Shutterstock

Google PMax Unveils Optimization Tools

Google’s Performance Max campaigns place responsive ads across all Google channels based on audience signals. The search giant automatically determines an ad’s headlines, descriptions, and images across, say, Search, Display, and YouTube to deliver top results.

Yet PMax campaigns lack transparency and restrict options.

The encouraging news is that Google is listening to advertisers and has rolled out PMax reporting and flexibility updates in the past year. These include reports for asset-level conversions and Search category theme volume and conversions, as well as the ability to exclude devices where your ads can appear.

More recently, Google has provided new PMax optimization features. I’ll address those in this post.

Channel performance

At a Performance Max campaign level, advertisers can now see which channels drive traffic and conversions. In the example below, traffic from Google Discover accounts for 5.36% of total spend and one conversion.

Google Ads Performance Max report with 34,306 impressions, 3,740 interactions, and 58.22 conversions. Visualization shows conversions by channel, including Discover and Display, with costs and conversion values for contact and purchase goals.

Performance Max advertisers can now see, at the campaign level, which channels drive traffic and conversions.

Performance Max ads can show in these Google channels:

  • Discover
  • Display
  • Gmail
  • Maps
  • Search
  • YouTube

Advertisers cannot exclude specific channels, but the new visibility is helpful for judging PMax’s overall viability. Advertisers can also exclude non-converting ads and keywords to further assess whether PMax is the right option.

Final URL expansion

By default, new Performance Max campaigns turn on Final URL expansion, which means Google can send searchers to a landing page other than the one specified when it predicts better conversions. Expanding the Final URL can be worthwhile, but it’s important to see which pages are converting. An option in the “Assets” tab lists the Final URL expansion assets.

Advertisers can exclude irrelevant URLs in “Asset Optimization” within the campaign settings. Click on the “Customization” option to activate “Final URL expansion.”

Google Ads admin panel showing automated text asset options. Customization and Final URL Expansion toggles are enabled, with two URL exclusions listed: example.com and example2.com.

Asset Optimization

Speaking of Asset Optimization, advertisers can see the asset source for the many components of Performance Max data. For example, a Google-created headline may convert at twice the rate of an advertiser’s version. Advertisers can pause automatically created assets, similar to pausing keywords.

Advertisers can disable automated assets at the account level, but not for campaigns. Turn off the option, for example, if you don’t want Google-created sitelinks to show. Remember that turning off an automated asset impacts the entire account.

Negative keywords

Performance Max campaigns have always allowed negative keywords. However, the setup was cumbersome, requiring either implementation by a Google rep or the creation of an account-level negative keyword list.

Now, adding negative keywords is easy. Discovering them is also easy: search queries appear as a separate report in the “Insights and reports” tab, where advertisers can view the data and select terms to exclude.

Search Themes

Google introduced Search Themes in 2023 to help guide its AI. The Themes work similarly to keywords. For example, a retailer selling winter jackets could provide Search Themes of:

  • “Winter jackets”
  • “Men’s winter jackets”
  • “Women’s winter jackets”

Searchers don’t need to type these keywords for ads to show. Instead, the ads show if an advertiser’s site content or the searcher’s query history indicates relevance. Along with audience signals, Search Themes help Google understand a searcher’s profile.

Google now allows up to 50 Search Themes per asset group, an increase from 10.

Putin says organ transplants could grant immortality. Not quite.

This week I’m writing from Manchester, where I’ve been attending a conference on aging. Wednesday was full of talks and presentations by scientists who are trying to understand the nitty-gritty of aging—all the way down to the molecular level. Once we can understand the complex biology of aging, we should be able to slow or prevent the onset of age-related diseases, they hope.

Then my editor forwarded me a video of the leaders of Russia and China talking about immortality. “These days at 70 years old you are still a child,” China’s Xi Jinping, 72, was translated as saying, according to footage livestreamed by CCTV to multiple media outlets.

“With the developments of biotechnology, human organs can be continuously transplanted, and people can live younger and younger, and even achieve immortality,” Russia’s Vladimir Putin, also 72, is reported to have replied.

Russian President Vladimir Putin, Chinese President Xi Jinping and North Korean leader Kim Jong Un walk side by side

SERGEI BOBYLEV, SPUTNIK, KREMLIN POOL PHOTO VIA AP

There’s a striking contrast between that radical vision and the incremental longevity science presented at the meeting. Repeated rounds of organ transplantation surgery aren’t likely to help anyone radically extend their lifespan anytime soon.

First, back to Putin’s proposal: the idea of continually replacing aged organs to stay young. It’s a simplistic way to think about aging. After all, aging is so complicated that researchers can’t agree on what causes it, why it occurs, or even how to define it, let alone “treat” it.

Having said that, there may be some merit to the idea of repairing worn-out body parts with biological or synthetic replacements. Replacement therapies—including bioengineered organs—are being developed by multiple research teams. Some have already been tested in people. This week, let’s take a look at the idea of replacement therapies.

No one fully understands why our organs start to fail with age. On the face of it, replacing them seems like a good idea. After all, we already know how to do organ transplants. They’ve been a part of medicine since the 1950s and have been used to save hundreds of thousands of lives in the US alone.

And replacing old organs with young ones might have more broadly beneficial effects. When a young mouse is stitched to an old one, the older mouse benefits from the arrangement, and its health seems to improve.

The problem is that we don’t really know why. We don’t know what it is about young body tissues that makes them health-promoting. We don’t know how long these effects might last in a person. We don’t know how different organ transplants will compare, either. Might a young heart be more beneficial than a young liver? No one knows.

And that’s before you consider the practicalities of organ transplantation. There is already a shortage of donor organs—thousands of people die on waiting lists. Transplantation requires major surgery and, typically, a lifetime of prescription drugs that damp down the immune system, leaving a person more susceptible to certain infections and diseases.

So the idea of repeated organ transplantations shouldn’t really be a particularly appealing one. “I don’t think that’s going to happen anytime soon,” says Jesse Poganik, who studies aging at Brigham and Women’s Hospital in Boston and is also in Manchester for the meeting.

Poganik has been collaborating with transplant surgeons in his own research. “The surgeries are good, but they’re not simple,” he tells me. And they come with real risks. His own 24-year-old cousin developed a form of cancer after a liver and heart transplant. She died a few weeks ago, he says.

So when it comes to replacing worn-out organs, scientists are looking for both biological and synthetic alternatives.  

We’ve been replacing body parts for centuries. Wooden toes were used as far back as the 15th century. Joint replacements have been around for more than a hundred years. And major innovations over the last 70 years have given us devices like pacemakers, hearing aids, brain implants, and artificial hearts.

Scientists are exploring other ways to make tissues and organs, too. There are different approaches here, but they include everything from injecting stem cells to seeding “scaffolds” with cells in a lab.

In 1999, researchers used volunteers’ own cells to seed bladder-shaped collagen scaffolds. The resulting bioengineered bladders went on to be transplanted into seven people in an initial trial.

Now scientists are working on more complicated organs. Jean Hébert, a program manager at the US government’s Advanced Research Projects Agency for Health, has been exploring ways to gradually replace the cells in a person’s brain. The idea is that, eventually, the recipient will end up with a young brain.

Hébert showed my colleague Antonio Regalado how, in his early experiments, he removed parts of mice’s brains and replaced them with embryonic stem cells. That work seems a world away from the biochemical studies being presented at the British Society for Research on Ageing annual meeting in Manchester, where I am now.

On Wednesday, one scientist described how he’d been testing potential longevity drugs on the tiny nematode worm C. elegans. These worms live for only about 15 to 40 days, and his team can perform tens of thousands of experiments with them. About 40% of the drugs that extend lifespan in C. elegans also help mice live longer, he told us.

To me, that’s not an amazing hit rate. And we don’t know how many of those drugs will work in people. Probably less than 40% of that 40%.

Other scientists presented work on chemical reactions happening at the cellular level. It was deep, basic science, and my takeaway was that there’s a lot aging researchers still don’t fully understand.

It will take years—if not decades—to get the full picture of aging at the molecular level. And if we rely on a series of experiments in worms, and then mice, and then humans, we’re unlikely to make progress for a really long time. In that context, the idea of replacement therapy feels like a shortcut.

“Replacement is a really exciting avenue because you don’t have to understand the biology of aging as much,” says Sierra Lore, who studies aging at the University of Copenhagen in Denmark and the Buck Institute for Research on Aging in Novato, California.

Lore says she started her research career studying aging at the molecular level, but she soon changed course. She now plans to focus her attention on replacement therapies. “I very quickly realized we’re decades away [from understanding the molecular processes that underlie aging],” she says. “Why don’t we just take what we already know—replacement—and try to understand and apply it better?”

So perhaps Putin’s straightforward approach to delaying aging holds some merit. Whether it will grant him immortality is another matter.

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

The Download: longevity myths, and sewer-cleaning robots

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Putin says organ transplants could grant immortality. Not quite.

—Jessica Hamzelou

Earlier this week, my editor forwarded me a video of the leaders of Russia and China talking about immortality. “These days at 70 years old you are still a child,” China’s Xi Jinping, 72, was translated as saying.

“With the developments of biotechnology, human organs can be continuously transplanted, and people can live younger and younger, and even achieve immortality,” Russia’s Vladimir Putin, also 72, is reported to have replied.

In reality, rounds of organ transplantation surgery aren’t likely to help anyone radically extend their lifespan anytime soon. And it’s a simplistic way to think about aging—a process so complicated that researchers can’t agree on what causes it, why it occurs, or even how to define it, let alone “treat” it. Read the full story.

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

India is using robots to clean sewer pipes so humans no longer have to

When Jitender was a child in New Delhi, both his parents worked as manual scavengers—a job that involved clearing the city’s sewers by hand. Now, he is among almost 200 contractors involved in the Delhi government’s effort to shift from this manual process to safer mechanical methods.

Although it has been outlawed since 1993, manual scavenging—the practice of extracting human excreta from toilets, sewers, or septic tanks—is still practiced widely in India. And not only is the job undignified, but it can be extremely dangerous.

Now, several companies have emerged to offer alternatives at a wide range of technical complexity. Read the full story.

—Hamaad Habibullah

This story is from our new print edition, which is all about the future of security. Subscribe here to catch future copies when they land.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 RFK Jr buried a major study linking alcohol and cancer
Clearly, the alcohol industry’s intense lobbying of the Trump administration is working. (Vox)
+ RFK Jr repeated health untruths during a marathon Senate hearing yesterday. (Mother Jones)
+ His anti-vaccine stance alarmed Democrats and Republicans alike. (The Atlantic $)

2 US tech giants want to embed AI in education
They’re backing a vaguely worded initiative to that effect launched by Melania Trump. (Rolling Stone $)
+ Tech leaders took it in turns to praise Trump during dinner. (WSJ $)
+ Elon Musk was nowhere to be seen. (The Guardian)
+ AI’s giants want to take over the classroom. (MIT Technology Review)

3 The FTC will probe AI companies over their impact on children 
In a bid to evaluate whether chatbots are harming their mental health. (WSJ $)
+ An AI companion site is hosting sexually charged conversations with underage celebrity bots. (MIT Technology Review)

4 Podcasting giant Joe Rogan has been spreading climate misinformation
He’s grossly misinterpreted scientists’ research—and they’re exasperated. (The Guardian)
+ Rogan claims the Earth’s temperature is plummeting. It isn’t.  (Forbes)
+ Why climate researchers are taking the temperature of mountain snow. (MIT Technology Review)

5 DeepSeek is working on its own advanced AI agent
Watch out, OpenAI. (Bloomberg $)

6 OpenAI will start making its own AI chips next year
In a bid to lessen its reliance on Nvidia. (FT $)

7 Warner Bros is suing Midjourney
The AI startup used the likenesses of characters including Superman without permission, it alleges. (Bloomberg $)
+ What comes next for AI copyright lawsuits? (MIT Technology Review)

8 Rivers and lakes are being used to cool down buildings
But networks in Paris, Toronto, and the US are facing a looming problem. (Wired $)
+ The future of urban housing is energy-efficient refrigerators. (MIT Technology Review)

9 How high school reunions survive in the age of social media
Curiosity is a powerful driving force, it seems. (The Atlantic $)

10 Facebook’s poke feature is back 👈
If I still used Facebook, I’d be thrilled. (TechCrunch)

Quote of the day

“Even if it doesn’t turn you into the alien if you eat this stuff, I guarantee you’ll grow an extra ear.”

—Senator John Kennedy, a Republican from Louisiana, warns of dire consequences if Americans eat shrimp from countries other than the US, Gizmodo reports.

One more thing

Why one developer won’t quit fighting to connect the US’s grids

Michael Skelly hasn’t learned to take no for an answer. For much of the last 15 years, the energy entrepreneur has worked to develop long-haul transmission lines to carry wind power across the Great Plains, Midwest, and Southwest. But so far, he has little to show for the effort.

Skelly has long argued that building such lines and linking together the nation’s grids would accelerate the shift from coal- and natural-gas-fueled power plants to the renewables needed to cut the pollution driving climate change. But his previous business shut down in 2019, after halting two of its projects and selling off interests in three more.

Skelly contends he was early, not wrong. And he has a point: markets and policymakers are increasingly coming around to his perspective. Read the full story.

—James Temple

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ The Paper, the new mockumentary from the makers of the American Office, looks interesting.
+ Giorgio Armani was a true maestro of menswear.
+ The phases of the moon are pretty fascinating 🌕
+ The Damien Hirst-directed video for Blur’s classic Country House has been given a 4K makeover.

5 Content Marketing Ideas for October 2025

October 2025 presents content marketers with a rich mix of content themes and topics. Halloween headlines the month, but there are also inspirational cultural observances, industry celebrations, and seasonal transitions.

Content marketing is the process of creating, publishing, and promoting content such as articles, videos, or podcasts to attract, engage, and retain customers.

Content marketing is closely associated with search engine optimization, generative engine optimization, and social media marketing. While so-called evergreen content has its place, in 2025 search engines, large language models, and shoppers often seek fresh stories and angles.

What follows are five content marketing ideas your business can try in October 2025.

AI-Generated Halloween Fun

Marketers can feature AI tools prominently for Halloween 2025. This image is AI-generated.

Halloween is a key retail sales event. In 2024, for example, U.S. shoppers spent nearly $12 billion on costumes, candy, and decorations.

For content marketing, Halloween shopping guides and party suggestions are staples. So add artificial intelligence to freshen things up and expedite the process!

Merchants can employ AI for Halloween content in at least three ways:

  • Interactive AI-powered tools. Imagine an online party supply shop that “vibe codes” an AI-powered Halloween party planning tool. The tool asks shoppers questions, and based on the answers, it delivers a full party plan, complete with games and a shopping list.
  • Entertaining articles. Just about any merchant can publish articles with themes of “We Asked AI for the Most Outrageous…” or “We Asked AI to Design the Spookiest Costumes of 2025.”
  • AI prompt lists. The same party supply shop could publish a list of the 10 best Halloween party planning prompts, such as “10 ChatGPT Prompts for the Perfect Halloween Party.”
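The interactive party-planner idea above can start much simpler than a full AI integration. As a minimal sketch, quiz answers can map to a canned plan; everything here (the themes, games, and shopping lists) is invented for illustration:

```python
# Hypothetical sketch of the quiz-to-plan idea: two quiz answers select a
# theme, and the theme drives the game list and shopping list.
# All data below is invented for the example.

THEMES = {
    ("kids", "spooky"): {
        "theme": "Haunted House Lite",
        "games": ["mummy wrap relay", "pumpkin ring toss"],
        "shopping_list": ["cobweb decor", "toilet paper", "mini pumpkins"],
    },
    ("adults", "glam"): {
        "theme": "Masquerade Noir",
        "games": ["murder mystery", "costume contest"],
        "shopping_list": ["masks", "candelabras", "black tablecloths"],
    },
}

def plan_party(audience, vibe):
    """Return a party plan for the quiz answers, or a safe default."""
    default = {
        "theme": "Classic Halloween",
        "games": ["bobbing for apples"],
        "shopping_list": ["candy", "jack-o'-lantern kit"],
    }
    return THEMES.get((audience, vibe), default)

plan = plan_party("kids", "spooky")
```

A real tool would swap the lookup table for an LLM call, but the structure (collect answers, produce a plan plus a shopping list of products the shop sells) stays the same.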

National Manufacturing Day

Origin, a direct-to-consumer apparel brand, appeals to shoppers seeking U.S.-made products.

National Manufacturing Day, observed on the first Friday in October, began in 2012 to showcase modern manufacturing and inspire skilled workers.

In 2025, the occasion falls on October 3 and is part of a broader Manufacturing Month coordinated by the National Association of Manufacturers.

For ecommerce businesses, Manufacturing Day is an opportunity to showcase suppliers and how they make products. Shoppers like this sort of supply chain transparency. For example, Origin is a direct-to-consumer apparel brand with an engaging manufacturing story. How and where it produces products is a vital part of the brand’s appeal to shoppers seeking U.S.-made products.

National Manufacturing Day can benefit nearly any direct-to-consumer brand, and it is an opportunity to share a founding story.

Italian-American Heritage Month

Mr Porter is a leading example of retail content marketing. The site’s articles align with engaging topics and popular products.

Every October since 1989, the United States has observed Italian-American Heritage Month, acknowledging the profound impact of Italian immigrants and their descendants on American culture.

From cuisine and fashion to construction and music, Italian-American contributions weave into the fabric of daily life.

For ecommerce businesses, the observance inspires content that connects products to heritage and the Italian-American experience.

Kitchen supply stores could particularly benefit. Italian cuisine has a broad appeal worldwide. Imagine showcasing how to make a ragu or pizza while promoting cookware, utensils, or specialty ingredients.

There are certainly other ways to connect products sold to Italian culture. Menswear retailer and marketplace Mr Porter has a history of producing content related to Italy. Here are some examples.

In each article, Mr Porter promotes between 12 and 20 products.

World Space Week

World Space Week is a chance to engage with tech-savvy shoppers and space enthusiasts.

The United Nations established World Space Week in 1999. More than 90 countries now recognize it.

World Space Week takes place from October 4 to 10 each year, commemorating the Soviet Union’s launch of Sputnik in 1957 and the signing of the Outer Space Treaty in 1967.

The week provides many content opportunities.

Educational retailers could publish activity guides that highlight space-themed toys, puzzles, and kits. A home decor shop might curate collections of space-themed bedding, wall art, or lighting. Hobby stores and craft shops could capitalize, too.

Winterization Listicles

October is a time to get ready for the cold season.

October is the heart of autumn in the Northern Hemisphere. Temperatures creep downward, and the trees blaze with fall colors. It is time to prepare for winter.

That preparation presents an opportunity to publish helpful winterization listicles. These lists should offer practical, scannable guides that help consumers prepare for the cold.

Here are a few example headlines.

  • A home improvement retailer could publish “10 Steps to Winterize Your Home.”
  • An auto parts store could create the list “15 Essentials to Prepare Your Car for Winter.”
  • An online outfitter might write “12 Gear Must-Haves for Cold-Weather Adventures.”

Checklists and practical advice position merchants as problem solvers. They also nudge shoppers toward timely seasonal purchases they may not have planned, potentially increasing basket size and driving early Q4 revenue.

The Problem With Always-On SEO: Why You Need Sprints, Not Checklists via @sejournal, @coreydmorris

There’s a lot that goes into SEO, and now, more broadly, into online visibility overall, whether we’re talking about an organic result in a search engine, an AI Overview, or a large language model (LLM).

SEO often takes a long time compared with ads and some other channels, and it comes with a large amount of complexity, technical aspects, contradictions about how it works, and even outright disagreements. To be implemented at all, it has to be organized.

Over the years and decades, this has resulted in the acceptance of specific “best practices,” along with the fact that it is a longer-term commitment. That, ultimately, has led to the use of checklists and specific cadences to accomplish what is typically seen as an “ongoing” and never-ending discipline.

In full disclosure, you’ll find articles written by me that talk about checklists and ways to structure the work that is important to be visible and found online. I’m not saying we have to throw them out, but we can’t simply do the list or activities.

“Always-on SEO” sounds great in theory: ongoing optimization, constant monitoring, and steady progress. But in reality, it often becomes a nebulous set of tasks without priority, strategy, or momentum.

This article challenges the default mindset of treating SEO as a perpetual checklist and proposes a sprint-based approach, where work is grouped into focused time blocks with measurable goals.

By approaching SEO in strategic sprints, teams can prioritize, measure, adapt, and improve – all while staying aligned with larger business goals.

The Problem With Perpetual SEO Checklists

What I often see with SEO checklists is a lack of prioritization. Everything becomes a task, but nothing is deemed critical.

The checklist might have “right” and “good” things in it, but it isn’t weighted or prioritized based on any level of strategic approach or potential level of impact.

And when there’s a lack of direction, we can often end up with a set of actions, activities, or tactics that have no clear end or defined evaluation. This gets us to a place of just “doing SEO” without being able to objectively say what the result was or how things improved.

As with any digital marketing channel, SEO activity without the right anchor or foundation can result in wasted effort.

Technical fixes and content updates may not support meaningful business goals, and they can be a huge investment of time and money that ultimately doesn’t impact the business. Activity without results or clear direction can also drive SEO teams and professionals to boredom or burnout.

I’ve taken over a number of situations where, due to stakeholder confusion, a business had concluded that SEO didn’t work for it or that the team wasn’t competent.

When activity doesn’t generate results and you only find that out a year into an investment, it is hard to recover, especially when no one really knows what “done” or success looks like.

I say all of this not to bring up pain, say that checklists aren’t good, or even that the ongoing tactics aren’t right. I’m simply saying we have to have a deeper understanding and meaning behind what we’re doing in SEO.

What Sprint-Based SEO Looks Like

SEO sprints are focused and time-bound (e.g., four weeks) efforts with specific goals tied to strategy. Rather than working on everything at once, you work on the highest-impact priorities in chunks.

Common sprint types:

  • Content optimization sprints.
  • Technical SEO fix sprints.
  • Internal linking improvement sprints.
  • New content creation sprints.
  • Authority/link building sprints.

You can also combine types into a custom sprint. Whether you stay within one category or blend themes and tactics, your first sprint needs to be anchored to an initial strategy, plan, or audit.

Each sprint ends with measurable outputs, documented outcomes, and clear learnings. The first one might be rooted in an initial plan, but each subsequent sprint will include a retrospective review from the previous one to help fuel continuous learning, efficiencies, improvements, and ultimate impact.

Benefits Of SEO Sprints

A quick-win benefit is focus. Pivoting from a generic checklist to a sprint structure means solving a defined problem, not tackling a vague backlog.

As noted earlier, sprints are time-based as well. By choosing the right length (not so short that the sample size is too small, nor so long that you keep repeating tactics that aren’t effective), you gain agility within an adaptable longer-term approach.

Agility in sprints allows you to adjust based on performance and new insights. Checklists, by contrast, are not only generic and often disconnected from strategy; they also constantly fall out of date as the sources and methods of online visibility shift.

Accountability and team clarity come more naturally as well. It’s easier to report on and justify value with clear before/after comparisons and to keep people engaged and in the know on what’s happening now and what’s next.

This matters for aligning key performance indicators (KPIs) with overall business goals, rather than getting lost in jargon and technical detail and merely “hoping” for return on investment (ROI) instead of pursuing shorter-term, higher-impact efforts.

Sprints can be tied directly to goals (revenue, lead generation, funnel support) rather than to rankings or other KPIs that are upstream and further removed from business outcomes. And shorter-term expectations take the pressure off waiting long-term for something to happen.

How To Implement Sprint-Based SEO

Start with strategy. Identify what matters to the business and where SEO fits. Define sprint themes and objectives, and make them specific enough to be meaningful and measurable.

Example: “Improve organic conversions for top 5 services pages” vs. “Improve rankings.”

Build a backlog or tactics plan, but don’t treat it like a checklist. Use it to feed sprint plans, but not overwhelm day-to-day work.

In short:

  • Plan your first sprint: Choose one clear objective, timeline, and outcome.
  • Track and review: Report on progress, document what was done, and define what’s next.
  • Iterate: Use learnings from each sprint to improve the next.
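The planning steps above amount to pulling the highest-impact work from a backlog into a time-boxed sprint. A minimal sketch of that workflow follows; the fields, impact scores, and capacity model are all illustrative assumptions, not a prescribed framework:

```python
# Illustrative sketch: a sprint plan fed from a prioritized backlog.
# Impact/effort scores and the capacity heuristic are made up for the example.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    impact: int  # estimated business impact, 1 (low) to 5 (high)
    effort: int  # estimated effort, 1 (low) to 5 (high)

@dataclass
class Sprint:
    objective: str  # e.g., "Improve organic conversions for top 5 services pages"
    weeks: int
    tasks: list = field(default_factory=list)

def plan_sprint(objective, backlog, weeks=4, capacity=6):
    """Pull the best impact-per-effort tasks from the backlog until capacity is used."""
    ranked = sorted(backlog, key=lambda t: t.impact / t.effort, reverse=True)
    sprint = Sprint(objective, weeks)
    used = 0
    for task in ranked:
        if used + task.effort <= capacity:
            sprint.tasks.append(task)
            used += task.effort
    return sprint

backlog = [
    Task("Rewrite service page titles", impact=5, effort=2),
    Task("Fix crawl errors", impact=2, effort=1),
    Task("Site-wide redesign", impact=4, effort=5),
]
sprint = plan_sprint("Improve organic conversions for top 5 services pages", backlog)
```

The point of the sketch is the shape, not the numbers: the backlog feeds the sprint, but the sprint has one objective, a fixed length, and a hard limit on how much it takes on.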

When (And Where) “Always-On” SEO Still Applies

Certain things do need continuous attention. I’m not saying that it is right for 100% of your sprints to be 100% custom.

There are recurring things that could, or likely should, go into sprints or be monitored and maintained by regular or routine audits or checklists, e.g., crawl errors, broken links, technical issues, etc.

But this maintenance work shouldn’t be the SEO strategy; it should support it. Use “always-on” work as infrastructure, not direction. And remember that the checklist isn’t the strategy: if you have one, it’s a planning tool, not your tactical plan and roadmap to SEO ROI.

Why It’s Time To Rethink “Always-On” SEO

I’ve hit on it enough, but I will wrap up by reminding you that endless to-do lists don’t move the needle.

Checklists can be good things and full of the “right” tactics. However, they often lack strategy and don’t serve shorter attention spans or allow for enough agility.

Sprint-based SEO helps teams be more strategic, productive, and aligned with the business overall, with room to implement prioritized tactics tied to overall goals and to adjust to market and business conditions.

Shifting your team from “always-on” to “intentionally paced” is a move to start seeing results and not just activity.

Featured Image: wenich_mit/Shutterstock

How Trump is helping China extend its massive lead in clean energy 

On a spring day in 1954, Bell Labs researchers showed off the first practical solar panels at a press conference in Murray Hill, New Jersey, using sunlight to spin a toy Ferris wheel before a stunned crowd.

The solar future looked bright. But in the race to commercialize the technology it invented, the US would lose resoundingly. Last year, China exported $40 billion worth of solar panels and modules, while America shipped just $69 million, according to the New York Times. It was a stunning forfeit of a huge technological lead. 

And now the US seems determined to repeat the mistake. In its quest to prop up aging fossil-fuel industries, the Trump administration has slashed federal support for the emerging cleantech sector, handing the nation’s chief economic rival the most generous of gifts: an unobstructed path to locking in its control of emerging energy technologies, and a leg up in inventing the industries of the future.

China’s dominance of solar was no accident. In the late 2000s, the government simply determined that the sector was a national priority. Then it leveraged deep subsidies, targeted policies, and price wars to scale up production, drive product improvements, and slash costs. It’s made similar moves in batteries, electric vehicles, and wind turbines. 

Meanwhile, President Donald Trump has set to work unraveling hard-won clean-energy achievements in the US, snuffing out the gathering momentum to rebuild the nation’s energy sector in cleaner, more sustainable ways.

The tax and spending bill that Trump signed into law in early July wound down the subsidies for solar and wind power contained in the Inflation Reduction Act of 2022. The legislation also cut off federal support for cleantech projects that rely too heavily on Chinese materials—a hamfisted bid to punish Chinese industries that will instead make many US projects financially unworkable.

Meanwhile, the administration has slashed federal funding for science and attacked the financial foundations of premier research universities, pulling up the roots of future energy innovations and industries.

A driving motivation for many of these policies is the quest to protect the legacy energy industry based on coal, oil, and natural gas, all of which the US is geologically blessed with. But this strategy amounts to the innovator’s dilemma playing out at a national scale—a country clinging to its declining industries rather than investing in the ones that will define the future.

It does not particularly matter whether Trump believes in or cares about climate change. The economic and international security imperatives to invest in modern, sustainable industries are every bit as indisputable as the chemistry of greenhouse gases.

Without sustained industrial policies that reward innovation, American entrepreneurs and investors won’t risk money and time creating new businesses, developing new products, or building first-of-a-kind projects here. Indeed, venture capitalists have told me that numerous US climate-tech companies are already looking overseas, seeking markets where they can count on government support. Some fear that many other companies will fail in the coming months as subsidies disappear, developments stall, and funding flags. 

All of which will help China extend an already massive lead.

The nation has installed nearly three times as many wind turbines as the US, and it generates more than twice as much solar power. It boasts five of the 10 largest EV companies in the world, and the three largest wind turbine manufacturers. China absolutely dominates the battery market, producing the vast majority of the anodes, cathodes, and battery cells that increasingly power the world’s vehicles, grids, and gadgets.

China harnessed the clean-energy transition to clean up its skies, upgrade its domestic industries, create jobs for its citizens, strengthen trade ties, and build new markets in emerging economies. In turn, it’s using those business links to accrue soft power and extend its influence—all while the US turns its back on global institutions.

These widening relationships increasingly insulate China from external pressures, including those threatened by Trump’s go-to tactic: igniting or inflaming trade wars. 

But stiff tariffs and tough talk aren’t what built the world’s largest economy and established the US as the global force in technology for more than a century. What did was deep, sustained federal investment into education, science, and research and development—the very budget items that Trump and his party have been so eager to eliminate. 

Another thing

Earlier this summer, the EPA announced plans to revoke the Obama-era “endangerment finding,” the legal foundation for regulating the nation’s greenhouse-gas pollution. 

The agency’s argument leans heavily on a report that rehashes decades-old climate-denial talking points to assert that rising emissions haven’t produced the harms that scientists expected. It’s a wild, Orwellian plea for you to reject the evidence of your eyes and ears in a summer that saw record heat waves in the Midwest and East and is now blanketing the West in wildfire smoke.

Over the weekend, more than 85 scientists sent a point-by-point, 459-page rebuttal to the federal government, highlighting myriad ways in which the report “is biased, full of errors, and not fit to inform policy making,” as Bob Kopp, a climate scientist at Rutgers, put it on Bluesky.

“The authors reached these flawed conclusions through selective filtering of evidence (‘cherry picking’), overemphasis of uncertainties, misquoting peer-reviewed research, and a general dismissal of the vast majority of decades of peer-reviewed research,” the dozens of reviewers found.

The Trump administration handpicked researchers who would write the report it wanted to support its quarrel with thermometers and justify its preordained decision to rescind the endangerment finding. But it’s legally bound to hear from others as well, notes Karen McKinnon, a climate researcher at the University of California, Los Angeles.

“Luckily, there is time to take action,” McKinnon said in a statement. “Comment on the report, and contact your representatives to let them know we need to take action to bring back the tolerable summers of years past.”

You can read the full report here, or NPR’s take here. And be sure to read Casey Crownhart’s earlier piece in The Spark on the endangerment finding.

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

Synthesia’s AI clones are more expressive than ever. Soon they’ll be able to talk back.

Earlier this summer, I walked through the glassy lobby of a fancy office in London, into an elevator, and then along a corridor into a clean, carpeted room. Natural light flooded in through its windows, and a large pair of umbrella-like lighting rigs made the room even brighter. I tried not to squint as I took my place in front of a tripod equipped with a large camera and a laptop displaying an autocue. I took a deep breath and started to read out the script.

I’m not a newsreader or an actor auditioning for a movie—I was visiting the AI company Synthesia to give it what it needed to create a hyperrealistic AI-generated avatar of me. The company’s avatars are a decent barometer of just how dizzying progress has been in AI over the past few years, so I was curious just how accurately its latest AI model, introduced last month, could replicate me. 

When Synthesia launched in 2017, its primary purpose was to match AI versions of real human faces—for example, the former footballer David Beckham—with dubbed voices speaking in different languages. A few years later, in 2020, it started giving the companies that signed up for its services the opportunity to make professional-level presentation videos starring either AI versions of staff members or consenting actors. But the technology wasn’t perfect. The avatars’ body movements could be jerky and unnatural, their accents sometimes slipped, and the emotions indicated by their voices didn’t always match their facial expressions.

Now Synthesia’s avatars have been updated with more natural mannerisms and movements, as well as expressive voices that better preserve the speaker’s accent—making them appear more humanlike than ever before. For Synthesia’s corporate clients, these avatars will make for slicker presenters of financial results, internal communications, or staff training videos.

I found the video demonstrating my avatar as unnerving as it is technically impressive. It’s slick enough to pass as a high-definition recording of a chirpy corporate speech, and if you didn’t know me, you’d probably think that’s exactly what it was. This demonstration shows how much harder it’s becoming to distinguish the artificial from the real. And before long, these avatars will even be able to talk back to us. But how much better can they get? And what might interacting with AI clones do to us?  

The creation process

When my former colleague Melissa visited Synthesia’s London studio to create an avatar of herself last year, she had to go through a long process of calibrating the system, reading out a script in different emotional states, and mouthing the sounds needed to help her avatar form vowels and consonants. As I stand in the brightly lit room 15 months later, I’m relieved to hear that the creation process has been significantly streamlined. Josh Baker-Mendoza, Synthesia’s technical supervisor, encourages me to gesture and move my hands as I would during natural conversation, while simultaneously warning me not to move too much. I duly repeat an overly glowing script that’s designed to encourage me to speak emotively and enthusiastically. The result is a bit as if Steve Jobs had been resurrected as a blond British woman with a low, monotonous voice.

It also has the unfortunate effect of making me sound like an employee of Synthesia. “I am so thrilled to be with you today to show off what we’ve been working on. We are on the edge of innovation, and the possibilities are endless,” I parrot eagerly, trying to sound lively rather than manic. “So get ready to be part of something that will make you go, ‘Wow!’ This opportunity isn’t just big—it’s monumental.”

Just an hour later, the team has all the footage it needs. A couple of weeks later I receive two avatars of myself: one powered by the previous Express-1 model and the other made with the latest Express-2 technology. The latter, Synthesia claims, makes its synthetic humans more lifelike and true to the people they’re modeled on, complete with more expressive hand gestures, facial movements, and speech. You can see the results for yourself below. 

COURTESY SYNTHESIA

Last year, Melissa found that her Express-1-powered avatar failed to match her transatlantic accent. Its range of emotions was also limited—when she asked her avatar to read a script angrily, it sounded more whiny than furious. In the months since, Synthesia has improved Express-1, but the version of my avatar made with the same technology blinks furiously and still struggles to synchronize body movements with speech.

By way of contrast, I’m struck by just how much my new Express-2 avatar looks like me: Its facial features mirror my own perfectly. Its voice is spookily accurate too, and although it gesticulates more than I do, its hand movements generally marry up with what I’m saying. 

But the tiny telltale signs of AI generation are still there if you know where to look. The palms of my hands are bright pink and as smooth as putty. Strands of hair hang stiffly around my shoulders instead of moving with me. Its eyes stare glassily ahead, rarely blinking. And although the voice is unmistakably mine, there’s something slightly off about my digital clone’s intonations and speech patterns. “This is great!” my avatar randomly declares, before slipping back into a saner register.

Anna Eiserbeck, a postdoctoral psychology researcher at the Humboldt University of Berlin who has studied how humans react to perceived deepfake faces, says she isn’t sure she’d have been able to identify my avatar as a deepfake at first glance.

But she would eventually have noticed something amiss. It’s not just the small details that give it away—my oddly static earring, the way my body sometimes moves in small, abrupt jerks. It’s something that runs much deeper, she explains.

“Something seemed a bit empty. I know there’s no actual emotion behind it— it’s not a conscious being. It does not feel anything,” she says. Watching the video gave her “this kind of uncanny feeling.” 

My digital clone, and Eiserbeck’s reaction to it, make me wonder how realistic these avatars really need to be. 

I realize that part of the reason I feel disconcerted by my avatar is that it behaves in a way I rarely have to. Its oddly upbeat register is completely at odds with how I normally speak; I’m a die-hard cynical Brit who finds it difficult to inject enthusiasm into my voice even when I’m genuinely thrilled or excited. It’s just the way I am. Plus, watching the videos on a loop makes me question if I really do wave my hands about that way, or move my mouth in such a weird manner. If you thought being confronted with your own face on a Zoom call was humbling, wait until you’re staring at a whole avatar of yourself. 

When Facebook was first taking off in the UK almost 20 years ago, my friends and I thought illicitly logging into each other’s accounts and posting the most outrageous or rage-inducing status updates imaginable was the height of comedy. I wonder if the equivalent will soon be getting someone else’s avatar to say something truly embarrassing: expressing support for a disgraced politician or (in my case) admitting to liking Ed Sheeran’s music. 

Express-2 remodels every person it’s presented with into a polished professional speaker with the body language of a hyperactive hype man. And while this makes perfect sense for a company focused on making glossy business videos, watching my avatar doesn’t feel like watching me at all. It feels like something else entirely.

How it works

The real technical challenge these days has less to do with creating avatars that match our appearance than with getting them to replicate our behavior, says Björn Schuller, a professor of artificial intelligence at Imperial College London. “There’s a lot to consider to get right; you have to have the right micro gesture, the right intonation, the sound of voice and the right word,” he says. “I don’t want an AI [avatar] to frown at the wrong moment—that could send an entirely different message.”

To achieve an improved level of realism, Synthesia developed a number of new audio and video AI models. The team created a voice cloning model to preserve the human speaker’s accent, intonation, and expressiveness—unlike other voice models, which can flatten speakers’ distinctive accents into generically American-sounding voices.

When a user uploads a script to Express-1, its system analyzes the words to infer the correct tone to use. That information is then fed into a diffusion model, which renders the avatar’s facial expressions and movements to match the speech. 

Alongside the voice model, Express-2 uses three other models to create and animate the avatars. The first generates an avatar’s gestures to accompany the speech fed into it by the Express-Voice model. A second evaluates how closely the input audio aligns with the multiple versions of the corresponding generated motion before selecting the best one. Then a final model renders the avatar with that chosen motion. 

This third rendering model is significantly more powerful than its Express-1 predecessor. Whereas the previous model had a few hundred million parameters, Express-2’s rendering model’s parameters number in the billions. This means it takes less time to create the avatar, says Youssef Alami Mejjati, Synthesia’s head of research and development:

“With Express-1, it needed to first see someone expressing emotions to be able to render them. Now, because we’ve trained it on much more diverse data and much larger data sets, with much more compute, it just learns these associations automatically without needing to see them.” 
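The generate-score-render pipeline described above can be sketched roughly as follows. This is a minimal illustration of the described architecture, not Synthesia's actual code: the function names, the candidate count, and the squared-error scoring are all assumptions standing in for the real neural models.

```python
# Rough sketch of the three-model Express-2 pipeline described in the text.
# Each "model" here is a stand-in function; all names and the scoring
# heuristic are hypothetical, for illustration only.

def generate_motion_candidates(audio_features, n=4):
    """Model 1 (stand-in): propose several candidate gesture/motion sequences."""
    return [[f * (i + 1) for f in audio_features] for i in range(n)]

def alignment_score(audio_features, motion):
    """Model 2 (stand-in): score how closely a motion tracks the audio.
    Here: negative squared error between audio energy and motion magnitude."""
    return -sum((a - m) ** 2 for a, m in zip(audio_features, motion))

def render_avatar(motion):
    """Model 3 (stand-in): render the avatar with the chosen motion."""
    return {"frames": len(motion), "motion": motion}

def express2_pipeline(audio_features):
    """Generate candidates, keep the best-aligned one, then render it."""
    candidates = generate_motion_candidates(audio_features)
    best = max(candidates, key=lambda m: alignment_score(audio_features, m))
    return render_avatar(best)

video = express2_pipeline([0.2, 0.9, 0.5])
```

The design choice worth noting is the middle step: rather than trusting a single generation, the system samples several motions and lets a separate scoring model pick the one that best matches the input audio before the expensive render runs.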

Narrowing the uncanny valley

Although humanlike AI-generated avatars have been around for years, the recent boom in generative AI is making it ever easier and more affordable to create lifelike synthetic humans—and they’re already being put to work. Synthesia isn’t alone: AI avatar companies like Yuzu Labs, Creatify, Arcdads, and Vidyard give businesses the tools to quickly generate and edit videos starring either AI actors or artificial versions of members of staff, promising cost-effective ways to make compelling ads that audiences connect with. Similarly, AI-generated clones of livestreamers have exploded in popularity across China in recent years, partly because they can sell products 24/7 without getting tired or needing to be paid.

For now at least, Synthesia is “laser focused” on the corporate sphere. But it’s not ruling out expanding into new sectors such as entertainment or education, says Peter Hill, the company’s chief technical officer. In an apparent step toward this, Synthesia recently partnered with Google to integrate Google’s powerful new generative video model Veo 3 into its platform, allowing users to directly generate and embed clips into Synthesia’s videos. It suggests that in the future, these hyperrealistic artificial humans could take up starring roles in detailed universes with ever-changeable backdrops. 

At present this could, for example, involve using Veo 3 to generate a video of meat-processing machinery, with a Synthesia avatar next to the machines talking about how to use them safely. But future versions of Synthesia’s technology could result in educational videos customizable to an individual’s level of knowledge, says Alex Voica, head of corporate affairs and policy at Synthesia. For example, a video about the evolution of life on Earth could be tweaked for someone with a biology degree or someone with high-school-level knowledge. “It’s going to be such a much more engaging and personalized way of delivering content that I’m really excited about,” he says. 

The next frontier, according to Synthesia, will be avatars that can talk back, “understanding” conversations with users and responding in real time. Think ChatGPT, but with a lifelike digital human attached. 

Synthesia has already added an interactive element by letting users click through on-screen questions during quizzes presented by its avatars. But it’s also exploring making them truly interactive: Future users could ask their avatar to pause and expand on a point, or ask it a question. “We really want to make the best learning experience, and that means through video that’s entertaining but also personalized and interactive,” says Alami Mejjati. “This, for me, is the missing part in online learning experiences today. And I know we’re very close to solving that.”

We already know that humans can—and do—form deep emotional bonds with AI systems, even with basic text-based chatbots. Combining agentic technology—which is already capable of navigating the web, coding, and playing video games unsupervised—with a realistic human face could usher in a whole new kind of AI addiction, says Pat Pataranutaporn, an assistant professor at the MIT Media Lab.  

“If you make the system too realistic, people might start forming certain kinds of relationships with these characters,” he says. “We’ve seen many cases where AI companions have influenced dangerous behavior even when they are basically texting. If an avatar had a talking head, it would be even more addictive.”

Schuller agrees that avatars in the near future will be perfectly optimized to adjust their projected levels of emotion and charisma so that their human audiences will stay engaged for as long as possible. “It will be very hard [for humans] to compete with charismatic AI of the future; it’s always present, always has an ear for you, and is always understanding,” he says. “AI will change that human-to-human connection.”

As I pause and replay my Express-2 avatar, I imagine holding conversations with it—this uncanny, permanently upbeat, perpetually available product of pixels and algorithms that looks like me and sounds like me, but fundamentally isn’t me. Virtual Rhiannon has never laughed until she’s cried, or fallen in love, or run a marathon, or watched the sun set in another country. 

But, I concede, she could deliver a damned good presentation about why Ed Sheeran is the greatest musician ever to come out of the UK. And only my closest friends and family would know that it’s not the real me.