How Yichao “Peak” Ji became a global AI app hitmaker

Yichao “Peak” Ji is one of MIT Technology Review’s 2025 Innovators Under 35. Meet the rest of this year’s honorees. 

When Yichao Ji—also known as “Peak”—appeared in a launch video for Manus in March, he didn’t expect it to go viral. Speaking in fluent English, the 32-year-old introduced the AI agent built by Chinese startup Butterfly Effect, where he serves as chief scientist. 

The video was not an elaborate production—it was directed by cofounder Zhang Tao and filmed in a corner of their Beijing office. But something about Ji’s delivery, and the vision behind the product, cut through the noise. The product, then still an early preview available only through invite codes, spread across the Chinese internet to the world in a matter of days. Within a week of its debut, Manus had attracted a waiting list of around 2 million people. 

At first glance, Manus works like most chatbots: Users can ask it questions in a chat window. But besides providing answers, it can also carry out tasks (for example, finding an apartment that meets specified criteria within a certain budget). It does this by breaking tasks down into steps, then using a cloud-based virtual machine equipped with a browser and other tools to execute them—perusing websites, filling in forms, and so on.
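Manus's internals aren't public, but the plan-then-execute pattern the paragraph describes can be sketched generically. In this toy sketch, every function and the stubbed "tool" are illustrative stand-ins, not Manus's actual code: a planner decomposes the task, and each step is handed to a tool runner (which in a real agent would be an LLM call driving a sandboxed browser).

```python
# Minimal sketch of a plan-and-execute agent loop, in the style the
# article describes. NOT Manus's implementation; all names and the
# stubbed planner/tool below are illustrative assumptions.

def plan(task: str) -> list[str]:
    # A real agent would ask an LLM to decompose the task into steps.
    return [
        f"search listings for: {task}",
        f"filter results for: {task}",
        f"summarize matches for: {task}",
    ]

def run_tool(step: str) -> str:
    # A real agent would drive a browser or other tool inside a
    # cloud-based virtual machine; here we just echo the step.
    return f"done: {step}"

def run_agent(task: str) -> list[str]:
    results = []
    for step in plan(task):             # break the task into steps...
        results.append(run_tool(step))  # ...then execute each with a tool
    return results

print(run_agent("2-bedroom apartment under $2,500"))
```

The essential design choice is the separation of planning from execution: the planner never touches the web directly, and each tool invocation is an isolated, inspectable step.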

Ji is the technical core of the team. Now based in Singapore, he leads product and infrastructure development as the company pushes forward with its global expansion. 

Despite his relative youth, Ji has over a decade of experience building products that merge technical complexity with real-world usability. That earned him credibility among both engineers and investors—and put him at the forefront of a rising class of Chinese technologists with AI products and global ambitions. 

Serial builder

The son of a professor and an IT professional, Ji moved to Boulder, Colorado, at age four for his father’s visiting scholar post, returning to Beijing in second grade.

His fluent English set him apart early on, but it was an elementary school robotics team that sparked his interest in programming. By high school, he was running the computer club, teaching himself how to build operating systems, and drawing inspiration from Bill Gates, Linux, and open-source culture. He describes himself as a lifelong Apple devotee, and it was Apple’s launch of the App Store in 2008 that ignited his passion for development.

In 2010, as a high school sophomore, Ji created the Mammoth browser, a customizable third-party iPhone browser. It quickly became the most-downloaded third-party browser developed by an individual in China and earned him the Macworld Asia Grand Prize in 2011. International tech site AppAdvice called it a product that “redefined the way you browse the internet.” At age 20, he was on the cover of Forbes magazine and made its “30 Under 30” list. 


During his teenage years, Ji developed several other iOS apps, including a budgeting tool designed for Hasbro’s Monopoly game, which sold well—until it attracted a legal notice for using the trademarked name. But that early brush with a multinational legal team didn’t put Ji off a career in tech. If anything, he says, it sharpened his instincts for both product and risk. 

In 2012, Ji launched his own company, Peak Labs, and later led the development of Magi, a search engine. The tool extracted information from across the web to answer queries—conceptually similar to today’s AI-powered search, but powered by a custom language model. 

Magi was briefly popular, drawing millions of users in its first month, but consumer adoption didn’t stick. It did, however, attract enterprise interest, and Ji adapted it for B2B use, before selling it in 2022. 

AI acumen 

Manus would become his next act—and a more ambitious one. His cofounders, Zhang Tao and Xiao Hong, complement Ji’s technical core with product know-how, storytelling, and organizational savvy. Both Xiao and Ji are serial entrepreneurs who have been backed by venture capital firm ZhenFund multiple times. Together, they represent the kind of long-term collaboration and international ambition that increasingly defines China’s next wave of entrepreneurs.


People who have worked with Ji describe him as a clear thinker, a fast talker, and a tireless, deeply committed builder who thinks in systems, products, and user flows. He represents a new generation of Chinese technologists: equally at home coding or in pitch meetings, fluent in both building and branding. He’s also a product of open-source culture, and remains an active contributor whose projects regularly garner attention—and GitHub stars—across developer communities.

With new funding led by US venture capital firm Benchmark, Ji and his team are taking Manus to the wider world, relocating operations outside of China, to Singapore, and actively targeting consumers around the world. The product is built on US-based infrastructure, drawing on technologies like Claude Sonnet, Microsoft Azure, and open-source tools such as Browser Use. It’s a distinctly global setup: an AI agent developed by a Chinese team, powered by Western platforms, and designed for international users. That isn’t incidental; it reflects the more fluid nature of AI entrepreneurship today, where talent, infrastructure, and ambition move across borders just as quickly as the technology itself.

For Ji, the goal isn’t just building a global company—it’s building a legacy. “I hope Manus is the last product I’ll ever build,” Ji says. “Because if I ever have another wild idea—(I’ll just) leave it to Manus!”

How Trump’s policies are affecting early-career scientists—in their own words

This story is part of MIT Technology Review’s “America Undone” series, examining how the foundations of US success in science and innovation are currently under threat. You can read the rest here.

Every year MIT Technology Review celebrates accomplished young scientists, entrepreneurs, and inventors from around the world in our Innovators Under 35 list. We’ve just published the 2025 edition. This year, though, the context is pointedly different: The US scientific community finds itself in an unprecedented position, with the very foundation of its work under attack.

Since Donald Trump took office in January, his administration has fired top government scientists, targeted universities individually and academia more broadly, and made substantial funding cuts to the country’s science and technology infrastructure. It has also upended longstanding rights and norms related to free speech, civil rights, and immigration—all of which further affects the overall environment for research and innovation in science and technology. 

We wanted to understand how these changes are affecting the careers and work of our most recent classes of innovators. The US government is the largest source of research funding at US colleges and universities, and many of our honorees are new professors and current or recent graduate or PhD students, while others work with government-funded entities in other ways. Meanwhile, about 16% of those in US graduate programs are international students. 

We sent surveys to the six most recent cohorts, which include 210 people. We asked people about both positive and negative impacts of the administration’s new policies and invited them to tell us more in an optional interview. Thirty-seven completed our survey, and we spoke with 14 of them in follow-up calls. Most respondents are academic researchers (about two-thirds) and are based in the US (81%); 11 work in the private sector (six of whom are entrepreneurs). Their responses provide a glimpse into the complexities of building their labs, companies, and careers in today’s political climate. 

Twenty-six people told us that their work has been affected by the Trump administration’s changes; only one of them described those effects as “mostly positive.” The other 25 reported primarily negative effects. While a few agreed to be named in this story, most asked to be identified only by their job titles and general areas of work, or wished to remain anonymous, for fear of retaliation. “I would not want to flag the ire of the US government,” one interviewee told us. 

Across interviews and surveys, certain themes appeared repeatedly: the loss of jobs, funding, or opportunities; restrictions on speech and research topics; and limits on who can carry out that research. These shifts have left many respondents deeply concerned about the “long-term implications in IP generation, new scientists, and spinout companies in the US,” as one respondent put it. 

One of the things we heard most consistently is that the uncertainty of the current moment is pushing people to take a more risk-averse approach to their scientific work—either by selecting projects that require fewer resources or that seem more in line with the administration’s priorities, or by erring on the side of hiring fewer people. “We’re not thinking so much about building and enabling … we’re thinking about surviving,” said one respondent. 

Ultimately, many are worried that all the lost opportunities will result in less innovation overall—and caution that it will take time to grasp the full impact. 

“We’re not going to feel it right now, but in like two to three years from now, you will feel it,” said one entrepreneur with a PhD who started his company directly from his area of study. “There are just going to be fewer people that should have been inventing things.”

The money: “Folks are definitely feeling the pressure”

The most immediate impact has been financial. Already, the Trump administration has pulled back support for many areas of science—ending more than a thousand awards by the National Institutes of Health and over 100 grants for climate-related projects by the National Science Foundation. The rate of new awards granted by both agencies has slowed, and the NSF has cut the number of graduate fellowships it’s funding by half for this school year. 

The administration has also cut or threatened to cut funding from a growing number of universities, including Harvard, Columbia, Brown, and UCLA, for supposedly not doing enough to combat antisemitism.

As a result, our honorees said that finding funding to support their work has gotten much harder—and it was already a big challenge before. 

A biochemist at a public university told us she’d lost a major NIH grant. Since it was terminated earlier this year, she’s been spending less time in the lab and more on fundraising. 

Others described uncertainty about the status of grants from a wide range of agencies, including NSF, the Advanced Research Projects Agency for Health, the Department of Energy, and the Centers for Disease Control and Prevention, which collectively could pay out more than $44 million to the researchers we’ve recognized. Several had waited months for news on an application’s status or updates on when funds they had already won would be disbursed. One AI researcher who studies climate-related issues is concerned that her multiyear grant may not be renewed, even though renewal would have been “fairly standard” in the past.

Two individuals lamented the cancellation of 24 awards in May by the DOE’s Office of Clean Energy Demonstrations, including grants for carbon capture projects and a clean cement plant. One said the decision had “severely disrupted the funding environment for climate-tech startups” by creating “widespread uncertainty,” “undermining investor confidence,” and “complicating strategic planning.” 

Climate research and technologies have been a favorite target of the Trump administration: The recently passed tax and spending bill put stricter timelines in place that make it harder for wind and solar installations to qualify for tax credits via the Inflation Reduction Act. Already, at least 35 major commercial climate-tech projects have been canceled or downsized this year. 

In response to a detailed list of questions, a DOE spokesperson said, “Secretary [Chris] Wright and President Trump have made it clear that unleashing American scientific innovation is a top priority.” They pointed to “robust investments in science” in the president’s proposed budget and the spending bill and cited special areas of focus “to maintain America’s global competitiveness,” including nuclear fusion, high-performance computing, quantum computing, and AI. 

Other respondents cited tighter budgets brought on by a change in how the government calculates indirect costs, which are funds included in research grants to cover equipment, institutional overhead, and in some cases graduate students’ salaries. In February, the NIH instituted a 15% cap on indirect costs—which ran closer to 28% of the research funds the NIH awarded in 2023. The DOE, DOD, and NSF all soon proposed similar caps. This collective action has sparked lawsuits, and indirect costs remain in limbo. (MIT, which owns MIT Technology Review, is involved in several of these lawsuits; MIT Technology Review is editorially independent from the university.) 

Looking ahead, an academic at a public university in Texas, where the money granted for indirect costs funds student salaries, said he plans to hire fewer students for his own lab. “It’s very sad that I cannot promise [positions] at this point because of this,” he told us, adding that the cap could also affect the competitiveness of public universities in Texas, since schools elsewhere may fund their student researchers differently. 

At the same time, two people with funding through the Defense Department—which could see a surge of investment under the president’s proposed budget—said their projects were moving forward as planned. A biomedical engineer at a public university in the Midwest expressed excitement about what he perceives as a fresh surge of federal interest in industrial and defense applications of synthetic biology. Still, he acknowledged colleagues working on different projects don’t feel as optimistic: “Folks are definitely feeling the pressure.”

Many who are affected by cuts or delays are now looking for new funding sources in a bid to become less reliant on the federal government. Eleven people said they are pursuing or plan to pursue philanthropic and foundation funding or to seek out industry support. However, the amount of private funding available can’t begin to make up the difference in federal funds lost, and investors often focus more on low-risk, short-term applications than on open scientific questions. 

The NIH responded to a detailed list of questions with a statement pointing to unspecified investments in early-career researchers. “Recent updates to our priorities and processes are designed to broaden scientific opportunity rather than restrict it, ensuring that taxpayer-funded research is rigorous, reproducible, and relevant to all Americans,” it reads. The NSF declined a request for comment from MIT Technology Review. 

Further complicating this financial picture are tariffs—some of which are already in effect, and many more of which have been threatened. Nine people who responded to our survey said their work is already being affected by these taxes imposed on goods imported into the US. For some scientists, this has meant higher operating costs for their labs: An AI researcher said tariffs are making computational equipment more expensive, while the Texas academic said the cost of buying microscopes from a German firm had gone up by thousands of dollars since he first budgeted for them. (Neither the White House press office nor the White House Office of Science and Technology Policy responded to requests for comment.) 

One cleantech entrepreneur saw a positive impact on his business as more US companies reevaluated their supply chains and sought to incorporate more domestic suppliers. The entrepreneur’s firm, which is based in the US, has seen more interest for its services from potential customers seeking “tariff-proof vendors.”  

“Everybody is proactive on tariffs and we’re one of these solutions—we’re made in America,” he said. 

Another person, who works for a European firm, is factoring potential tariffs into decisions about where to open new production facilities. Though the Trump administration has said the taxes are meant to reinvigorate US manufacturing, she’s now less inclined to build out a significant presence in the US because, she said, tariffs may drive up the costs of importing raw materials that are required to make the company’s product. 

What’s more, financial backers have encouraged her company to stay rooted abroad because of the potential impact of tariffs for US-based facilities: “People who invest worldwide—they are saying it’s reassuring for them right now to consider investing in Europe,” she said.

The climate of fear: “It will impact the entire university if there is retaliation” 

Innovators working in both academia and the private sector described new concerns about speech and the politicization of science. Many have changed how they describe their work in order to better align with the administration’s priorities—fearing funding cuts, job terminations, immigration action, and other potential retaliation. 

This is particularly true for those who work at universities. The Trump administration has reached deals with some institutions, including Columbia and Brown, that would restore part of the funding it slashed—but only after the universities agreed to pay hefty fines and abide by terms that, critics say, hand over an unprecedented level of oversight to administration officials. 

Some respondents had received guidance on what they could or couldn’t say from program managers at their funding agencies or their universities or investors; others had not received any official guidance but made personal decisions on what to say and share publicly based on recent news of grant cancellations.

Both on and off campus, there is substantial pressure on diversity, equity, and inclusion (DEI) initiatives, which have been hit particularly hard as the administration seeks to eliminate what it called “illegal and immoral discrimination programs” in one of the first executive orders of President Trump’s second term.  

One respondent, whose work focuses on fighting child sexual abuse materials, recalled rewriting a grant abstract “3x to remove words banned” by Senator Ted Cruz of Texas, an administration ally; back in February, Cruz identified 3,400 NSF grants as “woke DEI” research advancing “neo-Marxist class warfare propaganda.” (His list includes grants to research self-driving cars and solar eclipses. His office did not respond to a request for comment.) 

Many other researchers we spoke with are also taking steps to avoid being put in the DEI bucket. A technologist at a Big Tech firm whose work used to include efforts to provide more opportunities for marginalized communities to get into computing has stopped talking about those recruiting efforts. One biologist described hearing that grant applications for the NIH now have to avoid words like “cell type diversity” for “DEI reasons”—no matter that “cell type diversity” is, she said, a common and “neutral” scientific term in microbiology. (In its statement, the NIH said: “To be clear, no scientific terms are banned, and commonly used terms like ‘cell type diversity’ are fully acceptable in applications and research proposals.”) 

Plenty of other research has also gotten caught up in the storm. 

One person who works in climate technology said that she now talks about “critical minerals,” “sovereignty,” and “energy independence” or “dominance” rather than “climate” or “industrial decarbonization.” (Trump’s Energy Department has boosted investment in critical minerals, pledging nearly $1 billion to support related projects.) Another individual working in AI said she has been instructed to talk less about “regulation,” “safety,” or “ethics” as they relate to her work. One survey respondent described the language shift as “definitely more red-themed.”

Some said that shifts in language won’t change the substance of their work, but others feared they will indeed affect the research itself. 

Emma Pierson, an assistant professor of computer science at the University of California, Berkeley, worried that AI companies may kowtow to the administration, which could in turn “influence model development.” While she noted that this fear is speculative, the Trump administration’s AI Action Plan contains language that directs the federal government to purchase large language models that generate “truthful responses” (by the administration’s definition), with a goal of “preventing woke AI in the federal government.” 

And one biomedical researcher fears that the administration’s effective ban on DEI will force an end to outreach “favoring any one community” and hurt efforts to improve the representation of women and people of color in clinical trials. The NIH and the Food and Drug Administration had been working for years to address the historic underrepresentation of these groups through approaches including specific funding opportunities to address health disparities; many of these efforts have recently been cut. 

Respondents from both academia and the private sector told us they’re aware of the high stakes of speaking out. 

“As an academic, we have to be very careful about how we voice our personal opinion because it will impact the entire university if there is retaliation,” one engineering professor told us. 

“I don’t want to be a target,” said one cleantech entrepreneur, who worries not only about reprisals from the current administration but also about potential blowback from Democrats if he cooperates with it. 

“I’m not a Trumper!” he said. “I’m just trying not to get fined by the EPA.” 

The people: “The adversarial attitude against immigrants … is posing a brain drain”

Immigrants are crucial to American science, but what one respondent called a broad “persecution of immigrants,” and an increasing climate of racism and xenophobia, are matters of growing concern. 

Some people we spoke with feel vulnerable, particularly those who are immigrants themselves. The Trump administration has revoked 6,000 international student visas (causing federal judges to intervene in some cases) and threatened to “aggressively” revoke the visas of Chinese students in particular. In recent months, the Justice Department has prioritized efforts to denaturalize certain citizens, while similar efforts to revoke green cards granted decades ago were shut down by court order. One entrepreneur who holds a green card told us, “I find myself definitely being more cognizant of what I’m saying in public and certainly try to stay away from anything political as a result of what’s going on, not just in science but in the rest of the administration’s policies.” 

On top of all this, federal immigration raids and other enforcement actions—authorities have turned away foreign academics upon arrival to the US and detained others with valid academic visas, sometimes because of their support for Palestine—have created a broad climate of fear.  

Four respondents said they were worried about their own immigration status, while 16 expressed concerns about their ability to attract or retain talent, including international students. More than a million international students studied in the US last year, with nearly half of those enrolling in graduate programs, according to the Institute of International Education. 

“The adversarial attitude against immigrants, especially those from politically sensitive countries, is posing a brain drain,” an AI researcher at a large public university on the West Coast told us. 

This attack on immigration in the US can be compounded by state-level restrictions. Texas and Florida both restrict international collaborations with and recruitment of scientists from countries including China, even though researchers told us that international collaborations could help mitigate the impacts of decreased domestic funding. “I cannot collaborate at this point because there’s too many restrictions and Texas also can limit us from visiting some countries,” the Texas academic said. “We cannot share results. We cannot visit other institutions … and we cannot give talks.”

All this is leading to more interest in positions outside the United States. One entrepreneur, whose business is multinational, said that their company has received a much higher share of applications from US-based candidates to openings in Europe than it did a year ago, despite the lower salaries offered there. 

“It is becoming easier to hire good people in the UK,” confirmed Karen Sarkisyan, a synthetic biologist based in London. 

At least one US-based respondent, an academic in climate technology, accepted a tenured position in the United Kingdom. Another said that she was looking for positions in other countries, despite her current job security and “very good” salary. “I can tell more layoffs are coming, and the work I do is massively devalued. I can’t stand to be in a country that treats their scientists and researchers and educated people like this,” she told us. 

Some professors reported in our survey and interviews that their current students are less interested in pursuing academic careers because graduate and PhD students are losing offers and opportunities as a result of grant cancellations. So even as the number of international students dwindles, there may also be “shortages in domestic grad students,” one mechanical engineer at a public university said, and “research will fall behind.”  


In the end, this will affect not just academic research but also private-sector innovation. One biomedical entrepreneur told us that academic collaborators frequently help his company generate lots of ideas: “We hope that some of them will pan out and become very compelling areas for us to invest in.” Particularly for small startups without large research budgets, having fewer academics to work with will mean that “we just invest less, we just have fewer options to innovate,” he said. “The level of risk that industry is willing to take is generally lower than academia, and you can’t really bridge that gap.” 

Despite it all, a number of researchers and entrepreneurs who generally expressed frustration about the current political climate said they still consider the US the best place to do science. 

Pierson, the AI researcher at Berkeley, described staying committed to her research into social inequities despite the political backlash: “I’m an optimist. I do believe this will pass, and these problems are not going to pass unless we work on them.” 

And a biotech entrepreneur pointed out that US-based scientists can still command more resources than those in most other countries. “I think the US still has so much going for it. Like, there isn’t a comparable place to be if you’re trying to be on the forefront of innovation—trying to build a company or find opportunities,” he said.

Several academics and founders who came to the US to pursue scientific careers spoke about still being drawn to America’s spirit of invention and the chance to advance on their own merits. “For me, I’ve always been like, the American dream is something real,” said one. They said they’re holding fast to those ideals—for now.

Why basic science deserves our boldest investment

In December 1947, three physicists at Bell Telephone Laboratories—John Bardeen, William Shockley, and Walter Brattain—built a compact electronic device using thin gold wires and a piece of germanium, a material known as a semiconductor. Their invention, later named the transistor (for which they were awarded the Nobel Prize in 1956), could amplify and switch electrical signals, marking a dramatic departure from the bulky and fragile vacuum tubes that had powered electronics until then.

Its inventors weren’t chasing a specific product. They were asking fundamental questions about how electrons behave in semiconductors, experimenting with surface states and electron mobility in germanium crystals. Over months of trial and refinement, they combined theoretical insights from quantum mechanics with hands-on experimentation in solid-state physics—work many might have dismissed as too basic, academic, or unprofitable.

Their efforts culminated in a moment that now marks the dawn of the information age. Transistors don’t usually get the credit they deserve, yet they are the bedrock of every smartphone, computer, satellite, MRI scanner, GPS system, and artificial-intelligence platform we use today. With their ability to modulate (and route) electrical current at astonishing speeds, transistors make modern and future computing and electronics possible.

This breakthrough did not emerge from a business plan or product pitch. It arose from open-ended, curiosity-driven research and enabling development, supported by an institution that saw value in exploring the unknown. It took years of trial and error, collaborations across disciplines, and a deep belief that understanding nature—even without a guaranteed payoff—was worth the effort.

After the first successful demonstration in late 1947, the invention of the transistor remained confidential while Bell Labs filed patent applications and continued development. It was publicly announced at a press conference on June 30, 1948, in New York City. The scientific explanation followed in a seminal paper published in the journal Physical Review. 

How do they work? At their core, transistors are made of semiconductors—materials like germanium and, later, silicon—that can either conduct or resist electricity depending on subtle manipulations of their structure and charge. In a typical transistor, a small voltage applied to one part of the device (the gate) either allows or blocks the electric current flowing through another part (the channel). It’s this simple control mechanism, scaled up billions of times, that lets your phone run apps, your laptop render images, and your search engine return answers in milliseconds.
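The gate-controls-channel mechanism described above can be captured in a deliberately crude model: treat the transistor as a switch that conducts only when the gate voltage crosses a threshold. The threshold and resistance values here are illustrative assumptions, and real device physics is continuous and far richer, but the sketch shows the control logic that, replicated billions of times, underlies digital computation.

```python
# Toy model of a transistor as a voltage-controlled switch.
# Illustrative only: real transistors have continuous I-V curves;
# the 0.7 V threshold and 100-ohm channel are assumed values.

def channel_current(gate_v: float, supply_v: float,
                    threshold_v: float = 0.7,
                    resistance: float = 100.0) -> float:
    """Return channel current (amps) in a crude on/off switch model."""
    if gate_v < threshold_v:
        return 0.0                   # switch open: no current flows
    return supply_v / resistance     # switch closed: Ohm's law

print(channel_current(0.0, 5.0))     # gate off -> 0.0
print(channel_current(1.2, 5.0))     # gate on  -> 0.05
```

Chaining such switches is all it takes to build logic: wire two in series and the output conducts only when both gates are on, which is the essence of a gate-level AND.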

Though early devices used germanium, researchers soon discovered that silicon—more thermally stable, moisture resistant, and far more abundant—was better suited for industrial production. By the late 1950s, the transition to silicon was underway, making possible the development of integrated circuits and, eventually, the microprocessors that power today’s digital world.

A modern chip the size of a human fingernail now contains tens of billions of silicon transistors, each measured in nanometers—smaller than many viruses. These tiny switches turn on and off billions of times per second, controlling the flow of electrical signals involved in computation, data storage, audio and visual processing, and artificial intelligence. They form the fundamental infrastructure behind nearly every digital device in use today. 

The global semiconductor industry is now worth over half a trillion dollars. Devices that began as experimental prototypes in a physics lab now underpin economies, national security, health care, education, and global communication. But the transistor’s origin story carries a deeper lesson—one we risk forgetting.

Much of the fundamental understanding that moved transistor technology forward came from federally funded university research. Nearly a quarter of transistor research at Bell Labs in the 1950s was supported by the federal government. Much of the rest was subsidized by revenue from AT&T’s monopoly on the US phone system, which flowed into industrial R&D.

Inspired by the 1945 report “Science: The Endless Frontier,” authored by Vannevar Bush at the request of President Truman, the US government began a long-standing tradition of investing in basic research. These investments have paid steady dividends across many scientific domains—from nuclear energy to lasers, and from medical technologies to artificial intelligence. Trained in fundamental research, generations of students have emerged from university labs with the knowledge and skills necessary to push existing technology beyond its known capabilities.

And yet, funding for basic science—and for the education of those who can pursue it—is under increasing pressure. The new White House’s proposed federal budget includes deep cuts to the Department of Energy and the National Science Foundation (though Congress may deviate from those recommendations). Already, the National Institutes of Health has canceled or paused more than $1.9 billion in grants, while NSF STEM education programs suffered more than $700 million in terminations.

These losses have forced some universities to freeze graduate student admissions, cancel internships, and scale back summer research opportunities—making it harder for young people to pursue scientific and engineering careers. In an age dominated by short-term metrics and rapid returns, it can be difficult to justify research whose applications may not materialize for decades. But those are precisely the kinds of efforts we must support if we want to secure our technological future.

Consider John McCarthy, the mathematician and computer scientist who coined the term “artificial intelligence.” In the late 1950s, while at MIT, he led one of the first AI groups and developed Lisp, a programming language still used today in scientific computing and AI applications. At the time, practical AI seemed far off. But that early foundational work laid the groundwork for today’s AI-driven world.

After the initial enthusiasm of the 1950s through the ’70s, interest in neural networks—a leading AI architecture today inspired by the human brain—declined during the so-called “AI winters” of the late 1990s and early 2000s. Limited data, inadequate computational power, and theoretical gaps made it hard for the field to progress. Still, researchers like Geoffrey Hinton and John Hopfield pressed on. Hopfield, now a 2024 Nobel laureate in physics, first introduced his groundbreaking neural network model in 1982, in a paper published in Proceedings of the National Academy of Sciences of the USA. His work revealed the deep connections between collective computation and the behavior of disordered magnetic systems. Together with the work of colleagues including Hinton, who was awarded the Nobel the same year, this foundational research seeded the explosion of deep-learning technologies we see today.

One reason neural networks now flourish is the graphics processing unit, or GPU—originally designed for gaming but now essential for the matrix-heavy operations of AI. These chips themselves rely on decades of fundamental research in materials science and solid-state physics: high-k dielectric materials, strained silicon alloys, and other advances that make today’s most efficient transistors possible. We are now entering another frontier, exploring memristors, phase-change and 2D materials, and spintronic devices.

If you’re reading this on a phone or laptop, you’re holding the result of a gamble someone once made on curiosity. That same curiosity is still alive in university and research labs today—in often unglamorous, sometimes obscure work quietly laying the groundwork for revolutions that will infiltrate some of the most essential aspects of our lives 50 years from now. At the leading physics journal where I am editor, my collaborators and I see the painstaking work and dedication behind every paper we handle. Our modern economy—with giants like Nvidia, Microsoft, Apple, Amazon, and Alphabet—would be unimaginable without the humble transistor and the passion for knowledge fueling the relentless curiosity of scientists like those who made it possible.

The next transistor may not look like a switch at all. It might emerge from new kinds of materials (such as quantum, hybrid organic-inorganic, or hierarchical types) or from tools we haven’t yet imagined. But it will need the same ingredients: solid fundamental knowledge, resources, and freedom to pursue open questions driven by curiosity, collaboration—and most importantly, financial support from someone who believes it’s worth the risk.

Julia R. Greer is a materials scientist at the California Institute of Technology. She is a judge for MIT Technology Review’s Innovators Under 35 and a former honoree (in 2008).

The Download: introducing our 35 Innovators Under 35 list for 2025

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Introducing: our 35 Innovators Under 35 list for 2025

The world is full of extraordinary young people brimming with ideas for how to crack tough problems. Every year, we recognize 35 such individuals from around the world—all of whom are under the age of 35.

These scientists, inventors, and entrepreneurs are working to help mitigate climate change, accelerate scientific progress, and alleviate human suffering from disease. Some are launching companies while others are hard at work in academic labs. They were selected from hundreds of nominees by expert judges and our newsroom staff. 

Get to know them all—including our 2025 Innovator of the Year—in these profiles.

Why basic science deserves our boldest investment

—Julia R. Greer is a materials scientist at the California Institute of Technology, a judge for MIT Technology Review’s Innovators Under 35, and a former honoree (in 2008).

A modern chip the size of a human fingernail contains tens of billions of silicon transistors, each measured in nanometers—smaller than many viruses. These tiny switches form the infrastructure behind nearly every digital device in use today.

Much of the fundamental understanding that moved transistor technology forward came from federally funded university research. But that funding is under increasing pressure, thanks to deep budget cuts proposed by the White House.

These losses have forced some universities to freeze graduate student admissions, cancel internships, and scale back summer research opportunities—making it harder for young people to pursue scientific and engineering careers. 

In an age dominated by short-term metrics and rapid returns, it can be difficult to justify research whose applications may not materialize for decades. But those are precisely the kinds of efforts we must support if we want to secure our technological future. Read the full story.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The US is considering annual chip supply permits in China
For South Korean companies Samsung and SK Hynix, specifically. (Bloomberg $)
+ US lawmakers still hold power over chips in China. (CNN)

2 America has recorded its first case of screwworm in over 50 years
And the warming climate is making it easier for the flies to thrive. (Vox)
+ Experts fear an approaching public health emergency. (The Guardian)

3 Drone warfare is dominating Ukraine’s frontline
Amid relentless assaults, overhead and land drones are being put to work. (The Guardian)
+ How cutting-edge drones forced land-locked tanks to evolve. (NYT $)
+ On the ground in Ukraine’s largest Starlink repair shop. (MIT Technology Review)

4 OpenAI is working out why chatbots hallucinate so much
Examining a model’s incentives provides some clues. (Insider $)
+ Models’ tendency to confidently present falsehoods as fact is a big problem. (TechCrunch)
+ Why does AI hallucinate? (MIT Technology Review)

5 How one man is connecting Silicon Valley to the Middle East’s AI boom
If you want to build a data center, Zachary Cefaratti is your man. (FT $)
+ The data center boom in the desert. (MIT Technology Review)

6 The first OpenAI-backed movie is coming to theaters next year
The animated Critterz is hoping for a Cannes Film Festival debut. (WSJ $)
+ A Disney director tried—and failed—to use an AI Hans Zimmer to create a soundtrack. (MIT Technology Review)

7 Who wants to live forever?
These billionaires are confident their cash will pave the way to longer lives. (WSJ $)
+ Putin says organ transplants could grant immortality. Not quite. (MIT Technology Review)

8 Tesla isn’t focused on selling cars any more
The company’s latest Master Plan is all about humanoid robots. (The Atlantic $)
+ The board is willing to offer Musk a $1 trillion pay package if he delivers. (Wired $)
+ Uber is gearing up to test driverless cars in Germany. (The Verge)
+ China’s EV giants are betting big on humanoid robots. (MIT Technology Review)

9 Do aliens go on holiday?
Scientists wonder whether tourism could be a potential drive for them to visit us. (New Yorker $)
+ How these two UFO hunters became go-to experts on America’s “mystery drone” invasion. (MIT Technology Review)

10 Vodafone’s new TikTok influencer isn’t real
It’s yet another example of AI avatars being used in ads. (The Verge)
+ Synthesia’s AI clones are more expressive than ever. Soon they’ll be able to talk back. (MIT Technology Review)

Quote of the day

“Silicon Valley totally effed up in overhyping LLMs.”

—Palantir CEO Alex Karp criticizes those who fueled the AI hype around large language models, Semafor reports.

One more thing

Puerto Rico’s power struggles

On the southeastern coast of Puerto Rico lies the territory’s only coal-fired power station, flanked by a mountain of toxic ash. The plant, owned by the utility giant AES, has long plagued this part of Puerto Rico with air and water pollution.

Before the coal plant opened, Guayama had on average just over 103 cancer cases per year. In 2003, the year after the plant opened, the number of cancer cases in the municipality surged by 50%, to 167. 

In 2022, the most recent year with available data, cases hit a new high of 209. The question is: How did it get this bad? Read the full story.

—Alexander C. Kaufman

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ What’s up with tennis players’ strange serving rituals?
+ If constant scrolling is turning your hands into gnarled claws, this stretch should help.
+ How to land a genuine bargain on Facebook Marketplace.
+ This photographer tracks down people who featured in pictures decades before, and persuades them to recreate their poses. Heartwarming stuff ❤

Can AI Send the Perfect Ecommerce Promo?

The combination of first-party behavioral data and artificial intelligence may transform ecommerce outbound marketing.

Called “AI individualization,” the approach aims to create a personalized shopping experience tailored to an individual’s preferences, behaviors, and buying history.

The Perfect Send

“Internally, we strive for the ‘perfect send,’ when 100 percent of the people who get the message click or engage, and no one opts out,” said Alex Campbell, the chief innovation officer and co-founder at Vibes, a mobile marketing platform.

Campbell was discussing the potential for AI individualization (AI-I), Rich Communication Services, and mobile marketing in the retail sector when he described this 100% engagement, 0% opt-out scenario.

Ecommerce marketers might modify that definition, but the perfect send is when messaging meets a shopper’s need at just the right moment.

Shopper Expectations


Shoppers who opt in to email, text, or push messaging want relevant offers.

“We do a customer survey every year…and we always ask a question like, ‘What would make you opt out?’ Two years ago was the first time we heard, ‘You are not sending me enough messages,’” said Campbell.

The folks surveyed had signed up to receive mobile marketing. They wanted to receive relevant and timely product notifications and discount offers.

AI-I can help.

First Party Data

Ecommerce AI-I is possible because online stores can collect first-party data — purchase history, browsing behavior, engagement data — without relying on third-party cookies or providers.

Humans cannot sort through all the data. Even rules and automations would struggle to reveal individual preferences in real time.

An AI layer, however, can keep learning from that data continuously, even while the messages are being deployed.

Not Merely Segments

Ecommerce marketers typically segment shoppers around common behaviors. A wine merchant, for example, might have a segment for “value wine shoppers” or “premium wine collectors.”

AI-I creates segments of one, such as a customer who buys red wine under $20, prefers Rhône varietals, responds to Friday sends, and often redeems mobile offers.

Composing the perfect send is much easier with a single segment.

Say the wine merchant implements an AI-I tool. This tool can send shoppers Rich Communication Services (RCS) messages and can access both the product catalog and shopper behavioral data.

Testing can lead to the perfect send.

The AI broadcasts an RCS message containing a product carousel. (RCS has app-like features.) The message has two offers: (i) an Argentine Malbec for $18, as recommended by AI based on the data, and (ii) a Portuguese red blend for $17, meant to introduce new wines to this shopper.

The shopper swipes, taps, visits the site, clicks a “Malbecs Under $20” filter, and ultimately makes a purchase. The AI adds the data from these touchpoints to the customer profile, recording the purchase under $20 or adding a note to test copy around value.

Each new message is an experiment, bringing the AI-I closer to discerning what a shopper wants and when.

That process is nothing new. Data scientists might describe it as “individualized multivariate tests” or a “contextual bandit.” It is an established way to identify individual preferences.

What is different is AI’s speed and scale.
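The “contextual bandit” loop described above can be sketched in a few lines. This is a minimal epsilon-greedy toy with hypothetical offer names and a simulated shopper, not a production system (a real contextual bandit would also condition on context features such as send time, channel, and price sensitivity):

```python
import random

random.seed(0)  # deterministic run for the illustration

# Hypothetical offers ("arms") for the wine-merchant example.
ARMS = ["malbec_under_20", "portuguese_blend", "bordeaux_35", "sparkling_wild_card"]

class PerShopperBandit:
    """A segment-of-one learner: track engagement per offer for one shopper."""

    def __init__(self, arms, epsilon=0.1):
        self.epsilon = epsilon
        self.clicks = {a: 0 for a in arms}  # observed engagements per offer
        self.sends = {a: 0 for a in arms}   # messages sent per offer

    def choose(self):
        # Occasionally explore (the "wild card" send); otherwise exploit the
        # offer with the best observed engagement rate. Untried offers get an
        # optimistic rate of 1.0 so each is attempted at least once.
        if random.random() < self.epsilon:
            return random.choice(list(self.sends))
        return max(
            self.sends,
            key=lambda a: self.clicks[a] / self.sends[a] if self.sends[a] else 1.0,
        )

    def record(self, arm, engaged):
        self.sends[arm] += 1
        self.clicks[arm] += int(engaged)

bandit = PerShopperBandit(ARMS)
# Simulate a shopper who engages with value Malbec offers about 70% of the time.
for _ in range(200):
    arm = bandit.choose()
    bandit.record(arm, engaged=(arm == "malbec_under_20") and random.random() < 0.7)

print(max(bandit.clicks, key=bandit.clicks.get))  # the offer the system learned to favor
```

Over a few dozen sends, the engagement counts concentrate on the value-Malbec offer, mirroring how each message acts as an experiment that refines the profile.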

Process Details

For the hypothetical wine shop, harnessing AI-I would require initial setup for more granular data collection, data normalization, and integration.

Once it’s up and running, however, the AI-I tool would likely follow a simple workflow for each new customer.

  • Base segmentation. Start with broad wine categories based on the initial purchase, such as red or white, sparkling or still, and high-end or value.
  • Early engagement. Begin sending messages and track, for example, whether the shopper clicks a Bordeaux at $40, ignores rosé, but buys a Malbec at $15.
  • Individual testing. Generate shopper-specific messages. Each one is an experiment. Offer a Bordeaux at $35 or a Syrah at $18. Continue tracking engagement and behavior. Repeat.
  • Refine the profile. Over time, the AI-I system identifies probabilities, such as “the customer is 70% likely to purchase when the price is under $20 and the varietal is bold red.”
  • Balance with discovery. Introduce a “wild card” wine every few sends — perhaps a Spanish white or sparkling wine — to extend the system’s knowledge of the customer and prevent marketing fatigue.
  • Feedback. All clicks, purchases, and opt-outs feed the AI model, both for the individual and to perfect the overall system.

With each iteration, the AI-I gets closer to the perfect send.

Anthropic Agrees To $1.5B Settlement Over Pirated Books

Anthropic agreed to a proposed $1.5 billion settlement in Bartz v. Anthropic over claims it downloaded pirated books to help train Claude.

If approved, plaintiffs’ counsel says it would be the largest U.S. copyright recovery to date. A preliminary approval hearing is set for today.

In June, Judge William Alsup held that training on lawfully obtained books can qualify as fair use, while copying and storing millions of pirated books is infringement. That order set the stage for settlement talks.

Settlement Details

The deal would pay about $3,000 per eligible title, with an estimated class size of roughly 500,000 books. Plaintiffs allege Anthropic pulled at least 7 million copies from piracy sites Library Genesis and Pirate Library Mirror.

Justin Nelson, counsel for the authors, said:

“As best as we can tell, it’s the largest copyright recovery ever.”

How Payouts Would Work

According to the Authors Guild’s summary, the fund is paid in four tranches after court approvals: $300M soon after preliminary approval, $300M after final approval, then $450M at 12 months and $450M at 24 months, with interest accruing in escrow.
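The arithmetic behind these figures can be checked directly (the class size is an estimate from the article, so the total is approximate):

```python
# Figures as reported: ~$3,000 per eligible title, ~500,000 books in the class.
per_work = 3_000
est_class_size = 500_000
print(per_work * est_class_size)  # 1,500,000,000, the $1.5B headline figure

# Payment tranches: after preliminary approval, after final approval,
# then at 12 and 24 months.
tranches = [300e6, 300e6, 450e6, 450e6]
assert sum(tranches) == 1.5e9  # tranches add up to the settlement total
```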

A final “Works List” is due October 10, which will drive a searchable database for claimants.

The Guild notes the agreement requires destruction of pirated copies and resolves only past conduct.

Why This Matters

If you rely on AI tools in content workflows, provenance now matters more. Expect more licensing deals and clearer disclosures from vendors about training data sources.

For publishers and creators, the per-work payout sets a reference point that may strengthen negotiating leverage in future licensing talks.

Looking Ahead

The judge will consider preliminary approval today. If granted, the notice process begins this fall and payments to rightsholders would follow final approval and claims processing, funded on the installment schedule above.



Google Publishes Exact Gemini Usage Limits Across All Tiers

Google has published exact usage limits for Gemini Apps across the free tier and paid Google AI plans, replacing earlier vague language with concrete numbers marketers can plan around.

The Help Center update covers daily caps for prompts, images, Deep Research, video generation, and context windows, and notes that you’ll see in-product notices when you’re close to a limit.

What’s New

Until recently, Google’s documentation used general phrasing about “limited access” without specifying amounts.

The Help Center page now lists per-tier allowances for Gemini 2.5 Pro prompts, image generation, Deep Research, and more. It also clarifies that practical caps can vary with prompt complexity, file sizes, and conversation length, and that limits may change over time.

Google’s Help Center states:

“Gemini Apps has usage limits designed to ensure an optimal experience for everyone… we may at times have to cap the number of prompts, conversations, and generated assets that you can have within a specific timeframe.”

Free vs. Paid Tiers

On the free experience, you can use Gemini 2.5 Pro for up to five prompts per day.

The page lists general access to 2.5 Flash and includes:

  • 100 images per day
  • 20 Audio Overviews per day
  • Five Deep Research reports per month (using 2.5 Flash)

Because overall app limits still apply, actual throughput depends on how long and complex your prompts are and how many files you attach.

Google AI Pro increases ceilings to:

  • 100 prompts per day on Gemini 2.5 Pro
  • 1,000 images per day
  • 20 Deep Research reports per day (using 2.5 Pro).

Google AI Ultra raises those to:

  • 500 prompts per day
  • 200 Deep Research reports per day
  • Includes Deep Think with 10 prompts per day at a 192,000-token context window for more complex reasoning tasks.

Context Windows and Advanced Features

Context windows differ by tier. The free tier lists a 32,000-token context size, while Pro and Ultra show 1 million tokens, which is helpful when you need longer conversations or to process large documents in one go.

Ultra’s Deep Think is separate from the 1M context and is capped at 192k tokens for its 10 daily prompts.

Video generation is currently in preview with model-specific limits. Pro shows up to three videos per day with Veo 3 Fast (preview), while Ultra lists up to five videos per day with Veo 3 (preview).

Google indicates some features receive priority or early access on paid plans.

Availability and Requirements

The Gemini app in Google AI Pro and Ultra is available in 150+ countries and territories for users 18 or older.

Upgrades are tied to select Google One paid plans for personal accounts, which consolidate billing with other premium Google services.

Why This Matters

Clear ceilings make it easier to scope deliverables and budgets.

If you produce a steady stream of social or ad creative, the image caps and prompt totals are practical planning inputs.

Teams doing competitive analysis or longer-form research can evaluate whether the free tier’s five Deep Research reports per month cover occasional needs or if Pro’s daily allotment, Ultra’s higher limit, and Deep Think are a better fit for heavier workloads.
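For instance, the Deep Research allotments above translate into rough monthly capacities per tier (caps as listed in the Help Center at the time of writing; actual throughput also depends on prompt complexity and overall app limits):

```python
# Deep Research caps per tier, as reported in the article.
deep_research_caps = {
    "free": {"per": "month", "reports": 5},
    "pro": {"per": "day", "reports": 20},
    "ultra": {"per": "day", "reports": 200},
}

def monthly_capacity(tier, days=30):
    """Rough Deep Research reports available per month on a given tier."""
    cap = deep_research_caps[tier]
    return cap["reports"] if cap["per"] == "month" else cap["reports"] * days

for tier in deep_research_caps:
    print(tier, monthly_capacity(tier))  # free 5, pro 600, ultra 6000
```

A team expecting, say, 50 research reports a month clearly outgrows the free tier but sits comfortably within Pro’s daily allotment.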

The documentation also emphasizes that caps can vary with usage patterns, so it’s worth watching the in-app limit warnings on busy days.

Looking Ahead

Google notes that limits may evolve. If your workflows depend on specific daily counts or large context windows, it’s sensible to review the Help Center page periodically and adjust plans as features move from preview to general availability.



Google’s Antitrust Ruling: What The Remedies Really Mean For Search, SEO, And AI Assistants

When Judge Amit P. Mehta issued his long-awaited remedies decision in the Google search antitrust case, the industry exhaled a collective sigh of relief. There would be no breakup of Google, no forced divestiture of Chrome or Android, and no user-facing “choice screen” like the one that reshaped Microsoft’s browser market two decades ago. But make no mistake – this ruling rewrites the playbook for search distribution, data access, and competitive strategy over the next six years.

This article dives into what led to the decision, what it actually requires, and – most importantly – what it means for SEO, PPC, publishers, and the emerging generation of AI-driven search assistants.

What Led To The Decision

The Department of Justice and a coalition of states sued Google in 2020, alleging that the company used exclusionary contracts and massive payments to cement its dominance in search. In August 2024, Judge Mehta ruled that Google had indeed violated antitrust law, writing, “Google is a monopolist, and it has acted as one to maintain its monopoly.” The question then became: what remedies would actually restore competition?

The DOJ and states pushed for sweeping measures – including a breakup of Google’s Chrome browser or Android operating system, and mandatory choice screens on devices. Google countered that such steps would harm consumers and innovation. By the time remedies hearings wrapped, generative AI had exploded into the mainstream, shifting the court’s sense of what competition in search could look like.

What The Court Decided

Judge Mehta’s ruling, issued September 2, 2025, imposed a mix of behavioral remedies:

  • Exclusive contracts banned. Google can no longer strike deals that make it the sole default search engine on browsers, phones, or carriers. That means Apple, Samsung, Mozilla, and mobile carriers can now entertain offers from rivals like Microsoft Bing or newer AI entrants.
  • Payments still allowed. Crucially, the court did not ban Google from paying for placement. Judge Mehta explained that removing payments altogether would “impose substantial harms on distribution partners.” In other words, the checks will keep flowing – but without exclusivity.
  • Index and data sharing. Google must share portions of its search index and some user interaction data with “qualified competitors” on commercial terms. Ads data, however, is excluded. This creates a potential on-ramp for challengers, but it doesn’t hand them the secret sauce of Google’s ranking systems.
  • No breakup, no choice screen. Calls to divest Chrome or Android were rejected as overreach. Similarly, the court declined to mandate a consumer-facing choice screen. Change will come instead through contracts and UX decisions by distribution partners.
  • Six-year oversight. Remedies will be overseen by a technical committee for six years. A revised judgment is due September 10, with remedies taking effect roughly 60 days after final entry.

As Judge Mehta put it, “Courts must… craft remedies with a healthy dose of humility,” noting that generative AI has already “changed the course of this case.”

How The Market Reacted

Investors immediately signaled relief. Alphabet shares jumped ~8% after hours, while Apple gained ~4%. The lack of a breakup, and the preservation of lucrative search placement payments, reassured Wall Street that Google’s search empire was not being dismantled overnight.

But beneath the relief lies a new strategic reality: Google’s moat of exclusivity has been replaced with a marketplace for defaults.

Strategic Insights: Beyond The Headlines

Most coverage of the decision has focused on what didn’t happen – the absence of a breakup or a choice screen. But the deeper story is how distribution, data, and AI will interact under the new rules.

1. Defaults Move From Moat To Marketplace

Under the old model, Google’s exclusive deals ensured it was the default on Safari, Android, and beyond. Now, partners can take money from multiple providers. That turns the default position into a marketplace, not a moat.

Apple, in particular, gains leverage. Court records revealed that Google paid Apple $20 billion in 2022 to remain Safari’s default search engine; in 2021, Google’s default-placement payments totaled $26.3 billion across all partners, with Apple likely the largest recipient. Without exclusivity, Apple can entertain bids from Microsoft, OpenAI, or others – potentially extracting even more money by selling multiple placements or rotating defaults.

We may see new UX experiments: rotating search tiles, auction-based setup flows, or AI assistant shortcuts integrated into operating systems. Distribution partners like Samsung or Mozilla could pilot “multi-home defaults,” where Google, Bing, and an AI engine all coexist in visible slots.

2. Data Access Opens An On-Ramp For Challengers

Index-sharing and limited interaction data access lower barriers for rivals. Crawling the web is expensive; licensing Google’s index could accelerate challengers like Bing, Perplexity, or OpenAI’s rumored search product.

But it’s not full parity. Without ads data and ranking signals, competitors must still differentiate on product experience. Think faster answers, vertical specialization, or superior AI integration. As I like to put it: Index access gives challengers legs, not lungs.

Much depends on how “qualified competitor” is defined. A narrow definition could limit access to a token few; a broad one could empower a new wave of vertical and AI-driven search entrants.

3. AI Is Already Shifting The Game

The court acknowledged that generative AI reshaped its view of competition. Assistants like Copilot, Gemini, or Perplexity are increasingly acting as intent routers – answering directly, citing sources, or routing users to transactions without a traditional SERP.

That means the battle for distribution may shift from browsers and search bars to AI copilots embedded in operating systems, apps, and devices. If users increasingly ask their assistant instead of typing a query, exclusivity deals matter less than who owns the assistant.

For SEO and SEM professionals, this accelerates the shift toward zero-click answers, assistant-ready content, and schema that supports citations.

4. Financial Dynamics: Relief Today, Pressure Tomorrow

Yes, investors cheered. But over time, Google could face rising traffic acquisition costs (TAC) as Apple, Samsung, and carriers auction off default positions. Defending its distribution may get more expensive, eating into margins.

At the same time, without a choice screen, search market share is likely to shift gradually, not collapse. Expect Google’s U.S. query share to remain in the high 80s in the near term, with only single-digit erosion as rivals experiment with new models.

5. Knock-On Effects: The Ad-Tech Case Looms

Don’t overlook the second front: the DOJ’s separate antitrust case against Google’s ad-tech stack, now moving toward remedies hearings in Virginia. If that case results in structural changes – say, forcing Google to separate its publisher ad server from its exchange – it could reshape how search ads are bought, measured, and monetized.

For publishers, both cases matter. If rivals gain traction with AI-driven assistants, referral traffic could diversify – but also become more volatile, depending on how assistants handle citations and click-throughs.

What Happens Next

  • September 10, 2025: DOJ and Google file a revised judgment.
  • ~60 days later: Remedies begin taking effect.
  • Six years: Oversight period, with ongoing compliance monitoring.

Key Questions To Watch:

  • How will Apple implement non-exclusive search defaults in Safari?
  • Who qualifies as a “competitor” for index/data access, and on what terms?
  • Will rivals like Microsoft, Perplexity, or OpenAI buy into distribution slots aggressively?
  • How will AI assistants evolve as distribution front doors?

What This Means For SEO And PPC

This ruling isn’t just about contracts in Silicon Valley – it has practical consequences for marketers everywhere.

  • Distribution volatility planning. SEM teams should budget for a world where Safari queries become more contestable. Test Bing Ads, Copilot Ads, and assistant placements.
  • Assistant-ready content. Optimize for concise, cite-worthy answers with schema markup. Publish FAQs, data tables, and source-friendly content that large language models (LLMs) like to quote.
  • Syndication hedge. If new index-sharing programs emerge, explore partnerships with vertical search startups. Early pilots could deliver traffic streams outside the Google ecosystem.
  • Attribution resilience. As assistants mediate more traffic, referral strings will get messy. Double down on UTM governance, server-side tracking, and marketing mix models to parse signal from noise.
  • Creative testing. Build two-tier content: a punchy, fact-dense abstract that assistants can lift, and a deeper explainer for human readers.
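As a concrete illustration of the attribution point, a first-pass classifier for assistant-mediated visits might look like this (the hostnames and UTM values are illustrative assumptions, not a standard taxonomy):

```python
from urllib.parse import urlsplit, parse_qs

# Hypothetical list of AI-assistant referrer hosts to watch for.
ASSISTANT_HOSTS = {"copilot.microsoft.com", "gemini.google.com", "www.perplexity.ai"}

def classify(landing_url, referrer=""):
    """Attribute a visit: explicit UTM tagging wins; otherwise fall back to referrer."""
    qs = parse_qs(urlsplit(landing_url).query)
    source = qs.get("utm_source", [""])[0]
    if source:
        return source  # disciplined UTM governance makes this the common path
    host = urlsplit(referrer).hostname or ""
    if host in ASSISTANT_HOSTS:
        return "ai_assistant"
    return "untagged"

print(classify("https://shop.example/p?utm_source=copilot_ads"))                 # copilot_ads
print(classify("https://shop.example/p", "https://gemini.google.com/app"))       # ai_assistant
```

The point is less the specific rules than having any consistent server-side fallback, since assistant referral strings will be inconsistent for some time.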

Market Scenarios

  • Base Case (Most Likely): Google retains high-80s market share. TAC costs rise gradually. AI assistants siphon a modest share of informational queries by 2027. Impact: margin pressure more than market share loss.
  • Upside for Rivals: If index access is broad and AI assistants nail UX, Bing, Perplexity, and others could win five to 10 points combined in specific verticals. Impact: SEM arbitrage opportunities emerge, and SEO adapts to answer-first surfaces.
  • Regulatory Cascade: If the ad-tech remedies impose structural changes, Google’s measurement edge narrows, and OEMs test choice-like UX voluntarily. Impact: more fragmentation, more testing for marketers.

Final Takeaway

Judge Mehta summed up the challenge well: “Courts must craft remedies with a healthy dose of humility.” The ruling doesn’t topple Google, but it does force the search giant to compete on more open terms. Exclusivity is gone; auctions and assistants are in.

For marketers, the message is clear: Don’t wait for regulators to rebalance the playing field. Diversify now – across engines, assistants, and ad formats. Optimize for answerability as much as for rankings. And be ready: The real competition for search traffic is just beginning.


Google PMax Unveils Optimization Tools

Google’s Performance Max campaigns place responsive ads across all Google channels based on audience signals. The search giant automatically determines an ad’s headlines, descriptions, and images across, say, Search, Display, and YouTube to deliver top results.

Yet PMax campaigns lack transparency and restrict options.

The encouraging news is that Google is listening to advertisers and has rolled out PMax reporting and flexibility updates in the past year. These include reports for asset-level conversions and Search category theme volume and conversions, as well as the ability to exclude devices where your ads can appear.

More recently, Google has provided new PMax optimization features. I’ll address those in this post.

Channel performance

At a Performance Max campaign level, advertisers can now see which channels drive traffic and conversions. In the example below, traffic from Google Discover accounts for 5.36% of total spend and one conversion.

[Image: Google Ads Performance Max report showing 34,306 impressions, 3,740 interactions, and 58.22 conversions, with conversions broken out by channel (including Discover and Display) and costs and conversion values for contact and purchase goals.]

Performance Max ads can show in these Google channels:

  • Discover
  • Display
  • Gmail
  • Maps
  • Search
  • YouTube

Advertisers cannot exclude specific channels, but the new visibility helps in judging whether PMax is viable overall. Advertisers can also exclude non-converting ads and keywords to assess further whether PMax is the right option.
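The channel breakdown can also be checked offline against an export of the report. Below is a minimal sketch assuming you pull per-channel cost and conversion rows out of the Google Ads UI; the helper name and the example figures are hypothetical (the rows are invented, but they total 58.22 conversions and put Discover at 5.36% of spend, mirroring the report above):

```python
# Compute each channel's share of total spend from exported
# Performance Max channel rows (hypothetical example figures).

def spend_share(rows):
    """rows: list of (channel, cost, conversions) tuples.
    Returns {channel: fraction of total spend}."""
    total = sum(cost for _, cost, _ in rows)
    return {channel: cost / total for channel, cost, _ in rows}

rows = [
    ("Search", 700.0, 40.0),
    ("Display", 180.0, 10.0),
    ("Discover", 67.0, 1.0),   # small slice: 67 / 1250 = 5.36% of spend
    ("YouTube", 303.0, 7.22),
]

shares = spend_share(rows)
print({ch: round(s * 100, 2) for ch, s in shares.items()})
```

A table like this makes it easy to spot channels that consume spend without contributing conversions, which is the judgment call the new report supports.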

Final URL expansion

By default, new Performance Max campaigns turn on Final URL expansion. This means that Google can send searchers to a different landing page for better conversions. Expanding the Final URL can be worthwhile, but it’s important to see which pages are converting. An option in the “Assets” tab provides the Final URL expansion assets.

Advertisers can exclude irrelevant URLs in “Asset Optimization” within the campaign settings. Click on the “Customization” option to activate “Final URL expansion.”

[Image: Google Ads settings panel with the “Customization” and “Final URL expansion” toggles enabled and two example URL exclusions listed: example.com and example2.com.]

Asset Optimization

Speaking of Asset Optimization, advertisers can see the source of each asset (Google-created or advertiser-supplied) across the many components of Performance Max data. For example, a Google-created headline may convert at twice the rate of an advertiser’s version. Advertisers can pause automatically created assets, similar to pausing keywords.

Advertisers can disable automated assets at the account level, but not for campaigns. Turn off the option, for example, if you don’t want Google-created sitelinks to show. Remember that turning off an automated asset impacts the entire account.

Negative keywords

Performance Max campaigns have always allowed negative keywords. However, the setup was cumbersome, requiring either implementation by a Google rep or the creation of an account-level negative keyword list.

Now advertisers can add negative keywords directly. Discovering candidate keywords is straightforward, too: search queries are available as a separate option in the “Insights and reports” tab, where you can view the data and select terms to exclude.

Search Themes

Google introduced Search Themes in 2023 to help guide its AI. The Themes work similarly to keywords. For example, a retailer selling winter jackets could provide Search Themes of:

  • Winter jackets
  • Men’s winter jackets
  • Women’s winter jackets

Searchers don’t need to type these exact keywords for ads to show. Instead, the ads appear if an advertiser’s site content or the searcher’s query history indicates relevance. Along with audience signals, Search Themes help Google understand a searcher’s profile.

Google now allows up to 50 Search Themes per asset group, an increase from 10.
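Teams that manage Search Themes in bulk (for example, via spreadsheets or the Google Ads API) may want to sanity-check a list before uploading it. Here is a minimal sketch assuming that workflow; the helper name and input list are hypothetical, and the 50-theme cap is the per-asset-group limit described above:

```python
# Normalize a bulk list of Search Themes before upload:
# trim whitespace, drop case-insensitive duplicates, and
# enforce the 50-themes-per-asset-group limit.

MAX_THEMES_PER_ASSET_GROUP = 50  # current limit, up from 10

def prepare_search_themes(raw_themes):
    seen = set()
    cleaned = []
    for theme in raw_themes:
        t = theme.strip()
        if t and t.lower() not in seen:
            seen.add(t.lower())
            cleaned.append(t)
    if len(cleaned) > MAX_THEMES_PER_ASSET_GROUP:
        raise ValueError(
            f"{len(cleaned)} themes exceeds the "
            f"{MAX_THEMES_PER_ASSET_GROUP}-theme limit"
        )
    return cleaned

themes = prepare_search_themes([
    "winter jackets",
    "Winter Jackets",      # duplicate, dropped
    "men's winter jackets",
    "women's winter jackets",
])
print(themes)
```

Lowercasing only for the duplicate check keeps the casing you entered while still catching near-duplicates like “Winter Jackets” versus “winter jackets.”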

Putin says organ transplants could grant immortality. Not quite.

This week I’m writing from Manchester, where I’ve been attending a conference on aging. Wednesday was full of talks and presentations by scientists who are trying to understand the nitty-gritty of aging—all the way down to the molecular level. Once we can understand the complex biology of aging, we should be able to slow or prevent the onset of age-related diseases, they hope.

Then my editor forwarded me a video of the leaders of Russia and China talking about immortality. “These days at 70 years old you are still a child,” China’s Xi Jinping, 72, was translated as saying, according to footage livestreamed by CCTV to multiple media outlets.

“With the developments of biotechnology, human organs can be continuously transplanted, and people can live younger and younger, and even achieve immortality,” Russia’s Vladimir Putin, also 72, is reported to have replied.

[Image: Russian President Vladimir Putin, Chinese President Xi Jinping, and North Korean leader Kim Jong Un walk side by side. Sergei Bobylev, Sputnik, Kremlin pool photo via AP]

There’s a striking contrast between that radical vision and the incremental longevity science presented at the meeting. Repeated rounds of organ transplantation surgery aren’t likely to help anyone radically extend their lifespan anytime soon.

First, back to Putin’s proposal: the idea of continually replacing aged organs to stay young. It’s a simplistic way to think about aging. After all, aging is so complicated that researchers can’t agree on what causes it, why it occurs, or even how to define it, let alone “treat” it.

Having said that, there may be some merit to the idea of repairing worn-out body parts with biological or synthetic replacements. Replacement therapies—including bioengineered organs—are being developed by multiple research teams. Some have already been tested in people. This week, let’s take a look at the idea of replacement therapies.

No one fully understands why our organs start to fail with age. On the face of it, replacing them seems like a good idea. After all, we already know how to do organ transplants. They’ve been a part of medicine since the 1950s and have been used to save hundreds of thousands of lives in the US alone.

And replacing old organs with young ones might have more broadly beneficial effects. When a young mouse is stitched to an old one, the older mouse seems to benefit from the arrangement: its health improves.

The problem is that we don’t really know why. We don’t know what it is about young body tissues that makes them health-promoting. We don’t know how long these effects might last in a person. We don’t know how different organ transplants will compare, either. Might a young heart be more beneficial than a young liver? No one knows.

And that’s before you consider the practicalities of organ transplantation. There is already a shortage of donor organs—thousands of people die on waiting lists. Transplantation requires major surgery and, typically, a lifetime of prescription drugs that damp down the immune system, leaving a person more susceptible to certain infections and diseases.

So the idea of repeated organ transplantation isn’t a particularly appealing one. “I don’t think that’s going to happen anytime soon,” says Jesse Poganik, who studies aging at Brigham and Women’s Hospital in Boston and is also in Manchester for the meeting.

Poganik has been collaborating with transplant surgeons in his own research. “The surgeries are good, but they’re not simple,” he tells me. And they come with real risks. His own 24-year-old cousin developed a form of cancer after a liver and heart transplant. She died a few weeks ago, he says.

So when it comes to replacing worn-out organs, scientists are looking for both biological and synthetic alternatives.  

We’ve been replacing body parts for centuries. Wooden toes were used as far back as the 15th century. Joint replacements have been around for more than a hundred years. And major innovations over the last 70 years have given us devices like pacemakers, hearing aids, brain implants, and artificial hearts.

Scientists are exploring other ways to make tissues and organs, too. There are different approaches here, but they include everything from injecting stem cells to seeding “scaffolds” with cells in a lab.

In 1999, researchers used volunteers’ own cells to seed bladder-shaped collagen scaffolds. The resulting bioengineered bladders went on to be transplanted into seven people in an initial trial.

Now scientists are working on more complicated organs. Jean Hébert, a program manager at the US government’s Advanced Research Projects Agency for Health, has been exploring ways to gradually replace the cells in a person’s brain. The idea is that, eventually, the recipient will end up with a young brain.

Hébert showed my colleague Antonio Regalado how, in his early experiments, he removed parts of mice’s brains and replaced them with embryonic stem cells. That work seems a world away from the biochemical studies being presented at the British Society for Research on Ageing annual meeting in Manchester, where I am now.

On Wednesday, one scientist described how he’d been testing potential longevity drugs on the tiny nematode worm C. elegans. These worms live for only about 15 to 40 days, and his team can perform tens of thousands of experiments with them. About 40% of the drugs that extend lifespan in C. elegans also help mice live longer, he told us.

To me, that’s not an amazing hit rate. And we don’t know how many of those drugs will work in people. Probably less than 40% of that 40%, which would put the overall success rate below 16%.

Other scientists presented work on chemical reactions happening at the cellular level. It was deep, basic science, and my takeaway was that there’s a lot aging researchers still don’t fully understand.

It will take years—if not decades—to get the full picture of aging at the molecular level. And if we rely on a series of experiments in worms, and then mice, and then humans, we’re unlikely to make progress for a really long time. In that context, the idea of replacement therapy feels like a shortcut.

“Replacement is a really exciting avenue because you don’t have to understand the biology of aging as much,” says Sierra Lore, who studies aging at the University of Copenhagen in Denmark and the Buck Institute for Research on Aging in Novato, California.

Lore says she started her research career studying aging at the molecular level, but she soon changed course. She now plans to focus her attention on replacement therapies. “I very quickly realized we’re decades away [from understanding the molecular processes that underlie aging],” she says. “Why don’t we just take what we already know—replacement—and try to understand and apply it better?”

So perhaps Putin’s straightforward approach to delaying aging holds some merit. Whether it will grant him immortality is another matter.

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.