The Download: meet our AI innovators, and what happens when therapists use AI covertly

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Meet the AI honorees on our 35 Innovators Under 35 list for 2025

Each year, we select 35 outstanding individuals under the age of 35 who are using technology to tackle tough problems in their respective fields.

Our AI honorees include people who steer model development at Silicon Valley’s biggest tech firms and academic researchers who develop new techniques to improve AI’s performance.

Check out all of our AI innovators here, and the full list—including our innovator of the year—here.

How Yichao “Peak” Ji became a global AI app hitmaker

When Yichao Ji—also known as “Peak”—appeared in a launch video for Manus in March, he didn’t expect it to go viral. Speaking in fluent English, the 32-year-old introduced the AI agent built by Chinese startup Butterfly Effect, where he serves as chief scientist. 

The video was not an elaborate production, but something about Ji’s delivery, and the vision behind the product, cut through the noise. The product, then still an early preview available only through invite codes, spread across the Chinese internet to the world in a matter of days. Within a week of its debut, Manus had attracted a waiting list of around 2 million people.

Despite his relative youth, Ji has over a decade of experience building products that merge technical complexity with real-world usability. That earned him credibility—and put him at the forefront of a rising class of Chinese technologists with global ambitions. Read the full story.

—Caiwei Chen

Help! My therapist is secretly using ChatGPT

In Silicon Valley’s imagined future, AI models are so empathetic that we’ll use them as therapists. They’ll provide mental-health care for millions, unimpeded by the pesky requirements for human counselors, like the need for graduate degrees, malpractice insurance, and sleep. Down here on Earth, something very different has been happening. 

Last week, we published a story about people finding out that their therapists were secretly using ChatGPT during sessions. In some cases it wasn’t subtle; one therapist accidentally shared his screen during a virtual appointment, allowing the patient to see his own private thoughts being typed into ChatGPT in real time.

As the writer of the story, Laurie Clarke, points out, it’s not a total pipe dream that AI could be therapeutically useful. But the secretive use by therapists of AI models that are not vetted for mental health is something very different. James O’Donnell, our senior AI reporter, had a conversation with Clarke to hear more about what she found.

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

What’s next in tech: the breakthroughs that matter

Some technologies reshape industries, whether we’re ready or not.

Join us for our next LinkedIn Live event on September 10 as our editorial team explores the breakthroughs defining this moment and the ones on the horizon that demand our attention. 

From quantum computing to humanoid robotics, AI agents to climate tech, we’ll explore the innovations that excite us, the challenges they may bring, and why they’re worth watching now. It kicks off at 12:30 p.m. ET tomorrow—register here to join us.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The US is abandoning its international push against disinformation 
The State Department will no longer collaborate with Europe to combat malicious information spread by foreign governments. (FT $)
+ It comes as Russia is increasing its efforts to interfere overseas. (NYT $)

2 The judge overseeing Anthropic’s copyright case isn’t happy
Judge William Alsup says a $1.5 billion out-of-court settlement may not be in the authors’ best interests. (Bloomberg $)

3 WhatsApp’s former head of security is suing Meta
Attaullah Baig is accusing the company of failing to protect user data. (WP $)
+ He claims he uncovered systemic security failures, but was ignored. (Bloomberg $)
+ Meta maintains that Baig was dismissed for poor performance, not whistleblowing. (NYT $)

4 DOGE’s acting head is urging the US government to start hiring again 
Following months of widespread firings and resignations. (Fast Company $)
+ How DOGE wreaked havoc in Social Security. (ProPublica)
+ DOGE’s tech takeover threatens the safety and stability of our critical data. (MIT Technology Review)

5 OpenAI is weighing up leaving California
It’s worried that state regulators could derail its efforts to convert to a for-profit entity. (WSJ $)
+ Rival Anthropic is backing California governor Gavin Newsom’s AI bill. (Politico)

6 ICE spends millions on facial recognition tech
In an effort to pinpoint people it suspects have assaulted officers. (404 Media)
+ The Supreme Court has given ICE the go-ahead to target people based on race. (Vox)
+ ICE directors were told to triple their daily arrests for undocumented immigrants. (NY Mag $)

7 AI researchers are training AI to replace them
They’re recording every detail of their working days to help AI grasp their jobs. (The Information $)
+ People are worried that AI will take everyone’s jobs. We’ve been here before. (MIT Technology Review)

8 What comes after the smartphone?
The rise of AI agents means we may not be staring at glass slabs forever. (NYT $)
+ What’s next for smart glasses. (MIT Technology Review)

9 Social media’s obsession with ‘locking in’ needs to die
Hustle culture and maximizing productivity at all costs are the name of the game. (Insider $)

10 What it’s like to receive a massage from a robot
While it may not be quite as relaxing, it’s relatively cheap. (The Guardian)
+ Will we ever trust robots? (MIT Technology Review)

Quote of the day

“It was hell on Earth.”

—Duncan Okindo, who was enslaved in a Myanmar cyberscam compound and beaten for missing his targets, tells the Guardian about his harrowing experience.

One more thing

AI means the end of internet search as we’ve known it

We all know what it means, colloquially, to google something. You pop a few words in a search box and in return get a list of blue links to the most relevant results. Fundamentally, it’s just fetching information that’s already out there on the internet and showing it to you, in a structured way.

But all that is up for grabs. We are at a new inflection point. The biggest change to the way search engines deliver information to us since the 1990s is happening right now, thanks to generative AI.

Not everyone is excited for the change. Publishers are completely freaked out. And people are also worried about what these new LLM-powered results will mean for our fundamental shared reality. Read the full story.

—Mat Honan

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Stephen King’s list of favorite movies doesn’t feature a whole lot of horror.
+ Tune into a breathtaking livestream of Earth, beamed live from the International Space Station.
+ Rodent thumbnails are way more important than I gave them credit for 🐿
+ Mark our words, actor Wagner Moura is going to be the next big thing.

Adapting to new threats with proactive risk management

In July 2024, a botched update to the software defenses managed by cybersecurity firm CrowdStrike caused more than 8 million Windows systems to fail. From hospitals to manufacturers, stock markets to retail stores, the outage caused parts of the global economy to grind to a halt. Payment systems were disrupted, broadcasters went off the air, and flights were canceled. In all, the outage is estimated to have caused direct losses of more than $5 billion to Fortune 500 companies. For US air carrier Delta Air Lines, the error exposed the brittleness of its systems. The airline suffered weeks of disruptions, leading to $500 million in losses and 7,000 canceled flights.

The magnitude of the CrowdStrike incident revealed just how interconnected digital systems are, and the extensive vulnerabilities in some companies when confronted with an unexpected occurrence. “On any given day, there could be a major weather event or some event like what happened…with CrowdStrike,” said then-US secretary of transportation Pete Buttigieg on announcing an investigation into how Delta Air Lines handled the incident. “The question is, is your airline prepared to absorb something like that and get back on its feet and take care of customers?”

Unplanned downtime poses a major challenge for organizations, and is estimated to cost Global 2000 companies on average $200 million per year. Beyond the financial impact, it can also erode customer trust and loyalty, decrease productivity, and even result in legal or privacy issues.

A 2024 ransomware attack on Change Healthcare, the medical-billing subsidiary of industry giant UnitedHealth Group—the biggest health and medical data breach in US history—exposed the data of around 190 million people and led to weeks of outages for medical groups. Another ransomware attack in 2024, this time on CDK Global, a software firm that works with nearly 15,000 auto dealerships in North America, led to around $1 billion worth of losses for car dealers as a result of the three-week disruption.

Managing risk and mitigating downtime is a growing challenge for businesses. As organizations become ever more interconnected, the expanding surface of networks and the rapid adoption of technologies like AI are exposing new vulnerabilities—and more opportunities for threat actors. Cyberattacks are also becoming increasingly sophisticated and damaging as AI-driven malware and malware-as-a-service platforms turbocharge attacks.

To prepare for these challenges head on, companies must take a more proactive approach to security and resilience. “We’ve had a traditional way of doing things that’s actually worked pretty well for maybe 15 to 20 years, but it’s been based on detecting an incident after the event,” says Chris Millington, global cyber resilience technical expert at Hitachi Vantara. “Now, we’ve got to be more preventative and use intelligence to focus on making the systems and business more resilient.”

Download the report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written entirely by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Quick SEO: 6 Key Elements, 3 Free Tools

A page’s visibility on search engines and generative AI platforms depends on six key elements:

  • Title
  • Meta tags
  • HTML headings
  • Links, internal and external
  • Images
  • Structured data

When optimizing a page, I rely on three free browser extensions to quickly reveal those components: Devaka Tools, Site Inspector, and SEO Meta in 1 Click.

What follows is my explanation of the six elements, followed by a side-by-side table comparing the three extensions.

Title of page

The title tag is the most important on-page optimization element because search engines use it to understand the page’s purpose. Descriptive and keyword-focused page titles improve and diversify organic rankings.

The title tag appears in the browser tab and is not necessarily visible on the page.

Meta tags

The key meta tag for search engine optimization is the meta description:

  • A meta description does not directly impact rankings, but it may appear in search snippets and thus affect click-throughs.

HTML headings

HTML headings such as H1, H2, and H3 organize on-page content. This article includes HTML headings: “Title of page,” “Meta tags,” “HTML headings,” et cetera. The headings, while optional, help readers digest the content and assist crawlers in identifying relevant info for searchers’ queries. Using keywords in HTML headings serves both purposes.

Links, internal and external

Internal links signal to search engines the importance of a page: the more internal links pointing to a page, the higher its significance.

Internal links also help search engines understand the linked page. The link’s anchor text is the strongest signal, although its surrounding words also send relevancy signals, per Google. Improving internal linking structure is often a quick way to streamline crawlability and increase organic search visibility.

External links to authoritative sites add credibility to the page, especially in “Your Money Your Life” niches.

Images

Images enhance visitor engagement, a Google ranking factor, and improve visibility in image search results.

Image alt tags are essential for both visually impaired visitors and search engines. Compressing images enhances page speed and thus Core Web Vitals, another ranking factor.

Structured data

Structured data helps search engines and AI platforms understand a site, its pages, and the owner. Schema.org’s “vocabulary” of structured data is the most popular; Google and Bing recognize it, along with other methods.
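All six elements can also be pulled straight from a page’s HTML. As a rough sketch only, here is a small audit script using nothing but Python’s standard library; the sample page, class name, and report layout are invented for illustration, not a reproduction of what the extensions do:

```python
# Extract the six on-page elements (title, meta description, headings,
# links, images, structured data) from an HTML page. Sample HTML is invented.
from html.parser import HTMLParser
import json

SAMPLE = """
<html><head>
<title>Quick SEO Checklist</title>
<meta name="description" content="Six on-page elements to check.">
<script type="application/ld+json">{"@type": "Article", "headline": "Quick SEO Checklist"}</script>
</head><body>
<h1>Quick SEO</h1>
<a href="/tools">internal</a>
<a href="https://schema.org">external</a>
<img src="chart.png" alt="ranking chart">
</body></html>
"""

class OnPageAudit(HTMLParser):
    def __init__(self):
        super().__init__()
        self.report = {"title": "", "description": "", "headings": [],
                       "links": [], "images": [], "structured_data": []}
        self._stack = []  # which tag the current text data belongs to

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        self._stack.append((tag, a))
        if tag == "meta" and a.get("name") == "description":
            self.report["description"] = a.get("content", "")
        elif tag == "a" and "href" in a:
            self.report["links"].append(a["href"])
        elif tag == "img":
            self.report["images"].append({"src": a.get("src"), "alt": a.get("alt")})

    def handle_endtag(self, tag):
        # Pop back to the matching open tag (void tags like <meta> never close).
        while self._stack and self._stack[-1][0] != tag:
            self._stack.pop()
        if self._stack:
            self._stack.pop()

    def handle_data(self, data):
        if not self._stack:
            return
        tag, attrs = self._stack[-1]
        if tag == "title":
            self.report["title"] += data.strip()
        elif tag in ("h1", "h2", "h3"):
            self.report["headings"].append((tag, data.strip()))
        elif tag == "script" and attrs.get("type") == "application/ld+json":
            self.report["structured_data"].append(json.loads(data))

audit = OnPageAudit()
audit.feed(SAMPLE)
print(audit.report["title"])     # Quick SEO Checklist
print(audit.report["headings"])  # [('h1', 'Quick SEO')]
```

A real audit would add URL fetching and distinguish internal from external links by hostname, but the shape of the report mirrors what the extensions surface in the browser.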

Browser Extensions

You can quickly see all of these elements using one or more of the following browser extensions:

| Feature | Devaka Tools | Site Inspector | SEO Meta in 1 Click |
|---|---|---|---|
| Browser | Many | Chrome | Chrome |
| Title of page | Yes | Yes | Yes |
| Meta tags | Yes | Yes | Yes |
| HTML headings | Yes | Yes | Yes |
| Links: internal, external | Yes | Yes | Yes (plus highlighting nofollow links) |
| Images | Yes | Yes | Yes |
| Structured data | Yes | Yes | Yes |
| Excel export | Yes | Yes | Yes |
| Notes | Can highlight keywords, show image alt text, reveal hidden text. | Provides links to tools such as Schema.org validator and Search Console; can keep sidebar open to automatically load data. | Provides a page summary with word count, headings, images; can export a page’s copy; includes a page preview and Schema.org validator. |

Google Gemini Adds Audio File Uploads After Being Top User Request via @sejournal, @MattGSouthern

Google’s Gemini app now accepts audio file uploads, answering what the company acknowledges was its most requested feature.

For marketers and content teams, it means you can push recordings straight into Gemini for analysis, summaries, and repurposed content without jumping between tools.

Josh Woodward, VP at Google Labs and Gemini, announced the change on X:

“You can now upload any file to @GeminiApp. Including the #1 request: audio files are now supported!”

What’s New

Gemini can now ingest audio files in the same multi-file workflow you already use for documents and images.

You can attach up to 10 files per prompt, and files inside ZIP archives are supported, which helps when you want to upload raw tracks or several interview takes together.

Limits

  • Free plan: total audio length up to 10 minutes per prompt; up to 5 prompts per day.
  • AI Pro and AI Ultra: total audio length up to 3 hours per prompt.
  • Per prompt: up to 10 files across supported formats. Details are listed in Google’s Help Center.
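For teams batching uploads, those published limits are easy to encode as a pre-check before you attach files. The helper below is purely illustrative (the function name, plan keys, and argument names are invented, not any Google API); the thresholds come from the list above:

```python
# Pre-check an upload batch against the published Gemini audio limits.
# Thresholds mirror the figures cited above; everything else is hypothetical.

LIMITS = {
    "free":     {"max_audio_minutes": 10,     "max_files": 10},
    "ai_pro":   {"max_audio_minutes": 3 * 60, "max_files": 10},
    "ai_ultra": {"max_audio_minutes": 3 * 60, "max_files": 10},
}

def check_batch(plan, audio_minutes, file_count):
    """Return a list of problems; an empty list means the batch should fit."""
    rules = LIMITS[plan]
    problems = []
    if file_count > rules["max_files"]:
        problems.append(f"too many files: {file_count} > {rules['max_files']}")
    total = sum(audio_minutes)
    if total > rules["max_audio_minutes"]:
        problems.append(
            f"total audio {total} min exceeds "
            f"{rules['max_audio_minutes']} min for plan '{plan}'"
        )
    return problems

# Three 4-minute interview takes fit on AI Pro but not on the free tier.
print(check_batch("ai_pro", [4, 4, 4], 3))  # []
print(check_batch("free", [4, 4, 4], 3))    # one problem: 12 min > 10 min
```

Note this checks only per-prompt ceilings; the free tier’s five-prompts-per-day cap would need separate tracking.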

Why This Matters

If your team works with podcasts, webinars, interviews, or customer calls, this closes a gap that often forced a separate transcription step.

You can upload a full interview and turn it into show notes, pull quotes, or a working draft in one place. It also helps meeting-heavy teams: a recorded strategy session can become action items and a brief without exporting to another tool first.

For agencies and networks, batching multiple episodes or takes into one prompt reduces friction in weekly workflows.

The practical win is fewer handoffs: source audio goes in, and the outlines, summaries, and excerpts you need come out, inside the same system you already use for text prompting.

Quick Tip

Upload your audio together with any supporting context in the same prompt. That gives Gemini the grounding it needs to produce cleaner summaries and more accurate excerpts.

If you’re testing on the free tier, plan around the 10-minute ceiling; longer content is best on AI Pro or Ultra.

Looking Ahead

Google’s limits pages do change, so keep an eye on total length, file-count rules, and any new guardrails that affect longer recordings or larger teams. Also watch for deeper Workspace tie-ins (for example, easier handoffs from Meet recordings) that would streamline getting audio into Gemini without manual uploads.


Featured Image: Photo Agency/Shutterstock

Google Drops Search Console Reporting For Six Structured Data Types via @sejournal, @MattGSouthern

Google will stop reporting six deprecated structured data types in Search Console and remove them from the Rich Results Test and appearance filters.

  • Search Console and Rich Results Test stop reporting on deprecated structured data types.
  • Rankings are unaffected; you can keep the markup, but it will no longer trigger rich results.
  • API returns continue through December.

Structured Data’s Role In AI And AI Search Visibility via @sejournal, @marthavanberkel

The way people find and consume information has shifted. We, as marketers, must think about visibility across AI platforms and Google.

The challenge is that we don’t have the same ability to control and measure success as we do with Google and Microsoft, so it feels like we’re flying blind.

Earlier this year, Google, Microsoft, and ChatGPT each commented on how structured data can help LLMs better understand your digital content.

Structured data can give AI tools the context they need to understand content through entities and relationships. In this new era of search, you could say that context, not content, is king.

Schema Markup Helps To Build A Data Layer

By translating your content into Schema.org and defining the relationships between pages and entities, you are building a data layer for AI. This schema markup data layer, or what I like to call your “content knowledge graph,” tells machines what your brand is, what it offers, and how it should be understood.

This data layer is how your content becomes accessible and understood across a growing range of AI capabilities, including:

  • AI Overviews
  • Chatbots and voice assistants
  • Internal AI systems

Through grounding, structured data can contribute to visibility and discovery across Google, ChatGPT, Bing, and other AI platforms. It also prepares your web data to accelerate your internal AI initiatives.

The same week that Google and Microsoft announced they were using structured data for their generative AI experiences, Google and OpenAI announced their support of the Model Context Protocol.

What Is Model Context Protocol?

In November 2024, Anthropic introduced Model Context Protocol (MCP), “an open protocol that standardizes how applications provide context to LLMs” and was subsequently adopted by OpenAI and Google DeepMind.

You can think of MCP as the USB-C connector for AI applications and agents or an API for AI. “MCP provides a standardized way to connect AI models to different data sources and tools.”

If we now think of structured data as a strategic data layer, the problem Google and OpenAI need to solve is how to scale their AI capabilities efficiently and cost-effectively. Combining the structured data on your website with MCP would allow accurate inferencing at scale.

Structured Data Defines Entities And Relationships

LLMs generate answers based on the content they are trained on or connected to. While they primarily learn from unstructured text, their outputs can be strengthened when grounded in clearly defined entities and relationships, for example, via structured data or knowledge graphs.

Structured data can be used as an enhancer that allows enterprises to define key entities and their relationships.

When implemented using Schema.org vocabulary, structured data:

  • Defines the entities on a page: people, products, services, locations, and more.
  • Establishes relationships between those entities.
  • Can reduce hallucinations when LLMs are grounded in structured data through retrieval systems or knowledge graphs.

When schema markup is deployed at scale, it builds a content knowledge graph, a structured data layer that connects your brand’s entities across your site and beyond. 
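As a concrete (and entirely hypothetical) illustration of entities and relationships, the fragment below shows two JSON-LD nodes linked by a shared `@id`, which is the mechanism that lets page-level schema markup accrete into a knowledge graph. The organization, product, and URLs are invented:

```python
import json

# Hypothetical JSON-LD for two pages of the same (invented) brand.
# The shared @id values connect page-level markup into one graph.
markup = json.loads("""
{
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "Organization",
      "@id": "https://example.com/#org",
      "name": "Acme Analytics",
      "url": "https://example.com/"
    },
    {
      "@type": "Product",
      "@id": "https://example.com/products/insight#product",
      "name": "Insight Dashboard",
      "brand": {"@id": "https://example.com/#org"}
    }
  ]
}
""")

# Resolve the product's brand reference the way a graph consumer would:
# index nodes by @id, then follow the reference.
nodes = {node["@id"]: node for node in markup["@graph"]}
product = nodes["https://example.com/products/insight#product"]
brand = nodes[product["brand"]["@id"]]
print(brand["name"])  # Acme Analytics
```

The `@id` on the organization node would live on its entity home page; every other page that mentions the brand points back to that same identifier rather than redefining it.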

A recent study by BrightEdge demonstrated that schema markup improved brand presence and perception in Google’s AI Overviews, noting higher citation rates on pages with robust schema markup.

Structured Data As An Enterprise AI Strategy

Enterprises can shift their view of structured data beyond the basic requirements for rich result eligibility to managing a content knowledge graph.

According to Gartner’s 2024 AI Mandates for the Enterprise Survey, participants cite data availability and quality as the top barrier to successful AI implementation.

By implementing structured data and developing a robust content knowledge graph, you can contribute to both external search performance and internal AI enablement.

A scalable schema markup strategy requires:

  • Defined relationships between content and entities: Schema markup properties connect all content and entities across the brand. All page content is connected in context.
  • Entity Governance: Shared definitions and taxonomies across marketing, SEO, content, and product teams.
  • Content Readiness: Ensuring your content is comprehensive, relevant, representative of the topics you want to be known for, and connected to your content knowledge graph.
  • Technical Capability: Cross-functional tools and processes to manage schema markup at scale and ensure accuracy across thousands of pages.

For enterprise teams, structured data is a cross-functional capability that prepares web data to be consumed by internal AI applications.

What To Do Next To Prepare Your Content For AI

Enterprise teams can align their content strategies with AI requirements. Here’s how to get started:

1. Audit your current structured data to identify gaps in coverage and whether schema markup is defining relationships within your website. This context is critical for AI inferencing.

2. Map your brand’s key entities, such as products, services, people, and core topics, and ensure they are clearly defined and consistently marked up with schema markup across your content. This includes identifying the main page that defines an entity, known as the entity home.

3. Build or expand your content knowledge graph by connecting related entities and establishing relationships that AI systems can understand.

4. Integrate structured data into AI budgeting and planning, alongside other AI investments, whether the content is intended for AI Overviews, chatbots, or internal AI initiatives.

5. Operationalize schema markup management by developing repeatable workflows for creating, reviewing, and updating schema markup at scale.

By taking these steps, enterprises can ensure that their data is AI-ready, inside and outside the enterprise.

Structured Data Provides A Machine-Readable Layer

Structured data doesn’t assure placement in AI Overviews or directly control what large language models say about your brand. LLMs are still primarily trained on unstructured text, and AI systems weigh many signals when generating answers.

What structured data does provide is a strategic, machine-readable layer. When used to build a knowledge graph, schema markup defines entities and the relationships between them, creating a reliable framework that AI systems can draw from. This reduces ambiguity, strengthens attribution, and makes it easier to ground outputs in fact-based content when structured data is part of a connected retrieval or grounding system.

By investing in semantic, large-scale schema markup and aligning it across teams, organizations position themselves to be as discoverable in AI experiences as possible.


Featured Image: Koto Amatsukami/Shutterstock

New: From longform to key takeaways, in seconds. Meet Yoast AI Summarize

Today, we’re excited to welcome Yoast AI Summarize to our growing family of AI features. Just like our other AI tools, this new feature is designed to make your publishing process faster and easier by putting powerful, practical AI right where you work, in the WordPress Block Editor. 

Yoast AI Summarize is perfect for bloggers, content teams, agencies, and publishers who want to give readers instant value while also making sure their posts clearly communicate the intended message. 

What does Yoast AI Summarize do? 

You’ve finished drafting your post, great! But before you hit “Publish,” wouldn’t it be helpful to instantly see the core points your content is actually conveying? That’s exactly what Yoast AI Summarize does. 

With one click, you can insert a Key Takeaways block into your content. Yoast AI Summarize scans your post’s main body and creates a short, bullet-point summary, giving your readers a quick, scannable snapshot, and giving you a chance to check if your post is truly saying what you want it to. 

How you can access the new feature 

Yoast AI Summarize is automatically available to all Yoast SEO Premium customers. Just make sure you’ve updated to the latest version and granted consent to use AI. 

Once enabled, simply: 

  1. Open your post in the WordPress Block Editor.
  2. Add the new block from the “Yoast AI Blocks” section.
  3. Click to generate a summary, and watch your Key Takeaways section appear in seconds.

Where you can use Yoast AI Summarize 

Right now, Yoast AI Summarize works in the WordPress Block Editor on posts and pages. The block is fully editable: you can change the title, rewrite bullet points, or move it anywhere in your content flow. 

Pricing and usage 

There are no hidden costs for Yoast AI Summarize; it’s included in Yoast SEO Premium. Like our other AI features, it uses our spark counter to track usage. 

  • A spark is a single click on an AI feature. 
  • Generating one summary = one spark. 
  • Your spark counter resets at the start of each month. 
  • There’s currently no hard limit, so you can experiment freely. 

Limitations 

Yoast AI Summarize is currently in beta. That means you may notice a few restrictions: 

  • Only available in the WordPress Block Editor. 
  • Summaries are excluded from Yoast SEO and Readability Analysis to protect your scores. 
  • Currently works only on published or drafted content within supported blocks. 
  • For very long posts, it may take a few seconds for the summary to generate. 

Try out Yoast AI Summarize today 

Upgrade to Yoast SEO Premium to unlock this and all our AI features, including the award-nominated Yoast AI Generate and the powerful Yoast AI Optimize. With Yoast AI Summarize, you can work faster, keep your content aligned with your intent, and give your readers instant value with clear, scannable takeaways. 

Update to the latest version and try it out today! 

2025 Innovator of the Year: Sneha Goenka for developing an ultra-fast sequencing technology

Sneha Goenka is one of MIT Technology Review’s 2025 Innovators Under 35. Meet the rest of this year’s honorees. 

Up to a quarter of children entering intensive care have undiagnosed genetic conditions. To be treated properly, they must first get diagnoses—which means having their genomes sequenced. This process typically takes up to seven weeks. Sadly, that’s often too slow to save a critically ill child.

Hospitals may soon have a faster option, thanks to a groundbreaking system built in part by Sneha Goenka, an assistant professor of electrical and computer engineering at Princeton—and MIT Technology Review’s 2025 Innovator of the Year. 

Five years ago, Goenka and her colleagues designed a rapid-sequencing pipeline that can provide a genetic diagnosis in less than eight hours. Goenka’s software computations and hardware architectures were critical to speeding up each stage of the process. 

“Her work made everyone realize that genome sequencing is not only for research and medical application in the future but can have immediate impact on patient care,” says Jeroen de Ridder, a professor at UMC Utrecht in the Netherlands, who has developed an ultrafast sequencing tool for cancer diagnosis. 

Now, as cofounder and scientific lead of a new company, she is working to make that technology widely available to patients around the world.

Goenka grew up in Mumbai, India. Her mother was an advocate for women’s education, but as a child, Goenka had to fight to persuade other family members to let her continue her studies. She moved away from home at 15 to attend her final two years of school and enroll in a premier test-preparation academy in Kota, Rajasthan. Thanks to that education, she passed what she describes as “one of the most competitive exams in the world” to get into the Indian Institute of Technology Bombay. 

Once admitted to a combined bachelor’s and master’s program in electrical engineering, she found that “it was a real boys’ club.” But Goenka excelled in developing computer architecture systems that accelerate computation. As an undergraduate, she began applying those skills to medicine, driven by her desire to “have real-world impact”—in part because she had seen her family struggle with painful uncertainty after her brother was born prematurely when she was eight years old. 

While working on a PhD in electrical engineering at Stanford, she turned her focus to evolutionary and clinical genomics. One day a senior colleague, Euan Ashley, presented her with a problem. He said, “We want to see how fast we can make a genetic diagnosis. If you had unlimited funds and resources, just how fast do you think you could make the compute?”

Streaming DNA

A genetic diagnosis starts with a blood sample, which is prepped to extract the DNA—a process that takes about three hours. Next that DNA needs to be “read.” One of the world’s leading long-read sequencing technologies, developed by Oxford Nanopore Technologies, can generate highly detailed raw data of an individual’s genetic code in about an hour and a half. Unfortunately, processing all this data to identify mutations can take another 21 hours. Shipping samples to a central lab and figuring out which mutations are of interest often leads the process to stretch out to weeks. 

Goenka saw a better way: Build a real-time system that could “stream” the sequencing data, analyzing it as it was being generated, like streaming a film on Netflix rather than downloading it to watch later.

To do this, she designed a cloud computing architecture to pull in more processing power. Goenka’s first challenge was to increase the speed at which her team could upload the raw data for processing, by streamlining the requests between the sequencer and the cloud to avoid unnecessary “chatter.” She worked out the exact number of communication channels needed—and created algorithms that allowed those channels to be reused in the most efficient way.

The next challenge was “base calling”—converting the raw signal from the sequencing machine into the nucleotide bases A, C, T, and G, the language that makes up our DNA. Rather than using a central node to orchestrate this process, which is an inefficient, error-prone approach, Goenka wrote software to automatically assign dozens of data streams directly from the sequencer to dedicated nodes in the cloud.

Then, to identify mutations, the sequences were aligned for comparison with a reference genome. She coded a custom program that triggers alignment as soon as base calling finishes for one batch of sequences while simultaneously initiating base calling for the next batch, thus ensuring that the system’s computational resources are used efficiently.
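That overlap (align batch n while base calling batch n+1) is a classic two-stage pipeline. As a rough illustration only, here is a toy Python sketch; the stage functions and batch contents are stand-ins, not Goenka’s actual system:

```python
# Toy two-stage pipeline: align the current batch on the main thread
# while a worker thread base calls the next batch.
from concurrent.futures import ThreadPoolExecutor

def base_call(batch):
    # Stand-in for converting raw sequencer signal into A/C/T/G reads.
    return [f"read-{batch}-{i}" for i in range(2)]

def align(reads):
    # Stand-in for aligning reads against a reference genome.
    return [r + ":aligned" for r in reads]

def run_pipeline(batches):
    """Overlap stages so neither base calling nor alignment sits idle."""
    results = []
    with ThreadPoolExecutor(max_workers=1) as pool:
        calling = pool.submit(base_call, batches[0])
        for nxt in batches[1:]:
            reads = calling.result()               # current batch is base called
            calling = pool.submit(base_call, nxt)  # kick off the next batch...
            results.extend(align(reads))           # ...while aligning this one
        results.extend(align(calling.result()))    # drain the final batch
    return results

out = run_pipeline([0, 1, 2])
print(len(out))  # 6 aligned reads
```

The real system streams dozens of data channels to dedicated cloud nodes rather than one worker thread, but the scheduling idea is the same: the moment base calling finishes for a batch, alignment starts and the next batch begins base calling.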

Add all these improvements together, and Goenka’s approach reduced the total time required to analyze a genome for mutations from around 20 hours to 1.5 hours. Finally, the team worked with genetic counselors and physicians to create a filter that identified which mutations were most critical to a person’s health, and that set was then given a final manual curation by a genetic specialist. These final stages take up to three hours. The technology was close to being fully operational when, suddenly, the first patient arrived. 

A critical test

When 13-year-old Matthew was flown to Stanford’s children’s hospital in 2021, he was struggling to breathe and his heart was failing. Doctors needed to know whether the inflammation in his heart was due to a virus or to a genetic mutation that would necessitate a transplant.  

His blood was drawn on a Thursday. The transplant committee made its decisions on Fridays. “It meant we had a small window of time,” says Goenka.

Goenka was in Mumbai when the sequencing began. She stayed up all night, monitoring the computations. That was when the project stopped being about getting faster for the sake of it, she says: “It became about ‘How fast can we get this result to save this person’s life?’”

The results revealed a genetic mutation that explained Matthew’s condition, and he was placed on the transplant list the next day. Three weeks later, he received a new heart. “He’s doing great now,” Goenka says.

So far, Goenka’s technology has been tested on 26 patients, including Matthew. Her pipeline is “directly affecting the medical care of newborns in the Stanford intensive care units,” Ashley says.

Now she’s aiming for even broader impact—Goenka and her colleagues are laying the groundwork for a startup that they hope will bring the technology to market and make sure it reaches as many patients as possible. Meanwhile, she has been refining the computational pipeline, reducing the time to diagnosis to about six hours.

The demand is clear, she says: “In an in-depth study involving more than a dozen laboratory directors and neonatologists, every respondent stressed urgency. One director put it succinctly: ‘I need this platform today—preferably yesterday.’”

Goenka is also developing software to make the technology more inclusive. The reference genome is skewed toward people of European descent. The Human Pangenome Project, an international collaboration, is creating reference genomes from more diverse populations; Goenka aims to use these to personalize her team’s filters, allowing them to flag mutations that may be more prevalent in the population to which a patient belongs.

Since seeing her work, Goenka’s extended family has become more appreciative of her education and career. “The entire family is very proud about the impact I’ve made,” she says. 

Helen Thomson is a freelance science journalist based in London.

Meet the Ethiopian entrepreneur who is reinventing ammonia production

Iwnetim Abate is one of MIT Technology Review’s 2025 Innovators Under 35. Meet the rest of this year’s honorees. 

“I’m the only one who wears glasses and has eye problems in the family,” Iwnetim Abate says with a smile as sun streams in through the windows of his MIT office. “I think it’s because of the candles.”

In the small town in Ethiopia where he grew up, Abate’s family had electricity, but it was unreliable. So, for several days each week when they were without power, Abate would finish his homework by candlelight.

Today, Abate, 32, is an assistant professor at MIT in the department of materials science and engineering. Part of his research focuses on sodium-ion batteries, which could be cheaper than the lithium-based ones that typically power electric vehicles and grid installations. He’s also pursuing a new research path, examining how to harness the heat and pressure under the Earth’s surface to make ammonia, a chemical used in fertilizer and as a green fuel.

Growing up without the ubiquitous access to electricity that many people take for granted shaped the way Abate thinks about energy issues, he says. He recalls rushing to dry out his school uniform over a fire before he left in the morning. One of his chores was preparing cow dung to burn as fuel—the key is strategically placing holes to ensure proper drying, he says.

Abate’s desire to devote his attention to energy crystallized in a high school chemistry class on fuel cells. “It was like magic,” he says, to learn it’s possible to basically convert water into energy. “Sometimes science is magic, right?”

Abate scored the highest of any student in Ethiopia on the national exam the year he took it, and he knew he wanted to go to the US to further his education. But actually getting there proved to be a challenge. 

Abate applied to US colleges for three years before he was granted admission to Concordia College Moorhead, a small liberal arts college, with a partial scholarship. To raise the remaining money, he reached out to various companies and wealthy people across Ethiopia. He received countless rejections but didn’t let that faze him. He laughs recalling how guards would chase him off when he dropped by prospects’ homes in person. Eventually, a family friend agreed to help.

When Abate finally made it to the Minnesota college, he walked into a room in his dorm building and the lights turned on automatically. “I both felt happy to have all this privilege and I felt guilty at the same time,” he says.

Lab notes

His college wasn’t a research institute, so Abate quickly set out to get into a laboratory. He reached out to Sossina Haile, then at the California Institute of Technology, to ask about a summer research position.

Haile, now at Northwestern University, recalls thinking that Abate was particularly eager. As a visible Ethiopian scientist, she gets a lot of email requests, but his stood out. “No obstacle was going to stand in his way,” she says. It was risky to take on a young student with no research experience who’d only been in the US for a year, but she offered him a spot in her lab.

Abate spent the summer working on materials for use in solid oxide fuel cells. He returned for the following summer, then held a string of positions in energy-materials research, including at IBM and Los Alamos National Lab, before completing his graduate degree at Stanford and postdoctoral work at the University of California, Berkeley.


He joined the MIT faculty in 2023 and set out to build a research group of his own. Today, his lab has two major focuses. One is sodium-ion batteries, a popular alternative to the lithium-based cells used in EVs and grid storage installations. Sodium-ion batteries don’t require the kinds of critical minerals lithium-ion batteries do, which can be both expensive and tied up by geopolitics.  

One major stumbling block for sodium-ion batteries is their energy density. It’s possible to improve energy density by operating at higher voltages, but some of the materials used tend to degrade quickly at high voltages. That limits the total energy density of the battery, so it’s a problem for applications like electric vehicles, where a low energy density would restrict range.

Abate’s team is developing materials that could extend the lifetime of sodium-ion batteries while avoiding the need for nickel, which is considered a critical mineral in the US. The team is examining additives and testing materials-engineering techniques to help the batteries compete with lithium-ion cells.

Irons in the fire

Another vein of Abate’s work is in some ways a departure from his history in batteries and fuel cells. In January, his team published research describing a process to make ammonia underground, using naturally occurring heat and pressure to drive the necessary chemical reactions.  

Today, making ammonia generates between 1% and 2% of global greenhouse gas emissions. It’s primarily used to fertilize crops, but it’s also being considered as a fuel for sectors like long-distance shipping.

Abate cofounded a company called Addis Energy to commercialize the research, alongside MIT serial entrepreneur Yet-Ming Chiang and a pair of oil industry experts. (Addis means “new” in Amharic, the official language of Ethiopia.) For an upcoming pilot, the company aims to build an underground reactor that can produce ammonia. 

When he’s not tied up in research or the new startup, Abate runs programs for African students. In 2017, he cofounded an organization called Scifro, which runs summer school programs in Ethiopia and plans to expand to other countries, including Rwanda. The programs focus on providing mentorship and educating students about energy and medical devices, which is the specialty of his cofounder. 

While Abate holds a position at one of the world’s most prestigious universities and serves as chief science officer of a buzzy startup, he’s quick to give credit to those around him. “It takes a village to build something, and it’s not just me,” he says.

Abate often thinks about his friends, family, and former neighbors in Ethiopia as he works on new energy solutions. “Of course, science is beautiful, and we want to make an impact,” he says. “Being good at what you do is important, but ultimately, it’s about people.”

How Yichao “Peak” Ji became a global AI app hitmaker

Yichao “Peak” Ji is one of MIT Technology Review’s 2025 Innovators Under 35. Meet the rest of this year’s honorees. 

When Yichao Ji—also known as “Peak”—appeared in a launch video for Manus in March, he didn’t expect it to go viral. Speaking in fluent English, the 32-year-old introduced the AI agent built by Chinese startup Butterfly Effect, where he serves as chief scientist. 

The video was not an elaborate production—it was directed by cofounder Zhang Tao and filmed in a corner of their Beijing office. But something about Ji’s delivery, and the vision behind the product, cut through the noise. The product, then still an early preview available only through invite codes, spread across the Chinese internet to the world in a matter of days. Within a week of its debut, Manus had attracted a waiting list of around 2 million people. 

At first sight, Manus works like most chatbots: Users can ask it questions in a chat window. However, besides providing answers, it can also carry out tasks (for example, finding an apartment that meets specified criteria within a certain budget). It does this by breaking tasks down into steps, then using a cloud-based virtual machine equipped with a browser and other tools to execute them—perusing websites, filling in forms, and so on.
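
The plan-then-execute loop described above can be sketched generically. Manus’s actual architecture is not public, so every name below is hypothetical: a real agent would ask a language model to decompose the task and would drive browser tools in a cloud VM, where this toy version just returns placeholder strings.

```python
def plan(task):
    # hypothetical planner: a real agent would ask an LLM to decompose the task
    return [f"search listings for: {task}", f"filter results for: {task}"]

def execute(step):
    # hypothetical executor: a real agent would drive a browser/tools in a VM
    return f"done: {step}"

def run_agent(task):
    # the core loop: plan once, then execute each step in order
    return [execute(step) for step in plan(task)]

log = run_agent("2-bed apartment under $2,500")
```

The point of the loop is separation of concerns: the planner never touches tools, and the executor never reasons about the overall goal, which makes each step independently inspectable.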

Ji is the technical core of the team. Now based in Singapore, he leads product and infrastructure development as the company pushes forward with its global expansion. 

Despite his relative youth, Ji has over a decade of experience building products that merge technical complexity with real-world usability. That earned him credibility among both engineers and investors—and put him at the forefront of a rising class of Chinese technologists with AI products and global ambitions. 

Serial builder

The son of a professor and an IT professional, Ji moved to Boulder, Colorado, at age four for his father’s visiting scholar post, returning to Beijing in second grade.

His fluent English set him apart early on, but it was an elementary school robotics team that sparked his interest in programming. By high school, he was running the computer club, teaching himself how to build operating systems, and drawing inspiration from Bill Gates, Linux, and open-source culture. He describes himself as a lifelong Apple devotee, and it was Apple’s launch of the App Store in 2008 that ignited his passion for development.

In 2010, as a high school sophomore, Ji created the Mammoth browser, a customizable third-party iPhone browser. It quickly became the most-downloaded third-party browser developed by an individual in China and earned him the Macworld Asia Grand Prize in 2011. International tech site AppAdvice called it a product that “redefined the way you browse the internet.” At age 20, he was on the cover of Forbes magazine and made its “30 Under 30” list. 


During his teenage years, Ji developed several other iOS apps, including a budgeting tool designed for Hasbro’s Monopoly game, which sold well—until it attracted a legal notice for using the trademarked name. But that early brush with a multinational legal team didn’t put Ji off a career in tech. If anything, he says, it sharpened his instincts for both product and risk. 

In 2012, Ji launched his own company, Peak Labs, and later led the development of Magi, a search engine. The tool extracted information from across the web to answer queries—conceptually similar to today’s AI-powered search, but powered by a custom language model. 

Magi was briefly popular, drawing millions of users in its first month, but consumer adoption didn’t stick. It did, however, attract enterprise interest, and Ji adapted it for B2B use, before selling it in 2022. 

AI acumen 

Manus would become his next act—and a more ambitious one. His cofounders, Zhang Tao and Xiao Hong, complement Ji’s technical core with product know-how, storytelling, and organizational savvy. Both Xiao and Ji are serial entrepreneurs who have been backed by venture capital firm ZhenFund multiple times. Together, they represent the kind of long-term collaboration and international ambition that increasingly defines China’s next wave of entrepreneurs.


People who have worked with Ji describe him as a clear thinker, a fast talker, and a tireless, deeply committed builder who thinks in systems, products, and user flows. He represents a new generation of Chinese technologists: equally at home coding or in pitch meetings, fluent in both building and branding. He’s also a product of open-source culture, and remains an active contributor whose projects regularly garner attention—and GitHub stars—across developer communities.

With new funding led by US venture capital firm Benchmark, Ji and his team are taking Manus to the wider world, relocating operations from China to Singapore and actively targeting consumers around the world. The product is built on US-based infrastructure, drawing on technologies like Claude Sonnet, Microsoft Azure, and open-source tools such as Browser Use. It’s a distinctly global setup: an AI agent developed by a Chinese team, powered by Western platforms, and designed for international users. That isn’t incidental; it reflects the more fluid nature of AI entrepreneurship today, where talent, infrastructure, and ambition move across borders just as quickly as the technology itself.

For Ji, the goal isn’t just building a global company—it’s building a legacy. “I hope Manus is the last product I’ll ever build,” Ji says. “Because if I ever have another wild idea—(I’ll just) leave it to Manus!”