Google Gemini Adds Audio File Uploads After Being Top User Request

Google’s Gemini app now accepts audio file uploads, answering what the company acknowledges was its most requested feature.

For marketers and content teams, it means you can push recordings straight into Gemini for analysis, summaries, and repurposed content without jumping between tools.

Josh Woodward, VP at Google Labs and Gemini, announced the change on X:

“You can now upload any file to @GeminiApp. Including the #1 request: audio files are now supported!”

What’s New

Gemini can now ingest audio files in the same multi-file workflow you already use for documents and images.

You can attach up to 10 files per prompt, and files inside ZIP archives are supported, which helps when you want to upload raw tracks or several interview takes together.

Limits

  • Free plan: total audio length up to 10 minutes per prompt; up to 5 prompts per day.
  • AI Pro and AI Ultra: total audio length up to 3 hours per prompt.
  • Per prompt: up to 10 files across supported formats. Details are listed in Google’s Help Center.

Why This Matters

If your team works with podcasts, webinars, interviews, or customer calls, this closes a gap that often forced a separate transcription step.

You can upload a full interview and turn it into show notes, pull quotes, or a working draft in one place. It also helps meeting-heavy teams: a recorded strategy session can become action items and a brief without exporting to another tool first.

For agencies and networks, batching multiple episodes or takes into one prompt reduces friction in weekly workflows.

The practical win is fewer handoffs: source audio goes in, and the outlines, summaries, and excerpts you need come out, inside the same system you already use for text prompting.

Quick Tip

Upload your audio together with any supporting context in the same prompt. That gives Gemini the grounding it needs to produce cleaner summaries and more accurate excerpts.

If you’re testing on the free tier, plan around the 10-minute ceiling; longer content is best on AI Pro or Ultra.

Looking Ahead

Google’s limits pages do change, so keep an eye on total length, file-count rules, and any new guardrails that affect longer recordings or larger teams. Also watch for deeper Workspace tie-ins (for example, easier handoffs from Meet recordings) that would streamline getting audio into Gemini without manual uploads.


Featured Image: Photo Agency/Shutterstock

Google Drops Search Console Reporting For Six Structured Data Types

Google will stop reporting six deprecated structured data types in Search Console and remove them from the Rich Results Test and appearance filters.

  • Search Console and Rich Results Test stop reporting on deprecated structured data types.
  • Rankings are unaffected; you can keep the markup, but it will no longer produce rich results.
  • The API will continue to return data for these types through December.

Structured Data’s Role In AI And AI Search Visibility

The way people find and consume information has shifted. We, as marketers, must think about visibility across AI platforms and Google.

The challenge is that we don’t have the same ability to control and measure success as we do with Google and Microsoft, so it feels like we’re flying blind.

Earlier this year, Google, Microsoft, and ChatGPT each commented on how structured data can help LLMs better understand your digital content.

Structured data can give AI tools the context they need to understand content through its entities and relationships. In this new era of search, you could say that context, not content, is king.

Schema Markup Helps To Build A Data Layer

By translating your content into Schema.org and defining the relationships between pages and entities, you are building a data layer for AI. This schema markup data layer, or what I like to call your “content knowledge graph,” tells machines what your brand is, what it offers, and how it should be understood.

This data layer is how your content becomes accessible and understood across a growing range of AI capabilities, including:

  • AI Overviews
  • Chatbots and voice assistants
  • Internal AI systems

Through grounding, structured data can contribute to visibility and discovery across Google, ChatGPT, Bing, and other AI platforms. It also prepares your web data to accelerate your internal AI initiatives.

The same week that Google and Microsoft announced they were using structured data for their generative AI experiences, Google and OpenAI announced their support of the Model Context Protocol.

What Is Model Context Protocol?

In November 2024, Anthropic introduced Model Context Protocol (MCP), “an open protocol that standardizes how applications provide context to LLMs,” which was subsequently adopted by OpenAI and Google DeepMind.

You can think of MCP as the USB-C connector for AI applications and agents, or as an API for AI. “MCP provides a standardized way to connect AI models to different data sources and tools.”
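
To make the “API for AI” analogy concrete, here is a minimal sketch of what an MCP exchange can look like. MCP messages are framed as JSON-RPC 2.0; the tool name lookup_entity, its argument, and the example.com identifier below are hypothetical, invented purely for illustration:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "lookup_entity",
    "arguments": { "entityId": "https://example.com/#organization" }
  }
}
```

A server might answer with a result payload along these lines:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "content": [
      {
        "type": "text",
        "text": "{\"@type\": \"Organization\", \"name\": \"Example Corp\"}"
      }
    ],
    "isError": false
  }
}
```

Viewed this way, a well-maintained content knowledge graph is a natural data source for an MCP server to expose to AI applications.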

Since we are now thinking of structured data as a strategic data layer, the problem Google and OpenAI need to solve is how to scale their AI capabilities efficiently and cost-effectively. Combining the structured data on your website with MCP could improve inferencing accuracy and provide the ability to scale.

Structured Data Defines Entities And Relationships

LLMs generate answers based on the content they are trained on or connected to. While they primarily learn from unstructured text, their outputs can be strengthened when grounded in clearly defined entities and relationships, for example, via structured data or knowledge graphs.

Structured data can be used as an enhancer that allows enterprises to define key entities and their relationships.

When implemented using Schema.org vocabulary, structured data:

  • Defines the entities on a page: people, products, services, locations, and more.
  • Establishes relationships between those entities.
  • Can reduce hallucinations when LLMs are grounded in structured data through retrieval systems or knowledge graphs.

When schema markup is deployed at scale, it builds a content knowledge graph, a structured data layer that connects your brand’s entities across your site and beyond. 
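
A minimal JSON-LD sketch of that idea follows (the example.com URLs and names are hypothetical, chosen only to show the pattern): an organization and a service are each defined as entities, and Schema.org properties express the relationships between them.

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "@id": "https://example.com/#organization",
  "name": "Example Corp",
  "url": "https://example.com/",
  "makesOffer": {
    "@type": "Offer",
    "itemOffered": {
      "@type": "Service",
      "@id": "https://example.com/services/seo/#service",
      "name": "Enterprise SEO Services",
      "provider": { "@id": "https://example.com/#organization" }
    }
  }
}
```

The @id values are what allow markup on different pages to refer to the same entity, so individual snippets accumulate into a connected graph rather than a pile of isolated tags.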

A recent study by BrightEdge demonstrated that schema markup improved brand presence and perception in Google’s AI Overviews, noting higher citation rates on pages with robust schema markup.

Structured Data As An Enterprise AI Strategy

Enterprises can shift their view of structured data beyond the basic requirements for rich result eligibility to managing a content knowledge graph.

According to Gartner’s 2024 AI Mandates for the Enterprise Survey, participants cite data availability and quality as the top barrier to successful AI implementation.

By implementing structured data and developing a robust content knowledge graph, you can contribute to both external search performance and internal AI enablement.

A scalable schema markup strategy requires:

  • Defined relationships between content and entities: Schema markup properties connect all content and entities across the brand. All page content is connected in context.
  • Entity Governance: Shared definitions and taxonomies across marketing, SEO, content, and product teams.
  • Content Readiness: Ensuring your content is comprehensive, relevant, representative of the topics you want to be known for, and connected to your content knowledge graph.
  • Technical Capability: Cross-functional tools and processes to manage schema markup at scale and ensure accuracy across thousands of pages.

For enterprise teams, structured data is a cross-functional capability that prepares web data to be consumed by internal AI applications.

What To Do Next To Prepare Your Content For AI

Enterprise teams can align their content strategies with AI requirements. Here’s how to get started:

1. Audit your current structured data to identify gaps in coverage and whether schema markup is defining relationships within your website. This context is critical for AI inferencing.

2. Map your brand’s key entities, such as products, services, people, and core topics, and ensure they are clearly defined and consistently marked up with schema markup across your content. This includes identifying the main page that defines an entity, known as the entity home (see the sketch after this list).

3. Build or expand your content knowledge graph by connecting related entities and establishing relationships that AI systems can understand.

4. Integrate structured data into AI budgeting and planning, alongside other AI investments, whether that content is intended for AI Overviews, chatbots, or internal AI initiatives.

5. Operationalize schema markup management by developing repeatable workflows for creating, reviewing, and updating schema markup at scale.
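
As a sketch of the entity-home idea from step 2 (all URLs hypothetical): the page that defines an entity declares a canonical @id, and markup everywhere else references that @id instead of redefining the entity.

```json
{
  "@context": "https://schema.org",
  "@type": "Person",
  "@id": "https://example.com/team/jane-doe/#person",
  "name": "Jane Doe",
  "jobTitle": "Chief Technology Officer",
  "worksFor": { "@id": "https://example.com/#organization" },
  "sameAs": ["https://www.linkedin.com/in/janedoe"]
}
```

On any other page that mentions her, an author or employee property would carry only { "@id": "https://example.com/team/jane-doe/#person" }, keeping a single authoritative definition per entity.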

By taking these steps, enterprises can ensure that their data is AI-ready, inside and outside the enterprise.

Structured Data Provides A Machine-Readable Layer

Structured data doesn’t guarantee placement in AI Overviews or directly control what large language models say about your brand. LLMs are still primarily trained on unstructured text, and AI systems weigh many signals when generating answers.

What structured data does provide is a strategic, machine-readable layer. When used to build a knowledge graph, schema markup defines entities and the relationships between them, creating a reliable framework that AI systems can draw from. This reduces ambiguity, strengthens attribution, and makes it easier to ground outputs in fact-based content when structured data is part of a connected retrieval or grounding system.

By investing in semantic, large-scale schema markup and aligning it across teams, organizations position themselves to be as discoverable in AI experiences as possible.

Featured Image: Koto Amatsukami/Shutterstock

New: From longform to key takeaways, in seconds. Meet Yoast AI Summarize

Today, we’re excited to welcome Yoast AI Summarize to our growing family of AI features. Just like our other AI tools, this new feature is designed to make your publishing process faster and easier by putting powerful, practical AI right where you work, in the WordPress Block Editor. 

Yoast AI Summarize is perfect for bloggers, content teams, agencies, and publishers who want to give readers instant value while also making sure their posts clearly communicate the intended message. 

What does Yoast AI Summarize do? 

You’ve finished drafting your post, great! But before you hit “Publish,” wouldn’t it be helpful to instantly see the core points your content is actually conveying? That’s exactly what Yoast AI Summarize does. 

With one click, you can insert a Key Takeaways block into your content. Yoast AI Summarize scans your post’s main body and creates a short, bullet-point summary, giving your readers a quick, scannable snapshot, and giving you a chance to check if your post is truly saying what you want it to. 

How you can access the new feature 

Yoast AI Summarize is automatically available to all Yoast SEO Premium customers. Just make sure you’ve updated to the latest version and granted consent to use AI. 

Once enabled, simply: 

  1. Open your post in the WordPress Block Editor.
  2. Add the new block from the “Yoast AI Blocks” section.
  3. Click to generate a summary, and watch your Key Takeaways section appear in seconds.

Where you can use Yoast AI Summarize 

Right now, Yoast AI Summarize works in the WordPress Block Editor on posts and pages. The block is fully editable: you can change the title, rewrite bullet points, or move it anywhere in your content flow.

Pricing and usage 

There are no hidden costs for Yoast AI Summarize; it’s included in Yoast SEO Premium. Like our other AI features, it uses our spark counter to track usage.

  • A spark is a single click on an AI feature. 
  • Generating one summary = one spark. 
  • Your spark counter resets at the start of each month. 
  • There’s currently no hard limit, so you can experiment freely. 

Limitations 

Yoast AI Summarize is currently in beta. That means you may notice a few restrictions: 

  • Only available in the WordPress Block Editor.
  • Summaries are excluded from Yoast SEO and Readability Analysis to protect your scores. 
  • Currently works only on published or drafted content within supported blocks. 
  • For very long posts, it may take a few seconds for the summary to generate. 

Try out Yoast AI Summarize today 

Upgrade to Yoast SEO Premium to unlock this and all our AI features, including the award-nominated Yoast AI Generate and the powerful Yoast AI Optimize. With Yoast AI Summarize, you can work faster, keep your content aligned with your intent, and give your readers instant value with clear, scannable takeaways. 

Update to the latest version and try it out today! 

2025 Innovator of the Year: Sneha Goenka for developing an ultra-fast sequencing technology

Sneha Goenka is one of MIT Technology Review’s 2025 Innovators Under 35. Meet the rest of this year’s honorees. 

Up to a quarter of children entering intensive care have undiagnosed genetic conditions. To be treated properly, they must first get diagnoses—which means having their genomes sequenced. This process typically takes up to seven weeks. Sadly, that’s often too slow to save a critically ill child.

Hospitals may soon have a faster option, thanks to a groundbreaking system built in part by Sneha Goenka, an assistant professor of electrical and computer engineering at Princeton—and MIT Technology Review’s 2025 Innovator of the Year. 

Five years ago, Goenka and her colleagues designed a rapid-sequencing pipeline that can provide a genetic diagnosis in less than eight hours. Goenka’s software computations and hardware architectures were critical to speeding up each stage of the process. 

“Her work made everyone realize that genome sequencing is not only for research and medical application in the future but can have immediate impact on patient care,” says Jeroen de Ridder, a professor at UMC Utrecht in the Netherlands, who has developed an ultrafast sequencing tool for cancer diagnosis. 

Now, as cofounder and scientific lead of a new company, she is working to make that technology widely available to patients around the world.

Goenka grew up in Mumbai, India. Her mother was an advocate for women’s education, but as a child, Goenka had to fight to persuade other family members to let her continue her studies. She moved away from home at 15 to attend her final two years of school and enroll in a premier test-preparation academy in Kota, Rajasthan. Thanks to that education, she passed what she describes as “one of the most competitive exams in the world” to get into the Indian Institute of Technology Bombay.

Once admitted to a combined bachelor’s and master’s program in electrical engineering, she found that “it was a real boys’ club.” But Goenka excelled in developing computer architecture systems that accelerate computation. As an undergraduate, she began applying those skills to medicine, driven by her desire to “have real-world impact”—in part because she had seen her family struggle with painful uncertainty after her brother was born prematurely when she was eight years old. 

While working on a PhD in electrical engineering at Stanford, she turned her focus to evolutionary and clinical genomics. One day a senior colleague, Euan Ashley, presented her with a problem. He said, “We want to see how fast we can make a genetic diagnosis. If you had unlimited funds and resources, just how fast do you think you could make the compute?”

Streaming DNA

A genetic diagnosis starts with a blood sample, which is prepped to extract the DNA—a process that takes about three hours. Next that DNA needs to be “read.” One of the world’s leading long-read sequencing technologies, developed by Oxford Nanopore Technologies, can generate highly detailed raw data of an individual’s genetic code in about an hour and a half. Unfortunately, processing all this data to identify mutations can take another 21 hours. Shipping samples to a central lab and figuring out which mutations are of interest often leads the process to stretch out to weeks. 

Goenka saw a better way: Build a real-time system that could “stream” the sequencing data, analyzing it as it was being generated, like streaming a film on Netflix rather than downloading it to watch later.

To do this, she designed a cloud computing architecture to pull in more processing power. Goenka’s first challenge was to increase the speed at which her team could upload the raw data for processing, by streamlining the requests between the sequencer and the cloud to avoid unnecessary “chatter.” She worked out the exact number of communication channels needed—and created algorithms that allowed those channels to be reused in the most efficient way.

The next challenge was “base calling”—converting the raw signal from the sequencing machine into the nucleotide bases A, C, T, and G, the language that makes up our DNA. Rather than using a central node to orchestrate this process, which is an inefficient, error-prone approach, Goenka wrote software to automatically assign dozens of data streams directly from the sequencer to dedicated nodes in the cloud.

Then, to identify mutations, the sequences were aligned for comparison with a reference genome. She coded a custom program that triggers alignment as soon as base calling finishes for one batch of sequences while simultaneously initiating base calling for the next batch, thus ensuring that the system’s computational resources are used efficiently.

Add all these improvements together, and Goenka’s approach reduced the total time required to analyze a genome for mutations from around 20 hours to 1.5 hours. Finally, the team worked with genetic counselors and physicians to create a filter that identified which mutations were most critical to a person’s health, and that set was then given a final manual curation by a genetic specialist. These final stages take up to three hours. The technology was close to being fully operational when, suddenly, the first patient arrived.

A critical test

When 13-year-old Matthew was flown to Stanford’s children’s hospital in 2021, he was struggling to breathe and his heart was failing. Doctors needed to know whether the inflammation in his heart was due to a virus or to a genetic mutation that would necessitate a transplant.  

His blood was drawn on a Thursday. The transplant committee made its decisions on Fridays. “It meant we had a small window of time,” says Goenka.

Goenka was in Mumbai when the sequencing began. She stayed up all night, monitoring the computations. That was when the project stopped being about getting faster for the sake of it, she says: “It became about ‘How fast can we get this result to save this person’s life?’”

The results revealed a genetic mutation that explained Matthew’s condition, and he was placed on the transplant list the next day. Three weeks later, he received a new heart. “He’s doing great now,” Goenka says.

So far, Goenka’s technology has been tested on 26 patients, including Matthew. Her pipeline is “directly affecting the medical care of newborns in the Stanford intensive care units,” Ashley says.

Now she’s aiming for even broader impact—Goenka and her colleagues are laying the groundwork for a startup that they hope will bring the technology to market and make sure it reaches as many patients as possible. Meanwhile, she has been refining the computational pipeline, reducing the time to diagnosis to about six hours.

The demand is clear, she says: “In an in-depth study involving more than a dozen laboratory directors and neonatologists, every respondent stressed urgency. One director put it succinctly: ‘I need this platform today—preferably yesterday.’”

Goenka is also developing software to make the technology more inclusive. The reference genome is skewed toward people of European descent. The Human Pangenome Project is an international collaboration to create reference genomes from more diverse populations, which Goenka aims to use to personalize her team’s filters, allowing them to flag mutations that may be more prevalent in the population to which a patient belongs.

Since seeing her work, Goenka’s extended family has become more appreciative of her education and career. “The entire family is very proud about the impact I’ve made,” she says. 

Helen Thomson is a freelance science journalist based in London.

Meet the Ethiopian entrepreneur who is reinventing ammonia production

Iwnetim Abate is one of MIT Technology Review’s 2025 Innovators Under 35. Meet the rest of this year’s honorees. 

“I’m the only one who wears glasses and has eye problems in the family,” Iwnetim Abate says with a smile as sun streams in through the windows of his MIT office. “I think it’s because of the candles.”

In the small town in Ethiopia where he grew up, Abate’s family had electricity, but it was unreliable. So, for several days each week when they were without power, Abate would finish his homework by candlelight.

Today, Abate, 32, is an assistant professor at MIT in the department of materials science and engineering. Part of his research focuses on sodium-ion batteries, which could be cheaper than the lithium-based ones that typically power electric vehicles and grid installations. He’s also pursuing a new research path, examining how to harness the heat and pressure under the Earth’s surface to make ammonia, a chemical used in fertilizer and as a green fuel.

Growing up without the ubiquitous access to electricity that many people take for granted shaped the way Abate thinks about energy issues, he says. He recalls rushing to dry out his school uniform over a fire before he left in the morning. One of his chores was preparing cow dung to burn as fuel—the key is strategically placing holes to ensure proper drying, he says.

Abate’s desire to devote his attention to energy crystallized in a high school chemistry class on fuel cells. “It was like magic,” he says, to learn it’s possible to basically convert water into energy. “Sometimes science is magic, right?”

Abate scored the highest of any student in Ethiopia on the national exam the year he took it, and he knew he wanted to go to the US to further his education. But actually getting there proved to be a challenge. 

Abate applied to US colleges for three years before he was granted admission to Concordia College Moorhead, a small liberal arts college, with a partial scholarship. To raise the remaining money, he reached out to various companies and wealthy people across Ethiopia. He received countless rejections but didn’t let that faze him. He laughs recalling how guards would chase him off when he dropped by prospects’ homes in person. Eventually, a family friend agreed to help.

When Abate finally made it to the Minnesota college, he walked into a room in his dorm building and the lights turned on automatically. “I both felt happy to have all this privilege and I felt guilty at the same time,” he says.

Lab notes

His college wasn’t a research institute, so Abate quickly set out to get into a laboratory. He reached out to Sossina Haile, then at the California Institute of Technology, to ask about a summer research position.

Haile, now at Northwestern University, recalls thinking that Abate was particularly eager. As a visible Ethiopian scientist, she gets a lot of email requests, but his stood out. “No obstacle was going to stand in his way,” she says. It was risky to take on a young student with no research experience who’d only been in the US for a year, but she offered him a spot in her lab.

Abate spent the summer working on materials for use in solid oxide fuel cells. He returned for the following summer, then held a string of positions in energy-materials research, including at IBM and Los Alamos National Lab, before completing his graduate degree at Stanford and postdoctoral work at the University of California, Berkeley.

He joined the MIT faculty in 2023 and set out to build a research group of his own. Today, there are two major focuses of his lab. One is sodium-ion batteries, which are a popular alternative to the lithium-based cells used in EVs and grid storage installations. Sodium-ion batteries don’t require the kinds of critical minerals lithium-ion batteries do, which can be both expensive and tied up by geopolitics.  

One major stumbling block for sodium-ion batteries is their energy density. It’s possible to improve energy density by operating at higher voltages, but some of the materials used tend to degrade quickly at high voltages. That limits the total energy density of the battery, so it’s a problem for applications like electric vehicles, where a low energy density would restrict range.

Abate’s team is developing materials that could extend the lifetime of sodium-ion batteries while avoiding the need for nickel, which is considered a critical mineral in the US. The team is examining additives and testing materials-engineering techniques to help the batteries compete with lithium-ion cells.

Irons in the fire

Another vein of Abate’s work is in some ways a departure from his history in batteries and fuel cells. In January, his team published research describing a process to make ammonia underground, using naturally occurring heat and pressure to drive the necessary chemical reactions.

Today, making ammonia generates between 1% and 2% of global greenhouse gas emissions. It’s primarily used to fertilize crops, but it’s also being considered as a fuel for sectors like long-distance shipping.

Abate cofounded a company called Addis Energy to commercialize the research, alongside MIT serial entrepreneur Yet-Ming Chiang and a pair of oil industry experts. (Addis means “new” in Amharic, the official language of Ethiopia.) For an upcoming pilot, the company aims to build an underground reactor that can produce ammonia. 

When he’s not tied up in research or the new startup, Abate runs programs for African students. In 2017, he cofounded an organization called Scifro, which runs summer school programs in Ethiopia and plans to expand to other countries, including Rwanda. The programs focus on providing mentorship and educating students about energy and medical devices, which is the specialty of his cofounder. 

While Abate holds a position at one of the world’s most prestigious universities and serves as chief science officer of a buzzy startup, he’s quick to give credit to those around him. “It takes a village to build something, and it’s not just me,” he says.

Abate often thinks about his friends, family, and former neighbors in Ethiopia as he works on new energy solutions. “Of course, science is beautiful, and we want to make an impact,” he says. “Being good at what you do is important, but ultimately, it’s about people.”

How Yichao “Peak” Ji became a global AI app hitmaker

Yichao “Peak” Ji is one of MIT Technology Review’s 2025 Innovators Under 35. Meet the rest of this year’s honorees. 

When Yichao Ji—also known as “Peak”—appeared in a launch video for Manus in March, he didn’t expect it to go viral. Speaking in fluent English, the 32-year-old introduced the AI agent built by Chinese startup Butterfly Effect, where he serves as chief scientist. 

The video was not an elaborate production—it was directed by cofounder Zhang Tao and filmed in a corner of their Beijing office. But something about Ji’s delivery, and the vision behind the product, cut through the noise. The product, then still an early preview available only through invite codes, spread across the Chinese internet to the world in a matter of days. Within a week of its debut, Manus had attracted a waiting list of around 2 million people. 

At first sight, Manus works like most chatbots: Users can ask it questions in a chat window. However, besides providing answers, it can also carry out tasks (for example, finding an apartment that meets specified criteria within a certain budget). It does this by breaking tasks down into steps, then using a cloud-based virtual machine equipped with a browser and other tools to execute them—perusing websites, filling in forms, and so on.

Ji is the technical core of the team. Now based in Singapore, he leads product and infrastructure development as the company pushes forward with its global expansion. 

Despite his relative youth, Ji has over a decade of experience building products that merge technical complexity with real-world usability. That earned him credibility among both engineers and investors—and put him at the forefront of a rising class of Chinese technologists with AI products and global ambitions. 

Serial builder

The son of a professor and an IT professional, Ji moved to Boulder, Colorado, at age four for his father’s visiting scholar post, returning to Beijing in second grade.

His fluent English set him apart early on, but it was an elementary school robotics team that sparked his interest in programming. By high school, he was running the computer club, teaching himself how to build operating systems, and drawing inspiration from Bill Gates, Linux, and open-source culture. He describes himself as a lifelong Apple devotee, and it was Apple’s launch of the App Store in 2008 that ignited his passion for development.

In 2010, as a high school sophomore, Ji created the Mammoth browser, a customizable third-party iPhone browser. It quickly became the most-downloaded third-party browser developed by an individual in China and earned him the Macworld Asia Grand Prize in 2011. International tech site AppAdvice called it a product that “redefined the way you browse the internet.” At age 20, he was on the cover of Forbes magazine and made its “30 Under 30” list. 

During his teenage years, Ji developed several other iOS apps, including a budgeting tool designed for Hasbro’s Monopoly game, which sold well—until it attracted a legal notice for using the trademarked name. But Ji wasn’t put off a career in tech by that early brush with a multinational legal team. If anything, he says, it sharpened his instincts for both product and risk. 

In 2012, Ji launched his own company, Peak Labs, and later led the development of Magi, a search engine. The tool extracted information from across the web to answer queries—conceptually similar to today’s AI-powered search, but powered by a custom language model. 

Magi was briefly popular, drawing millions of users in its first month, but consumer adoption didn’t stick. It did, however, attract enterprise interest, and Ji adapted it for B2B use before selling it in 2022.

AI acumen 

Manus would become his next act—and a more ambitious one. His cofounders, Zhang Tao and Xiao Hong, complement Ji’s technical core with product know-how, storytelling, and organizational savvy. Both Xiao and Ji are serial entrepreneurs who have been backed by venture capital firm ZhenFund multiple times. Together, they represent the kind of long-term collaboration and international ambition that increasingly defines China’s next wave of entrepreneurs.

People who have worked with Ji describe him as a clear thinker, a fast talker, and a tireless, deeply committed builder who thinks in systems, products, and user flows. He represents a new generation of Chinese technologists: equally at home coding or in pitch meetings, fluent in both building and branding. He’s also a product of open-source culture, and remains an active contributor whose projects regularly garner attention—and GitHub stars—across developer communities.

With new funding led by US venture capital firm Benchmark, Ji and his team are taking Manus to the wider world, relocating operations outside of China, to Singapore, and actively targeting consumers around the world. The product is built on US-based infrastructure, drawing on technologies like Claude Sonnet, Microsoft Azure, and open-source tools such as Browser Use. It’s a distinctly global setup: an AI agent developed by a Chinese team, powered by Western platforms, and designed for international users. That isn’t incidental; it reflects the more fluid nature of AI entrepreneurship today, where talent, infrastructure, and ambition move across borders just as quickly as the technology itself.

For Ji, the goal isn’t just building a global company—it’s building a legacy. “I hope Manus is the last product I’ll ever build,” Ji says. “Because if I ever have another wild idea—(I’ll just) leave it to Manus!”

How Trump’s policies are affecting early-career scientists—in their own words

This story is part of MIT Technology Review’s “America Undone” series, examining how the foundations of US success in science and innovation are currently under threat. You can read the rest here.

Every year MIT Technology Review celebrates accomplished young scientists, entrepreneurs, and inventors from around the world in our Innovators Under 35 list. We’ve just published the 2025 edition. This year, though, the context is pointedly different: The US scientific community finds itself in an unprecedented position, with the very foundation of its work under attack.

Since Donald Trump took office in January, his administration has fired top government scientists, targeted universities individually and academia more broadly, and made substantial funding cuts to the country’s science and technology infrastructure. It has also upended longstanding rights and norms related to free speech, civil rights, and immigration—all of which further affects the overall environment for research and innovation in science and technology. 

We wanted to understand how these changes are affecting the careers and work of our most recent classes of innovators. The US government is the largest source of research funding at US colleges and universities, and many of our honorees are new professors and current or recent graduate or PhD students, while others work with government-funded entities in other ways. Meanwhile, about 16% of those in US graduate programs are international students. 

We sent surveys to the six most recent cohorts, which include 210 people. We asked people about both positive and negative impacts of the administration’s new policies and invited them to tell us more in an optional interview. Thirty-seven completed our survey, and we spoke with 14 of them in follow-up calls. Most respondents are academic researchers (about two-thirds) and are based in the US (81%); 11 work in the private sector (six of whom are entrepreneurs). Their responses provide a glimpse into the complexities of building their labs, companies, and careers in today’s political climate. 

Twenty-six people told us that their work has been affected by the Trump administration’s changes; only one of them described those effects as “mostly positive.” The other 25 reported primarily negative effects. While a few agreed to be named in this story, most asked to be identified only by their job titles and general areas of work, or wished to remain anonymous, for fear of retaliation. “I would not want to flag the ire of the US government,” one interviewee told us. 

Across interviews and surveys, certain themes appeared repeatedly: the loss of jobs, funding, or opportunities; restrictions on speech and research topics; and limits on who can carry out that research. These shifts have left many respondents deeply concerned about the “long-term implications in IP generation, new scientists, and spinout companies in the US,” as one respondent put it. 

One of the things we heard most consistently is that the uncertainty of the current moment is pushing people to take a more risk-averse approach to their scientific work—either by selecting projects that require fewer resources or that seem more in line with the administration’s priorities, or by erring on the side of hiring fewer people. “We’re not thinking so much about building and enabling … we’re thinking about surviving,” said one respondent. 

Ultimately, many are worried that all the lost opportunities will result in less innovation overall—and caution that it will take time to grasp the full impact. 

“We’re not going to feel it right now, but in like two to three years from now, you will feel it,” said one entrepreneur with a PhD who started his company directly from his area of study. “There are just going to be fewer people that should have been inventing things.”

The money: “Folks are definitely feeling the pressure”

The most immediate impact has been financial. Already, the Trump administration has pulled back support for many areas of science—ending more than a thousand awards by the National Institutes of Health and over 100 grants for climate-related projects by the National Science Foundation. The rate of new awards granted by both agencies has slowed, and the NSF has cut the number of graduate fellowships it’s funding by half for this school year. 

The administration has also cut or threatened to cut funding from a growing number of universities, including Harvard, Columbia, Brown, and UCLA, for supposedly not doing enough to combat antisemitism.

As a result, our honorees said that finding funding to support their work has gotten much harder—and it was already a big challenge before. 

A biochemist at a public university told us she’d lost a major NIH grant. Since it was terminated earlier this year, she’s been spending less time in the lab and more on fundraising. 

Others described uncertainty about the status of grants from a wide range of agencies, including NSF, the Advanced Research Projects Agency for Health, the Department of Energy, and the Centers for Disease Control and Prevention, which collectively could pay out more than $44 million to the researchers we’ve recognized. Several had waited months for news on an application’s status or updates on when funds they had already won would be disbursed. One AI researcher who studies climate-related issues is concerned that her multiyear grant may not be renewed, even though renewal would have been “fairly standard” in the past.

Two individuals lamented the cancellation of 24 awards in May by the DOE’s Office of Clean Energy Demonstrations, including grants for carbon capture projects and a clean cement plant. One said the decision had “severely disrupted the funding environment for climate-tech startups” by creating “widespread uncertainty,” “undermining investor confidence,” and “complicating strategic planning.” 

Climate research and technologies have been a favorite target of the Trump administration: The recently passed tax and spending bill put stricter timelines in place that make it harder for wind and solar installations to qualify for tax credits via the Inflation Reduction Act. Already, at least 35 major commercial climate-tech projects have been canceled or downsized this year. 

In response to a detailed list of questions, a DOE spokesperson said, “Secretary [Chris] Wright and President Trump have made it clear that unleashing American scientific innovation is a top priority.” They pointed to “robust investments in science” in the president’s proposed budget and the spending bill and cited special areas of focus “to maintain America’s global competitiveness,” including nuclear fusion, high-performance computing, quantum computing, and AI. 

Other respondents cited tighter budgets brought on by a change in how the government calculates indirect costs, which are funds included in research grants to cover equipment, institutional overhead, and in some cases graduate students’ salaries. In February, the NIH instituted a 15% cap on indirect costs—which ran closer to 28% of the research funds the NIH awarded in 2023. The DOE, DOD, and NSF all soon proposed similar caps. This collective action has sparked lawsuits, and indirect costs remain in limbo. (MIT, which owns MIT Technology Review, is involved in several of these lawsuits; MIT Technology Review is editorially independent from the university.) 

Looking ahead, an academic at a public university in Texas, where the money granted for indirect costs funds student salaries, said he plans to hire fewer students for his own lab. “It’s very sad that I cannot promise [positions] at this point because of this,” he told us, adding that the cap could also affect the competitiveness of public universities in Texas, since schools elsewhere may fund their student researchers differently. 

At the same time, two people with funding through the Defense Department—which could see a surge of investment under the president’s proposed budget—said their projects were moving forward as planned. A biomedical engineer at a public university in the Midwest expressed excitement about what he perceives as a fresh surge of federal interest in industrial and defense applications of synthetic biology. Still, he acknowledged colleagues working on different projects don’t feel as optimistic: “Folks are definitely feeling the pressure.”

Many who are affected by cuts or delays are now looking for new funding sources in a bid to become less reliant on the federal government. Eleven people said they are pursuing or plan to pursue philanthropic and foundation funding or to seek out industry support. However, the amount of private funding available can’t begin to make up the difference in federal funds lost, and investors often focus more on low-risk, short-term applications than on open scientific questions. 

The NIH responded to a detailed list of questions with a statement pointing to unspecified investments in early-career researchers. “Recent updates to our priorities and processes are designed to broaden scientific opportunity rather than restrict it, ensuring that taxpayer-funded research is rigorous, reproducible, and relevant to all Americans,” it reads. The NSF declined a request for comment from MIT Technology Review.

Further complicating this financial picture are tariffs—some of which are already in effect, and many more of which have been threatened. Nine people who responded to our survey said their work is already being affected by these taxes imposed on goods imported into the US. For some scientists, this has meant higher operating costs for their labs: An AI researcher said tariffs are making computational equipment more expensive, while the Texas academic said the cost of buying microscopes from a German firm had gone up by thousands of dollars since he first budgeted for them. (Neither the White House press office nor the White House Office of Science and Technology Policy responded to requests for comment.) 

One cleantech entrepreneur saw a positive impact on his business as more US companies reevaluated their supply chains and sought to incorporate more domestic suppliers. The entrepreneur’s firm, which is based in the US, has seen more interest in its services from potential customers seeking “tariff-proof vendors.”

“Everybody is proactive on tariffs and we’re one of these solutions—we’re made in America,” he said. 

Another person, who works for a European firm, is factoring potential tariffs into decisions about where to open new production facilities. Though the Trump administration has said the taxes are meant to reinvigorate US manufacturing, she’s now less inclined to build out a significant presence in the US because, she said, tariffs may drive up the costs of importing raw materials that are required to make the company’s product. 

What’s more, financial backers have encouraged her company to stay rooted abroad because of the potential impact of tariffs for US-based facilities: “People who invest worldwide—they are saying it’s reassuring for them right now to consider investing in Europe,” she said.

The climate of fear: “It will impact the entire university if there is retaliation” 

Innovators working in both academia and the private sector described new concerns about speech and the politicization of science. Many have changed how they describe their work in order to better align with the administration’s priorities—fearing funding cuts, job terminations, immigration action, and other potential retaliation. 

This is particularly true for those who work at universities. The Trump administration has reached deals with some institutions, including Columbia and Brown, that would restore part of the funding it slashed—but only after the universities agreed to pay hefty fines and abide by terms that, critics say, hand over an unprecedented level of oversight to administration officials. 

Some respondents had received guidance on what they could or couldn’t say from program managers at their funding agencies or their universities or investors; others had not received any official guidance but made personal decisions on what to say and share publicly based on recent news of grant cancellations.

Both on and off campus, there is substantial pressure on diversity, equity, and inclusion (DEI) initiatives, which have been hit particularly hard as the administration seeks to eliminate what it called “illegal and immoral discrimination programs” in one of the first executive orders of President Trump’s second term.  

One respondent, whose work focuses on fighting child sexual abuse materials, recalled rewriting a grant abstract “3x to remove words banned” by Senator Ted Cruz of Texas, an administration ally; back in February, Cruz identified 3,400 NSF grants as “woke DEI” research advancing “neo-Marxist class warfare propaganda.” (His list includes grants to research self-driving cars and solar eclipses. His office did not respond to a request for comment.) 

Many other researchers we spoke with are also taking steps to avoid being put in the DEI bucket. A technologist at a Big Tech firm whose work used to include efforts to provide more opportunities for marginalized communities to get into computing has stopped talking about those recruiting efforts. One biologist described hearing that grant applications for the NIH now have to avoid words like “cell type diversity” for “DEI reasons”—no matter that “cell type diversity” is, she said, a common and “neutral” scientific term in microbiology. (In its statement, the NIH said: “To be clear, no scientific terms are banned, and commonly used terms like ‘cell type diversity’ are fully acceptable in applications and research proposals.”) 

Plenty of other research has also gotten caught up in the storm.

One person who works in climate technology said that she now talks about “critical minerals,” “sovereignty,” and “energy independence” or “dominance” rather than “climate” or “industrial decarbonization.” (Trump’s Energy Department has boosted investment in critical minerals, pledging nearly $1 billion to support related projects.) Another individual working in AI said she has been instructed to talk less about “regulation,” “safety,” or “ethics” as they relate to her work. One survey respondent described the language shift as “definitely more red-themed.”

Some said that shifts in language won’t change the substance of their work, but others feared they will indeed affect the research itself. 

Emma Pierson, an assistant professor of computer science at the University of California, Berkeley, worried that AI companies may kowtow to the administration, which could in turn “influence model development.” While she noted that this fear is speculative, the Trump administration’s AI Action Plan contains language that directs the federal government to purchase large language models that generate “truthful responses” (by the administration’s definition), with a goal of “preventing woke AI in the federal government.” 

And one biomedical researcher fears that the administration’s effective ban on DEI will force an end to outreach “favoring any one community” and hurt efforts to improve the representation of women and people of color in clinical trials. The NIH and the Food and Drug Administration had been working for years to address the historic underrepresentation of these groups through approaches including specific funding opportunities to address health disparities; many of these efforts have recently been cut.

Respondents from both academia and the private sector told us they’re aware of the high stakes of speaking out. 

“As an academic, we have to be very careful about how we voice our personal opinion because it will impact the entire university if there is retaliation,” one engineering professor told us. 

“I don’t want to be a target,” said one cleantech entrepreneur, who worries not only about reprisals from the current administration but also about potential blowback from Democrats if he cooperates with it. 

“I’m not a Trumper!” he said. “I’m just trying not to get fined by the EPA.” 

The people: “The adversarial attitude against immigrants … is posing a brain drain”

Immigrants are crucial to American science, but what one respondent called a broad “persecution of immigrants,” and an increasing climate of racism and xenophobia, are matters of growing concern. 

Some people we spoke with feel vulnerable, particularly those who are immigrants themselves. The Trump administration has revoked 6,000 international student visas (causing federal judges to intervene in some cases) and threatened to “aggressively” revoke the visas of Chinese students in particular. In recent months, the Justice Department has prioritized efforts to denaturalize certain citizens, while similar efforts to revoke green cards granted decades ago were shut down by court order. One entrepreneur who holds a green card told us, “I find myself definitely being more cognizant of what I’m saying in public and certainly try to stay away from anything political as a result of what’s going on, not just in science but in the rest of the administration’s policies.” 

On top of all this, federal immigration raids and other enforcement actions—authorities have turned away foreign academics upon arrival to the US and detained others with valid academic visas, sometimes because of their support for Palestine—have created a broad climate of fear.  

Four respondents said they were worried about their own immigration status, while 16 expressed concerns about their ability to attract or retain talent, including international students. More than a million international students studied in the US last year, with nearly half of those enrolling in graduate programs, according to the Institute of International Education.

“The adversarial attitude against immigrants, especially those from politically sensitive countries, is posing a brain drain,” an AI researcher at a large public university on the West Coast told us. 

This attack on immigration in the US can be compounded by state-level restrictions. Texas and Florida both restrict international collaborations with and recruitment of scientists from countries including China, even though researchers told us that international collaborations could help mitigate the impacts of decreased domestic funding. “I cannot collaborate at this point because there’s too many restrictions and Texas also can limit us from visiting some countries,” the Texas academic said. “We cannot share results. We cannot visit other institutions … and we cannot give talks.”

All this is leading to more interest in positions outside the United States. One entrepreneur, whose business is multinational, said that their company has received a much higher share of applications from US-based candidates to openings in Europe than it did a year ago, despite the lower salaries offered there. 

“It is becoming easier to hire good people in the UK,” confirmed Karen Sarkisyan, a synthetic biologist based in London. 

At least one US-based respondent, an academic in climate technology, accepted a tenured position in the United Kingdom. Another said that she was looking for positions in other countries, despite her current job security and “very good” salary. “I can tell more layoffs are coming, and the work I do is massively devalued. I can’t stand to be in a country that treats their scientists and researchers and educated people like this,” she told us. 

Some professors reported in our survey and interviews that their current students are less interested in pursuing academic careers because graduate and PhD students are losing offers and opportunities as a result of grant cancellations. So even as the number of international students dwindles, there may also be “shortages in domestic grad students,” one mechanical engineer at a public university said, and “research will fall behind.”  

In the end, this will affect not just academic research but also private-sector innovation. One biomedical entrepreneur told us that academic collaborators frequently help his company generate lots of ideas: “We hope that some of them will pan out and become very compelling areas for us to invest in.” Particularly for small startups without large research budgets, having fewer academics to work with will mean that “we just invest less, we just have fewer options to innovate,” he said. “The level of risk that industry is willing to take is generally lower than academia, and you can’t really bridge that gap.” 

Despite it all, a number of researchers and entrepreneurs who generally expressed frustration about the current political climate said they still consider the US the best place to do science. 

Pierson, the AI researcher at Berkeley, described staying committed to her research into social inequities despite the political backlash: “I’m an optimist. I do believe this will pass, and these problems are not going to pass unless we work on them.” 

And a biotech entrepreneur pointed out that US-based scientists can still command more resources than those in most other countries. “I think the US still has so much going for it. Like, there isn’t a comparable place to be if you’re trying to be on the forefront of innovation—trying to build a company or find opportunities,” he said.

Several academics and founders who came to the US to pursue scientific careers spoke about still being drawn to America’s spirit of invention and the chance to advance on their own merits. “For me, I’ve always been like, the American dream is something real,” said one. They said they’re holding fast to those ideals—for now.

Why basic science deserves our boldest investment

In December 1947, three physicists at Bell Telephone Laboratories—John Bardeen, William Shockley, and Walter Brattain—built a compact electronic device using thin gold wires and a piece of germanium, a material known as a semiconductor. Their invention, later named the transistor (for which they were awarded the Nobel Prize in 1956), could amplify and switch electrical signals, marking a dramatic departure from the bulky and fragile vacuum tubes that had powered electronics until then.

Its inventors weren’t chasing a specific product. They were asking fundamental questions about how electrons behave in semiconductors, experimenting with surface states and electron mobility in germanium crystals. Over months of trial and refinement, they combined theoretical insights from quantum mechanics with hands-on experimentation in solid-state physics—work many might have dismissed as too basic, academic, or unprofitable.

Their efforts culminated in a moment that now marks the dawn of the information age. Transistors don’t usually get the credit they deserve, yet they are the bedrock of every smartphone, computer, satellite, MRI scanner, GPS system, and artificial-intelligence platform we use today. With their ability to modulate (and route) electrical current at astonishing speeds, transistors make modern and future computing and electronics possible.

This breakthrough did not emerge from a business plan or product pitch. It arose from open-ended, curiosity-driven research and enabling development, supported by an institution that saw value in exploring the unknown. It took years of trial and error, collaborations across disciplines, and a deep belief that understanding nature—even without a guaranteed payoff—was worth the effort.

After the first successful demonstration in late 1947, the invention of the transistor remained confidential while Bell Labs filed patent applications and continued development. It was publicly announced at a press conference on June 30, 1948, in New York City. The scientific explanation followed in a seminal paper published in the journal Physical Review.

How do they work? At their core, transistors are made of semiconductors—materials like germanium and, later, silicon—that can either conduct or resist electricity depending on subtle manipulations of their structure and charge. In a typical transistor, a small voltage applied to one part of the device (the gate) either allows or blocks the electric current flowing through another part (the channel). It’s this simple control mechanism, scaled up billions of times, that lets your phone run apps, your laptop render images, and your search engine return answers in milliseconds.
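To make that on/off control concrete, here is a minimal sketch in Python that treats a transistor as an idealized voltage-controlled switch. Everything in it is illustrative: the threshold voltage and function names are invented for the example, and real devices have continuous current-voltage curves, leakage, and several operating regimes.

```python
# Toy model: a transistor as an idealized voltage-controlled switch.
# Illustrative only; real transistors are analog devices. The 0.7 V
# threshold is an arbitrary example value, not a device parameter.

THRESHOLD_V = 0.7  # hypothetical gate threshold voltage

def channel_conducts(gate_voltage: float) -> bool:
    """The gate 'opens' the channel once its voltage clears the threshold."""
    return gate_voltage > THRESHOLD_V

def transistor_output(gate_voltage: float, input_signal: float) -> float:
    """Pass the input signal through the channel only when the gate is on."""
    return input_signal if channel_conducts(gate_voltage) else 0.0

for vg in (0.0, 0.5, 1.0):
    print(f"gate = {vg:.1f} V -> output = {transistor_output(vg, 1.0)}")
```

Scaled up to billions of such switches flipping billions of times per second, this simple gating behavior is the control mechanism described above.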

Though early devices used germanium, researchers soon discovered that silicon—more thermally stable, moisture resistant, and far more abundant—was better suited for industrial production. By the late 1950s, the transition to silicon was underway, making possible the development of integrated circuits and, eventually, the microprocessors that power today’s digital world.

A modern chip the size of a human fingernail now contains tens of billions of silicon transistors, each measured in nanometers—smaller than many viruses. These tiny switches turn on and off billions of times per second, controlling the flow of electrical signals involved in computation, data storage, audio and visual processing, and artificial intelligence. They form the fundamental infrastructure behind nearly every digital device in use today. 

The global semiconductor industry is now worth over half a trillion dollars. Devices that began as experimental prototypes in a physics lab now underpin economies, national security, health care, education, and global communication. But the transistor’s origin story carries a deeper lesson—one we risk forgetting.

Much of the fundamental understanding that moved transistor technology forward came from federally funded university research. Nearly a quarter of transistor research at Bell Labs in the 1950s was supported by the federal government. Much of the rest was subsidized by revenue from AT&T’s monopoly on the US phone system, which flowed into industrial R&D.

Inspired by the 1945 report “Science: The Endless Frontier,” authored by Vannevar Bush at the request of President Truman, the US government began a long-standing tradition of investing in basic research. These investments have paid steady dividends across many scientific domains—from nuclear energy to lasers, and from medical technologies to artificial intelligence. Trained in fundamental research, generations of students have emerged from university labs with the knowledge and skills necessary to push existing technology beyond its known capabilities.

And yet, funding for basic science—and for the education of those who can pursue it—is under increasing pressure. The White House’s proposed federal budget includes deep cuts to the Department of Energy and the National Science Foundation (though Congress may deviate from those recommendations). Already, the National Institutes of Health has canceled or paused more than $1.9 billion in grants, while NSF STEM education programs suffered more than $700 million in terminations.

These losses have forced some universities to freeze graduate student admissions, cancel internships, and scale back summer research opportunities—making it harder for young people to pursue scientific and engineering careers. In an age dominated by short-term metrics and rapid returns, it can be difficult to justify research whose applications may not materialize for decades. But those are precisely the kinds of efforts we must support if we want to secure our technological future.

Consider John McCarthy, the mathematician and computer scientist who coined the term “artificial intelligence.” In the late 1950s, while at MIT, he led one of the first AI groups and developed Lisp, a programming language still used today in scientific computing and AI applications. At the time, practical AI seemed far off. But that early foundational work laid the groundwork for today’s AI-driven world.

After the initial enthusiasm of the 1950s through the ’70s, interest in neural networks—today a leading AI architecture inspired by the human brain—declined during the so-called “AI winters” of the late 1990s and early 2000s. Limited data, inadequate computational power, and theoretical gaps made it hard for the field to progress. Still, researchers like Geoffrey Hinton and John Hopfield pressed on. Hopfield, now a 2024 Nobel laureate in physics, first introduced his groundbreaking neural network model in 1982, in a paper published in Proceedings of the National Academy of Sciences of the USA. His work revealed the deep connections between collective computation and the behavior of disordered magnetic systems. Together with the work of colleagues including Hinton, who was awarded the Nobel the same year, this foundational research seeded the explosion of deep-learning technologies we see today.
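For readers curious about what that 1982 model actually does, here is a minimal sketch in Python with NumPy. It stores a few random binary patterns using a Hebbian outer-product rule, then recovers one of them from a corrupted cue through repeated sign-threshold updates. The sizes and seed are arbitrary, and the synchronous update loop is a simplification; Hopfield’s original analysis assumed neurons updating one at a time.

```python
import numpy as np

# Minimal Hopfield network in the spirit of the 1982 model:
# store +/-1 patterns via a Hebbian rule, then recall one of them
# from a noisy cue by iterating sign-threshold updates.

rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(3, 64))  # 3 arbitrary binary patterns

# Hebbian weights: sum of outer products, no self-connections.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0.0)

def recall(state, steps=20):
    state = state.copy()
    for _ in range(steps):
        new_state = np.sign(W @ state)
        new_state[new_state == 0] = 1  # break ties deterministically
        if np.array_equal(new_state, state):
            break  # reached a fixed point: a stored "memory"
        state = new_state
    return state

# Corrupt a stored pattern by flipping 10 of its 64 bits, then recall.
cue = patterns[0].copy()
cue[rng.choice(64, size=10, replace=False)] *= -1
print("recovered original:", np.array_equal(recall(cue), patterns[0]))
```

With only a few patterns stored relative to the number of neurons, the dynamics usually settle back onto the stored pattern, which is the “collective computation” Hopfield connected to disordered magnetic systems.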

One reason neural networks now flourish is the graphics processing unit, or GPU—originally designed for gaming but now essential for the matrix-heavy operations of AI. These chips themselves rely on decades of fundamental research in materials science and solid-state physics: high-k dielectric materials, strained silicon alloys, and other advances that made ever more efficient transistors possible. We are now entering another frontier, exploring memristors, phase-change and 2D materials, and spintronic devices.
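To illustrate why GPUs fit AI workloads so well, consider that the core computation of a neural-network layer reduces to one large matrix multiplication, an operation GPUs parallelize across thousands of cores. A minimal sketch, with arbitrary example shapes:

```python
import numpy as np

# A neural-network layer is, at its core, a matrix multiply.
# Shapes below are arbitrary example values.
batch, d_in, d_out = 32, 512, 256
x = np.random.randn(batch, d_in).astype(np.float32)  # input activations
W = np.random.randn(d_in, d_out).astype(np.float32)  # learned weights

y = x @ W  # (32, 512) @ (512, 256) -> (32, 256): one layer's computation
print(y.shape)
# GPUs execute the millions of multiply-accumulate operations inside
# this product in parallel, which is what made large networks practical.
```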

If you’re reading this on a phone or laptop, you’re holding the result of a gamble someone once made on curiosity. That same curiosity is still alive in university and research labs today—in often unglamorous, sometimes obscure work quietly laying the groundwork for revolutions that will infiltrate some of the most essential aspects of our lives 50 years from now. At the leading physics journal where I am editor, my collaborators and I see the painstaking work and dedication behind every paper we handle. Our modern economy—with giants like Nvidia, Microsoft, Apple, Amazon, and Alphabet—would be unimaginable without the humble transistor and the passion for knowledge fueling the relentless curiosity of scientists like those who made it possible.

The next transistor may not look like a switch at all. It might emerge from new kinds of materials (such as quantum, hybrid organic-inorganic, or hierarchical types) or from tools we haven’t yet imagined. But it will need the same ingredients: solid fundamental knowledge, resources, and freedom to pursue open questions driven by curiosity, collaboration—and most importantly, financial support from someone who believes it’s worth the risk.

Julia R. Greer is a materials scientist at the California Institute of Technology. She is a judge for MIT Technology Review’s Innovators Under 35 and a former honoree (in 2008).

The Download: introducing our 35 Innovators Under 35 list for 2025

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Introducing: our 35 Innovators Under 35 list for 2025

The world is full of extraordinary young people brimming with ideas for how to crack tough problems. Every year, we recognize 35 such individuals from around the world—all of whom are under the age of 35.

These scientists, inventors, and entrepreneurs are working to help mitigate climate change, accelerate scientific progress, and alleviate human suffering from disease. Some are launching companies while others are hard at work in academic labs. They were selected from hundreds of nominees by expert judges and our newsroom staff. 

Get to know them all—including our 2025 Innovator of the Year—in these profiles.

Why basic science deserves our boldest investment

—Julia R. Greer is a materials scientist at the California Institute of Technology, a judge for MIT Technology Review’s Innovators Under 35 and a former honoree (in 2008).

A modern chip the size of a human fingernail contains tens of billions of silicon transistors, each measured in nanometers—smaller than many viruses. These tiny switches form the infrastructure behind nearly every digital device in use today.

Much of the fundamental understanding that moved transistor technology forward came from federally funded university research. But that funding is under increasing pressure, thanks to deep budget cuts proposed by the White House.

These losses have forced some universities to freeze graduate student admissions, cancel internships, and scale back summer research opportunities—making it harder for young people to pursue scientific and engineering careers. 

In an age dominated by short-term metrics and rapid returns, it can be difficult to justify research whose applications may not materialize for decades. But those are precisely the kinds of efforts we must support if we want to secure our technological future. Read the full story.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The US is considering annual chip supply permits in China
For South Korean companies Samsung and SK Hynix, specifically. (Bloomberg $)
+ US lawmakers still hold power over chips in China. (CNN)

2 America has recorded its first case of screwworm in over 50 years
And the warming climate is making it easier for the flies to thrive. (Vox)
+ Experts fear an approaching public health emergency. (The Guardian)

3 Drone warfare is dominating Ukraine’s frontline
Amid relentless assaults, aerial and ground drones are being put to work. (The Guardian)
+ How cutting-edge drones forced land-locked tanks to evolve. (NYT $)
+ On the ground in Ukraine’s largest Starlink repair shop. (MIT Technology Review)

4 OpenAI is working out why chatbots hallucinate so much
Examining a model’s incentives provides some clues. (Insider $)
+ Models’ tendency to confidently present falsehoods as fact is a big problem. (TechCrunch)
+ Why does AI hallucinate? (MIT Technology Review)

5 How one man is connecting Silicon Valley to the Middle East’s AI boom
If you want to build a data center, Zachary Cefaratti is your man. (FT $)
+ The data center boom in the desert. (MIT Technology Review)

6 The first OpenAI-backed movie is coming to theaters next year
The animated Critterz is hoping for a Cannes Film Festival debut. (WSJ $)
+ A Disney director tried—and failed—to use an AI Hans Zimmer to create a soundtrack. (MIT Technology Review)

7 Who wants to live forever?
These billionaires are confident their cash will pave the way to longer lives. (WSJ $)
+ Putin says organ transplants could grant immortality. Not quite. (MIT Technology Review)

8 Tesla isn’t focused on selling cars any more
The company’s latest Master Plan is all about humanoid robots. (The Atlantic $)
+ The board is willing to offer Musk a $1 trillion pay package if he delivers. (Wired $)
+ Uber is gearing up to test driverless cars in Germany. (The Verge)
+ China’s EV giants are betting big on humanoid robots. (MIT Technology Review)

9 Do aliens go on holiday?
Scientists wonder whether tourism could be a potential draw for them to visit us. (New Yorker $)
+ How these two UFO hunters became go-to experts on America’s “mystery drone” invasion. (MIT Technology Review)

10 Vodafone’s new TikTok influencer isn’t real
It’s yet another example of AI avatars being used in ads. (The Verge)
+ Synthesia’s AI clones are more expressive than ever. Soon they’ll be able to talk back. (MIT Technology Review)

Quote of the day

“Silicon Valley totally effed up in overhyping LLMs.”

—Palantir CEO Alex Karp criticizes those who fueled the AI hype around large language models, Semafor reports.

One more thing

Puerto Rico’s power struggles

On the southeastern coast of Puerto Rico lies the island’s only coal-fired power station, flanked by a mountain of toxic ash. The plant, owned by the utility giant AES, has long plagued this part of Puerto Rico with air and water pollution.

Before the coal plant opened, Guayama had on average just over 103 cancer cases per year. In 2003, the year after the plant opened, the number of cancer cases in the municipality surged by more than 60%, to 167.

In 2022, the most recent year with available data, cases hit a new high of 209. The question is: How did it get this bad? Read the full story.

—Alexander C. Kaufman

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ What’s up with tennis players’ strange serving rituals?
+ If constant scrolling is turning your hands into gnarled claws, this stretch should help.
+ How to land a genuine bargain on Facebook Marketplace.
+ This photographer tracks down people who featured in pictures decades before, and persuades them to recreate their poses. Heartwarming stuff ❤