How machines that can solve complex math problems might usher in more powerful AI

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

It’s been another big week in AI. Meta updated its powerful new Llama model, which it’s handing out for free, and OpenAI said it is going to trial an AI-powered online search tool that you can chat with, called SearchGPT. 

But the news item that really stood out to me was one that didn’t get as much attention as it should have. It has the potential to usher in more powerful AI and to enable scientific discoveries that weren’t previously possible. 

Last Thursday, Google DeepMind announced it had built AI systems that can solve complex math problems. The systems—called AlphaProof and AlphaGeometry 2—worked together to successfully solve four out of six problems from this year’s International Mathematical Olympiad, a prestigious competition for high school students. Their performance was the equivalent of winning a silver medal. It’s the first time any AI system has ever achieved such a high success rate on these kinds of problems. My colleague Rhiannon Williams has the news here.

Math! I can already imagine your eyes glazing over. But bear with me. This announcement is not just about math. In fact, it signals an exciting new development in the kind of AI we can now build. AI search engines that you can chat with may add to the illusion of intelligence, but systems like Google DeepMind’s could improve the actual intelligence of AI. For that reason, building systems that are better at math has been a goal for many AI labs, such as OpenAI.  

That’s because math is a benchmark for reasoning. To complete these exercises aimed at high school students, the AI systems needed to do very complex things, like planning, to understand and solve abstract problems. The systems were also able to generalize, allowing them to solve a whole range of different problems across various branches of mathematics. 

“What we’ve seen here is that you can combine [reinforcement learning] that was so successful in things like AlphaGo with large language models and produce something which is extremely capable in the space of text,” David Silver, principal research scientist at Google DeepMind and indisputably a pioneer of deep reinforcement learning, said in a press briefing. In this case, that capability was used to construct programs in the computer language Lean that represent mathematical proofs. He says the International Mathematical Olympiad represents a test for what’s possible and paves the way for further breakthroughs. 
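For readers who haven’t seen Lean, a formal proof is itself a program that a proof checker verifies mechanically. This toy example (my own illustration, not one of the Olympiad proofs) shows the general form such proofs take:

```lean
-- A trivial formal proof: addition of natural numbers is commutative.
-- Lean's kernel checks this mechanically, the same way it would check
-- AlphaProof's far more elaborate competition proofs.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

Because the checker either accepts a proof or rejects it, there is no ambiguity about whether an answer is correct.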

Silver said this same recipe could be applied in any situation that gives reinforcement-learning algorithms a really clear, verified reward signal and an unambiguous way to measure correctness, as mathematics does. Coding is one potential application, for example. 

Now for a compulsory reality check: AlphaProof and AlphaGeometry 2 can still only solve hard high-school-level problems. That’s a long way away from the extremely hard problems top human mathematicians can solve. Google DeepMind stressed that its tool did not, at this point, add anything to the body of mathematical knowledge humans have created. But that wasn’t the point. 

“We are aiming to provide a system that can prove anything,” Silver said. Think of an AI system as reliable as a calculator, for example, that can provide proofs for many challenging problems, or verify tests for computer software or scientific experiments. Or perhaps build better AI tutors that can give feedback on exam results, or fact-check news articles. 

But the thing that excites me most is what Katie Collins, a researcher at the University of Cambridge who specializes in math and AI (and was not involved in the project), told Rhiannon. She says these tools create and evaluate new problems, motivate new people to enter the field, and spark more wonder. That’s something we definitely need more of in this world.


Now read the rest of The Algorithm

Deeper Learning

A new tool for copyright holders can show if their work is in AI training data

Since the beginning of the generative AI boom, content creators have argued that their work has been scraped into AI models without their consent. But until now, it has been difficult to know whether specific text has actually been used in a training data set. Now they have a new way to prove it: “copyright traps.” These are pieces of hidden text that let you mark written content in order to later detect whether it has been used in AI models. 

Why this matters: Copyright traps tap into one of the biggest fights in AI. A number of publishers and writers are in the middle of litigation against tech companies, claiming their intellectual property has been scraped into AI training data sets without their permission. The idea is that these traps could help to nudge the balance a little more in the content creators’ favor. Read more from me here.

Bits and Bytes

AI trained on AI garbage spits out AI garbage
New research published in Nature shows that the quality of AI models’ output gradually degrades when they are trained on AI-generated data. As subsequent models produce output that is then used as training data for future models, the effect gets worse. (MIT Technology Review)

OpenAI unveils SearchGPT 
The company says it is testing new AI search features that give you fast and timely answers with clear and relevant sources cited. The idea is for the technology to eventually be incorporated into ChatGPT, and CEO Sam Altman says it’ll be possible to do voice searches. However, like many other AI-powered search services, including Google’s, it’s already making errors, as The Atlantic reports. 
(OpenAI)

AI video generator Runway trained on thousands of YouTube videos without permission
Leaked documents show that the company was secretly training its generative AI models by scraping thousands of videos from popular YouTube creators and brands, as well as pirated films. (404 Media)

Meta’s big bet on open-source AI continues
Meta unveiled Llama 3.1 405B, the first frontier-level open-source AI model, which matches state-of-the-art models such as GPT-4 and Gemini in performance. In an accompanying blog post, Mark Zuckerberg renewed his calls for open-source AI to become the industry standard. This would be good for customization, competition, data protection, and efficiency, he argues. It’s also good for Meta, because it leaves competitors with less of an advantage in the AI space. (Facebook)

How the US and its allies can rebuild economic security

A country’s economic security—its ability to generate both national security and economic prosperity—is grounded in it having significant technological capabilities that outpace those of its adversaries and complement those of its allies. Though this is a principle well known throughout history, the move over the last few decades toward globalization and offshoring of technologically advanced industrial capacity has made ensuring a nation state’s security and economic prosperity increasingly problematic. A broad span of technologies ranging from automation and secure communications to energy storage and vaccine design are the basis for wider economic prosperity—and high priorities for governments seeking to maintain national security. However, the necessary capabilities do not spring up overnight. They rely upon long decades of development, years of accumulated knowledge, and robust supply chains.

For the US and, especially, its allies in NATO, a particular problem has emerged: a “missing middle” in technology investment. Insufficient capital is allocated toward the maturation of breakthroughs in critical technologies to ensure that they can be deployed at scale. Investment is allocated either toward the rapid deployment of existing technologies or to scientific ideas that are decades away from delivering practical capability or significant economic impact (for example, quantum computers). But investment in scaling manufacturing technologies, learning while doing, and maturing emerging technologies to contribute to a next-generation industrial base is too often absent. Without this middle-ground commitment, the United States and its partners lack the production know-how that will be crucial for tomorrow’s batteries, the next generation of advanced computing, alternative solar photovoltaic cells, and active pharmaceutical ingredients.

While this once mattered only for economic prosperity, it is now a concern for national security too—especially given that China has built strong supply chains and other domestic capabilities that confer both economic security and significant geopolitical leverage.

Consider drone technology. Military doctrine has shifted toward battlefield technology that relies upon armies of small, relatively cheap products enabled by sophisticated software—from drones above the battlefield to autonomous boats to CubeSats in space.

Drones have played a central role in the war in Ukraine. First-person view (FPV) drones—those controlled by a pilot on the ground via a video stream—are often strapped with explosives to act as precision kamikaze munitions and have been essential to Ukraine’s frontline defenses. While many foundational technologies for FPV drones were pioneered in the West, China now dominates the manufacturing of drone components and systems, which ultimately enables the country to have a significant influence on the outcome of the war.

When the history of the war in Ukraine is written, it will be taught as the first true “drone war.” But it should also be understood as an industrial wake-up call: a time when the role of a drone’s component parts was laid bare and the supply chains that support this technology—the knowledge, production operations, and manufacturing processes—were found wanting. Heroic stories will be told of Ukrainian ingenuity in building drones with Chinese parts in basements and on kitchen tables, and we will hear of the country’s attempt to rebuild supply chains dominated by China while in the midst of an existential fight for survival. But in the background, we will also need to understand the ways in which other nations, especially China, controlled the war through long-term economic policies focused on capturing industrial capacity that the US and its allies failed to support through to maturity.

Disassemble one of the FPV drones found across the battlefields of Ukraine and you will find about seven critical subsystems: power, propulsion, flight control, navigation and sensors (which gather location data and other information to support flight), compute (the processing and memory capacity needed to analyze the vast array of information and then support operations), communications (to connect the drone to the ground), and—supporting it all—the airframe.

We have created a bill of materials listing the components necessary to build an FPV drone and the common suppliers for those parts.

China’s manufacturing dominance has resulted in a domestic workforce with the experience to achieve process innovations and product improvements that have no equal in the West.  And it has come with the sophisticated supply chains that support a wide range of today’s technological capabilities and serve as the foundations for the next generation. None of that was inevitable. For example, most drone electronics are integrated on printed circuit boards (PCBs), a technology that was developed in the UK and US. However, first-mover advantage was not converted into long-term economic or national security outcomes, and both countries have lost the PCB supply chain to China.

Propulsion is another case in point. The brushless DC motors used to convert electrical energy from batteries into mechanical energy to rotate drone propellers were invented in the US and Germany. The sintered permanent neodymium (NdFeB) magnets used in these motors were invented in Japan and the US. Today, to our knowledge, all brushless DC motors for drones are made in China. Similarly, China dominates all steps in the processing and manufacture of NdFeB magnets, accounting for 92% of global NdFeB magnet and magnet alloy markets.

The missing middle of technology investment—insufficient funding for commercial production—is evident in each and every one of these failures, but the loss of expertise is an added dimension. For example, lithium polymer (LiPo) batteries are at the heart of every FPV drone. LiPo batteries use a solid or gel polymer electrolyte and achieve higher specific energy (energy per unit of weight)—a feature that is crucial for lightweight drones. Today, you would be hard-pressed to find a LiPo battery that was not manufactured in China. The experienced workforce behind these companies has contributed to learning curves that have led to a 97% drop in the cost of lithium-ion batteries and a simultaneous 300%-plus increase in battery energy density over the past three decades.

China’s dominance in LiPo batteries for drones reflects its overall dominance in Li-ion manufacturing. China controls approximately 75% of global lithium-ion capacity—the anode, cathode, electrolyte, and separator subcomponents as well as the assembly into a single unit. It dominates the manufacture of each of these subcomponents, producing over 85% of anodes and over 70% of cathodes, electrolytes, and separators. China also controls the extraction and refinement of minerals needed to make these subcomponents.

Again, this dominance was not inevitable. Most of the critical breakthroughs needed to invent and commercialize Li-ion batteries were made by scientists in North America and Japan. But in comparison to the US and Europe (at least until very recently), China has taken a proactive stance to coordinate, support, and co-invest with strategic industries to commercialize emerging technologies. China’s Ministry of Industry and Information Technology has been at pains to support these domestic industries.

The case of Li-ion batteries is not an isolated one. The shift to Chinese dominance in the underlying electronics for FPV drones coincides with the period beginning in 2000, when Shenzhen started to emerge as a global hub for low-cost electronics. This trend was amplified by US corporations from Apple, for which low-cost production in China has been essential, to General Electric, which also sought low-cost approaches to maintain the competitive edge of its products. The global nature of supply chains was seen as a strength for US companies, whose comparative advantage lay in the design and integration of consumer products (such as smartphones) with little or no relevance for national security. Only a small handful of “exquisite systems” essential for military purposes were carefully developed within the US. And even those have relied upon global supply chains.

While the absence of the high-tech industrial capacity needed for economic security is easy to label, it is not simple to address. Doing so requires several interrelated elements, among them designing and incentivizing appropriate capital investments, creating and matching demand for a talented technology workforce, building robust industrial infrastructure, ensuring visibility into supply chains, and providing favorable financial and regulatory environments for on- and friend-shoring of production. This is a project that cannot be done by the public or the private sector alone. Nor is the US likely to accomplish it absent carefully crafted shared partnerships with allies and partners across both the Atlantic and the Pacific.

The opportunity to support today’s drones may have passed, but we do have the chance to build a strong industrial base to support tomorrow’s most critical technologies—not simply the eye-catching finished assemblies of autonomous vehicles, satellites, or robots but also their essential components. This will require attention to our manufacturing capabilities, our supply chains, and the materials that are the essential inputs. Alongside a shift in emphasis to our own domestic industrial base must come a willingness to plan and partner more effectively with allies and partners.

If we do so, we will transform decades of US and allied support for foundational science and technology into tomorrow’s industrial base vital for economic prosperity and national security. But to truly take advantage of this opportunity, we need to value and support our shared, long-term economic security. And this means rewarding patient investment in projects that take a decade or more, incentivizing high-capital industrial activity, and maintaining a determined focus on education and workforce development—all within a flexible regulatory framework.

Edlyn V. Levine is CEO and co-founder of a stealth-mode technology startup and an affiliate at the MIT Sloan School of Management and the Department of Physics at Harvard University. Levine was co-founder and CSO of America’s Frontier Fund and formerly Chief Technologist for the MITRE Corporation.

Fiona Murray is the William Porter (1967) Professor of Entrepreneurship at the MIT Sloan School of Management, where she works at the intersection of critical technologies, entrepreneurship, and geopolitics. She is the Vice Chair of the NATO Innovation Fund—a multi-sovereign venture fund for defense, security, and resilience—and served for a decade on the UK Prime Minister’s Council on Science and Technology.

The US physics community is not done working on trust

In April 2024, Nature released detailed information about investigations into claims made by Ranga Dias, a physicist at the University of Rochester, in two high-profile papers the journal had published about the discovery of room-temperature superconductivity. Those two papers, which showed evidence of fabricated data, were eventually retracted, along with other papers from the Dias group on related physics, including one in Physical Review Letters.

This work made it into top journals because reviewers are used to being able to trust that data have not been so completely manipulated, and Dias’s experiments required very high pressures that other labs could not easily replicate. One natural reaction from the physics community would be “How could we ever have let this happen?” But another should be “Here we go again!” 

Alas, a pattern of similar behavior has been known for at least two decades. The history of such deceptions led the American Physical Society (APS) to study occurrences of fabrication, falsification, plagiarism, and harassment, and to create structures to address the issue. The APS work helped solidify community standards, but ethical violations are still a critical problem. 

Back in 2003, in response to two high-profile cases of premeditated fraud in physics, one of them remarkably similar to the cases being discussed now, the APS created a Task Force on Ethics. It conducted surveys to learn about the kind of ethics training physics researchers receive, and to determine the community’s awareness of a variety of ethics issues. The most compelling responses came from a survey of APS “junior members” (those who had earned their PhD in the previous three years). Approximately 50% of these members responded, illustrating tremendous concern about a number of ethics violations they had either observed or been forced to participate in. A 2004 Physics Today article that presented the survey data showed the types of ethics violations reported, including instances of data fabrication, fraud, and plagiarism (the federal definition of research misconduct). It also brought to light serious accusations of bullying and sexual harassment. The survey data revealed that ethics education was casual at best. 

Following the publication of the survey results and many discussions within the physics community, the APS issued an ethics statement focused on respectful treatment of subordinates. It also charged a task force with improving resources for ethics education, resulting in a collection of physics-centric case studies to facilitate training and discussion on ethical matters. And together with the scientific community, the APS’s journals established an explicit focus on publication ethics. 

In 2018 the APS updated and consolidated its ethics statements and expanded the scope of ethical misbehaviors to include harassment, sexual misconduct, conflicts of commitment, and misuse of public funds. The resulting Ethics Guidelines were adopted by the APS Council in 2019, and at the same time a standing Ethics Committee was established to monitor ethics issues in the physics community. Continuing its focus on education, the APS collaborated with the American Association of Physics Teachers (AAPT) to develop additional materials. The online guide Effective Practices for Physics Programs (known as EP3) is an excellent resource, designed to facilitate efforts by departments and other groups to educate our community through discussions. We particularly recommend the chapter titled “Guide to Ethics.” The APS has joined the Committee on Publication Ethics and the International Association of Scientific, Technical, and Medical Publishers to combat the threat posed by paper mills.

What sort of impact have these actions had? In 2020, the APS Ethics Committee, in partnership with the Statistical Research Center of the American Institute of Physics, conducted two additional surveys, described in 2023 and 2024 articles in Physics Today. One targeted early-career members (those who had earned their PhD within the previous five years) and graduate students for comparison to the 2004 survey results, and the other focused on physics department chairs in the US. The surveys showed that ethics education in physics departments had improved in the intervening 15 years, but that bullying and sexual harassment were still problems for a number of members. Importantly, most cases of ethical violations experienced or observed by this group go unreported, for fear of inaction or reprisals. When the results of the two surveys were compared, clear differences emerged between the perspectives of department chairs and those of students and postdocs on the extent of ethical violations and the best way to deliver ethics training.

These surveys showed that improved education alone is not enough to sustain a culture of ethics in physics. They uncovered suggestive patterns to explain why some complaints about ethical violations are reported and resolved but most are not. The main reason young scientists keep quiet about fabrication, falsification, plagiarism, or harassment is that they fear complaints will destroy their careers while the perpetrators go untouched. In cases that were resolved, there were people that those with complaints trusted well enough to share their concerns, and those people in turn had enough power and connections to follow through and find a resolution. We call this a trust network. Key figures in a trust network could be an associate chair, an ombudsperson, or a faculty member. These people take it on themselves to listen to concerns, whoever raises them, and bring them to the institution’s attention. Indeed, similar networks would be highly valuable in any institution that employs professional scientists for research and development, since unethical behavior can happen anywhere. How to create and nurture such networks is a matter that needs more attention. 

Just as reviewers and journal editors need to be able to trust that data in a paper are not fabricated or falsified, all participants in the scientific enterprise need to be able to trust that their institutions fully support them as ethical people. Ranga Dias’s graduate students had worries about data quality early on but were caught in a power dynamic. Problems might have been recognized earlier if the students had been able to be fully engaged in the institutional response.

Fostering trust networks and continuing to use education to build an understanding of all the nuances involved in ethical decision-making are powerful tools to reinforce ethical behavior. We need to ingrain them as deeply as technical expertise.

Frances Houle is a senior scientist in the Chemical Sciences and Molecular Biophysics and Integrated Bioimaging Divisions at Lawrence Berkeley National Laboratory and was chair of the APS Ethics Committee in 2021. 

Kate Kirby is chief executive officer emerita of the APS and senior physicist (retired) and former associate director of the Harvard-Smithsonian Center for Astrophysics.

Laura Greene is the chief scientist of the National High Magnetic Field Laboratory, the Marie Krafft Professor of Physics at Florida State University, and the 2017 APS president. She presently serves on the President’s Council of Advisors on Science and Technology. 

Michael Marder is professor of physics, director of the Center for Nonlinear Dynamics, and executive director of UTeach at the University of Texas at Austin and was the founding chair of the APS Ethics Committee, serving in 2019 and 2020.

A controversial Chinese CRISPR scientist is still hopeful about embryo gene-editing. Here’s why.

This story first appeared in China Report, MIT Technology Review’s newsletter about technology in China. Sign up to receive it in your inbox every Tuesday.

Back in 2018, it was my colleague Antonio Regalado, senior editor for biomedicine, who broke the story that a Chinese scientist named He Jiankui had used CRISPR to edit the genes of live human embryos, leading to the first gene-edited babies in the world. The news made He (or JK, as he prefers to be called) a controversial figure across the world, and just a year later, he was sentenced to three years in prison after a Chinese court found him guilty of illegal medical practices.

Last Thursday, JK, who was released from prison in 2022, sat down with Antonio and Mat Honan, our editor in chief, for a live broadcast conversation on the experiment, his current situation, and his plans for the future.

If you subscribe to MIT Technology Review, you can watch a recording of the conversation or read the transcript here. But if you don’t yet subscribe (and do consider it—I’m biased, but it’s worth it), allow me to recap some of the highlights of what JK shared.

His life has been eventful since he came out of prison. JK sought to live in Hong Kong but was rejected by its government; he publicly declared he would set up a nonprofit lab in Beijing, but that hasn’t happened yet; he was hired to lead a genetic-medicine research institution at Wuchang University of Technology, a private university in Wuhan, but he seems to have been let go again. Now, according to Stat News, he has relocated to Hainan, China’s southernmost island province, and started a lab there.

During the MIT Technology Review conversation, JK confirmed that he’s currently in Hainan and working on using gene-editing technology to cure genetic diseases like Duchenne muscular dystrophy (DMD). 

He’s currently funded by private donations from Chinese and American companies, although he refused to name them. Some have even offered to pay him to travel to obscure countries with lax regulations to continue his previous work, but he turned them down. He would much prefer to return to academia to do research, JK said, but he can still conduct scientific research at a private company. 

For now, he’s planning to experiment only on mice, monkeys, and nonviable human embryos, JK said.

His experiment in 2018 inspired China to come out with regulations that explicitly forbid gene editing for reproductive uses. Today, implanting an edited embryo into a human is a crime subject to up to seven years in prison. JK repeatedly said all his current work will “comply with all the laws, regulations, and international ethics” but shied away from answering a question on what he thinks regulation around gene editing should look like.

However, he is hopeful that society will come around one day and accept embryo gene editing as a form of medical treatment. “As humans, we are always conservative. We are always worried about new things, and it takes time for people to accept new technology,” he said. He believes this lack of societal acceptance is the biggest obstacle to using CRISPR for embryo editing.

Other than DMD, another disease for which JK is currently working on gene-editing treatments is Alzheimer’s. And there’s a personal reason. “I decided to do Alzheimer’s disease because my mother has Alzheimer’s. So I’m going to have Alzheimer’s too, and maybe my daughter and my granddaughter. So I want to do something to change it,” JK said. He said his interest in embryo gene editing was never about trying to change human evolution, but about changing the lives of his family and the patients who have come to him for help.

His idea for Alzheimer’s treatment is to modify one letter in the human DNA sequence to simulate a natural mutation found in some Icelandic and Scandinavian people, which previous research found could be related to a lower chance of getting Alzheimer’s disease. JK said it would take only about two years to finish the basic research for this treatment, but he won’t go into human trials with the current regulations. 

He compares these gene-editing treatments to vaccines that everyone will be able to get easily in the future. “I would say in 50 years, like in 2074, embryo gene editing will be as common as IVF babies to prevent all the genetic diseases we know today. So the babies born at that time will be free of genetic disease,” he said. 

For all that he’s been through, JK seems pretty optimistic about the future of embryo gene editing. “I believe society will eventually accept that embryo gene editing is a good thing because it improves human health. So I’m waiting for society to accept that,” he said.

Do you agree with his vision of embryo gene editing as a universal medical treatment in the future? I’d love to hear your thoughts. Write to me at zeyi@technologyreview.com.


Now read the rest of China Report

Catch up with China

1. There’s a new buzz phrase in China’s latest national economy blueprint: “new productive forces.” It just means the country is still invested in technology-driven economic growth. (The Economist $)

2. For the first time ever, Chinese scientists found water in the form of hydrated minerals from lunar soil samples retrieved in 2020. (Sixth Tone)

3. In June, Chinese electric-vehicle brands accounted for 11% of the European EV market, reaching a new record. But tariffs that went into effect in July could stop that trend. (Bloomberg $)

4. Chinese companies are supplying precision parts for weapons to Russia through a Belarusian defense contractor. (Nikkei Asia $)

5. China is looking for international buyers for its first home-grown passenger jet, the C919. Airlines in Southeast Asian countries like Indonesia and Brunei are the most likely customers. (South China Morning Post $)

6. Hundreds of Temu suppliers protested at the headquarters of the company in Guangzhou. They said the platform is subjecting the suppliers to unfair penalties for consumer complaints. (Bloomberg $)

Lost in translation

Since Russia tightened its import regulations early this year, the once-lucrative business of smuggling Chinese electric vehicles has almost vanished, according to the Chinese publication Lifeweek. Previously, traders could leverage the high demand for Chinese EVs in Russia and the low tariffs in transit countries in Central Asia to reap huge profits. For example, one businessman earned 870,000 RMB (about $120,000) through one batch export of 12 cars in December.

But new policies in Russia drastically increased import duties and enforced stricter vehicle registration. Chinese carmakers like BYD and XPeng also saw the opportunity to set up licensed operations in Central Asia to cater to this market. These changes transformed a profitable business into a barely sustainable one, and traders have been forced to adapt or exit the market.

One more thing

To prevent drivers from falling asleep, some highways in China have installed laser equipment that lights up the night sky with red, blue, and green rays to attract attention and keep people awake. This looks straight out of a sci-fi novel but has been in use in over 10 Chinese provinces since 2022, according to the company that made the system.

The Download: ethics in physics, and talking to ChatGPT

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

The US physics community is not done working on trust

—Frances Houle, Kate Kirby, Laura Greene & Michael Marder

In April 2024, Nature released detailed information about investigations into claims made by Ranga Dias, a physicist at the University of Rochester, in two high-profile papers the journal had published about the discovery of room-temperature superconductivity. Those two papers, which showed evidence of fabricated data, were eventually retracted, along with other Dias papers. 

This work made it into top journals because reviewers are used to being able to trust that data have not been so completely manipulated, and Dias’s experiments required very high pressures that other labs could not easily replicate. One natural reaction from the physics community would be “How could we ever have let this happen?” But another should be “Here we go again!” 

Alas, a pattern of similar behavior has been known for at least two decades. But improved education alone is not enough to sustain a culture of ethics in physics. Here’s what we need to do as well.

OpenAI has released a new ChatGPT bot that you can talk to

The news: OpenAI is rolling out an advanced AI chatbot that you can talk to. It’s available now—at least for some. The new ChatGPT voice bot can tell what different tones of voice convey, respond to interruptions, and reply to queries in real time. It has also been trained to sound more natural and use voices to convey a wide range of different emotions.

Why it matters: The new chatbot represents OpenAI’s push into a new generation of AI-powered voice assistants in the vein of Siri and Alexa, but with far more capabilities to enable more natural, fluent conversations. It is a step in the march to more fully capable AI agents. Read the full story.

—Melissa Heikkilä

A controversial Chinese CRISPR scientist is still hopeful about embryo gene-editing. Here’s why.

Back in 2018, it was my colleague Antonio Regalado, senior editor for biomedicine, who broke the story that a Chinese scientist named He Jiankui had used CRISPR to edit the genes of live human embryos, leading to the first gene-edited babies in the world.

The news made JK (as he likes to be called) a controversial figure across the world, and just a year later, he was sentenced to three years in prison by the Chinese government, which deemed him guilty of illegal medical practices.

Last Thursday, JK, who was released from prison in 2022, sat down with Antonio and Mat Honan, our editor in chief, for a live broadcast conversation on the experiment, his current situation, and why he’s hopeful that society will come around one day and accept embryo gene editing as a form of medical treatment. Read the full story.

—Zeyi Yang

This story is from China Report, our weekly newsletter exploring technology in China. Sign up to receive it in your inbox every Tuesday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The US Senate has passed landmark online child safety bills 
They’re the first bills of their kind to be passed in two decades. (The Verge)
+ The legislation forces platforms to take ‘reasonable’ steps to protect children. (WP $)
+ But child online safety laws could actually hurt kids, critics say. (MIT Technology Review)

2 The US is clamping down on the export of chipmaking equipment
A new rule will restrict exports even further than they are currently. (Reuters)
+ Some major makers are dodging the ban, though. (Bloomberg $)

3 X suspended the account of a major Kamala Harris fundraiser
Its owner, Elon Musk, is a vocal supporter of rival candidate Donald Trump. (WP $)
+ How all in on crypto is Harris really? (NY Mag $)

4 Meta will pay Texas more than $1 billion
To settle claims it harvested residents’ biometric data without consent. (FT $)
+ The movement to limit face recognition tech might finally get a win. (MIT Technology Review)

5 Hollywood’s editors and artists are fearful of AI
They’re increasingly worried they’ll become electronic gig workers. (NYT $)
+ Why artists are becoming less scared of AI. (MIT Technology Review)

6 Goodbye to Meta’s celebrity chatbots
Turns out no one wanted to chat to an AI version of Snoop Dogg. (The Information $)
+ AI Studio, which makes customizable chatbots, is its new focus. (The Verge)

7 3D printers are experiencing a renaissance
Defense and energy companies rely on them to overcome shortages. (WSJ $)

8 Primitive cells look very different to 21st century cells
Biologists are working to understand why and how they changed. (New Scientist $)

9 How social media turned tinned fish into a must-have 🐟
Sardines, eels, whelks; you name it, it’s selling. (Economist $)

10 Managing our digital lives is a full time job
But do we really need to keep our old photos and messages? (The Guardian)

Quote of the day

“Tag somebody and ask them: Do you believe?”

—Kenyan preacher Jeffter Wekesa, who broadcasts nightly sermons live on social media, implores his congregation to connect, Rest of World reports.

The big story


Inside the messy ethics of making war with machines

August 2023

In recent years, intelligent autonomous weapons—weapons that can select and fire upon targets without any human input—have become a matter of serious concern. Giving an AI system the power to decide matters of life and death would radically change warfare forever.

Intelligent autonomous weapons that fully displace human decision-making have (likely) yet to see real-world use.

However, these systems have become sophisticated enough to raise novel questions—ones that are surprisingly tricky to answer. What does it mean when a decision is only part human and part machine? And when, if ever, is it ethical for that decision to be a decision to kill? Read the full story.

—Arthur Holland Michel

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ To mark Arnold Schwarzenegger’s 77th birthday this week, let’s take a trip down memory lane to appraise the Austrian Oak’s finest roles (number one is entirely correct).
+ This photo of Brazilian Olympic surfer Gabriel Medina is unbelievable.
+ The trailer for the forthcoming Bob Dylan biopic actually looks pretty good.
+ How is Purple Rain 40 years old?!

Reimagining cloud strategy for AI-first enterprises

The rise of generative artificial intelligence (AI), natural language processing, and computer vision has sparked lofty predictions: AI will revolutionize business operations, transform the nature of knowledge work, and boost companies’ bottom lines and the larger global economy by trillions of dollars.

Executives and technology leaders are eager to see these promises realized, and many are enjoying impressive results of early AI investments. Balakrishna D.R. (Bali), executive vice president, global services head, AI and industry verticals at Infosys, says that generative AI is already proving game-changing for tasks such as knowledge management, search and summarization, software development, and customer service across sectors such as financial services, retail, health care, and automotive.

Realizing AI’s full potential on a mass scale will require more than just executives’ enthusiasm; becoming a truly AI-first enterprise will require a significant, sustained investment in cloud infrastructure and strategy. In 2024, the cloud has evolved beyond its initial purpose as a storage tool and cost saver to become a crucial driver of innovation, transformation, and disruption. Now, with AI in the mix, enterprises are looking to the cloud to support large language models (LLMs) to maximize R&D performance and prevent cybersecurity attacks, among other high-impact use cases.

A 2023 report by Infosys looks at how prepared companies are to realize the combined potential of cloud and AI. To further assess this state of readiness, MIT Technology Review Insights and Infosys surveyed 500 business leaders across industries such as IT, manufacturing, financial services, and consumer goods about how their organizations are thinking about—and acting upon—an integrated cloud and AI strategy.

This research found that most companies are still experimenting and preparing their infrastructure landscape for AI from a cloud perspective—and many are planning additional investments to accelerate their progress.

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Charts: Global Entertainment and Media Trends

Global revenue of entertainment and media companies will grow at a 3.9% annual compound rate to reach $3.4 trillion by 2028. That’s according to PwC’s report, “Global Entertainment & Media Outlook 2024-28,” which covers 11 revenue segments across 53 countries and territories.

Per PwC, revenue from advertising and connectivity (i.e., internet access) is growing faster than consumer spending (purchases of games, events, apps). The study anticipates that advertising revenue will surpass $1 trillion by 2026 and double from 2020 levels by 2028.

In addition, the study shows growth in online ads, as revenue from internet advertising worldwide in 2028 will almost double that of 2021.

Driven mainly by Asia-Pacific users, the gaming industry continues to be one of the fastest-growing sectors, with revenue projected to exceed $300 billion by 2028.

Is Perplexity AI’s Revenue Share Plan Fair? via @sejournal, @martinibuster

AI-powered answer engine Perplexity AI announced a revenue-sharing plan with publishers whose content is referenced, but there are few details on how smaller publishers will benefit. Some in the digital marketing community are skeptical, suggesting that only the biggest and most powerful publishers will be paid.

Perplexity AI Revenue Share

Perplexity recently announced the establishment of a new program called the Perplexity Publishers Program that promises revenue sharing. Perplexity swung the doors open wide for six big-brand publishers, who will receive advance cash payments representing double-digit revenue percentage shares. But there were no details about what ordinary publishers who lack the clout to get invited will earn, or even how to join.

Short on details but long on promises, according to Perplexity:

“Revenue sharing: In the coming months, we’ll introduce advertising through our related questions feature. Brands can pay to ask specific related follow-up questions in our answer engine interface and on Pages. When Perplexity earns revenue from an interaction where a publisher’s content is referenced, that publisher will also earn a share.

We’re also excited to work with ScalePost.ai, a platform that streamlines collaborations between content publishers and AI companies and provides AI analytics for publishers. Our collaboration with them will enable our partners to gain deeper insights into how Perplexity cites their content.”

The six big brand entities who are receiving VIP invitations are:

  1. Der Spiegel
  2. Entrepreneur
  3. Fortune
  4. The Texas Tribune
  5. TIME
  6. WordPress.com

Is ScalePost.ai Legit?

There is an ad hoc feeling to Perplexity’s announcement, not just because it’s short on details, but because it’s made in partnership with a boutique advertising network whose website has only two pages: the home page and the “contact us” page. There isn’t even an About Us page or an office address listed.

Screenshot Of ScalePost.AI Home Page

The Internet Archive first captured the site only a few months ago, which makes the website younger than the condiments rolling around in most people’s refrigerators.

Screenshot Of ScalePost AI At Internet Archive

Despite all the typical signals that ScalePost is not a legit company, it actually is one.

The founders and senior advisors are associated with high-profile people like the ex-engineering director for Google Peter Norvig and executives from top big-brand publishers like Hearst, Condé Nast, Wired, and Fast Company. Those are people associated with the elite upper tier of publishers and technology companies, not known for championing the earnings of smaller publishers.

Agreement With WordPress

WordPress.com is a web publishing platform and web host owned by Automattic, and is not the same as the nonprofit WordPress.org, which produces the free content management system (CMS) that powers over 40% of the world’s websites.

Their announcement shared details about how the revenue sharing is triggered:

“Being part of Perplexity’s Publishing Partners Program means that knowledge from WordPress.com can now be included in the variety of answers that are served on Perplexity’s “Keep Exploring” section on their Discover pages. That means your articles will be included in their search index and your articles can be surfaced as an answer on their answer engine and Discover feed. If your website is referenced in a Perplexity search result where the company earns advertising revenue, you’ll be eligible for revenue share.”

WordPress.com announced that participation in the revenue-share program is on by default, but publishers who use the free tier of its platform can opt out if they don’t want to participate.

A spokesperson for WordPress.com clarified to Nieman Lab that VIP level publishers who pay to host on their premium tier will not be a part of the deal.

Nieman Lab quoted them as saying:

“Megan Fox, a spokesperson for Automattic, clarified the deal excludes publishers hosted on the premium WordPress VIP, including customers like NewsCorp. The deal also carves out an exception for smaller news outlets that use Newspack, a service for local news publishers hosted on WordPress.com, including CalMatters, Capital B, Reveal and Houston Landing.”

Matt Mullenweg, the founder of Automattic, had no specific details for publishers:

“We’ll share more details of how it works as this partnership evolves, including how we’ll be distributing revenue-share payments to those whose content qualifies.”

“…If you want to opt out, we already offer the ability to opt out of content sharing.”

Skepticism About Receiving Perplexity Revenue Share

Influential digital marketer Ryan Jones expressed doubt on X (formerly Twitter):

“Unpopular opinion: Unless you’re one of the top few thousand websites on the internet, LLMs or search engines are never going to pay you for your content.”

Ryan expressed the opinion that only big sites with large amounts of traffic will ever see payments.

Terry Van Horne agreed (and he wasn’t the only one):

“I’d say more like top 100…”

Is There Reason To Be Skeptical?

At this point, the arrangement between Perplexity AI and a brand-new advertising network is long on promise and shows no evidence of expertise or experience. Of course some people are skeptical; it might be abnormal not to be skeptical of the arrangement.

Featured Image by Shutterstock/Ljupco Smokovski

New Wix AI Tool Scales Content With Authenticity via @sejournal, @martinibuster

Wix announced a suite of AI tools that can automatically create topics and article outlines for blog posts that maintain quality and authenticity, helping businesses overcome an important hurdle to engaging and converting potential customers.

An Average Of 86% More Organic Traffic

An interesting insight shared by Wix is that websites with blogs cultivate, on average, 86% more organic traffic than sites without blogs. This new tool helps Wix users capitalize on that insight by making it easier to plan content, create outlines, draft a new article, and even create the entire article.

Publishing content at a steady pace is a key way to build an audience and increase organic traffic. A content plan or content calendar helps ensure that an organization stays on track to publish with the regularity necessary to successfully increase traffic to a site. Wix’s new tool automates the process of creating content in a flexible way that adjusts to the user’s needs.

Preserves Authenticity

Perhaps one of the most interesting features of this tool is that users can decide how much the AI is involved, preserving the authenticity, creativity, and insight that a human can provide: the AI comes up with topic ideas, outlines, and article drafts that can serve as a starting point. The suite of tools can even simplify the process of generating images for the articles.

Another feature of Wix’s new tool is that the process of creating content can be automated based on existing content, products, and upcoming events.

Wix shared the following features and benefits:

  • Versatile Content Creation: From ideation to creating full posts or outlines, users have a broad selection of content creation tools depending on their needs.
  • Extensive Customization: Select the outline tool for a suggestion on the structure paired with writing instructions, combining AI assistance with creative control. Users can fine-tune their content to resonate with their target audience, ensuring it meets their preferences and interests.
  • Titles, Image, and SEO Optimization Tools: Users can enhance blog titles, images, and existing text with AI-driven suggestions. Additionally, users can add the keywords they want to include for SEO, and they will be incorporated throughout the content.
  • Visual Content Integration: Images are included to make the blog content more visually appealing, intending to increase views and engagement. Users can describe what they want to create and choose their style and a unique image will be generated.
  • Access to Wix Business Solutions: The AI blog tools are completely integrated into the Wix platform, giving users access to connect their blogs to Wix business solutions. This allows for convenient features like sending promotional emails to subscribers with a single click, linking blog content to pricing plans, and much more.

Read more about Wix’s suite of AI-powered tools for streamlining the process of creating content and attracting more organic traffic:

Your always up-to-date guide to Wix’s AI tools

Featured Image by Shutterstock/Roman Samborskyi