The new word in home construction could be “plastics”

Single-use plastics are a persistent source of environmental pollution, and the need to house a growing global population puts increasing pressure on resources such as timber. MIT engineers have an idea that could make a dent in both problems at once.

In a recent study, a team led by mechanical engineering professor David Hardt, SM ’74, PhD ’79, and lecturer and research scientist AJ Perez ’13, MEng ’14, PhD ’23, laid out a plan for using recycled plastic to 3D-print construction-grade beams, trusses, and other structures that could one day offer lighter, more sustainable alternatives to traditional wood-based framing. Although some companies are working on using large-scale additive manufacturing to create walls, they’re mainly using concrete or clay, whose production typically has a large negative environmental impact. These engineers are among the first to explore printing structural framing elements—and to do so using recycled plastic.

The design they came up with is similar in shape to the traditional wooden trusses that support flooring, with beams that connect in a pattern resembling a ladder with diagonal rungs. To test it, they obtained pellets made of recycled PET polymers and glass fibers from an aerospace materials company and fed them into a room-size 3D printer as “ink.” When they printed four long trusses with this material and configured them into a conventional plywood-topped floor frame, the result had a load-bearing capacity of over 4,000 pounds, far exceeding key building standards set by the US Department of Housing and Urban Development.

The plastic-printed trusses weigh about 13 pounds each, light enough to transport without a flatbed truck. An industrial printer can crank one out in under 13 minutes. Crucially, the researchers are developing the process to work with “dirty” plastic that hasn’t been cleaned or preprocessed. In addition to floor trusses, they are working on printing other elements and combining them into a full frame for a modest-size house.

“We’ve estimated that the world needs about 1 billion new homes by 2050. If we try to make that many homes using wood, we would need to clear-cut the equivalent of the Amazon rainforest three times over,” says Perez. “The key here is: We recycle dirty plastic into building products for homes that are lighter, more durable, and sustainable.”

The researchers envision that one day, trash like used bottles and food containers could be sent directly into a shredder, turned into pellets, and fed into a large-scale additive manufacturing machine to become structural composite construction components. At the construction site, the elements could be quickly fitted into a lightweight yet sturdy home frame.

“The idea is to bring shipping containers close to where you know you’ll have a lot of plastic, like next to a football stadium,” Perez says. “Then you could use off-the-shelf shredding technology and feed that dirty shredded plastic into a large-scale additive manufacturing system, which could exist in micro-factories, just like bottling centers, around the world. You could print the parts for entire buildings that would be light enough to transport on a moped or pickup truck to where homes are most needed.” 

A natural protein may protect the GI tract from infection

Embedded in the body’s mucosal surfaces, proteins called lectins bind to sugars found on cell surfaces. A team led by MIT chemistry professor Laura Kiessling has found that one such protein, intelectin-2, both helps fortify the mucosal barrier and offers broad-spectrum protection against harmful bacteria found in the GI tract. 

Intelectin-2 binds to a sugar molecule called galactose that is found on bacterial membranes, the team found, trapping the bacteria and hindering their growth; the trapped microbes eventually disintegrate, suggesting that the protein is able to kill them by disrupting their cell membranes. It also helps strengthen the intestine’s protective lining by binding to the galactose in the mucins that make up mucus.

“What’s remarkable is that intelectin-2 operates in two complementary ways. It helps stabilize the mucus layer, and if that barrier is compromised, it can directly neutralize or restrain bacteria that begin to escape,” says Kiessling, who conducted the study with colleagues including Amanda Dugan, a former MIT postdoc and research scientist, and Deepsing Syangtan, PhD ’24.

Because intelectin-2 can neutralize or eliminate pathogens such as Staphylococcus aureus and Klebsiella pneumoniae, which are often difficult to treat with antibiotics, it could someday be adapted as an antimicrobial agent, the researchers say. Restoring desirable levels of intelectin-2 could also help people with disorders such as inflammatory bowel disease, who may have either too little of it (potentially weakening the mucus barrier) or too much (killing off beneficial gut bacteria).

“Harnessing human lectins as tools to combat antimicrobial resistance opens up a fundamentally new strategy that draws on our own innate immune defenses,” Kiessling says. “Taking advantage of proteins that the body already uses to protect itself against pathogens is compelling and a direction that we are pursuing.” 

This tool could show how consciousness works

How does the physical matter in our brains translate into thoughts, sensations, and emotions? It’s hard to explore that question without neurosurgery. But in a recent paper, MIT philosopher Matthias Michel, Lincoln Lab researcher Daniel Freeman, and colleagues outline a strategy for doing so with an emerging tool called transcranial focused ultrasound.

This noninvasive technology reaches deeper into the brain, with greater resolution, than techniques such as EEG and MRI. It works by sending acoustic waves through the skull to focus on an area of a few millimeters, allowing specific brain structures to be stimulated so the effects can be studied.

The researchers lay out an experimental approach that would use the tool to help test two competing conceptions of consciousness. The “cognitivist” concept holds that brain activity generating conscious experience must involve higher-level processes such as reasoning or self-reflection, likely using the frontal cortex. The “non-cognitivist” idea is that specific patterns of neural activity—more localized in subcortical structures or at the back of the cortex—give rise to subjective experiences directly.

“This is a tool that’s not just useful for medicine, or even basic science, but could also help address the hard problem of consciousness,” Freeman says. “It can probe where in the brain are the neural circuits that generate a sense of pain, a sense of vision, or even something as complex as human thought.” 

Early life may have breathed oxygen earlier than believed

Around 2.3 billion years ago, a pivotal period known as the Great Oxidation Event set the evolutionary course for oxygen-breathing life on Earth. But MIT geobiologists and colleagues have found evidence that some early forms of life evolved the ability to use oxygen hundreds of millions of years before that.

By mapping enzyme sequences from several thousand modern organisms onto an evolutionary tree of life, the researchers traced the origins of an enzyme that enables organisms to use oxygen to the Mesoarchean period, 3.2 to 2.8 billion years ago.

The team’s results may help explain a longstanding puzzle in Earth’s history: Given that the first oxygen-producing microbes likely emerged before the Mesoarchean, why didn’t oxygen build up in the atmosphere until hundreds of millions of years later? Having evolved the key enzyme, organisms living near those microbes, called cyanobacteria, may have gobbled up the small amounts of oxygen they produced.

“This does dramatically change the story of aerobic respiration,” says Fatima Husain, SM ’18, PhD ’25, a research scientist in MIT’s Department of Earth, Atmospheric, and Planetary Sciences (EAPS) and a coauthor with Gregory Fournier, an associate professor of geobiology, of a paper on the research. “It shows us how incredibly innovative life is at all periods in Earth’s history.” 

Analog computing from waste heat

Heat generated by electronic devices is usually a problem, but a team led by Giuseppe Romano, a research scientist at MIT’s Institute for Soldier Nanotechnologies, has found a way to use it for data processing that doesn’t rely on electricity.

In this analog computing method, input data is encoded not as binary 1s and 0s but as a set of temperatures based on the waste heat already present in a device. The flow and distribution of that heat through tiny silicon structures, designed by a physics-based optimization algorithm the team developed, forms the basis of the calculation. The output is then read as the power collected at the other end.

The researchers used these structures to perform a simple form of matrix-vector multiplication, the fundamental mathematical technique that machine-learning models like large language models use to process information and make predictions. The results were more than 99% accurate in many cases.
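To see what the heat-flow structures are computing, here is a minimal sketch of the same matrix-vector product in conventional code. This is our illustration, not the team's method: it assumes a simple linear steady-state heat-conduction model in which the geometry fixes an effective weight matrix, and all names and values are hypothetical.

```python
import numpy as np

# Hypothetical illustration: in a linear steady-state conduction model, the
# power at each output terminal is a weighted sum of the input temperatures.
# G plays the role of the weights, fixed by the optimized silicon geometry.
G = np.array([[0.8, 0.1, 0.1],
              [0.2, 0.6, 0.2]])      # effective conductances (set by geometry)
T = np.array([300.0, 310.0, 295.0])  # input data encoded as temperatures (K)

P_out = G @ T   # the matrix-vector product the device performs thermally
print(P_out)    # output read as power collected at the terminals
```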

The researchers still have to overcome many hurdles to scale up this computing method for modern deep-learning models, such as the challenges involved in tiling millions of these structures together. As the matrices become more complicated, the results also become less accurate, especially when there is a large distance between the input and output terminals. 

But the technique could also have a more immediate use: detecting problematic heat sources and measuring temperature changes in electronics without consuming extra energy. This would also eliminate the need for multiple temperature sensors that can currently take up space on a chip.

“Most of the time, when you are performing computations in an electronic device, heat is the waste product,” says Caio Silva, an undergraduate student in the Department of Physics and lead author of a paper on the work. “You often want to get rid of as much heat as you can. But here, we’ve taken the opposite approach by using heat as a form of information itself.” 

Get ready for hotter, muggier, stormier summers

A long stretch of humid heat followed by a powerful thunderstorm is a familiar weather pattern in the tropics, but it’s also becoming more common in midlatitude regions such as the US Midwest. A recent study by two MIT scientists identifies a key atmospheric condition that determines how hot, humid, and stormy such a region can get: inversions, in which a layer of warm air settles over cooler air.

Inversions were already known to act as an atmospheric blanket that traps pollutants at ground level. Now Funing Li, a postdoc in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS), and Talia Tamarin-Brodsky, an assistant professor in EAPS, have found that they also trap heat and moisture at the surface. The more persistent an inversion, the more heat and humidity a region can accumulate, which can lead to more oppressive, longer-lasting humid heat waves. And when an inversion eventually weakens, intense thunderstorms and heavy rainfall can be the result.

In typical conditions, the atmosphere’s layers get colder with altitude, and a heat wave that warms the air at ground level will trigger convection: The warmer, lighter air will rise, prompting colder air to sink. When the rising air reaches colder altitudes, its moisture condenses into droplets that fall as rain, often cooling things down.

Li and Tamarin-Brodsky found that when warm or light air has settled over colder or heavier ground-level air, more heat and moisture are needed for a given “parcel” of air to build up enough energy to rise through that inversion layer. The upper limit on how hot and humid it can get depends on how stable the inversion is. If a blanket of warm air parks over a region for a long time without moving, it allows more moisture and heat to build up, which also makes the eventual storm more intense when it finally happens.

Inversions often form at night, when surfaces that warmed during the day radiate heat to space so that the air in contact with them becomes cooler and denser than the air above. Or they can form when a shallow layer of cool marine air moves inland and slides beneath warmer air over the land. In some cases, however, persistent inversions can form when air heated over sun-warmed mountains is carried over low-lying regions. In the US, Li says, “the Great Plains and the Midwest have had many inversions historically due to the Rocky Mountains.” 

But global warming is likely to make the effect more pronounced. “Our analysis shows that the eastern and midwestern regions of the US and the eastern Asian regions may be new hot spots for humid heat,” he says.

“As the climate warms, theoretically the atmosphere will be able to hold more moisture,” says Tamarin-Brodsky. And because inversions will likely intensify, “new regions in the midlatitudes could experience moist heat waves that will cause stress that they weren’t used to before.”

She adds, “Our theory gives an understanding of the limit for humid heat and severe convection for these communities that will be future heat wave and thunderstorm hot spots.” 

AI at MIT

At MIT, AI has become so pervasive that you can almost find your way into it without meaning to. Take Sili Deng, an associate professor of mechanical engineering. Deng says she still doesn’t know whether she’d have gone all in on artificial intelligence had it not been for the covid pandemic. She had joined the faculty in 2019 and was in the process of setting up her lab to study combustion kinetics, emissions reduction, and flame synthesis of energy materials when covid hit, putting a halt to all lab renovations. Because she needed to start from scratch, she challenged herself and her postdocs to try machine learning “and see, with the fundamental knowledge we have on the combustion side, what are the gaps that we think machine learning could [fill].” Her Energy and Nanotechnology Group used AI to develop a “digital twin”—a digital replica of a physical system—that mirrors the performance of an energy/flow device. Eventually, this model should be able to predict and control the workings of fuel combustion systems in real time. 

Unlike Deng, who came to AI through the slings and arrows of outrageous fortune, Zachary Cordero, an associate professor of aero-astro, began using AI thanks to a colleague’s expertise. In 2024 John Hart, head of the Department of Mechanical Engineering, suggested that Cordero, who develops novel materials and structures for emerging aerospace applications, meet with Faez Ahmed, an associate professor of mechanical engineering and an expert in machine learning and optimization for engineering design. Cordero says he hadn’t previously been pursuing AI-related research: “This is all totally new to me.” Working with Ahmed and other collaborators on a project sponsored by the US Defense Advanced Research Projects Agency (DARPA), Cordero developed an AI tool that can optimize the material composition of what’s known as a blisk—a bladed disk that’s a key component in jet and rocket turbine engines. Their work aims to improve engine performance and longevity and could lead to more reliable reusable rocket engines for heavy-lift launch vehicles. Cordero says the AI system augmented human intuition—even “on problems where it’s almost impossible to have intuition.”  

Stories like these abound at MIT. In every department, in almost every lab on campus, AI technologies such as machine learning, large language models, and neural networks are transforming research—turbocharging existing methods, opening previously unexplored or inaccessible pathways, and creating novel opportunities in drug development, computing, energy technologies, manufacturing, robotics, neuroscience, metallurgy, and even wildlife preservation. “I cannot think of a single group meeting that we have where we’re not talking about these tools,” says Angela Koehler, the Charles W. and Jennifer C. Johnson Professor of Biological Engineering and faculty lead of the MIT Health and Life Sciences Collaborative (MIT HEALS). Her research group uses AI models to develop drug candidates designed to attach to molecular targets previously considered “undruggable,” such as transcription factors, RNA-binding proteins, or cytokines. “I would say 90% of the thesis committees I’m on involve a significant AI component,” she says. “And that definitely was not the case five years ago.”

“Artificial intelligence is everywhere on campus,” says Ian Waitz, MIT’s vice president for research and the Jerome C. Hunsaker Professor of Aero-Astro. “Any field with a tremendous amount of complexity will benefit from it. Life sciences. Materials science. Anyone who does any kind of image analysis uses these tools now. I don’t know of a single research field here at MIT that hasn’t been impacted by AI.”

AI isn’t exactly new at MIT

Though Deng and Cordero may have come to it through happenstance or clever matchmaking, most developments in AI at MIT don’t arise that way. Nor is the interest in it new. More than 70 years ago, in 1954, computer researcher Belmont G. Farley and physicist Wesley A. Clark ran the world’s first computer simulation of a neural network at MIT. Interest in neural network technology—now better known as deep learning—waxed and waned over the next decades. Ju Li, PhD ’00, the Carl Richard Soderberg Professor of Power Engineering (as well as a professor of nuclear science and engineering and materials science and engineering), remembers taking a course on neural networks during Independent Activities Period (IAP) in 1995, when he was a graduate student. “It was not a deep network—just a few layers,” recalls Li, who researches materials used in nuclear energy, batteries, electrolyzers, and energy-efficient computing. He characterizes it as essentially a regression tool that they used to fit curves.

But over the past few years, activity in AI has exploded globally, fueled by powerful new models and an enormous increase in the computing power of chips; the resulting proliferation and evolution of data centers has in turn sparked more activity. Today, neural networks can have more than a thousand layers. Backed by massive investments in AI in both the public and private spheres, AI researchers have created a suite of tools that can scan almost immeasurable quantities and types of data; interface with sensors, robotics, and other mechanical devices; and communicate with human researchers in natural language. 

Regina Barzilay. Image: Rachel Wu via MIT News Office.

Regina Barzilay has been working on AI since she came to MIT in 2003. Today, she’s the School of Engineering Distinguished Professor for AI and Health and AI faculty lead of the MIT Abdul Latif Jameel Clinic for Machine Learning in Health. But she says that if anyone had told her even 10 years ago where the field would be now and what kinds of things she’d be working on, she “absolutely” wouldn’t have believed it.

AI applications for drug discovery and development, one of Barzilay’s areas of expertise, have been particularly prolific and successful at MIT. Giovanni Traverso’s lab, for example, has used AI to design nanoparticles that can deliver RNA vaccines and other therapies more efficiently than previous systems. Researchers at CSAIL (the Computer Science & Artificial Intelligence Laboratory, where Barzilay is a principal investigator) have used AI models to explain how a narrow-spectrum antibiotic specifically targets harmful microbes in people with Crohn’s disease. The Jameel Clinic has helped build models that can predict which flu vaccine will be most effective in a given year. “Many of the tools that we developed in the lab—they’re very broadly used in the pharmaceutical industry,” she explains. “And they’re really making significant impact.” She says there’s not even a question anymore about whether they make a difference. They’ve become standard tools because they work every day. 

One such tool is Boltz, an open-source AI model developed by a group at the Jameel Clinic and initially released in November 2024 as Boltz-1. Inspired by DeepMind’s AlphaFold2—a model that earned Demis Hassabis and John Jumper the 2024 Nobel Prize in chemistry—Boltz-1 helps scientists predict the 3D structures of proteins and other biological molecules. The Jameel Clinic researchers soon followed up with Boltz-2, which in addition to predicting molecular structure can also predict affinity—the strength with which a protein binds with a small molecule. Assays to measure affinity, a vital metric in drug development, are among the most important performed in biology and chemistry labs. 

In October 2025, the Jameel Clinic released its latest iteration, BoltzGen—a generative AI model capable of designing custom proteins that could bind with a wide range of biomolecular targets. Molecular binders already play important roles in fields including therapeutics, diagnostics, and biotechnology. BoltzGen is the first advanced, large-scale model that considers every single atom in the potential new protein and every atom in its target molecule, providing greater accuracy. 

Hannes Stärk, the fourth-year PhD student at CSAIL who built BoltzGen, says the model works because it actually learns—drawing inferences from the data it is trained with and then producing novel ideas inspired by that data. With machine learning, you want the model to generalize from the data you use to train it, says Stärk, who created BoltzGen over seven months, often working up to 12 hours a day. “Because otherwise,” he says, “your solution is already in your training data.” Stärk has also assembled a network of over 30 scientists both within and beyond MIT to explore the design and applications of molecular binders for use in drug development, metabolomics, and structural biology as well as in treating cancer, autoimmune diseases, and genetic diseases. “It’s nice to have one model that can do all of this,” he says. Training across all these areas also makes the model better at generalizing.

Beyond drug discovery

As labs working in drug development continue to reap benefits from AI, other researchers across the Institute are busy applying existing AI tools or, more often, developing their own models for use in myriad disciplines and applications. A cross-disciplinary group involving the Department of Electrical Engineering and Computer Science (EECS), CSAIL, and Mass General Hospital has launched MultiverSeg, a tool that quickly annotates areas of interest in medical images and could help scientists develop new treatments and map disease progression. MIT researchers are also designing and running AI-directed automated laboratories to accelerate and refine the process of discovering new components for sustainable materials and solar panels. And Ahmed’s MechE group is developing AI models to do such things as help automakers design high-performance vehicles or determine whether a large shipping vessel can be considered seaworthy. Ahmed also teaches a course titled AI and Machine Learning for Engineering Design. First offered in 2021, it attracts not only mechanical, civil, and environmental engineers but students from aero-astro, Sloan, and more. 

Sara Beery. Image: MIT Technology Review.

Meanwhile, Priya Donti, an assistant professor of EECS and a PI at the Laboratory for Information & Decision Systems (LIDS), has developed AI-enabled optimization approaches to help schedule power generation resources on power grids. The machine-learning tools her group builds will help utility operators respond to many inevitable grid issues. “The big challenge is that on a power grid, you need to maintain this exact balance between the amount of power you’re producing and putting into the grid and the amount that you’re taking out on the other side,” she explains. “When you have a lot of variation from solar, wind, and other sources of power whose output varies based on the weather, you have to coordinate the grid much more tightly in order to maintain that balance.” Information about the physics of how power grids work is embedded in Donti’s AI model, so it functions and reacts much as a real grid would.  
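The balance requirement Donti describes can be seen in miniature in a classic dispatch problem. The sketch below is our simplified illustration, not her group's model: the costs, capacity limits, and demand figure are hypothetical, and real grid scheduling adds network physics, ramping limits, and weather uncertainty. The key feature is the equality constraint forcing total generation to match demand exactly.

```python
from scipy.optimize import linprog

# Toy economic dispatch: choose generator outputs that minimize cost while an
# equality constraint enforces the exact supply-demand balance described above.
costs = [20.0, 35.0, 50.0]               # $/MWh for three generators (hypothetical)
bounds = [(0, 100), (0, 80), (0, 60)]    # MW capacity limits (hypothetical)
demand = 150.0                           # MW of load that must be matched exactly

result = linprog(c=costs,
                 A_eq=[[1.0, 1.0, 1.0]], # total generation ...
                 b_eq=[demand],          # ... must equal demand
                 bounds=bounds)
print(result.x)                          # [100., 50., 0.]: cheapest units first
```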

MIT researchers are even applying AI tools to explore and analyze the natural world. Sara Beery, an assistant professor of EECS who specializes in AI and decision-making, develops AI methods that discover and dig into ecological data collected by a wide range of remote sensing technologies to analyze and predict how species and ecosystems are changing around the globe. These technologies enable Beery and her colleagues to gather data on a far greater number of endangered species than ever before, and at an unprecedented scale. Historically, most ecological research has focused on collecting incredibly rich data about single species in really small regions, she says, but “we’ve realized that’s not sufficient.” Information gleaned from, say, a small part of one river ecosystem will not help us understand or prevent what she calls “the exponential increase in species extinction rates that we’re currently facing.” Already, Beery says, “we’re using multimodal AI to enable experts to quickly search massive repositories of image data, to discover data points that were previously very difficult to find.” But she says the goal is to be able to readily tap into diverse types of raw data—from satellite and bioacoustic sensor data to camera images and DNA—and “actually turn that into some sort of scientific insight, something that helps us understand what is putting species at risk.” 

Mens et manus in AI

While some MIT researchers have successfully used AI to help invent technologies ranging from novel cancer therapies to safer high-performance automobiles, others are also using machine learning and other AI tools to help determine whether these technologies perform as promised—or can be produced successfully and economically at scale. Connor Coley, SM ’16, PhD ’19, an associate professor of chemical engineering and EECS, designs new molecules—and recipes for making new molecules, primarily small organic molecules—for potential use by pharmaceutical, agricultural, and other chemical companies. Coley, a former MIT Technology Review Innovators Under 35 honoree, has developed a “genetic” algorithm that uses biologically inspired processes including selection and mutation. This tool encodes potential polymer blends drawn from a large database of polymers into what is effectively a digital chromosome, which the algorithm then improves to generate the most promising material combinations.
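As a rough illustration of the loop described above (our sketch, not Coley's code), a genetic algorithm over blend "chromosomes" fits in a few lines. The encoding, library size, and fitness score are hypothetical stand-ins; in a real system, fitness would come from property prediction or physical testing.

```python
import random

LIBRARY = 500   # hypothetical size of the polymer database
N_PARTS = 3     # polymers per blend

def random_blend():
    """A 'chromosome': component polymer IDs plus normalized mixing fractions."""
    ids = random.sample(range(LIBRARY), N_PARTS)
    fracs = [random.random() for _ in ids]
    total = sum(fracs)
    return [(i, f / total) for i, f in zip(ids, fracs)]

def fitness(blend):
    # Stand-in score; a real system would predict or measure a blend property,
    # such as the conductivity of the resulting electrolyte.
    return -sum((f - 1.0 / N_PARTS) ** 2 for _, f in blend)

def mutate(blend):
    child = list(blend)
    k = random.randrange(N_PARTS)
    child[k] = (random.randrange(LIBRARY), child[k][1])  # swap one component
    return child

population = [random_blend() for _ in range(50)]
for _ in range(100):
    population.sort(key=fitness, reverse=True)
    elite = population[:10]                              # selection
    population = elite + [mutate(random.choice(elite)) for _ in range(40)]

print(max(population, key=fitness))  # most promising blend found
```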

Working at the intersection of chemistry and computer science, Coley believes AI could one day help his lab discover polymer blends that would lead to improved battery electrolytes and tailored nanoparticles for safer drug delivery. He and his lab also work to develop machine-learning tools that streamline the discovery and production processes. “If you want AI to be the brain behind some of the science you’re doing, you need the hands as well,” says Coley, who was one of the first MIT faculty members hired into the MIT Schwarzman College of Computing. He and his group have coupled a robotic liquid-handling platform with an optimization algorithm. In the project designed to look for optimal polymer blends, the autonomous system not only chooses which polymer solutions to test but also performs the physical testing. The system, which can generate and test 700 new polymer blends in a day, has identified one that performed 18% better than any of its components.

Systems with a similar level of autonomy could also have a big impact on early-stage drug discovery. One effect, he observes, should be to reduce the time it takes to advance a drug from the lab into clinical trials. But the real question, he says, is “What might we be able to do that we just couldn’t do with any reasonable amount of resources previously?” 

Alexander Siemenn, PhD ’25, also uses AI both to search for new materials and to control robots that test the physical properties of those materials. For his doctoral thesis, Siemenn built from scratch a fully autonomous AI-driven robotic laboratory to discover and test sustainable high-performance materials for solar panels. The system incorporates computer vision, machine learning, and an optimization algorithm and runs 24 hours a day.  

“We are pairing conventional methods [of measurement] that have been almost entirely manual to this point with the AI methods,” says Siemenn. “The goal is to be able to not just improve their accuracy but also make them fast and autonomous.” 

Hits and near misses

Institute labs are also encountering some of the first real limits of the brave new AI-enhanced world. Many researchers at MIT and elsewhere agree that most of the “low-hanging fruit” has already been picked. That includes AI’s contributions to managing massive data sets and accelerating existing discovery and testing processes, at times to near light speed. Beyond those immediate gains, though, results vary—even in drug development, which has seen some of AI’s most spectacular achievements.

“There are some areas where you would assume we should be doing much better here and we are not,” observes Barzilay. “The reason we cannot cure neurodegenerative diseases like Alzheimer’s or very advanced cancer is because we don’t really understand fully—on the molecular level—the disease itself, the drivers, and how to control it.” And AI still hasn’t made what she calls “a significant transformation” in terms of understanding those underlying disease mechanisms. “There are some helper tools,” she says, but AI hasn’t provided a profoundly new understanding of any disease—“So this is a place that we would hope to see more.”

Rafael Gómez-Bombarelli. Image: MIT Technology Review.

Limits in materials science are also emerging, particularly in translating digital solutions proposed by AI into objects made of atoms and molecules. Rafael Gómez-Bombarelli, an associate professor of materials science and engineering, develops physics-based machine-learning simulations to accelerate the discovery cycle for sustainable polymers and materials for use in energy, health care, and batteries. While physics-based simulations in themselves have been an unmitigated success, he says, results have been spottier when it comes to manufacturing the materials themselves; many of the solutions generated by these simulators fail in the physical world. “It turns out these simulators don’t capture lots of things that are important,” he says. “They operate on the atomically resolved problems for nanosecond-timescale questions. But many, many [materials] problems don’t happen in nanoseconds, don’t involve just a few tens of thousands of atoms.” And they often involve physics more complicated than current AI models account for. What’s more, when the goal might be to produce millions of tons of a new material, scaling errors can be disastrous. “In AI, scaling is synergistic and good,” Gómez-Bombarelli says. “In chemistry and materials, scaling is kind of a scary beast that you need to beat in order to make an impact.”

New methods, new insights

While AI has already produced myriad results and surprises, researchers at MIT believe much of its potential is still waiting to be discovered. And they are eager to search for high-impact applications. Ila Fiete, a professor of brain and cognitive sciences, builds AI tools and mathematical models to expand our knowledge of how the brain develops and reshapes its neural connections. Her work, she believes, can help us understand how we form memories or perceive ourselves in space—and that, in turn, can lead to improvements in AI. Many features of AI, including parallel computing in neural networks, were inspired by the human brain. “AI has [helped] and will continue to help us do more science and better science,” she says. “But neuroscientists believe there is a lot about how humans and other biological intelligences learn and solve problems that is better in some dimensions than current AI models. And by learning better how that works, we can actually inform better AI architectures.”

Li agrees that certain elements of human intelligence and learning could benefit AI and help it solve some of our world’s most pressing and complex problems, including global poverty and climate change. “Large language models today have read tens of millions of papers and books,” he says, adding that they are “much more interdisciplinary than any of us.” Yet he notes that scientific literature is strongly biased toward success. “The day-to-day experience in the lab is 95% frustration, and I think it’s the failure cases which build character,” he says. He posits that if AI is given autonomy to do experiments, to try different things and fail and learn from that, it could evolve into something very similar to human intelligence.

Researchers at MIT believe that as AI continues to evolve, expand, and proliferate, the Institute has a special duty to channel these technologies toward useful, attainable ends. “Right now, in the AI world there is a lot of hype and fluff,” says Ahmed, who is developing generative AI tools to help tackle complex engineering and design problems. “The digital world is overflowing with stuff,” he says, and there’s a lot happening on the AI front with images, text, and video. “But the physical world is still less affected, and we are seeing a lot more happening at the intersection of physical and AI at MIT.”

AI’s future includes potential triumphs and potential pitfalls. Researchers still worry about “hallucinations”—results spit out of AI models that make no sense in the real world. They worry, as well, that some practitioners will rely too heavily on AI tools, omitting key insights and safeguards that keep an experiment or production facility on track. And they worry about overpromising—unrealistically presenting AI as a magical solution to all problems great and small. “It’s impossible to predict how good these models are going to get,” says MechE’s Hart. “Where they are going to shine and where they are going to be limited.” But instead of sensing danger, Hart sees opportunity, especially at MIT: “We have the learned expertise and experience that allows us to frame the right questions and use these tools in the right way.” The challenge for the MITs of the world, he says, is to figure out how to use AI tools to create faster, better solutions and navigate more complex problems than we ever could before. 

Inventor recalls eye imaging breakthrough

If you’ve been to an eye doctor and had an image taken of the inside of your eye, chances are good it was done with optical coherence tomography (OCT)—a technology invented by clinician-scientist David Huang ’85, SM ’89, PhD ’93, and now used in 40 million procedures per year. 

OCT is a noninvasive technique used to produce detailed images of complicated biological tissues such as the retina and the plaques that can build up in coronary arteries. It maps the time-of-flight of light waves reflected from tissue and paints a high-resolution picture of internal structures. 

“It uses infrared light that’s barely visible compared to the bright flash of fundus photography [another common method of eye imaging] and provides a lot more information—three-dimensional rather than two-dimensional information—at higher resolution,” Huang says. The discovery earned him and his co-inventors slots in the National Inventors Hall of Fame in 2025 as well as the Lasker Award and the National Medals of Technology and Innovation in 2023.

Huang didn’t expect to change the paradigm of eye imaging when he began studying electrical engineering as an undergraduate at MIT, but he was interested in using an engineering mindset to contribute to medical advancements. That, he thought, could be his way to follow in the footsteps of his father, who was a family practitioner. 

OCT emerged from his work as an MD-PhD student in the Harvard-MIT Program in Health Sciences and Technology. While studying ultrafast lasers at MIT under James Fujimoto ’79, SM ’81, PhD ’84, the Elihu Thomson Professor of Electrical Engineering, Huang was tasked with using the lasers to improve various ophthalmological tasks, including measuring the thickness of the cornea and retina. 

Huang thought an approach known as interferometry, which could measure the time of flight down to one quadrillionth of a second, could improve thickness measurements to micrometer resolution. Huang’s experiments revealed that the technique was able to detect very faint signals arising from fine internal structures within the retina. Fujimoto and Huang realized the potential for inventing a new type of imaging and enlisted the help of Eric Swanson, SM ’84, who was using interferometry for intersatellite communications at Lincoln Laboratory, to develop an OCT machine for biological applications. Huang tested the new machine on several types of tissues accessed through Harvard Medical School and found it particularly successful in imaging retinal and coronary artery samples. He and his colleagues published their initial findings in Science in 1991, establishing OCT as a new imaging modality.
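A rough back-of-the-envelope calculation (ours, not from the original work) shows why femtosecond timing translates into micrometer depth resolution. Light covers about 0.3 micrometers in one quadrillionth of a second, and the factor of two accounts for the round trip to a reflecting structure and back:

\[
\Delta z = \frac{c\,\Delta t}{2} = \frac{(3\times 10^{8}\ \mathrm{m/s})(10^{-15}\ \mathrm{s})}{2} \approx 0.15\ \mu\mathrm{m}
\]

So resolving reflections that arrive a femtosecond apart distinguishes layers a fraction of a micrometer apart, comfortably within the micrometer resolution Huang was after.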

“Because of our ability to form collaborations with medical doctors and the more advanced technologies that were easily accessible at Lincoln Lab and MIT, we were able to make this new imaging technology take off when other people who were exploring around the same area were not able to demonstrate imaging results,” he says. 

After the groundbreaking invention, Huang finished his academic and medical training as an ophthalmologist while Fujimoto and Swanson formed a startup company to ensure that the device got into medical offices. 

In the decades since, Huang has continued to refine OCT for various applications. Today, as the director of research at Oregon Health and Science University’s Casey Eye Institute, he leads research groups exploring new ways to use OCT in techniques such as OCT angiography (imaging blood flow down to the capillary level) and OCT optoretinography (mapping the light response in retinal photoreceptor cells). 

In addition to conducting research, he also sees patients and is the cofounder of GoCheck Kids, a digital platform for pediatric eye screening. 

Huang credits his knack for innovation to his position at the nexus of diverse fields. “It’s hard for a pure medical doctor or a pure laser engineer to realize that there is an opportunity to invent a new device that solves a real problem in the clinic,” he says. “But it’s really easy when you have knowledge on both sides.” 

Roundtables: Unveiling The 10 Things That Matter in AI Right Now

Watch a special edition of Roundtables simulcast live from EmTech AI, MIT Technology Review’s signature conference for AI leadership. Subscribers got an exclusive first look at a new list capturing 10 key technologies, emerging trends, bold ideas, and powerful movements in AI that you need to know about in 2026.

Speakers: Grace Huckins, AI reporter, hosted the session; executive editors Amy Nordrum and Niall Firth unveiled the list onstage.

Recorded on April 21, 2026

B2B Ecommerce Powers African Retail

Consumer-focused ecommerce in Africa faces the challenge of high customer acquisition costs and complex residential delivery.

Yet in Sub-Saharan Africa, approximately 90% of consumer spending remains anchored in physical retail: mom-and-pop shops, neighborhood kiosks, and market stalls.

Consequently, ecommerce is shifting toward B2B distributors that serve these retailers directly. These platforms are moving beyond delivery apps into core supply chain infrastructure, taking on inventory sourcing and trade credit.

Retail Aggregation

Nigeria-based TradeDepot is a prominent B2B distributor. Image: TradeDepot.

In many African cities — Lagos, Nairobi, Cairo — consumers make frequent, small-value, in-person purchases at neighborhood shops. Supplying these retailers in bulk lowers overall restock costs.

In Lagos, for instance, where gridlock can reduce a B2C courier’s daily capacity, a B2B truck delivering to a concentrated retail node can move five times the volume in a single trip.

For example, Nigeria’s TradeDepot uses a sophisticated pre-selling model in which its fleet moves only across a specific cluster of shops. This ensures that every truck leaving the warehouse has a guaranteed high-density route.

Working Capital

Banks struggle to lend to small physical shops owing to little visibility into daily cash flow and inventory turnover. B2B distributors, by contrast, capture this data with every SKU delivered.

With this visibility, distributors can offer revolving inventory credit themselves.

For example, B2B distributors MaxAB and Wasoko (merged in 2024) collectively serve over 450,000 African merchants. In Egypt, the merged company’s finance arm generates over $180 million in annual turnover, outpacing its core ecommerce division. With repayment rates reportedly above 99%, the distributor becomes the acquisition channel, and working capital becomes the product.

Visibility

B2B distributors are changing how demand is understood and controlled.

Historically, fast-moving consumer goods brands sold into wholesale networks and lost granular visibility once products left the warehouse.

Distributors such as MaxAB-Wasoko provide SKU-level visibility at the point of retail. A brand manager at, say, Unilever or Nestlé can now see what is selling and where.

This real-time data enables brands to adjust pricing and allocate inventory precisely, bypassing the friction typically absorbed by intermediaries.

For Brands

  • Prioritize infrastructure. Focus on distributors that own the last mile — the distance between the warehouse and the retail shelf.
  • Look beyond the goods. Physical goods are primarily a vehicle for data acquisition in a 2-5% margin environment. Long-term profitability may lie in embedded finance.
  • Solve for continuity. A small-shop retailer’s primary threat is the stock-out. Brands that guarantee inventory availability will win over those competing on price alone.