We finally have a definition for open-source AI

Open-source AI is everywhere right now. The problem is, no one agrees on what it actually is. Now we may finally have an answer. The Open Source Initiative (OSI), the self-appointed arbiter of what it means to be open source, has released a new definition, which it hopes will help lawmakers develop regulations to protect consumers from AI risks. 

Though OSI has published much about what constitutes open-source technology in other fields, this marks its first attempt to define the term for AI models. It asked a 70-person group of researchers, lawyers, policymakers, and activists, as well as representatives from big tech companies like Meta, Google, and Amazon, to come up with the working definition. 

According to the group, an open-source AI system can be used for any purpose without securing permission, and researchers should be able to inspect its components and study how the system works.

It should also be possible to modify the system for any purpose—including to change its output—and to share it with others to use, with or without modifications, for any purpose. In addition, the standard attempts to define a level of transparency for a given model’s training data, source code, and weights. 

The previous lack of an open-source standard presented a problem. Although we know that the decisions of OpenAI and Anthropic to keep their models, data sets, and algorithms secret make their AI closed source, some experts argue that Meta and Google’s freely accessible models, which are open to anyone to inspect and adapt, aren’t truly open source either, because of licenses that restrict what users can do with the models and because the training data sets aren’t made public. Meta, Google, and OpenAI were contacted for their responses to the new definition but did not reply before publication.

“Companies have been known to misuse the term when marketing their models,” says Avijit Ghosh, an applied policy researcher at Hugging Face, a platform for building and sharing AI models. Describing models as open source may cause them to be perceived as more trustworthy, even if researchers aren’t able to independently investigate whether they really are open source.

Ayah Bdeir, a senior advisor to Mozilla and a participant in OSI’s process, says certain parts of the open-source definition were relatively easy to agree upon, including the need to reveal model weights (the parameters that help determine how an AI model generates an output). Other parts of the deliberations were more contentious, particularly the question of how public training data should be.

The lack of transparency about where training data comes from has led to innumerable lawsuits against big AI companies, from makers of large language models like OpenAI to music generators like Suno, which do not disclose much about their training sets beyond saying they contain “publicly accessible information.” In response, some advocates say that open-source models should disclose all their training sets, a standard that Bdeir says would be difficult to enforce because of issues like copyright and data ownership. 

Ultimately, the new definition requires that open-source models provide information about the training data to the extent that “a skilled person can recreate a substantially equivalent system using the same or similar data.” It’s not a blanket requirement to share all training data sets, but it also goes further than what many proprietary models or even ostensibly open-source models do today. It’s a compromise.

“Insisting on an ideologically pristine kind of gold standard that actually will not effectively be met by anybody ends up backfiring,” Bdeir says. She adds that OSI is planning some sort of enforcement mechanism, which will flag models that are described as open source but do not meet its definition. It also plans to release a list of AI models that do meet the new definition. Though none are confirmed, the handful of models that Bdeir told MIT Technology Review are expected to land on the list are relatively small names, including Pythia by Eleuther, OLMo by Ai2, and models by the open-source collective LLM360.

AI could be a game changer for people with disabilities

As a lifelong disabled person who constantly copes with multiple conditions, I have a natural tendency to view emerging technologies with skepticism. Most new things are built for the majority of people—in this case, people without disabilities—and the truth of the matter is there’s no guarantee I’ll have access to them.

There are certainly exceptions to the rule. A prime example is the iPhone. Although discrete accessibility software did not appear until the device’s third-generation model, in 2009, earlier generations were still revolutionary for me. After I’d spent years using flip phones with postage-stamp-size screens and hard-to-press buttons, the fact that the original iPhone had a relatively large screen and a touch-based UI was accessibility unto itself. 

AI could make these kinds of jumps in accessibility more common across a wide range of technologies. But you probably haven’t heard much about that possibility. While the New York Times sues OpenAI over ChatGPT’s scraping of its content and everyone ruminates over the ethics of AI tools, there seems to be less consideration of the good ChatGPT can do for people of various abilities. For someone with visual and motor delays, using ChatGPT to do research can be a lifesaver. Instead of trying to manage a dozen browser tabs with Google searches and other pertinent information, you can have ChatGPT collate everything into one space. Likewise, it’s highly plausible that artists who can’t draw in the conventional manner could use voice prompts to have Midjourney or Adobe Firefly create what they’re thinking of. That might be the only way for such a person to indulge an artistic passion. 

Of course, data needs to be vetted for accuracy and gathered with permission—there are ample reasons to be wary of AI’s potential to serve up wrong or potentially harmful, ableist information about the disabled community. Still, it feels unappreciated (and underreported) that AI-based software can truly be an assistive technology, enabling people to do things they otherwise would be excluded from. AI could give a disabled person agency and autonomy. That’s the whole point of accessibility—freeing people in a society not designed for their needs.

The ability to automatically generate video captions and image descriptions provides additional examples of how automation can make computers and productivity technology more accessible. And more broadly, it’s hard not to be enthused about ever-burgeoning technologies like autonomous vehicles. Most tech journalists and other industry watchers are interested in self-driving cars for the sheer novelty, but the reality is the AI software behind vehicles like Waymo’s fleet of Jaguar SUVs is quite literally enabling many in the disability community to exert more agency over their transport. For those who, like me, are blind or have low vision, the ability to summon a ride on demand and go anywhere without imposing on anyone else for help is a huge deal. It’s not hard to envision a future in which, as the technology matures, autonomous vehicles are normalized to the point where blind people could buy their own cars. 

At the same time, AI is enabling serious advances in technology for people with limb differences. How exciting will it be, decades from now, to have synthetic arms and legs, hands or feet, that more or less function like the real things? Similarly, the team at Boston-based Tatum Robotics is combining hardware with AI to make communication more accessible for deaf-blind people: a robotic hand forms hand signs, or words in American Sign Language, that can be read tactilely against the palm. Like autonomous vehicles, these applications have enormous potential to positively influence the everyday lives of countless people. All this goes far beyond mere chatbots.

It should be noted that disabled people historically have been among the earliest adopters of new technologies. AI is no different, yet public discourse routinely fails to meaningfully account for this. After all, AI plays to a computer’s greatest strength: automation. As time marches on, the way AI grows and evolves will be unmistakably and indelibly shaped by disabled people and our myriad needs and tolerances. It will offer us more access to information, to productivity, and most important, to society writ large.

Steven Aquino is a freelance tech journalist covering accessibility and assistive technologies. He is based in San Francisco.

Tech that measures our brainwaves is 100 years old. How will we be using it 100 years from now?

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

This week, we’re acknowledging a special birthday. It’s 100 years since EEG (electroencephalography) was first used to measure electrical activity in a person’s brain. The finding was revolutionary. It helped people understand that epilepsy was a neurological disorder as opposed to a personality trait, for one thing (yes, really).

The fundamentals of EEG have not changed much over the last century—scientists and doctors still put electrodes on people’s heads to try to work out what’s going on inside their brains. But we’ve been able to do a lot more with the information that’s collected.

We’ve been able to use EEG to learn more about how we think, remember, and solve problems. EEG has been used to diagnose brain and hearing disorders, explore how conscious a person might be, and even allow people to control devices like computers, wheelchairs, and drones.

But an anniversary is a good time to think about the future. You might have noticed that my colleagues and I are currently celebrating 125 years of MIT Technology Review by pondering the technologies the next 125 years might bring. What will EEG allow us to do 100 years from now?

First, a quick overview of what EEG is and how it works. EEG involves placing electrodes on the top of someone’s head, collecting electrical signals from brainwaves, and feeding these to a computer for analysis. Today’s devices often resemble swimming caps. They’re very cheap compared with other types of brain imaging technologies, such as fMRI scanners, and they’re pretty small and portable.

The first person to use EEG in people was Hans Berger, a German psychiatrist who was fascinated by the idea of telepathy. Berger developed EEG as a tool to measure “psychic energy,” and he carried out his early research—much of it on his teenage son—in secret, says Faisal Mushtaq, a cognitive neuroscientist at the University of Leeds in the UK. Berger was, and remains, a controversial figure owing to his unclear links with the Nazi regime, Mushtaq tells me.

But EEG went on to take the neuroscience world by storm. It has become a staple of neuroscience labs, where it can be used on people of all ages, even newborns. Neuroscientists use EEG to explore how babies learn and think, and even what makes them laugh. In my own reporting, I’ve covered the use of EEG to understand the phenomenon of lucid dreaming, to reveal how our memories are filed away during sleep, and to allow people to turn on the TV by thought alone.   

EEG can also serve as a portal into the minds of people who are otherwise unable to communicate. It has been used to find signs of consciousness in people with unresponsive wakefulness syndrome (previously called a “vegetative state”). The technology has also allowed people paralyzed with amyotrophic lateral sclerosis (ALS) to communicate by thought and tell their family members they are happy.

So where do we go from here? Mushtaq, along with Pedro Valdes-Sosa at the University of Electronic Science and Technology of China in Chengdu and their colleagues, put the question to 500 people who work with EEG, including neuroscientists, clinical neurophysiologists, and brain surgeons. Specifically, with the help of ChatGPT, the team generated a list of predictions, ranging from the very likely to the somewhat fanciful. Each of the 500 survey respondents was asked to estimate when, if at all, each prediction might come to pass.

Some of the soonest breakthroughs will be in sleep analysis, according to the respondents. EEG is already used to diagnose and monitor sleep disorders—but this is set to become routine practice in the next decade. Consumer EEG is also likely to take off in the near future, potentially giving many of us the opportunity to learn more about our own brain activity and how it corresponds with our well-being. “Perhaps it’s integrated into a sort of baseball cap that you wear as you walk around, and it’s connected to your smartphone,” says Mushtaq. EEG caps like these have already been trialed on employees in China and used to monitor fatigue in truck drivers and mining workers, for example.

For the time being, EEG communication is limited to the lab or hospital, where studies focus on the technology’s potential to help people who are paralyzed, or who have disorders of consciousness. But that is likely to change in the coming years, once more clinical trials have been completed. Survey respondents think that EEG could become a primary tool of communication for individuals like these in the next 20 years or so.

At the other end of the scale is what Mushtaq calls the “more fanciful” application—the idea of using EEG to read people’s thoughts, memories, and even dreams.

Mushtaq thinks this is a “relatively crazy” prediction—one that’s a long, long way from coming to pass considering we don’t yet have a clear picture of how and where our memories are formed. But it’s not completely science fiction, and some respondents predict the technology could be with us in around 60 years.

Artificial intelligence will probably help neuroscientists squeeze more information from EEG recordings by identifying hidden patterns in brain activity. And it is already being used to turn a person’s thoughts into written words, albeit with limited accuracy. “We’re on the precipice of this AI revolution,” says Mushtaq.

These kinds of advances will raise questions over our right to mental privacy and how we can protect our thoughts. I talked this over with Nita Farahany, a futurist and legal ethicist at Duke University in Durham, North Carolina, last year. She told me that while brain data itself is not thought, it can be used to make inferences about what a person is thinking or feeling. “The only person who has access to your brain data right now is you, and it is only analyzed in the internal software of your mind,” she said. “But once you put a device on your head … you’re immediately sharing that data with whoever the device manufacturer is, and whoever is offering the platform.”

Valdes-Sosa is optimistic about the future of EEG. Its low cost, portability, and ease of use make the technology a prime candidate for use in poor countries with limited resources, he says; he has been using it in his research since 1969. (You can see what his setup looked like in 1970 in the image below!) EEG should be used to monitor and improve brain health around the world, he says: “It’s difficult … but I think it could happen in the future.” 

A photo from the 1970s shows two medical professionals facing an EEG machine.

PEDRO VALDES-SOSA

Now read the rest of The Checkup

Read more from MIT Technology Review’s archive

You can read the full interview with Nita Farahany, in which she describes some decidedly creepy uses of brain data, here.

Ross Compton’s heart data was used against him when he was accused of burning down his home in Ohio in 2016. Brain data could be used in a similar way. One person has already had to hand over recordings from a brain implant to law enforcement officials after being accused of assaulting a police officer. (It turned out that person was actually having a seizure at the time.) I looked at some of the other ways your brain data could be used against you in a previous edition of The Checkup.

Teeny-tiny versions of EEG caps have been used to measure electrical activity in brain organoids (clumps of neurons that are meant to represent a full brain), as my colleague Rhiannon Williams reported a couple of years ago.

EEG has also been used to create a “brain-to-brain” network that allows three people to collaborate on a game of Tetris by thought alone.

Some neuroscientists are using EEG to search for signs of consciousness in people who seem completely unresponsive. One team found such signs in a 21-year-old woman who had experienced a traumatic brain injury. “Every clinical diagnostic test, experimental and established, showed no signs of consciousness,” her neurophysiologist told MIT Technology Review. After a test that involved EEG found signs of consciousness, the neurophysiologist told rehabilitation staff to “search everywhere and find her!” They did, about a month later. With physical and drug therapy, she learned to move her fingers to answer simple questions.

From around the web

Food waste is a problem. This Japanese company is fermenting it to create sustainable animal feed. In case you were wondering, the food processing plant smells like a smoothie, and the feed itself tastes like sour yogurt. (BBC Future)

The pharmaceutical company Gilead Sciences is accused of “patent hopping”—having dragged its feet to bring a safer HIV treatment to market while thousands of people took a harmful one. The company should be held accountable, argues a cofounder of PrEP4All, an advocacy organization promoting a national HIV prevention plan. (STAT)

Anti-suicide nets under San Francisco’s Golden Gate Bridge are already saving lives, perhaps by acting as a deterrent. (The San Francisco Standard)

Genetic screening of newborn babies could help identify treatable diseases early in life. Should every baby be screened as part of a national program? (Nature Medicine)

Is “race science”—which, it’s worth pointing out, is nothing but pseudoscience—on the rise, again? The far right’s references to race and IQ make it seem that way. (The Atlantic)

As part of our upcoming magazine issue celebrating 125 years of MIT Technology Review and looking ahead to the next 125, my colleague Antonio Regalado explores how the gene-editing tool CRISPR might influence the future of human evolution. (MIT Technology Review)

How we could turn plastic waste into food

In 2019, an agency within the U.S. Department of Defense released a call for research projects to help the military deal with the copious amount of plastic waste generated when troops are sent to work in remote locations or disaster zones. The agency wanted a system that could convert food wrappers and water bottles, among other things, into usable products, such as fuel and rations. The system needed to be small enough to fit in a Humvee and capable of running on little energy. It also needed to harness the power of plastic-eating microbes.

“When we started this project four years ago, the ideas were there. And in theory, it made sense,” said Stephen Techtmann, a microbiologist at Michigan Technological University, who leads one of the three research groups receiving funding. Nevertheless, he said, in the beginning, the effort “felt a lot more science-fiction than really something that would work.”

In one reactor, shown here at a recent MTU demonstration, some deconstructed plastics are subject to high heat and the absence of oxygen — a process called pyrolysis.
KADEN STALEY/MICHIGAN TECHNOLOGICAL UNIVERSITY

That uncertainty was key. The Defense Advanced Research Projects Agency, or DARPA, supports high-risk, high-reward projects. This means there’s a good chance that any individual effort will end in failure. But when a project does succeed, it has the potential to be a true scientific breakthrough. “Our goal is to go from disbelief, like, ‘You’re kidding me. You want to do what?’ to ‘You know, that might be actually feasible,’” said Leonard Tender, a program manager at DARPA who is overseeing the plastic waste projects.

The problems with plastic production and disposal are well known. According to the United Nations Environment Program, the world creates about 440 million tons of plastic waste per year. Much of it ends up in landfills or in the ocean, where microplastics, plastic pellets, and plastic bags pose a threat to wildlife. Many governments and experts agree that solving the problem will require reducing production, and some countries and U.S. states have additionally introduced policies to encourage recycling.

For years, scientists have also been experimenting with various species of plastic-eating bacteria. But DARPA is taking a slightly different approach in seeking a compact and mobile solution that uses plastic to create something else entirely: food for humans.

The goal, Techtmann hastens to add, is not to feed people plastic. Rather, the hope is that the plastic-devouring microbes in his system will themselves prove fit for human consumption. While Techtmann believes most of the project will be ready in a year or two, it’s this food step that could take longer. His team is currently doing toxicity testing, and then they will submit their results to the Food and Drug Administration for review. Even if all that goes smoothly, an additional challenge awaits. There’s an ick factor, said Techtmann, “that I think would have to be overcome.”

The military isn’t the only entity working to turn microbes into nutrition. From Korea to Finland, a small number of researchers, as well as some companies, are exploring whether microorganisms might one day help feed the world’s growing population.


According to Tender, DARPA’s call for proposals was aimed at solving two problems at once. First, the agency hoped to reduce what he called supply-chain vulnerability: During war, the military needs to transport supplies to troops in remote locations, which creates a safety risk for people in the vehicle. Additionally, the agency wanted to stop using hazardous burn pits as a means of dealing with plastic waste. “Getting those waste products off of those sites responsibly is a huge lift,” Tender said.

A research engineer working on the MTU project takes a raw sample from the pyrolysis reactor, which can be upcycled into fuels and lubricants.
KADEN STALEY/MICHIGAN TECHNOLOGICAL UNIVERSITY

The Michigan Tech system begins with a mechanical shredder, which reduces the plastic to small shards that then move into a reactor, where they soak in ammonium hydroxide under high heat. Some plastics, such as PET, which is commonly used to make disposable water bottles, break down at this point. Other plastics used in military food packaging — namely polyethylene and polypropylene — are passed along to another reactor, where they are subject to much higher heat and an absence of oxygen.

Under these conditions, the polyethylene and polypropylene are converted into compounds that can be upcycled into fuels and lubricants. David Shonnard, a chemical engineer at Michigan Tech who oversaw this component of the project, has developed a startup company called Resurgent Innovation to commercialize some of the technology. (Other members of the research team, said Shonnard, are pursuing additional patents related to other parts of the system.)

After the PET has broken down in the ammonium hydroxide, the liquid is moved to another reactor, where it is consumed by a colony of microbes. Techtmann initially thought he would need to go to a highly contaminated environment to find bacteria capable of breaking down the deconstructed plastic. But as it turned out, bacteria from compost piles worked really well. This may be because the deconstructed plastic that enters the reactor has a similar molecular structure to some plant material compounds, he said. So the bacteria that would otherwise eat plants can perhaps instead draw their energy from the plastic.

Materials for the MTU project are shown at a recent demonstration. Before being placed in a reactor, plastic feedstocks (bottom row) are mechanically shredded into small pieces.
KADEN STALEY/MICHIGAN TECHNOLOGICAL UNIVERSITY

After the bacteria consume the plastic, the microbes are then dried into a powder that smells a bit like nutritional yeast and has a balance of fats, carbohydrates, and proteins, said Techtmann.

Research into edible microorganisms dates back at least 60 years, but the body of evidence is decidedly small. (One review estimated that since 1961, an average of seven papers have been published per year.) Still, researchers in the field say there are good reasons for countries to consider microbes as a food source. Among other things, they are rich in protein, wrote Sang Yup Lee, a bioengineer and senior vice president for research at Korea Advanced Institute of Science and Technology, in an email to Undark. Lee and others have noted that growing microbes requires less land and water than conventional agriculture. Therefore, they might prove to be a more sustainable source of nutrition, particularly as the human population grows.

The product from the microbe reactor is collected in a glass jar. The microbes can be dried into a powder for human consumption — once they are deemed safe by regulators.

Lee reviewed a paper describing the microbial portion of the Michigan Tech project, and said that the group’s plans are feasible. But he pointed out a significant challenge: At the moment, only certain microorganisms are considered safe to eat, namely “those we have been eating through fermented foods and beverages, such as lactic acid bacteria, bacillus, some yeasts.” But these don’t degrade plastics.


Before using the plastic-eating microbes as food for humans, the research team will submit evidence to regulators indicating that the substance is safe. Joshua Pearce, an electrical engineer at Western University in Ontario, Canada, performed the initial toxicology screening, breaking the microbes down into smaller components and comparing them against known toxins.

“We’re pretty sure there’s nothing bad in there,” said Pearce. He added that the microbes have also been fed to C. elegans roundworms without apparent ill effects, and the team is currently looking at how rats fare when they consume the microbes over the longer term. If the rats do well, the next step would be to submit data to the Food and Drug Administration for review.

At least a handful of companies are in various stages of commercializing new varieties of edible microbes. A Finnish startup, Solar Foods, for example, has taken a bacterium found in nature and created a powdery product with a mustard brown hue that has been approved for use in Singapore. In an email to Undark, chief experience officer Laura Sinisalo said that the company has applied for approval in the E.U. and the U.K., as well as in the U.S., where it hopes to enter the market by the end of this year.

Even if the plastic-eating microbes turn out to be safe for human consumption, Techtmann said, the public might still balk at the prospect of eating something nourished on plastic waste. For this reason, he said, this particular group of microbes might prove most useful on remote military bases or during disaster relief, where it could be consumed short-term, to help people survive.

“I think there’s a bit less of a concern about the ick factor,” said Techtmann, “if it’s really just, ‘This is going to keep me alive for another day or two.’”

This article was originally published on Undark. Read the original article.

A new system lets robots sense human touch without artificial skin

Even the most capable robots aren’t great at sensing human touch; you typically need a computer science degree or at least a tablet to interact with them effectively. That may change, thanks to robots that can now sense and interpret touch without being covered in high-tech artificial skin. It’s a significant step toward robots that can interact more intuitively with humans. 

To understand the new approach, led by the German Aerospace Center and published today in Science Robotics, consider the two distinct ways our own bodies sense touch. If you hold your left palm facing up and press lightly on your left pinky finger, you may first recognize that touch through the skin of your fingertip. That makes sense: you have thousands of receptors on your hands and fingers alone. Roboticists often try to replicate that blanket of sensors for robots through artificial skins, but these can be expensive and poor at withstanding impacts or harsh environments.

But if you press harder, you may notice a second way of sensing the touch: through your knuckles and other joints. That sensation, a feeling of torque in robotics jargon, is exactly what the researchers have re-created in their new system.

Their robotic arm contains six sensors, each of which can register even incredibly small amounts of pressure against any section of the device. After precisely measuring the amount and angle of that force, a series of algorithms can then map where a person is touching the robot and analyze what exactly they’re trying to communicate. For example, a person could draw letters or numbers anywhere on the robotic arm’s surface with a finger, and the robot could interpret directions from those movements. Any part of the robot could also be used as a virtual button.
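The underlying geometry is simple: a force applied farther from a joint produces a larger moment at that joint, so readings from several sensors along the arm jointly pin down both the force and where it was applied. As a minimal sketch of that idea, and not the study’s actual six-sensor method, here is a one-dimensional toy version for a straight arm with torque sensors at known positions along its length; all positions and values are illustrative:

```python
# Minimal illustrative sketch (not the study's code): localizing a single
# perpendicular contact force on a straight arm from joint-torque readings.
# A sensor at position p along the arm measures the moment of a force F
# applied at distance x from the base: torque = F * (x - p) when x > p.

def simulate_torques(force, x, sensor_positions):
    """Torques each sensor would read for a point force at distance x."""
    return [force * (x - p) if x > p else 0.0 for p in sensor_positions]

def localize_contact(torques, sensor_positions):
    """Recover (force, x) from the two loaded sensors nearest the contact."""
    loaded = [i for i, t in enumerate(torques) if t > 0]
    i = loaded[-1]  # last sensor the contact lies beyond
    if i == 0:
        raise ValueError("need two loaded sensors to separate force from position")
    t0, t1 = torques[i - 1], torques[i]
    p0, p1 = sensor_positions[i - 1], sensor_positions[i]
    # t0 = F*(x - p0) and t1 = F*(x - p1)  =>  F = (t0 - t1) / (p1 - p0)
    force = (t0 - t1) / (p1 - p0)
    x = p0 + t0 / force
    return force, x

sensors = [0.0, 0.3, 0.6]                        # sensor positions (meters)
readings = simulate_torques(5.0, 0.45, sensors)  # simulate a 5 N push at 0.45 m
force, x = localize_contact(readings, sensors)
print(round(force, 6), round(x, 6))              # recovers ~5.0 N at ~0.45 m
```

A real arm bends at its joints, so the actual system solves the analogous problem through the arm’s kinematics, but the information being exploited, moments measured at multiple points, is the same.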

It means that every square inch of the robot essentially becomes a touch screen, without the cost, fragility, and wiring of one, says Maged Iskandar, a researcher at the German Aerospace Center and lead author of the study. 

“Human-robot interaction, where a human can closely interact with and command a robot, is still not optimal, because the human needs an input device,” Iskandar says. “If you can use the robot itself as a device, the interactions will be more fluid.”

A system like this could offer a cheaper, simpler sense of touch as well as a new way to communicate with robots. That could be particularly significant for larger robots, like humanoids, which continue to receive billions in venture capital investment. 

Calogero Maria Oddo, a roboticist who leads the Neuro-Robotic Touch Laboratory at the BioRobotics Institute but was not involved in the work, says the development is significant because the research combines sensors, an elegant use of mathematics to map out touch, and new AI methods to put it all together. Oddo says commercial adoption could be fairly quick, since the required investment is mostly in software rather than in far more expensive hardware.

There are caveats, though. For one, the new model cannot handle more than two points of contact at once. In a fairly controlled setting like a factory floor that might not be an issue, but in environments where human-robot interactions are less predictable, it could present limitations. And the sorts of sensors needed to communicate touch to a robot, though commercially available, can also cost tens of thousands of dollars.

Overall, though, Oddo envisions a future where skin-based sensors and joint-based ones are merged to give robots a more comprehensive sense of touch.

“We humans and other animals have integrated both solutions,” he says. “I expect robots working in the real world will use both, too, to interact safely and smoothly with the world and learn.”

Happy birthday, baby! What the future holds for those born today

Happy birthday, baby.

You have been born into an era of intelligent machines. They have watched over you almost since your conception. They let your parents listen in on your tiny heartbeat, track your gestation on an app, and post your sonogram on social media. Well before you were born, you were known to the algorithm. 

Your arrival coincided with the 125th anniversary of this magazine. With a bit of luck and the right genes, you might see the next 125 years. How will you and the next generation of machines grow up together? We asked more than a dozen experts to imagine your joint future. We explained that this would be a thought experiment. What I mean is: We asked them to get weird. 

Just about all of them agreed on how to frame the past: Computing shrank from giant shared industrial mainframes to personal desktop devices to electronic shrapnel so small it’s ambient in the environment. Previously controlled at arm’s length through punch card, keyboard, or mouse, computing became wearable, moving onto—and very recently into—the body. In our time, eye or brain implants are only for medical aid; in your time, who knows? 

In the future, everyone thinks, computers will get smaller and more plentiful still. But the biggest change in your lifetime will be the rise of intelligent agents. Computing will be more responsive, more intimate, less confined to any one platform. It will be less like a tool, and more like a companion. It will learn from you and also be your guide.

What they mean, baby, is that it’s going to be your friend.

Present day to 2034 
Age 0 to 10

When you were born, your family surrounded you with “smart” things: rockers, monitors, lamps that play lullabies.  


But not a single expert name-checked those as your first exposure to technology. Instead, they mentioned your parents’ phone or smart watch. And why not? As your loved ones cradle you, that deliciously blinky thing is right there. Babies learn by trial and error, by touching objects to see what happens. You tap it; it lights up or makes noise. Fascinating!

Cognitively, you won’t get much out of that interaction between birth and age two, says Jason Yip, an associate professor of digital youth at the University of Washington. But it helps introduce you to a world of animate objects, says Sean Follmer, director of the SHAPE Lab in Stanford’s mechanical engineering department, which explores haptics in robotics and computing. If you touch something, how does it respond?

You are the child of millennials and Gen Z—digital natives, the first influencers. So as you grow, cameras are ubiquitous. You see yourself onscreen and learn to smile or wave to the people on the other side. Your grandparents read to you on FaceTime; you photobomb Zoom meetings. As you get older, you’ll realize that images of yourself are a kind of social currency. 

Your primary school will certainly have computers, though we’re not sure how educators will balance real-world and onscreen instruction, a pedagogical debate today. But baby, school is where our experts think you will meet your first intelligent agent, in the form of a tutor or coach. Your AI tutor might guide you through activities that combine physical tasks with augmented-reality instruction—a sort of middle ground. 

Some school libraries are becoming more like makerspaces, teaching critical thinking along with building skills, says Nesra Yannier, a faculty member in the Human-Computer Interaction Institute at Carnegie Mellon University. She is developing NoRILLA, an educational system that uses mixed reality—a combination of physical and virtual reality—to teach science and engineering concepts. For example, kids build wood-block structures and predict, with feedback from a cartoon AI gorilla, how they will fall. 

Learning will be increasingly self-directed, says Liz Gerber, co-director of the Center for Human-Computer Interaction and Design at Northwestern University. The future classroom is “going to be hyper-personalized.” AI tutors could help with one-on-one instruction or repetitive sports drills. 

All of this is pretty novel, so our experts had to guess at future form factors. Maybe while you’re learning, an unobtrusive bracelet or smart watch tracks your performance and then syncs data with a tablet, so your tutor can help you practice. 

What will that agent be like? Follmer, who has worked with blind and low-vision students, thinks it might just be a voice. Yannier is partial to an animated character. Gerber thinks a digital avatar could be paired with a physical version, like a stuffed animal—in whatever guise you like. “It’s an imaginary friend,” says Gerber. “You get to decide who it is.” 

Not everybody is sold on the AI tutor. In Yip’s research, kids often tell him AI-enabled technologies are … creepy. They feel unpredictable or scary, or like they seem to be watching.

Kids learn through social interactions, so he’s also worried about technologies that isolate. And while he thinks AI can handle the cognitive aspects of tutoring, he’s not sure about its social side. Good teachers know how to motivate, how to deal with human moods and biology. Can a machine tell when a child is being sarcastic, or redirect a kid who is goofing off in the bathroom? When confronted with a meltdown, he asks, “is the AI going to know this kid is hungry and needs a snack?”

2040
Age 16

By the time you turn 16, you’ll likely still live in a world shaped by cars: highways, suburbs, climate change. But some parts of car culture may be changing. Electric chargers might be supplanting gas stations. And just as an intelligent agent assisted in your schooling, now one will drive with you—and probably for you.  

Paola Meraz, a creative director of interaction design at BMW’s Designworks, describes that agent as “your friend on the road.” William Chergosky, chief designer at Calty Design Research, Toyota’s North American design studio, calls it “exactly like a friend in the car.”

While you are young, Chergosky says, it’s your chaperone, restricting your speed or routing you home at curfew. It tells you when you’re near In-N-Out, knowing your penchant for their animal fries. And because you want to keep up with your friends online and in the real world, the agent can comb your social media feeds to see where they are and suggest a meetup. 


Cars have long been spots for teen hangouts, but as driving becomes more autonomous, their interiors can become more like living rooms. (You’ll no longer need to face the road and an instrument panel full of knobs.) Meraz anticipates seats that reposition so passengers can talk face to face, or game. “Imagine playing a game that interacts with the world that you are driving through,” she says, or “a movie that was designed where speed, time of day, and geographical elements could influence the storyline.” 


Without an instrument panel, how do you control the car? Today’s minimalist interiors feature a dash-mounted tablet, but digging through endless onscreen menus is not terribly intuitive. The next step is probably gestural or voice control—ideally, through natural language. The tipping point, says Chergosky, will come when instead of giving detailed commands, you can just say: “Man, it is hot in here. Can you make it cooler?”

An agent that listens in and tracks your every move raises some strange questions. Will it change personalities for each driver? (Sure.) Can it keep a secret? (“Dad said he went to Taco Bell, but did he?” jokes Chergosky.) Does it even have to stay in the car? 

Our experts say nope. Meraz imagines it being integrated with other kinds of agents—the future versions of Alexa or Google Home. “It’s all connected,” she says. And when your car dies, Chergosky says, the agent does not. “You can actually take the soul of it from vehicle to vehicle. So as you upgrade, it’s not like you cut off that relationship,” he says. “It moves with you. Because it’s grown with you.”

2049
Age 25

By your mid-20s, the agents in your life know an awful lot about you. Maybe they are, indeed, a single entity that follows you across devices and offers help where you need it. At this point, the place where you need the most help is your social life. 

Kathryn Coduto, an assistant professor of media science at Boston University who studies online dating, says everyone’s big worry is the opening line. To her, AI could be a disembodied Cyrano that whips up 10 options or workshops your own attempts. Or maybe it’s a dating coach. You agree to meet up with a (real) person online, and “you have the AI in a corner saying ‘Hey, maybe you should say this,’ or ‘Don’t forget this.’ Almost like a little nudge.”


Virtual first dates might solve one of our present-day conundrums: Apps make searching for matches easier, but you get sparse—and perhaps inaccurate—info about those people. How do you know who’s worth meeting in real life? Building virtual dating into the app, Coduto says, could be “an appealing feature for a lot of daters who want to meet people but aren’t sure about a large initial time investment.”

T. Makana Chock, who directs the Extended Reality Lab at Syracuse University, thinks things could go a step further: first dates where both parties send an AI version of themselves in their place. “That would tell both of you that this is working—or this is definitely not going to work,” Chock says. If the date is a dud—well, at least you weren’t on it.

Or maybe you will just date an entirely virtual being, says Sun Joo (Grace) Ahn, who directs the Center for Advanced Computer-Human Ecosystems at the University of Georgia. Or you’ll go to a virtual party, have an amazing time, “and then later on you realize that you were the only real human in that entire room. Everybody else was AI.”

This might sound odd, says Ahn, but “humans are really good at building relationships with nonhuman entities.” It’s why you pour your heart out to your dog—or treat ChatGPT like a therapist. 

There is a problem, though, when virtual relationships become too accommodating, says Chock: If you get used to agents that are tailored to please you, you get less skilled at dealing with real people and risking awkwardness or rejection. “You still need to have human interaction,” she says. “And there is some concern that we are going to see some people who are just like, ‘Nope, this is all I want. Why go out and do that when I can stay home with my partner, my virtual buddy?’”

By now, social media, online dating, and livestreaming have likely intertwined and become more immersive. Engineers have shrunk the obstacles to true telepresence: internet lag time, the uncanny valley, and clunky headsets, which may now be replaced by something more like glasses or smart contact lenses. 

Online experiences may be less like observing someone else’s life and more like living it. Imagine, says Follmer: A basketball star wears clothing and skin sensors that track body position, motion, and forces, plus super-thin gloves that sense the texture of the ball. You, watching from your couch, wear a jersey and gloves made of smart textiles, woven with actuators that transmit whatever the player feels. When the athlete gets shoved, Follmer says, your fan gear can really shove you right back.

Gaming is another obvious application. But it’s not the likely first mover in this space. Nobody else wants to say this on the record, so I will: It’s porn. (Baby, ask your parents and/or AI tutor when you’re older.)


By your 20s, you are probably wrestling with the dilemmas of a life spent online and on camera. Coduto thinks you might rebel, opting out of social media because your parents documented your first 18 years without permission. As an adult, you’ll want tighter rules for privacy and consent, better ways to verify authenticity, and more control over sensitive materials, like a button that could nuke your old sexts.

But maybe it’s the opposite: Now you are an influencer yourself. If so, your body can be your display space. Today, wearables are basically boxes of electronics strapped onto limbs. Tomorrow, hopes Cindy Hsin-Liu Kao, who runs the Hybrid Body Lab at Cornell University, they will be more like your own skin. Kao develops wearables like color-changing eyeshadow stickers and mini nail trackpads that can control a phone or open a car door. In the not-too-distant future, she imagines, “you might be able to rent out each of your fingernails as an ad for social media.” Or maybe your hair: Weaving in super-thin programmable LED strands could make it a kind of screen. 

What if those smart lenses could be display spaces too? “That would be really creepy,” she muses. “Just looking into someone’s eyes and it’s, like, CNN.”

2059
Age 35

By now, you’ve probably settled into domestic life—but it might not look much like the home you grew up in. Keith Evan Green, a professor of human-centered design at Cornell, doesn’t think we should imagine a home of the future. “I would call it a room of the future,” he says, because it will be the place for everything—work, school, play. This trend was hastened by the covid pandemic.

Your place will probably be small if you live in a big city. The uncertainties of climate change and transportation costs mean we can’t build cities infinitely outward. So he imagines a reconfigurable architectural robotic space: Walls move, objects inflate or unfold, furniture appears or dissolves into surfaces or recombines. Any necessary computing power is embedded. The home will finally be what Le Corbusier imagined: a machine for living in.

Green pictures this space as spartan but beautiful, like a temple—a place, he says, to think and be. “I would characterize it as this capacious monastic cell that is empty of most things but us,” he says.

Our experts think your home, like your car, will respond to voice or gestural control. But it will make some decisions autonomously, learning by observing you: your motion, location, temperature. 

Ivan Poupyrev, CEO and cofounder of Archetype AI, says we’ll no longer control each smart appliance through its own app. Instead, he says, think of the home as a stage and you as the director. “You don’t interact with the air conditioner. You don’t interact with a TV,” he says. “You interact with the home as a total.” Instead of telling the TV to play a specific program, you make high-level demands of the entire space: “Turn on something interesting for me; I’m tired.” Or: “What is the plan for tomorrow?”

Stanford’s Follmer says that just as computing went from industrial to personal to ubiquitous, so will robotics. Your great-grandparents envisioned futuristic homes cared for by a single humanoid robot—like Rosie from The Jetsons. He envisions swarms of maybe 100 bots the size of quarters that materialize to clean, take out the trash, or bring you a cold drink. (“They know ahead of time, even before you do, that you’re thirsty,” he says.)


Baby, perhaps now you have your own baby. The technologies of reproduction have changed since you were born. For one thing, says Gerber, fertility tracking will be way more accurate: “It is going to be like weather prediction.” Maybe, Kao says, flexible fabric-like sensors could be embedded in panty liners to track menstrual health. Or, once the baby arrives, in nipple stickers that nursing parents could apply to track biofluid exchange. If the baby has trouble latching, maybe the sticker’s capacitive touch sensors could help the parent find a better position.

Also, goodbye to sleep deprivation. Gerber envisions a device that, for lack of an existing term, she’s calling a “baby handler”—picture an exoskeleton crossed with a car seat. It’s a late-night soothing machine that rocks, supplies pre-pumped breast milk, and maybe offers a bidet-like “cleaning and drying situation.” For your children, perhaps, this is their first experience of being close to a machine. 

2074
Age 50

Now you are at the peak of your career. For professions heading toward AI automation, you may be the “human in the loop” who oversees a machine doing its tasks. The 9-to-5 workday, which is crumbling in our time, might be totally atomized into work-from-home fluidity or earn-as-you-go gig work.

Ahn thinks you might start the workday by lying in bed and checking your messages—on an implanted contact lens. Everyone loves a big screen, and putting it in your eye effectively gives you “the largest monitor in the world,” she says. 

You’ve already dabbled with AI selves for dating. But now virtual agents are more photorealistic, and they can mimic your voice and mannerisms. Why not make one go to meetings for you?


Kori Inkpen, who studies human-computer interaction at Microsoft Research, calls this your “ditto”—more formally, an embodied mimetic agent, meaning it represents a specific person. “My ditto looks like me, acts like me, sounds like me, knows sort of what I know,” she says. You can instruct it to raise certain points and recap the conversation for you later. Your colleagues feel as if you were there, and you get the benefit of an exchange that’s not quite real time, but not as asynchronous as email. “A ditto starts to blend this reality,” Inkpen says.

In our time, augmented reality is slowly catching on as a tool for workers whose jobs require physical presence and tangible objects. But experts worry that once the last baby boomers retire, their technical expertise will go with them. Perhaps they can leave behind a legacy of training simulations.

Inkpen sees DIY opportunities. Say your fridge breaks. Instead of calling a repair person, you boot up an AR tutorial on glasses, a tablet, or a projection that overlays digital instructions atop the appliance. Follmer wonders if haptic sensors woven into gloves or clothing would let people training for highly specialized jobs—like surgery—literally feel the hand motions of experienced professionals.

For Poupyrev, the implications are much bigger. One way to think about AI is “as a storage medium,” he says. “It’s a preservation of human knowledge.” A large language model like ChatGPT is basically a compendium of all the text information people have put online. Next, if we feed models not only text but real-world sensor data that describes motion and behavior, “it becomes a very compressed presentation not of just knowledge, but also of how people do things.” AI can capture how to dance, or fix a car, or play ice hockey—all the skills you cannot learn from words alone—and preserve this knowledge for the future.

2099
Age 75

By the time you retire, families may be smaller, with more older people living solo. 

Well, sort of. Chaiwoo Lee, a research scientist at the MIT AgeLab, thinks that in 75 years, your home will be a kind of roommate—“someone who cohabitates that space with you,” she says. “It reacts to your feelings, maybe understands you.” 

By now, a home’s AI could be so good at deciphering body language that if you’re spending a lot of time on the couch, or seem rushed or irritated, it could try to lighten your mood. “If it’s a conversational agent, it can talk to you,” says Lee. Or it might suggest calling a loved one. “Maybe it changes the ambiance of the home to be more pleasant.”

The home is also collecting your health data, because it’s where you eat, shower, and use the bathroom. Passive data collection has advantages over wearable sensors: You don’t have to remember to put anything on. It doesn’t carry the stigma of sickness or frailty. And in general, Lee says, people don’t start wearing health trackers until they are ill, so they don’t have a comparative baseline. Perhaps it’s better to let the toilet or the mirror do the tracking continuously. 

Green says interactive homes could help people with mobility and cognitive challenges live independently for longer. Robotic furnishings could help with lifting, fetching, or cleaning. By this time, they might be sophisticated enough to offer support when you need it and back off when you don’t.  

Kao, of course, imagines the robotics embedded in fabric: garments that stiffen around the waist to help you stand, a glove that reinforces your grip.


If getting from point A to point B is becoming difficult, maybe you can travel without going anywhere. Green, who favors a blank-slate room, wonders if you’ll have a brain-machine interface that lets you change your surroundings at will. You think about, say, a jungle, and the wallpaper display morphs. The robotic furniture adjusts its topography. “We want to be able to sit on the boulder or lie down on the hammock,” he says.

Anne Marie Piper, an associate professor of informatics at UC Irvine who studies older adults, imagines something similar—minus the brain chip—in the context of a care home, where spaces could change to evoke special memories, like your honeymoon in Paris. “What if the space transforms into a café for you that has the smells and the music and the ambience, and that is just a really calming place for you to go?” she asks. 

Gerber is all for virtual travel: It’s cheaper, faster, and better for the environment than the real thing. But she thinks that for a truly immersive Parisian experience, we’ll need engineers to invent … well, remote bread. Something that lets you chew on a boring-yet-nutritious source of calories while stimulating your senses so you get the crunch, scent, and taste of the perfect baguette.

2149
Age 125

We hope that your final years will not be lonely or painful. 

Faraway loved ones can visit by digital double, or send love through smart textiles: Piper imagines a scarf that glows or warms when someone is thinking of you, Kao an on-skin device that simulates the touch of their hand. If you are very ill, you can escape into a soothing virtual world. Judith Amores, a senior researcher at Microsoft Research, is working on VR that responds to physiological signals. Today, she immerses hospital patients in an underwater world of jellyfish that pulse at half of an average person’s heart rate for a calming effect. In the future, she imagines, VR will detect anxiety without requiring a user to wear sensors—maybe by smell.


You might be pondering virtual immortality. Tim Recuber, a sociologist at Smith College and author of The Digital Departed, notes that today people create memorial websites and chatbots, or sign up for post-mortem messaging services. These offer some end-of-life comfort, but they can’t preserve your memory indefinitely. Companies go bust. Websites break. People move on; that’s how mourning works.

What about uploading your consciousness to the cloud? The idea has a fervent fan base, says Recuber. People hope to resurrect themselves into human or robotic bodies, or spend eternity as part of a hive mind or “a beam of laser light that can travel the cosmos.” But he’s skeptical that it’ll work, especially within 125 years. Plus, what if being a ghost in the machine is dreadful? “Embodiment is, as far as we know, a pretty key component to existence. And it might be pretty upsetting to actually be a full version of yourself in a computer,” he says. 


There is perhaps one last thing to try. It’s another AI. You curate this one yourself, using a lifetime of digital ephemera: your videos, texts, social media posts. It’s a hologram, and it hangs out with your loved ones to comfort them when you’re gone. Perhaps it even serves as your burial marker. “It is a little cool to think of cemeteries in the future that are literally haunted by motion-activated holograms,” Recuber says.

It won’t exist forever. Nothing does. But by now, maybe the agent is no longer your friend.

Maybe, at last, it is you.

Baby, we have caveats.

We imagine a world that has overcome the worst threats of our time: a creeping climate disaster; a deepening digital divide; our persistent flirtation with nuclear war; the possibility that a pandemic will kill us quickly, that overly convenient lifestyles will kill us slowly, or that intelligent machines will turn out to be too smart.

We hope that democracy survives and these technologies will be the opt-in gadgetry of a thriving society, not the surveillance tools of dystopia. If you have a digital twin, we hope it’s not a deepfake. 

You might see these sketches from 2024 as a blithe promise, a warning, or a fever dream. The important thing is: Our present is just the starting point for infinite futures. 

What happens next, kid, depends on you. 


Kara Platoni is a science reporter and editor in Oakland, California.

The US government is still spending big on climate

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

Friday marks two years since the US signed the landmark Inflation Reduction Act (IRA) into law. Now, I’m not usually one to track legislation birthdays. But this particular law is the exception, because it was a game changer for climate technology in the country, and beyond. 

Over the past two years we’ve seen an influx of investment from the federal government, private businesses hoping to get in on the action, and other countries trying to keep up. And now we’re seeing all this money starting to make a difference in the climate tech sector.  

Before we get to the present day, let’s do a quick refresher. In late July 2022, the US Congress reached a massive deal on a tax reform and spending package. The law changed some tax rules, implemented prescription drug pricing reform, and provided some funding for health care and the agency that collects taxes. 

And then there are the climate sections, to the tune of hundreds of billions of dollars of spending. There are tax credits for businesses that build and operate new factories to produce technologies like wind and solar. There are individual tax credits to help people buy electric vehicles, heat pumps, and solar panels. There’s funding to give loans to businesses working to bring their newer technologies into the world. 

Now to the fun part: Where is all that money going?

Some of the funding comes in the form of grants, designed to kick-start domestic manufacturing in areas like batteries for EVs and energy technologies. I wrote about several billion dollars going to companies making battery components and producing their ingredients in October 2022, for example.

Tax credits are another huge chunk of the bill, and it’s starting to become clear just how significant they can be for businesses. First Solar, a company making thin-film solar panels in the US, revealed earlier this year that it was in the middle of a deal to receive about $700 million from tax credits.

Then there are the provisions for individuals. As of late May, about three million households had claimed IRA tax credits for their homes in 2023. Together, they received about $8 billion for solar panels, batteries, heat pumps, and home efficiency technologies such as insulation. The credits are popular—that spending was roughly three times higher than projections had suggested. 

One area I’ve been following especially closely is funding from the Loan Programs Office of the US Department of Energy, which lends money to businesses to help them get their innovative projects built. There was a $2 billion commitment to Redwood Materials, a battery recycling company I dug into just before the announcement. You might also remember a $1.52 billion loan to reopen a nuclear power plant in Michigan and a $400 million loan to give zinc batteries a boost.

It’s not just the federal government that’s pouring in money—businesses are following suit, announcing new factories or expanding old ones. Between the passage of the IRA in August 2022 and May 2024, companies committed $110 billion to 159 projects, ranging from EVs, solar, and wind to transmission, according to a tracker from Jack Conness, a policy analyst at Energy Innovation, an energy and climate policy firm. 

The effects have rippled out beyond the US. Europe finalized the Net-Zero Industry Act in early 2024, partly as an answer to the IRA. It’s not quite the same spending spree, but the act does set a goal for Europe to supply 40% of its own climate tech by 2030, and it changes some rules about how new projects get approved to help make that happen. 

The Inflation Reduction Act still has a lot of time left, and some programs have a 10-year window. One of the biggest, though often overlooked, changes over the last year is that we’ve gotten clarity on how some of the major programs are actually going to work. While the broad contours were laid out in the law, some of the details about implementing them were left up to agencies to nail down. And while these specifics often seem small, they can affect which sorts of projects are eligible, changing how these credits might shape the industry. 

For example, in December 2023 we learned how restrictions in the EV tax credits will affect vehicles with components made in China. As a result, starting in 2024 some vehicle models became ineligible for the credits, including the Ford Mustang Mach-E. (The company hasn’t said exactly why the model lost eligibility, but some reporting has suggested it’s likely because the lithium iron phosphate batteries used in the vehicles come from the Chinese company CATL.) 

Some of those specifics get really complicated. The hydrogen tax credits could get tangled up in legal battles. The full rules on credits for sustainable aviation fuel raised concerns that fuels that don’t help much with emissions will still get funding. The credits for critical minerals apply only to processing, not to mining efforts, as my colleague James Temple detailed in his story about a Minnesota mine earlier this year.

Looking ahead, the fate of the IRA’s programs may depend on the outcome of the presidential election in November. Vice President Kamala Harris, the Democratic nominee, cast the tie-breaking vote to pass the law, and she would likely keep the programs going. Meanwhile, Donald Trump, the Republican nominee, has been openly targeting many of its provisions, and he could do some damage to many of the tax credits included, even though it would require an act of Congress to actually repeal the law. (For more on what a second Trump presidency might mean for the climate law, check out this great deep dive from James Temple.) 

The action certainly isn’t slowing down in the world of climate technology. Looking ahead, one major piece of the puzzle we’ll be watching is a potential change to how new projects get approved. There’s a permitting reform package winding its way through the government now, so stay tuned for more on that, and on everything climate tech. 


Now read the rest of The Spark

Related reading

At our ClimateTech event last year, Leah Stokes, an environmental policy professor at UC Santa Barbara who was closely involved with developing the IRA, spoke with us about the law. For more on how it came to be and what changes we’ve seen so far, check out her segment here.

Here’s what’s most at risk in the IRA as the US faces an election in November. 

One mine in Minnesota could unlock tens of billions of dollars in tax credits, as James Temple detailed in this story from January.

[Image: wide view of auto production at a Mercedes-Benz factory. Credit: Mercedes-Benz AG]

Another thing

Steel production is responsible for about 7% of global emissions. A growing array of technologies can produce the metal with less climate pollution, but there’s a big catch: They’re expensive. 

But in the grand scheme of things, even steel that costs 30% more than the standard stuff would increase the cost of the average new car by only about $100, or less than 1%. That gives the auto industry a unique opportunity to help drive the world toward greener steel. Get all the details in my latest story.

Keeping up with climate  

The world’s biggest pumped hydropower project just came online in China. The $2.6 billion facility can store energy by pumping water uphill. (Bloomberg)

Scientists want to make a common chemical from wastewater. Researchers demonstrated a reactor that can produce ammonia from nitrates, a common pollutant found in municipal wastewater and agricultural runoff. (New Scientist)

→ Ammonia could be used as fuel for long-distance shipping. (MIT Technology Review)

The new movie Twisters shows a tornado ripping apart a wind turbine. Experts say we probably don’t need to worry too much about wind farms collapsing—those incidents tend to be rare, because turbines are built to withstand high wind speeds and are usually shut down and locked into a safe position in the case of extreme weather. (E&E News)

SunPower, once a dominant force in residential solar, is bankrupt. The company will sell off assets and gradually close up shop in the latest hit to a turbulent market. (Latitude Media)

More than 47,000 people in Europe died last year from heat-related causes. If it hadn’t been for adaptation measures like early warning systems and cooling technology, the toll could have been much higher. (New York Times)

Europe could be a bright spot for Beyond Meat and other companies selling plant-based products. The industry has seen sales and profits stagnate or drop recently, especially in the US, but Europe has lower levels of meat consumption, and supermarkets there have shown some support for animal-free alternatives. (Wired)

South Korea turns about 98% of its food waste into compost, animal feed, or energy. It’s one of the few countries with a comprehensive system for food waste, and it’s not an easy one to replicate. (Washington Post)

→ Here’s how companies want to use microbes to turn food scraps and agricultural waste into energy. (MIT Technology Review)

Just 12% of new low-emissions hydrogen projects have customers lined up. As a result, many proposed projects will probably never get built. (Bloomberg)

What Japan’s “megaquake” warning really tells us

MIT Technology Review Explains: Let our writers untangle the complex, messy world of technology to help you understand what’s coming next. You can read more from the series here.

On August 8, at 16:42 local time, a magnitude-7.1 earthquake shook southern Japan. The temblor, originating off the shores of the mainland island of Kyūshū, was felt by nearly a million people across the region, and at first a tsunami seemed possible. But only a diminutive wave swept ashore, buildings remained upright, and nobody died. The crisis was over as quickly as it began.

But then, something new happened. The Japan Meteorological Agency, a government organization, issued a ‘megaquake advisory’ for the first time. This pair of words may appear disquieting—and to some extent, they are. There is a ticking bomb below Japanese waters, a giant crevasse where one tectonic plate dives below another. Stress has been accumulating across this boundary for quite some time, and inevitably, it will do what it has repeatedly done in the past: part of it will violently rupture, generating a devastating earthquake and a potentially huge tsunami.

The advisory was issued in part because the magnitude-7.1 quake could be a foreshock, a precursory quake, to a far larger one: a tsunami-making monster that could kill a quarter of a million people.

The good news, for now, is that scientists think it very unlikely that the magnitude-7.1 quake is a prelude to a cataclysm. Nothing is certain, but “the chances that this actually is a foreshock are really quite low,” says Harold Tobin, the director of the Pacific Northwest Seismic Network.

The advisory, ultimately, isn’t prophetic. Its primary purpose is to let the public know that scientists are aware of what’s going on, that they are cognizant of the worst-case scenario—and that everyone else should be mindful of that grim possibility too. Evacuation routes should be memorized, and emergency supplies should be obtained, just in case.

“Even if the probability is low, the consequences are so high,” says Judith Hubbard, an earthquake scientist at Cornell University. “It makes sense to worry about some of these low probabilities.”

Japan, which sits atop a tectonic jigsaw, is no stranger to large earthquakes. Just this past New Year’s Day, a magnitude-7.6 temblor convulsed the Noto Peninsula, killing 230 people. But special attention is paid to certain quakes even when they cause no direct harm.

The August 8 event took place on the Nankai subduction zone: here, the Philippine Sea plate creeps below Japan, which is attached to the Eurasian plate. This type of plate boundary is the sort capable of producing ‘megaquakes’, those of magnitude 8.0 and higher. (The numerical difference may seem small, but the scale is logarithmic: a magnitude-8.0 quake unleashes 32 times more energy than a magnitude-7.0 quake.)
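The arithmetic behind that 32× figure follows from the standard magnitude-energy relation, in which seismic energy scales as 10 raised to 1.5 times the magnitude. A minimal sketch:

```python
# Seismic energy scales as 10^(1.5 * M) (the Gutenberg-Richter energy relation),
# so each whole step in magnitude multiplies the energy released by 10^1.5 ≈ 31.6.

def energy_ratio(m_large: float, m_small: float) -> float:
    """Ratio of seismic energy released by quakes of the two given magnitudes."""
    return 10 ** (1.5 * (m_large - m_small))

print(round(energy_ratio(8.0, 7.0)))   # one magnitude step: ~32x
print(round(energy_ratio(9.1, 7.1)))   # 2011 Tohoku vs. the August 8 quake: 1000x
```

By the same math, the magnitude-9.1 Tōhoku quake discussed below released roughly a thousand times the energy of the August 8 event.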

Consequently, the Nankai subduction zone (or Nankai Trough) has created several historical tragedies. A magnitude-7.9 quake in 1944 was followed by a magnitude-8.0 quake in 1946; both were caused by the rupture of part of the submarine trench. The magnitude-8.6 quake of 1707, however, involved the rupture of the entire Nankai Trough. Thousands died on each occasion.

Predicting disaster

Predicting when and where the next major quake will happen anywhere on Earth is currently impossible. Nankai is no different: as Hubbard recently noted on her blog Earthquake Insights, co-authored with geoscientist Kyle Bradley, there is no set interval between Nankai’s major quakes; the gaps between them have ranged from days to several centuries.

But as stress is continually accumulating on that plate boundary, it’s certain that, one day, the Nankai Trough will let loose another great quake, one which could push a vast volume of seawater toward a large swath of western and central Japan, generating a tsunami up to 100 feet tall. The darkest scenario suggests that 230,000 people could perish, two million buildings would be damaged or destroyed, and the country would be left with a $1.4 trillion bill.

Naturally, a magnitude-7.1 quake on the Nankai Trough worries scientists. Aftershocks (a series of smaller-magnitude quakes) are a guaranteed feature of potent quakes. But there is a small chance that a large quake will be followed by an even larger one, retrospectively making the first a foreshock.

“The earthquake changes the stress in the surrounding crust a little bit,” says Hubbard. Using the energy released during the August 8 rupture, and decoding the seismic waves created during the quake, scientists can estimate how much stress gets shifted to surrounding faults.

The worry is that some of the stress released by one quake gets transferred to a big fault that hasn’t ruptured in a very long time but is ready to fold like an explosive house of cards. “You never know which increment of stress is gonna be the one that pushes it over the edge.”

Scientists cannot tell whether a large quake is a foreshock until a larger quake occurs. But the possibility remains that the August 8 temblor is a foreshock to something considerably worse. Statistically, it’s unlikely. But there is additional context to why that megaquake advisory was issued: the specter of 2011’s magnitude-9.1 Tōhoku earthquake and tsunami, which killed 18,000 people, still haunts the Japanese government and the nation’s geoscientists. 

Hubbard explains that, two days before that quake struck off Japan’s eastern seaboard, there was a magnitude-7.2 event in the same area—now known to be a foreshock to the catastrophe. Reportedly, authorities in Japan regretted not highlighting that possibility in advance; had they done so, people on the eastern seaboard might have been better prepared, and better able, to escape their fate.

A sign to get prepared

In response, Japan’s government created new protocols for signaling that foreshock possibility. Most magnitude-7.0-or-so quakes will not be followed by a ‘megaquake advisory’. Only those happening in tectonic settings capable of triggering truly gigantic quakes will—and that includes the Nankai Trough.

Crucially, this advisory is not a warning that a megaquake is imminent. It means: “be ready for when the big earthquake comes,” says Hubbard. Nobody is mandated to evacuate, but they are asked to know their escape routes. Meanwhile, local news reports that nursing homes and hospitals in the region are tallying emergency supplies while moving immobile patients to higher floors or other locations. The high-speed Shinkansen railway trains are running at a reduced maximum speed, and certain flights are carrying more fuel than usual in case they need to divert.

Earthquake advisories aren’t new. “California has something similar, and has issued advisories before,” says Wendy Bohon, an independent earthquake geologist. In September 2016, for example, a swarm of hundreds of modest quakes caused the U.S. Geological Survey to publicly advise that, for a week, there was a 0.03 to 1% chance of a magnitude-7.0-or-greater quake rocking the Southern San Andreas Fault—an outcome that fortunately didn’t come to pass.

But this megaquake advisory is Japan’s first, and it will have both pros and cons. “There are economic and social consequences to this,” says Bohon. Some confusion about how to respond has been reported, and widespread cancellations of travel to the region will come with a price tag. 

But calm reactions to the advisory seem to be the norm, and (ideally) this advisory will result in an increased understanding of the threat of the Nankai Trough. “It really is about raising awareness,” says Adam Pascale, chief scientist at the Seismology Research Centre in Melbourne, Australia. “It’s got everyone talking. And that’s the point.”

Geoscientists are also increasingly optimistic that the August 8 quake isn’t a harbinger of seismic pandemonium. “This thing is way off to the extreme margin of the actual Nankai rupture zone,” says Tobin—meaning it may not even count as being in the zone of tectonic concern. 

A blog post co-authored by Shinji Toda, a seismologist at Tōhoku University in Sendai, Japan, also estimates that any stress transferal to the dangerous parts of the Trough is negligible. There is no clear evidence that the plate boundary is acting weirdly. And with each day that goes by, the odds of the August 8 quake being a foreshock drop even further.

Tech defenses

But if a megaquake did suddenly emerge, Japan has a technological shield that may mitigate a decent portion of the disaster. 

Buildings are commonly fitted with dampers that allow them to withstand dramatic quake-triggered shaking. And like America’s West Coast, the entire archipelago has a sophisticated earthquake early-warning system: seismometers close to the quake’s origin listen to its seismic screams, and software makes a quick estimate of the rupture’s magnitude and shaking intensity before beaming it to people’s devices, giving them invaluable seconds to take cover. Automatic countermeasures also slow trains and control machinery in factories, hospitals, and office buildings to minimize damage from the incoming shaking.
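Those invaluable seconds come from physics: the fast but relatively weak P waves a seismometer detects first outrun the slower, damaging S waves, and the gap widens with distance from the rupture. A rough sketch, using illustrative crustal wave speeds and an assumed alert latency (not figures from this article):

```python
# Why early warning buys seconds: P waves (detected first) travel faster than
# the damaging S waves, so distant locations get an alert before strong shaking.
# Wave speeds and processing delay below are illustrative assumptions.

P_WAVE_KM_S = 6.0   # typical crustal P-wave speed
S_WAVE_KM_S = 3.5   # typical crustal S-wave speed
PROCESSING_S = 5.0  # assumed detection + alert-broadcast latency

def warning_seconds(distance_km: float) -> float:
    """Seconds between the alert arriving and the damaging S waves arriving."""
    p_arrival = distance_km / P_WAVE_KM_S
    s_arrival = distance_km / S_WAVE_KM_S
    return max(0.0, s_arrival - p_arrival - PROCESSING_S)

for d in (50, 150, 300):
    print(f"{d} km from the rupture: ~{warning_seconds(d):.0f} s of warning")
```

The sketch also shows the system’s well-known blind spot: very close to the rupture, the S waves arrive before the alert can be processed and broadcast.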

A tsunami early-warning system also kicks into gear if activated, beaming evacuation notices to phones, televisions, radios, sirens, and myriad specialized receivers in buildings in the afflicted region—giving people several minutes to flee. A megaquake advisory may be new, but for a population highly knowledgeable about earthquake and tsunami defense, it’s just another layer of protection.

The advisory has had other effects too: it’s caused those in another imperiled part of the world to take notice. The Cascadia Subduction Zone offshore from the US Pacific Northwest is also capable of producing both titanic quakes and prodigious tsunamis. Its last grand performance, in 1700, created a tsunami that not only inundated large sections of the North American coast but also swamped parts of Japan, all the way across the ocean.

Japan’s megaquake advisory has got Tobin thinking: “What would we do if our subduction zone starts acting weird?” he says—which includes a magnitude-7.0 quake in the Cascadian depths. “There is not a protocol in place the way there is in Japan.” Tobin speculates that a panel of experts would quickly assemble, and a statement – perhaps one not too dissimilar to Japan’s own advisory – would emerge from the U.S. Geological Survey. Like Japan, “we would have to be very forthright about the uncertainty,” he says.

Whether it’s Japan or the US or anywhere else, such advisories aren’t meant to engender panic. “You don’t want people to live their lives in fear,” says Hubbard. But it’s no bad thing to draw attention to the fact that Earth can sometimes be an unforgiving place to live.

Robin George Andrews is an award-winning science journalist and doctor of volcanoes based in London. He regularly writes about the Earth, space, and planetary sciences, and is the author of two critically acclaimed books: Super Volcanoes (2021) and How To Kill An Asteroid (October 2024).

This researcher wants to replace your brain, little by little

A US agency pursuing moonshot health breakthroughs has hired a researcher advocating an extremely radical plan for defeating death.

His idea? Replace your body parts. All of them. Even your brain. 

Jean Hébert, a new hire with the US Advanced Research Projects Agency for Health (ARPA-H), is expected to lead a major new initiative around “functional brain tissue replacement,” the idea of adding youthful tissue to people’s brains. 

President Joe Biden created ARPA-H in 2022, as an agency within the Department of Health and Human Services, to pursue what he called  “bold, urgent innovation” with transformative potential. 

The brain renewal concept could have applications such as treating stroke victims, who lose areas of brain function. But Hébert, a biologist at the Albert Einstein College of Medicine, has most often proposed total brain replacement, along with replacing other parts of our anatomy, as the only plausible means of avoiding death from old age.

As he described in his 2020 book, Replacing Aging, Hébert thinks that to live indefinitely people must find a way to substitute all their body parts with young ones, much like a high-mileage car is kept going with new struts and spark plugs.

The idea has a halo of plausibility since there are already liver transplants and titanium hips, artificial corneas and substitute heart valves. The trickiest part is your brain. That ages, too, shrinking dramatically in old age. But you don’t want to swap it out for another—because it is you.

And that’s where Hébert’s research comes in. He’s been exploring ways to “progressively” replace a brain by adding bits of youthful tissue made in a lab. The process would have to be done slowly enough, in steps, that your brain could adapt, relocating memories and your self-identity.  

During a visit this spring to his lab at Albert Einstein, Hébert showed MIT Technology Review how he has been carrying out initial experiments with mice, removing small sections of their brains and injecting slurries of embryonic cells. It’s a step toward proving whether such youthful tissue can survive and take over important functions.

To be sure, the strategy is not widely accepted, even among researchers in the aging field. “On the surface it sounds completely insane, but I was surprised how good a case he could make for it,” says Matthew Scholz, CEO of aging research company Oisín Biotechnologies, who met with Hébert this year. 

Scholz is still skeptical though. “A new brain is not going to be a popular item,” he says. “The surgical element of it is going to be very severe, no matter how you slice it.”

Now, though, Hébert’s ideas appear to have gotten a huge endorsement from the US government. Hébert told MIT Technology Review that he had proposed a $110 million project to ARPA-H to prove his ideas in monkeys and other animals, and that the government “didn’t blink” at the figure. 

ARPA-H confirmed this week that it had hired Hébert as a program manager. 

The agency, modeled on DARPA, the Department of Defense organization that developed stealth fighters, gives managers unprecedented leeway in awarding contracts to develop novel technologies. Among its first programs are efforts to develop at-home cancer tests and cure blindness with eye transplants.


It may be several months before details of the new project are announced, and it’s possible that ARPA-H will establish more conventional goals like treating stroke victims and Alzheimer’s patients, whose brains are damaged, rather than the more radical idea of extreme life extension. 

“If it can work, forget aging; it would be useful for all kinds of neurodegenerative disease,” says Justin Rebo, a longevity scientist and entrepreneur.

But defeating death is Hébert’s stated aim. “I was a weird kid and when I found out that we all fall apart and die, I was like, ‘Why is everybody okay with this?’ And that has pretty much guided everything I do,” he says. “I just prefer life over this slow degradation into nonexistence that biology has planned for all of us.”

Hébert, now 58, also recalls when he began thinking that the human form might not be set in stone. It was upon seeing the 1973 movie Westworld, in which the gun-slinging villain, played by Yul Brynner, turns out to be an android. “That really stuck with me,” Hébert said.

Lately, Hébert has become something of a star figure among immortalists, a fringe community devoted to never dying. That’s because he’s an established scientist who is willing to propose extreme steps to avoid death. “A lot of people want radical life extension without a radical approach. People want to take a pill, and that’s not going to happen,” says Kai Micah Mills, who runs a company, Cryopets, developing ways to deep-freeze cats and dogs for future reanimation.

The reason pharmaceuticals won’t ever stop aging, Hébert says, is that time affects all of our organs and cells and even degrades substances such as elastin, one of the molecular glues that holds our bodies together. So even if, say, gene therapy could rejuvenate the DNA inside cells, a concept some companies are exploring, Hébert believes we’re still doomed as the scaffolding around them comes undone.

One organization promoting Hébert’s ideas is the Longevity Biotech Fellowship (LBF), a self-described group of “hardcore” life extension enthusiasts, which this year published a technical roadmap for defeating aging altogether. In it, they used data from Hébert’s ARPA-H proposal to argue in favor of extending life with gradual brain replacement for elderly subjects, as well as transplant of their heads onto the bodies of “non-sentient” human clones, raised to lack a functioning brain of their own, a procedure they referred to as “body transplant.”

Such a startling feat would involve several technologies that don’t yet exist, including a means to attach a transplanted head to a spinal cord. Even so, the group rates “replacement” as the most likely way to conquer death, claiming it would take only 10 years and $3.6 billion to demonstrate.

“It doesn’t require you to understand aging,” says Mark Hamalainen, co-founder of the research and education group. “That is why Jean’s work is interesting.”

Hébert’s connections to such far-out concepts (he serves as a mentor in LBF’s training sessions) could make him an edgy choice for ARPA-H, a young agency whose budget is $1.5 billion a year.

For instance, Hébert recently said on a podcast with Hamalainen that human fetuses might be used as a potential source of life-extending parts for elderly people. That would be ethical to do, Hébert said during the program, if the fetus is young enough that there “are no neurons, no sentience, and no person.” And according to a meeting agenda viewed by MIT Technology Review, Hébert was also a featured speaker at an online pitch session held last year on full “body replacement,” which included biohackers and an expert in primate cloning.

Hébert declined to describe the session, which he said was not recorded “out of respect for those who preferred discretion.” But he’s in favor of growing non-sentient human bodies. “I am in conversation with all these groups because, you know, not only is my brain slowly deteriorating, but so is the rest of my body,” says Hébert. “I’m going to need other body parts as well.”

The focus of Hébert’s own scientific work is the neocortex, the outer part of the brain that looks like a pile of extra-thick noodles and which houses most of our senses, reasoning, and memory. The neocortex is “arguably the most important part of who we are as individuals,” says Hébert, as well as “maybe the most complex structure in the world.”

There are two reasons he believes the neocortex could be replaced, albeit only slowly. The first is evidence from rare cases of benign brain tumors, like a man described in the medical literature who developed a growth the size of an orange. Yet because it grew very slowly, the man’s brain was able to adjust, shifting memories elsewhere, and his behavior and speech never seemed to change—even when the tumor was removed. 

That’s proof, Hébert thinks, that replacing the neocortex little by little could be achieved “without losing the information encoded in it” such as a person’s self-identity.

The second source of hope, he says, is experiments showing that fetal-stage cells can survive, and even function, when transplanted into the brains of adults. For instance, medical tests underway are showing that young neurons can integrate into the brains of people who have epilepsy  and stop their seizures.  

“It was these two things together—the plastic nature of brains and the ability to add new tissue—that, to me, were like, ‘Ah, now there has got to be a way,’” says Hébert.


One challenge ahead is how to manufacture the replacement brain bits, or what Hébert has called “facsimiles” of neocortical tissue. During a visit to his lab at Albert Einstein, Hébert described plans to manually assemble chunks of youthful brain tissue using stem cells. These parts, he says, would not be fully developed, but instead be similar to what’s found in a still-developing fetal brain. That way, upon transplant, they’d be able to finish maturing, integrate into your brain, and be “ready to absorb and learn your information.”

To design the youthful bits of neocortex, Hébert has been studying brains of aborted human fetuses 5 to 8 weeks of age. He’s been measuring what cells are present, and in what numbers and locations, to try to guide the manufacture of similar structures in the lab.

“What we’re engineering is a fetal-like neocortical tissue that has all the cell types and structure needed to develop into normal tissue on its own,” says Hébert. 

Part of the work has been carried out by a startup company, BE Therapeutics (it stands for Brain Engineering), located in a suite on Einstein’s campus and funded by Apollo Health Ventures and VitaDAO, with contributions from a New York State development fund. The company had only two employees when MIT Technology Review visited this spring, and its future is uncertain, says Hébert, now that he’s joining ARPA-H and closing his lab at Einstein.

Because it’s often challenging to manufacture even a single cell type from stem cells, making a facsimile of the neocortex involving a dozen cell types isn’t an easy project. In fact, it’s just one of several scientific problems standing between you and a younger brain, some of which might never have practical solutions. “There is a saying in engineering. You are allowed one miracle, but if you need more than one, find another plan,” says Scholz.

Perhaps the crucial unknown is whether young bits of neocortex will ever correctly function inside an elderly person’s brain, for example by establishing connections or storing and sending electro-chemical information. Despite evidence that the brain can incorporate individual transplanted cells, that’s never been robustly proven for larger bits of tissue, says Rusty Gage, a biologist at the Salk Institute in La Jolla, Calif., who is considered a pioneer of neural transplants. He says researchers for years have tried to transplant larger parts of fetal animal brains into adult animals, but with inconclusive results. “If it worked, we’d all be doing more of it,” he says.

The problem, says Gage, isn’t whether the tissue can survive, but whether it can participate in the workings of an existing brain. “I am not dissing his hypothesis. But that’s all it is,” says Gage. “Yes, fetal or embryonic tissue can mature in the adult brain. But whether it replaces the function of the dysfunctional area is an experiment he needs to do, if he wants to convince the world he has actually replaced an aged section with a new section.”

In his new role at ARPA-H, it’s expected that Hébert will have a large budget to fund scientists to try to prove his ideas can work. He agrees it won’t be easy. “We’re, you know, a couple steps away from reversing brain aging,” says Hébert. “A couple of big steps away, I should say.”

What’s next for drones

MIT Technology Review’s What’s Next series looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of them here.

Drones have been a mainstay technology among militaries, hobbyists, and first responders alike for more than a decade, and in that time the variety of systems available has skyrocketed. No longer limited to small quadcopters with meager battery life, drones are aiding search and rescue efforts, reshaping wars in Ukraine and Gaza, and delivering time-sensitive packages of medical supplies. And billions of dollars are being plowed into building the next generation of fully autonomous systems. 

These developments raise a number of questions: Are drones safe enough to be flown in dense neighborhoods and cities? Is it a violation of people’s privacy for police to fly drones overhead at an event or protest? Who decides what level of drone autonomy is acceptable in a war zone?

Those questions are no longer hypothetical. Advancements in drone technology and sensors, falling prices, and easing regulations are making drones cheaper, faster, and more capable than ever. Here’s a look at four of the biggest changes coming to drone technology in the near future.

Police drone fleets

Today more than 1,500 US police departments have drone programs, according to tracking conducted by the Atlas of Surveillance. Trained police pilots use drones for search and rescue operations, monitoring events and crowds, and other purposes. The Scottsdale Police Department in Arizona, for example, successfully used a drone to locate a lost elderly man with dementia, says Rich Slavin, Scottsdale’s assistant chief of police. He says the department has had useful but limited experiences with drones to date, but its pilots have often been hamstrung by the “line of sight” rule from the Federal Aviation Administration (FAA). The rule stipulates that pilots must be able to see their drones at all times, which severely limits the drone’s range.

Soon, that will change. On a rooftop somewhere in the city, Scottsdale police will in the coming months install a new police drone capable of autonomous takeoff, flight, and landing. Slavin says the department is seeking a waiver from the FAA to be able to fly its drone past the line of sight. (Hundreds of police agencies have received a waiver from the FAA since the first was granted in 2019.) The drone, which can fly up to 57 miles per hour, will go on missions as far as three miles from its docking station, and the department says it will be used for things like tracking suspects or providing a visual feed of an officer at a traffic stop who is waiting for backup. 

“The FAA has been much more progressive in how we’re moving into this space,” Slavin says. That could mean that around the country, the sight (and sound) of a police drone soaring overhead will become much more common. 

The Scottsdale department says the drone, which it is purchasing from Aerodome, will kick off its drone-as-first-responder program and will play a role in the department’s new “real-time crime center.” These sorts of centers are becoming increasingly common in US policing, and allow cities to connect cameras, license plate readers, drones, and other monitoring methods to track situations on the fly. The rise of the centers, and their associated reliance on drones, has drawn criticism from privacy advocates who say they conduct a great deal of surveillance with little transparency about how footage from drones and other sources will be used or shared. 

In 2019, the police department in Chula Vista, California, was the first to receive a waiver from the FAA to fly beyond line of sight. The program sparked criticism from members of the community who alleged the department was not transparent about the footage it collected or how it would be used. 

Jay Stanley, a senior policy analyst at the American Civil Liberties Union’s Speech, Privacy, and Technology Project, says the waivers exacerbate existing privacy issues related to drones. If the FAA continues to grant them, police departments will be able to cover far more of a city with drones than ever, all while the legal landscape is murky about whether this would constitute an invasion of privacy. 

“If there’s an accumulation of different uses of this technology, we’re going to end up in a world where from the moment you step out of your front door, you’re going to feel as though you’re under the constant eye of law enforcement from the sky,” he says. “It may have some real benefits, but it is also in dire need of strong checks and balances.”

Scottsdale police say the drone could be used in a variety of scenarios, such as responding to a burglary in progress or tracking a driver with suspected connection to a kidnapping. But the real benefit, Slavin says, will come from pairing it with other existing technologies, like automatic license plate readers and hundreds of cameras placed around the city. “It can get to places very, very quickly,” he says. “It gives us real-time intelligence and helps us respond faster and smarter.”

While police departments might indeed benefit from drones in those situations, Stanley says the ACLU has found that many deploy them for far more ordinary cases, like reports of a kid throwing a ball against a garage or of “suspicious persons” in an area.

“It raises the question about whether these programs will just end up being another way in which vulnerable communities are over-policed and nickeled and dimed by law enforcement agencies coming down on people for all kinds of minor transgressions,” he says.

Drone deliveries, again

Perhaps no drone technology is more overhyped than home deliveries. For years, tech companies have teased futuristic renderings of a drone dropping off a package on your doorstep just hours after you ordered it. But they’ve never managed to expand these services much beyond small-scale pilot projects, at least in the US, again largely because of the FAA’s line-of-sight rules. 

But this year, regulatory changes are coming. Like police departments, Amazon’s Prime Air program was previously limited to flying its drones within the pilot’s line of sight. That’s because drone pilots don’t have radar, air traffic controllers, or any of the other systems commercial flight relies on to monitor airways and keep them safe. To compensate, Amazon spent years developing an onboard system that would allow its drones to detect nearby objects and avoid collisions. The company says it showed the FAA in demonstrations that its drones could fly safely in the same airspace as helicopters, planes, and hot air balloons. 

In May, Amazon announced the FAA had granted the company a waiver and permission to expand operations in Texas, more than a decade after the Prime Air project started. And in July, the FAA cleared one more roadblock by allowing two companies—Zipline as well as Google’s Wing Aviation—to fly in the same airspace simultaneously without the need for visual observers. 

While all this means your chances of receiving a package via drone have ticked up ever so slightly, the more compelling use case might be medical deliveries. Shakiba Enayati, an assistant professor of supply chains at the University of Missouri–St. Louis, has spent years researching how drones could conduct last-mile deliveries of vaccines, antivenom, organs, and blood in remote places. She says her studies have found drones to be game changers for getting medical supplies to underserved populations, and if the FAA extends these regulatory changes, it could have a real impact. 

That’s especially true in the steps leading up to an organ transplant, she says. Before an organ can be transported to a recipient, a number of blood tests must be sent back and forth to make sure the recipient can accept it, which takes time if the blood is being transferred by car or even helicopter. “In these cases, the clock is ticking,” Enayati says. If drones could be used at scale for this step, it would be a significant improvement.

“If the technology is supporting the needs of organ delivery, it’s going to make a big change in such an important arena,” she says.

That development could come sooner than using drones for delivery of the actual organs, which have to be transported under very tightly controlled conditions to preserve them.

Domesticating the drone supply chain

Signed into law last December, the American Security Drone Act bars federal agencies from buying drones from countries thought to pose a threat to US national security, such as Russia and China. That’s significant. China is the undisputed leader when it comes to manufacturing drones and drone parts: over 90% of law enforcement drones in the US are made by Shenzhen-based DJI, and many of the drones used by both sides in the war in Ukraine are made by Chinese companies. 

The American Security Drone Act is part of an effort to curb that reliance on China. (Meanwhile, China is stepping up export restrictions on drones with military uses.) As part of the act, the US Department of Defense’s Defense Innovation Unit has created the Blue UAS Cleared List, a list of drones and parts the agency has investigated and approved for purchase. The list applies to federal agencies as well as programs that receive federal funding, which often means state police departments or other non-federal agencies. 

Since the US is set to spend such significant sums on drones—with $1 billion earmarked for the Department of Defense’s Replicator initiative alone—getting on the Blue List is a big deal. It means those federal agencies can make large purchases with little red tape. 

Allan Evans, CEO of US-based drone part maker Unusual Machine, says the list has sparked a significant rush of drone companies attempting to conform to the US standards. His company manufactures a first-person view flight controller that he hopes will become the first of its kind to be approved for the Blue List.

The American Security Drone Act is unlikely to affect private purchases in the US of drones used by videographers, drone racers, or hobbyists, which will overwhelmingly still be made by China-based companies like DJI. That means US-based drone companies will, at least in the short term, survive only by catering to the US defense market. 

“Basically any US company that isn’t willing to have ancillary involvement in defense work will lose,” Evans says. 

The coming months will show the law’s true impact: Because the US fiscal year ends in September, Evans says he expects to see a host of agencies spending their use-it-or-lose-it funding on US-made drones and drone components in the next month. “That will indicate whether the marketplace is real or not, and how much money is actually being put toward it,” he says.

Autonomous weapons in Ukraine

The drone war in Ukraine has largely been one of attrition. Drones have been used extensively for surveying damage, finding and tracking targets, or dropping weapons since the war began, but on average these quadcopter drones last just three flights before being shot down or rendered unnavigable by GPS jamming. As a result, both Ukraine and Russia have prioritized accumulating high volumes of drones with the expectation that they won’t last long in battle. 

Now they’re having to rethink that approach, according to Andriy Dovbenko, founder of the UK-Ukraine Tech Exchange, a nonprofit that helps startups involved in Ukraine’s war effort and eventual reconstruction raise capital. While working with drone makers in Ukraine, he says, he has seen the demand for technology shift from big shipments of simple commercial drones to a pressing need for drones that can navigate autonomously in an environment where GPS has been jammed. With 70% of the front lines suffering from jamming, according to Dovbenko, both Russian and Ukrainian drone investment is now focused on autonomous systems. 

That’s no small feat. Drone pilots usually rely on video feeds from the drone as well as GPS technology, neither of which is available in a jammed environment. Instead, autonomous drones operate with various types of sensors like LiDAR to navigate, though this can be tricky in fog or other inclement weather. Autonomous drones are a new and rapidly changing technology, still being tested by US-based companies like Shield AI. The evolving war in Ukraine is raising the stakes and the pressure to deploy affordable and reliable autonomous drones.  

The transition toward autonomous weapons also raises serious yet largely unanswered questions about how far humans should be taken out of the loop in decision-making. As the war rages on and the need for more capable weaponry rises, Ukraine will likely be the testing ground for whether and how that moral line gets drawn. But Dovbenko says stopping to find the line during an ongoing war is impossible. 

“There is a moral question about how much autonomy you can give to the killing machine,” Dovbenko says. “This question is not being asked right now in Ukraine because it’s more of a matter of survival.”