Mass-market military drones have changed the way wars are fought

Mass-market military drones are one of MIT Technology Review’s 10 Breakthrough Technologies of 2023. Explore the rest of the list here.

When the United States first fired a missile from an armed Predator drone at suspected Al Qaeda leaders in Afghanistan on November 14, 2001, it was clear that warfare had permanently changed. During the two decades that followed, drones became the most iconic instrument of the war on terror. Highly sophisticated, multimillion-dollar US drones were repeatedly deployed in targeted killing campaigns. But their use worldwide was limited to powerful nations.

Then, as the navigation systems and wireless technologies in hobbyist drones and consumer electronics improved, a second style of military drone appeared—not in Washington, but in Istanbul. And it caught the world’s attention in Ukraine in 2022, when it proved itself capable of holding back one of the most formidable militaries on the planet. 

The Bayraktar TB2 drone, a Turkish-made aircraft from the Baykar corporation, marks a new chapter in the still-new era of drone warfare. Cheap, widely available drones have changed how smaller nations fight modern wars. Although Russia’s invasion of Ukraine brought these new weapons into the popular consciousness, there’s more to their story.

Explosions in Armenia, broadcast on YouTube in 2020, revealed this new shape of war to the world. There, in a blue-tinted video, a radar dish spins underneath cyan crosshairs until it erupts into a cloud of smoke. The sequence repeats twice: a crosshair settles on a vehicle mounted with a spinning dish sensor, its earthen barriers no defense against attack from the air, and the strike leaves behind only an empty crater.

The clip, released on YouTube on September 27, 2020, was one of many the Azerbaijan military published during the Second Nagorno-Karabakh War, which it launched against neighboring Armenia that same day. The video was recorded by the TB2.

In that conflict and others, the TB2 has filled a void in the arms market created by the US government’s refusal to export its high-end Predator family of drones. To get around export restrictions on drone models and other critical military technologies, Baykar turned to technologies readily available on the commercial market to make a new weapon of war.

The TB2 is built in Turkey from a mix of domestically made parts and parts sourced from international commercial markets. Investigations of downed Bayraktars have revealed components sourced from US companies, including a GPS receiver made by Trimble, an airborne modem/transceiver made by Viasat, and a Garmin GNC 255 navigation radio. Garmin, which makes consumer GPS products, released a statement noting that its navigation unit found in TB2s “is not designed or intended for military use, and it is not even designed or intended for use in drones.” But it’s there.

Commercial technology makes the TB2 appealing for another reason: while the US-made Reaper drone costs $28 million, the TB2 only costs about $5 million. Since its development in 2014, the TB2 has shown up in conflicts in Azerbaijan, Libya, Ethiopia, and now Ukraine. The drone is so much more affordable than traditional weaponry that Lithuanians have run crowdfunding campaigns to help buy them for Ukrainian forces.

The TB2 is just one of several examples of commercial drone technology being used in combat. The same DJI Mavic quadcopters that help real estate agents survey property have been deployed in conflicts in Burkina Faso and the Donbas region of Ukraine. Other DJI drone models have been spotted in Syria since 2013, and kit-built drones, assembled from commercially available parts, have seen widespread use.

These cheap, good-enough drones that are free of export restrictions have given smaller nations the kind of air capabilities previously limited to great military powers. While that proliferation may bring some small degree of parity, it comes with terrible human costs. Drone attacks can be described in sterile language, framed as missiles stopping vehicles. But what happens when that explosive force hits human bodies is visceral, tragic. It encompasses all the horrors of war, with the added voyeurism of an unblinking camera whose video feed is monitored by a participant in the attack who is often dozens, if not thousands, of miles away.

Emergency responders work to clear debris from a Russian Shahed-136 strike on a building in Kyiv as smoke pours out into the sky
Emergency responders work to clear debris from a building in Kyiv after a Russian strike by a Shahed-136 drone.
ED RAM / GUARDIAN / EYEVINE VIA REDUX

What’s more, as these weapons proliferate, larger powers will increasingly employ them in conventional warfare rather than rely on targeted killings. When Ukraine proved it was capable of holding back the Russian invasion, Russia unleashed a terror campaign against Ukrainian civilians via Iranian-made Shahed-136 drones. These self-detonating drones, which Russia launches in salvos, contain commercial parts from US companies. The waves of drone attacks have largely been intercepted by Ukrainian air defenses, but some have killed civilians. Because the Shahed-136 drones are so cheap to make, estimated at around $20,000, intercepting them with a far more expensive missile imposes a cost on the defender.

Export potential

The TB2 was developed by MIT graduate Selcuk Bayraktar, who researched advanced vertical landing patterns for drones while at the university. His namesake drone is a fixed-wing plane with modest specifications. It can communicate at a range of around 186 miles from its ground station and travels at 80 to 138 mph. At those speeds, a TB2 can stay in the sky for over 24 hours, comparable to higher-end drones like the Reaper and Gray Eagle.

From altitudes of up to 25,000 feet, the TB2 surveys the ground below, sharing video to coordinate long-range attacks or movements, or releasing laser-guided bombs on people, vehicles, or buildings.

But its most distinctive characteristic, says James Rogers, associate professor in war studies at the Danish Institute for Advanced Study, is that it’s “the first mass-produced drone system that medium and smaller states can get hold of.”

Before Baykar developed the TB2, the Turkish military wanted to buy Predator and Reaper drones from the US. Those are the remotely piloted planes that defined the US’s long wars in Afghanistan and Iraq. But drone exports from the US are governed by the Missile Technology Control Regime, a treaty whose members agree to limit access to particular types of weapons. The Trump administration relaxed adherence to these rules in 2020 (a change upheld by the Biden administration), but the previous enforcement of the rules, combined with concern that Turkey would use the drones to violate human rights, prevented a sale in 2012.

Turkey is not alone in being denied the ability to purchase US-made drones. Critics of the treaty point out that the US could sell fighter jets that require human pilots to Egypt and other countries, but won’t sell those same countries armed drones.

But commercial and military technology have a way of driving each other. Silicon Valley is largely an outgrowth of Cold War military technology research, and consumer electronics, especially those tied to computing and navigation systems, have long been subsidized by military research. GPS was once a military technology so sensitive that civilian use of the signal was intentionally degraded until 2000.

Now, commercial access to the full signal, in conjunction with cheap and powerful commercial GPS receivers like the one found in the Bayraktar, allows drones to perform at near-military standards, without special access to military signals or congressional oversight. 

The Turkish military debuted the Bayraktar in 2016, targeting members of the PKK, a Kurdish militia. Since then, the drone has seen action with several other militaries, most famously Ukraine and Azerbaijan but also on one side of the Libyan Civil War. In 2022, the small West African nation of Togo, with a military budget of just under $114 million, purchased a consignment of Bayraktar TB2s.

“I think Turkey has made a real conscious decision to focus on the purchase and development of the TB2, making it cheaper and more widely available—in some cases ‘free’ through donations,” says Rogers.

In 2021 Ethiopia received the TB2 and other foreign-supplied drones, which it used to halt and then reverse an advance by Tigrayan rebels on the capital that its ground forces couldn’t stop. Battlefield casualties directly resulting from the drones are hard to assess, but drone strikes on Tigrayan-held areas after the advance was halted killed at least 56 civilians.

“It is astonishing to think that Turkish drones, if we believe the accounts in Ethiopia, made the difference between an African nation’s regime falling or surviving. We got to the point where these drones are deciding the fate of nations,” says Rogers.

War hobbyists

The TB2, while modest in its abilities relative to other military drones, is an advanced piece of equipment that requires ground stations and a stretch of road to launch. But it reflects only one end of the spectrum of mass-market drones that have found their way onto battlefields. At the other end is the humble quadcopter.

By 2016, ISIS had modified DJI Phantom quadcopters to drop grenades. These weapons joined the arsenal of scratch-built ISIS drones, using parts that investigators with Conflict Armament Research had traced to mass-market commercial suppliers. This tactic spread and was soon common among armed groups. In 2018, Ukrainian forces fighting in Donetsk used a modified DJI Mavic to drop bombs on trenches held by Russian-backed separatists. Today these Chinese drones are found virtually anywhere in the world where there is combat. 

grid of DJI drones on top of cases on an airstrip
DJI Matrice 300 RTK drones purchased for the Armed Forces of Ukraine.
EVGEN KOTENKO/UKRINFORM/ABACA/SIPA USA VIA AP IMAGES

“When it comes to this war in Ukraine, it is truly the competent use of quadcopters for a variety of tasks, including for artillery and mortar units, that has really made this cheap, available, expendable [unmanned aerial vehicle] very lethal and very dangerous,” says Samuel Bendett, an analyst at the Center for Naval Analyses and adjunct senior fellow at the Center for a New American Security.

In April 2022, China’s hobbyist drone maker DJI announced it was suspending all sales in Ukraine and Russia. But its quadcopters, especially the popular and affordable Mavic family, still find their way into military use, as soldiers buy and deploy the drones themselves. Sometimes regional governments even pitch in.

Even if these drones don’t release bombs, soldiers have learned to fear the buzzing of quadcopter engines overhead as the flights often presage an incoming artillery barrage. In one moment, a squad is a flicker of light, visible in thermal imaging, captured by a drone camera and shared with the tablet of an enemy hiding nearby. In the next, the soldiers’ execution is filmed from above, captured in 4K resolution by a weapon available for sale at any Best Buy.

Kelsey D. Atherton is a military technology journalist based in Albuquerque, New Mexico. His work has appeared in Popular Science, the New York Times, and Slate.

These simple design rules could turn the chip industry on its head

RISC-V is one of MIT Technology Review’s 10 Breakthrough Technologies of 2023. Explore the rest of the list here.

Python, Java, C++, R. In the seven decades or so since the computer was invented, humans have devised many programming languages—largely mishmashes of English words and mathematical symbols—to command transistors to do our bidding. 

But the silicon switches in your laptop’s central processor don’t inherently understand the word “for” or the symbol “=.” For a chip to execute your Python code, software must translate these words and symbols into instructions a chip can use.  

Engineers designate specific binary sequences to prompt the hardware to perform certain actions. The code “100000,” for example, could order a chip to add two numbers, while the code “100100” could ask it to copy a piece of data. These binary sequences form the chip’s fundamental vocabulary, known as the computer’s instruction set. 
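
To make the idea concrete, here is a toy decoder in Python. The opcodes mirror the hypothetical examples above and are not taken from any real instruction set; a physical chip does this with wired logic rather than software.

```python
# Toy instruction decoder: maps made-up 6-bit opcode strings to operations.
# These opcodes echo the hypothetical examples in the text, not a real ISA.

def execute(opcode: str, a: int, b: int = 0) -> int:
    if opcode == "100000":    # hypothetical "add two numbers" opcode
        return a + b
    if opcode == "100100":    # hypothetical "copy a piece of data" opcode
        return a
    raise ValueError(f"unknown opcode: {opcode}")

print(execute("100000", 2, 3))  # prints 5
print(execute("100100", 7))     # prints 7
```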

For years, the chip industry has relied on a variety of proprietary instruction sets. Two major types dominate the market today: x86, which is used by Intel and AMD, and Arm, made by the company of the same name. Companies must license these instruction sets—which can cost millions of dollars for a single design. And because x86 and Arm chips speak different languages, software developers must make a version of the same app to suit each instruction set. 

Lately, though, many hardware and software companies worldwide have begun to converge around a publicly available instruction set known as RISC-V. It’s a shift that could radically change the chip industry. RISC-V proponents say that this instruction set makes computer chip design more accessible to smaller companies and budding entrepreneurs by liberating them from costly licensing fees. 

“There are already billions of RISC-V-based cores out there, in everything from earbuds all the way up to cloud servers,” says Mark Himelstein, the CTO of RISC-V International, a nonprofit supporting the technology. 

In February 2022, Intel itself pledged $1 billion to develop the RISC-V ecosystem, along with other priorities. While Himelstein predicts it will take a few years before RISC-V chips are widespread among personal computers, the first laptop with a RISC-V chip, the Roma by Xcalibyte and DeepComputing, became available in June for pre-order.

What is RISC-V?

You can think of RISC-V (pronounced “risk five”) as a set of design norms, like Bluetooth, for computer chips. It’s known as an “open standard.” That means anyone—you, me, Intel—can participate in the development of those standards. In addition, anyone can design a computer chip based on RISC-V’s instruction set. Those chips would then be able to execute any software designed for RISC-V. (Note that technology based on an “open standard” differs from “open-source” technology. An open standard typically designates technology specifications, whereas “open source” generally refers to software whose source code is freely available for reference and use.)

A group of computer scientists at UC Berkeley developed the basis for RISC-V in 2010 as a teaching tool for chip design. Proprietary central processing units (CPUs) were too complicated and opaque for students to learn from. RISC-V’s creators made the instruction set public and soon found themselves fielding questions about it. By 2015, a group of academic institutions and companies, including Google and IBM, founded RISC-V International to standardize the instruction set. 

The most basic version of RISC-V consists of just 47 instructions, such as commands to load a number from memory and to add numbers together. However, RISC-V also offers more instructions, known as extensions, making it possible to add features such as vector math for running AI algorithms. 
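
As a loose software analogy, a simulator might implement the small base set and then layer extensions on top. The mnemonics below (add, sub, vadd) are real RISC-V names, but the table-of-functions encoding and the two-operand semantics are a deliberate simplification for illustration:

```python
# Sketch of RISC-V's "small base plus optional extensions" layering.
# Mnemonics are real RISC-V names; everything else here is a toy model.

BASE = {
    "add": lambda a, b: a + b,   # base integer add
    "sub": lambda a, b: a - b,   # base integer subtract
}

VECTOR_EXT = {
    # analogue of the "V" (vector) extension: elementwise math over vectors,
    # the kind of operation AI workloads lean on
    "vadd": lambda xs, ys: [x + y for x, y in zip(xs, ys)],
}

def make_isa(extensions=()):
    """Build an instruction table from the base set plus chosen extensions."""
    isa = dict(BASE)
    for ext in extensions:
        isa.update(ext)
    return isa

isa = make_isa(extensions=[VECTOR_EXT])
print(isa["add"](2, 3))             # prints 5
print(isa["vadd"]([1, 2], [3, 4]))  # prints [4, 6]
```

A chip that only needs integer math simply never opts into the vector table; one aimed at AI workloads adds it in, which is the flexibility the next paragraph describes.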

With RISC-V, you can design a chip’s instruction set to fit your needs, which “gives the freedom to do custom, application-driven hardware,” says Eric Mejdrich of Imec, a research institute in Belgium that focuses on nanoelectronics.

Previously, companies seeking CPUs generally bought off-the-shelf chips because it was too expensive and time-consuming to design them from scratch. Particularly for simpler devices such as alarms or kitchen appliances, these chips often had extra features, which could slow the appliance’s function or waste power. 

Himelstein touts Bluetrum, an earbud company based in China, as a RISC-V success story. Earbuds don’t require much computing capability, and the company found it could design simple chips that use RISC-V instructions. “If they had not used RISC-V, either they would have had to buy a commercial chip with a lot more [capability] than they wanted, or they would have had to design their own chip or instruction set,” says Himelstein. “They didn’t want either of those.”

RISC-V helps to “lower the barrier of entry” to chip design, says Mejdrich. RISC-V proponents offer public workshops on how to build a CPU based on RISC-V. And people who design their own RISC-V chips can now submit those designs to be manufactured free of cost via a partnership between Google, semiconductor manufacturer SkyWater, and chip design platform Efabless. 

What’s next for RISC-V

Balaji Baktha, the CEO of Bay Area–based startup Ventana Micro Systems, designs chips based on RISC-V for data centers. He says design improvements his team has made—possible only because of the flexibility that an open standard affords—have allowed these chips to perform calculations more quickly with less energy. In 2021, data centers accounted for about 1% of total electricity consumed worldwide, and that figure has been rising over the past several years, according to the International Energy Agency. RISC-V chips could help lower that footprint significantly, according to Baktha.

However, Intel and Arm’s chips remain popular, and it’s not yet clear whether RISC-V designs will supersede them. Companies need to convert existing software to be RISC-V compatible (the Roma supports most versions of Linux, the operating system released in the 1990s that helped drive the open-source revolution). And RISC-V users will need to watch out for developments that “bifurcate the ecosystem,” says Mejdrich—for example, if somebody develops a version of RISC-V that becomes popular but is incompatible with software designed for the original.

RISC-V International must also contend with geopolitical tensions that are at odds with the nonprofit’s open philosophy. Originally based in the US, the organization faced criticism from lawmakers that RISC-V could cause the US to lose its edge in the semiconductor industry and make Chinese companies more competitive. To dodge these tensions, the nonprofit relocated to Switzerland in 2020.

Looking ahead, Himelstein says the movement will draw inspiration from Linux. The hope is that RISC-V will make it possible for more people to bring their ideas for novel technologies to life. “In the end, you’re going to see much more innovative products,” he says. 

Sophia Chen is a science journalist based in Columbus, Ohio, who covers physics and computing. In 2022, she was the science communicator in residence at the Simons Institute for the Theory of Computing at the University of California, Berkeley.

How the James Webb Space Telescope broke the universe

The James Webb Space Telescope is one of MIT Technology Review’s 10 Breakthrough Technologies of 2023. Explore the rest of the list here.

Natalie Batalha was itching for data from the James Webb Space Telescope. It was a few months after the telescope had reached its final orbit, and her group at the University of California, Santa Cruz, had been granted time to observe a handful of exoplanets—planets that orbit around stars other than our sun.

Among the targets was WASP-39b, a scorching world that orbits a star some 700 light-years from Earth. The planet was discovered years ago. But in mid-July, when Batalha and her team got their hands on the first JWST observations of the distant world, they saw a clear signature of a gas that is common on Earth but had never been spotted before in the atmosphere of an exoplanet: carbon dioxide. On Earth, carbon dioxide is a key indicator of plant and animal life. WASP-39b, which takes just four Earth days to orbit its star, is too hot to be considered habitable. But the discovery could well herald more exciting detections—from more temperate worlds—in the future. And it came just a few days into the lifetime of JWST. “That was a very exciting moment,” says Batalha, whose group had gathered to glimpse the data for the first time. “The minute we looked, the carbon dioxide feature was just beautifully drawn out.”

This was no accident. JWST, a NASA-led collaboration between the US, Canada, and Europe, is the most powerful space telescope in history and can view objects 100 times fainter than what the Hubble Space Telescope can see. Almost immediately after it started full operations in July of 2022, incredible vistas from across the universe poured down, from images of remote galaxies at the dawn of time to amazing landscapes of nebulae, the dust-filled birthplaces of stars. “It’s just as powerful as we had hoped, if not more so,” says Gabriel Brammer, an astronomer at the University of Copenhagen in Denmark.

But the speed at which JWST has made discoveries is due to more than its intrinsic capabilities. Astronomers prepared for years for the observations it would make, developing algorithms that can rapidly turn its data into usable information. Much of the data is open access, allowing the astronomical community to comb through it almost as fast as it comes in. Its operators have also built on lessons learned from the telescope’s predecessor, Hubble, packing its observational schedule as much as possible.

For some, the sheer volume of extraordinary data has been a surprise. “It was more than we expected,” says Heidi Hammel, a NASA interdisciplinary scientist for JWST and vice president for science at the Association of Universities for Research in Astronomy in Washington, DC. “Once we went into operational mode, it was just nonstop. Every hour we were looking at a galaxy or an exoplanet or star formation. It was like a firehose.”

Now, months later, JWST continues to send down reams of data to astonished astronomers on Earth, and it is expected to transform our understanding of the distant universe, exoplanets, planet formation, galactic structure, and much more. Not all have enjoyed the flurry of activity, which at times has reflected an emphasis on speed over the scientific process, but there’s no doubt that JWST is enchanting audiences across the globe at a tremendous pace. The floodgates have opened—and they’re not shutting anytime soon.

Opening the pipe

JWST orbits the sun around a stable point 1.5 million kilometers from Earth. Its giant gold-coated primary mirror, which is as tall as a giraffe, is protected from the sun’s glare by a tennis-court-size sunshield, allowing unprecedented views of the universe in infrared light.

The telescope was a long time coming. First conceived in the 1980s, it was once planned for launch around 2007 at a cost of $1 billion. But its complexity caused extensive delays, devouring money until at one point it was dubbed “the telescope that ate astronomy.” When JWST finally launched, in December 2021, its estimated cost had ballooned to nearly $10 billion.

Even post-launch, there have been anxious moments. The telescope’s journey to its target location beyond the moon’s orbit took a month, and hundreds of moving parts were required to deploy its various components, including its enormous sunshield, which is needed to keep the infrared-­sensitive instruments cool.

But by now, the delays, the budget overruns, and most of the tensions have been overcome. JWST is hard at work, its activities carefully choreographed by the Space Telescope Science Institute (STScI) in Baltimore. Every week, a team plans out the telescope’s upcoming observations, pulling from a long-term schedule of hundreds of approved programs to be run in its first year of science, from July 2022 to June 2023.

The aim is to keep the telescope as busy as possible. “The worst thing we could do is have an idle telescope,” says Dave Adler at STScI, the head of long-range planning for JWST. “It’s not a cheap thing.” In the 1990s, Hubble would occasionally find itself twiddling its thumbs in space if programs were altered or canceled; JWST’s schedule is deliberately oversubscribed to prevent such issues. Onboard thrusters and reaction wheels, which spin to change the orientation, move the telescope with precision between various targets across the sky. “The goal is always to minimize the amount of time we’re not doing science,” says Adler.

The result of this packed schedule is that every day, JWST can collect more than 50 gigabytes of data, compared with just one or two gigabytes for Hubble. The data, which contains images and spectroscopic signatures (essentially light broken apart into its elements), is fed through an algorithm run by STScI. Known as a “pipeline,” it turns the telescope’s raw images and numbers into useful information. Some of this is released immediately on public servers, where it is picked up by eager scientists or even by Twitter bots such as the JWST Photo Bot. Other data is handed to scientists on programs that have proprietary windows, enabling them to take time analyzing their own data before it is released to the masses.

The galaxies of Stephan’s Quintet, in an image created with data from two of JWST’s infrared instruments. The leftmost galaxy appears to be part of the group but sits much closer to Earth.
NASA, ESA, CSA, STSCI

Pipelines are essentially pieces of code, made with programming languages like Python. They have long been used in astronomy but advanced considerably in 2004, after astronomers used Hubble to spend 1 million seconds observing an empty patch of sky. The goal was to look for remote galaxies in the distant universe, but the campaign would produce 800 exposures, far too many for anyone to combine by hand.

Instead, they developed a pipeline to turn the exposures into a usable image, a taxing technical challenge given that each image required its own calibration and alignment. “There was no way you could expect the community at that time to combine 800 exposures on their own,” says Anton Koekemoer, a research astronomer at STScI. “The goal was to enable science to be done much more quickly.” The incredible image resulting from those efforts revealed 10,000 galaxies stretching across the universe, in what came to be known as the Hubble Ultra Deep Field. 
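The combination step such a pipeline automates can be sketched in a few lines of Python. This assumes the exposures have already been calibrated and aligned, which the real Ultra Deep Field reduction also had to handle; the simulated frames below are stand-ins for illustration, not Hubble data.

```python
import numpy as np

def combine_exposures(exposures: np.ndarray) -> np.ndarray:
    """Median-combine aligned exposures: outliers such as cosmic-ray
    hits in any single frame drop out of the combined image."""
    return np.median(exposures, axis=0)

# 800 simulated 64x64 exposures of the same patch of sky
rng = np.random.default_rng(0)
stack = rng.normal(loc=100.0, scale=5.0, size=(800, 64, 64))
deep_image = combine_exposures(stack)
print(deep_image.shape)  # (64, 64)
```

Median-combining is a common choice here because, unlike a simple mean, it is insensitive to a bright artifact that appears in only one frame.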

With JWST, a single master pipeline developed by STScI takes images and data from all its instruments and makes them science-ready. Many astronomers, both amateur and professional, then use their own pipelines developed in the months and years before launch to further investigate the data. That’s why when JWST’s data began streaming down to Earth, astronomers were able to almost immediately understand what they were seeing, turning what would normally be months of analysis time into just hours of processing time.

“We were sitting there ready,” says Brammer. “All of a sudden, the pipe was open. We were ready to go.”

Galaxies everywhere 

Orbiting just a few hundred miles above Earth’s surface, the Hubble Space Telescope is close enough for astronauts to visit. And over the years, they did, undertaking a series of missions to repair and upgrade the telescope, starting with a trip to fix its infamously misshapen mirror—a problem discovered shortly after launch in 1990. JWST, which sits farther away than the moon, is on its own.   

Lee Feinberg, JWST’s optical telescope element manager at NASA’s Goddard Space Flight Center, was among those waiting to see whether the telescope would actually deliver. “We spent 20 years simulating the alignment of the telescope,” he says—that is, making sure that it could accurately point at targets across the sky. 

By March, the wait was over. JWST had reached its target location beyond the moon, and Feinberg and his colleagues were finally ready to start taking test images. As he walked into STScI one morning, one of those images, a test image of a star, was put up on screen. It contained an amazing surprise. “There were literally hundreds of galaxies,” says Feinberg. “We were just blown away.” So detailed was the image that it revealed galaxies stretching away into the distant universe, even though it hadn’t been taken for such a purpose. “Everybody was in disbelief how well it was working,” he says.

Following a further process of testing and calibrating instruments to get the telescope up and running, one of JWST’s earliest tasks was to look at WASP-39b with its cryogenically cooled Mid-Infrared Instrument (MIRI). This tool is the one aboard the telescope that observes most deeply in the infrared part of the spectrum, where many of the signatures of planetary atmospheres can be readily detected. MIRI’s spectrograph allowed scientists to pick apart the light from WASP-39b’s atmosphere. Rather than analyzing the observations manually, however, the team used a pipeline called Eureka!, developed by Taylor Bell, an astronomer at the Bay Area Environmental Research Institute at NASA’s Ames Research Center in California. “The objective was to go from the raw data that comes down to information about the atmospheric spectrum,” says Bell. Analyzing information from an exoplanet like this would usually require months of work. But within hours of the observations, the signature of carbon dioxide leaped out. A host of other details have since been released about the planet, including a detailed analysis of its composition and the presence of patchy clouds.

Others have used pipelines for much more distant targets. In July, studying early images from JWST, a team led by Rohan Naidu at MIT discovered GLASS-z13, a remote galaxy whose light could date from just 300 million years after the Big Bang—earlier than any galaxy known before. The discovery caused a global furor because it suggested that galaxies may have formed a few hundred million years earlier than expected, meaning our universe took shape faster than previously believed.

Naidu’s discovery was made possible by EAZY, a pipeline Brammer developed to somewhat crudely analyze the light of galaxies in JWST images. “It estimates the distance of the objects using these imaging observations,” says Brammer, who posted the tool on the software website GitHub for anybody to use. 
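A toy version of that idea: compare a source’s observed fluxes in a handful of bands against template fluxes computed on a grid of redshifts, and keep the best match. Everything below (the filters, the simplified template, the numbers) is invented for illustration and is not EAZY’s actual algorithm.

```python
import numpy as np

# Hypothetical broadband filters, central wavelengths in microns
BANDS = np.array([1.15, 1.50, 2.00, 2.77, 3.56, 4.44])

def template_fluxes(z: float) -> np.ndarray:
    """Toy galaxy template: flat spectrum with a smoothed spectral break
    at rest-frame 0.1216 microns, redshifted by a factor (1 + z)."""
    break_um = 0.1216 * (1.0 + z)
    return np.clip((BANDS - break_um) / 0.5, 0.0, 1.0)

def best_redshift(observed: np.ndarray, redshifts: np.ndarray) -> float:
    """Pick the grid redshift whose template best matches the fluxes."""
    chi2 = [((template_fluxes(z) - observed) ** 2).sum() for z in redshifts]
    return float(redshifts[int(np.argmin(chi2))])

grid = np.linspace(0.0, 15.0, 1501)  # redshift steps of 0.01
observed = template_fluxes(13.0)     # a source "dropping out" of the blue bands
print(round(best_redshift(observed, grid), 2))  # 13.0
```

The principle is the same one photometric-redshift tools exploit: a very distant galaxy vanishes from the bluest filters, and the wavelength where it disappears encodes its redshift.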

Rush hour

Traditionally in science, researchers will submit a scientific paper to a journal, where it is then reviewed by peers in the field and finally approved for publication or rejected. This process can take months, even years, sometimes delaying publication—but always with accuracy and scientific rigor in mind.

There are ways to bypass this process, however. A popular method is to post early versions of scientific papers on the website arXiv prior to peer review. This means that research can be read or publicized before it is published in a journal. In some cases, the research is never submitted to a journal, instead remaining solely on arXiv and discussed openly by scientists on Twitter and other forums.

Posting on arXiv is popular when there is a new discovery that scientists are keen to publish quickly, sometimes before competing papers come out. In the case of JWST, about a fifth of its first-year programs are open access, meaning the data is immediately released publicly when it is transferred down to Earth. That puts the research team that proposed the program in immediate competition with others watching the data stream in. When the telescope’s firehose of data was switched on in July, many researchers turned to arXiv to publish early results—for better or worse.

“There was a rush to publish anything as soon as possible,” says Emiliano Merlin, an astronomer at the Astronomical Observatory of Rome who was involved in early JWST analysis efforts such as the race to find galaxies in the distant universe after the Big Bang. The discovery of GLASS-z13 and a dozen or so other intriguing candidates was published before follow-up observations could confirm the age of their light. “It was not something I personally really liked,” says Merlin. “When you’re dealing with something this new and this unknown, things should be checked 10 or 100 times. That’s not how things went.”

One concern was that early calibration issues with the telescope could have resulted in errors. But so far many of the early results have stood up to scrutiny. Follow-up observations have confirmed GLASS-z13 to be a record-breaking early galaxy, although its estimated distance has been slightly revised, leading to a renaming of the galaxy to GLASS-z12. The possible discovery of other galaxies that formed even earlier than GLASS-z12 suggests that our understanding of how structure emerged in the universe may need to be rethought, perhaps even hinting at more radical models for the early universe.

The Near-Infrared Camera aboard JWST captured this snapshot of Neptune in July. Researchers said it was the clearest view of the giant planet’s rings since the Voyager 2 flyby in 1989.
This image of a star was taken during testing of JWST’s optical alignment. But it incidentally showcased the sensitivity of the telescope, with a number of galaxies appearing in the background.

Segments of JWST’s primary mirror are prepped for cryogenic testing in 2011. The full mirror, made of gold-coated beryllium, consists of 18 segments and spans 6.5 meters. It was designed to be folded up for launch.
NASA/MSFC/DAVID HIGGINBOTHAM

While many of JWST’s programs publicly release data immediately, sometimes resulting in a frantic rush to post results early, about 80% of them have a proprietary period, allowing the researchers running them exclusive access to their data for 12 months. This enables scientists, especially smaller groups that lack the resources of large institutions, to more carefully scrutinize their own data before releasing it to the public.

“Proprietary time evens out the lumps and bumps in resources,” says Mark McCaughrean, senior advisor for science and exploration at the European Space Agency and a JWST scientist. “If you take away proprietary periods, you stack it back in the direction of the big teams.”

Many scientists do not use their full 12-month allocation, however, which means their data reaches the public sooner, adding to the constant stream of discoveries from JWST. Alongside the open-access observations being taken, more and more proprietary results will be released to the public. “Now that the firehose is open, we will be seeing papers continuously for the next 10 years and beyond,” says Hammel. Perhaps well past that—Feinberg says the telescope may have more than 20 years of fuel, allowing operations to continue far into the 2040s.

“We’re cracking open an entirely new window on the universe,” says Hammel. “That’s just a really exciting moment to be a part of, for us as a species.” 

A version of this story appeared in the January/February 2023 issue of the magazine.

Next up for CRISPR: Gene editing for the masses?

CRISPR for high cholesterol is one of MIT Technology Review’s 10 Breakthrough Technologies of 2023. Explore the rest of the list here.

We know the basics of healthy living by now. A balanced diet, regular exercise, and stress reduction can help us avoid heart disease—the world’s biggest killer. But what if you could take a vaccine, too? And not a typical vaccine—one shot that would alter your DNA to provide lifelong protection? 

That vision is not far off, researchers say. Advances in gene editing, and CRISPR technology in particular, may soon make it possible. In the early days, CRISPR was used to simply make cuts in DNA. Today, it’s being tested as a way to change existing genetic code, even by inserting all-new chunks of DNA or possibly entire genes into someone’s genome.

These new techniques mean CRISPR could potentially help treat many more conditions—not all of them genetic. In July 2022, for example, Verve Therapeutics launched a trial of a CRISPR-based therapy that alters genetic code to permanently lower cholesterol levels.

The first recipient—a volunteer in New Zealand—has an inherited risk for high cholesterol and already has heart disease. But Kiran Musunuru, cofounder and senior scientific advisor at Verve, thinks that the approach could help almost anyone. 

The treatment works by permanently switching off a gene that codes for a protein called PCSK9, which seems to play a role in maintaining cholesterol levels in the blood.

“Even if you start with a normal cholesterol level, and you turn off PCSK9 and bring cholesterol levels even lower, that reduces the risk of having a heart attack,” says Musunuru. “It’s a general strategy that would work for anyone in the population.”

CRISPR’s evolution

While newer innovations are still being explored in lab dishes and research animals, CRISPR treatments have already entered human trials. It’s a staggering accomplishment when you consider that the technology was first used to edit the genomes of cells about 10 years ago. “It’s been a pretty quick journey to the clinic,” says Alexis Komor at the University of California, San Diego, who developed some of these newer forms of CRISPR gene editing.

Gene-editing treatments work by directly altering the DNA in a genome. The first generation of CRISPR technology essentially makes cuts in the DNA. Cells repair these cuts, and this process usually stops a harmful genetic mutation from having an effect.

Newer forms of CRISPR work in slightly different ways. Take base editing, which some describe as “CRISPR 2.0.” This technique targets the core building blocks of DNA, which are called bases.

There are four DNA bases: A, T, C, and G. Instead of cutting the DNA, CRISPR 2.0 machinery can convert one base letter into another. Base editing can swap a C for a T, or an A for a G. “It’s no longer acting like scissors, but more like a pencil and eraser,” says Musunuru.

In theory, base editing should be safer than the original form of CRISPR gene editing. Because the DNA is not being cut, there’s less chance that you’ll accidentally excise an important gene, or that the DNA will come back together in the wrong way.
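The pencil-and-eraser swap can be sketched as a simple string operation. This is illustrative only: the sequence and position below are hypothetical, and real base editors are guide-RNA-directed enzymes acting on DNA inside cells, not text edits.

```python
# The two letter conversions base editors offer, per the description above
SWAPS = {"C": "T", "A": "G"}

def base_edit(sequence: str, position: int) -> str:
    """Convert the base at `position` without cutting the strand."""
    base = sequence[position]
    if base not in SWAPS:
        raise ValueError(f"no base editor converts {base!r} directly")
    return sequence[:position] + SWAPS[base] + sequence[position + 1:]

# A hypothetical 12-base target site: the C at index 5 becomes a T
print(base_edit("ATGGACTTGAAA", 5))  # ATGGATTTGAAA
```

Note that only one letter changes and the rest of the sequence is untouched, which is the sense in which base editing avoids the risks of a double-strand cut.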

Verve’s cholesterol-lowering treatment uses base editing, as do several other experimental therapies. A company called Beam Therapeutics, for example, is using the approach to create potential treatments for sickle-cell disease and other disorders.

And then there’s prime editing, or “CRISPR 3.0.” This technique allows scientists to replace bits of DNA or insert new chunks of genetic code. It has only been around for a few years and is still being explored in lab animals. But its potential is huge.

That’s because prime editing vastly expands the options. “CRISPR 1.0” and base editing are somewhat limited—you can only use them in situations where cutting DNA or changing a single letter would be useful. Prime editing could allow scientists to insert entirely new genes into a person’s genome.

That would open up many more genetic disorders as potential targets. If you want to correct a specific mutation that is beyond the reach of base editing, “prime editing is your only option,” says Musunuru. 
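At the sequence level, what prime editing enables can be sketched as writing new letters at a chosen site. The sequences below are hypothetical, and the real mechanism (a pegRNA template plus a reverse transcriptase fused to a Cas9 nickase) is far more involved than this string operation suggests.

```python
def prime_insert(genome: str, site: str, new_dna: str) -> str:
    """Insert `new_dna` immediately after the first occurrence of `site`."""
    index = genome.find(site)
    if index == -1:
        raise ValueError("target site not found")
    index += len(site)
    return genome[:index] + new_dna + genome[index:]

# Hypothetical sequences: write a new stretch of DNA after the TTT site
print(prime_insert("AAATTTGGGCCC", site="TTT", new_dna="GATTACA"))
# AAATTTGATTACAGGGCCC
```

Compared with the single-letter swap of base editing, the key difference is that entirely new sequence appears in the output, which is why prime editing opens up insertions that the earlier techniques cannot make.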

If it works, it could be revolutionary. A hundred people with a disorder might have all kinds of genetic influences that made them vulnerable to it. But inserting a corrective gene could potentially cure all of them, says Musunuru. “If you can put in a fresh new working copy of the gene, it may not matter what mutation you have,” he says. “You’re putting in a working copy, and that’s good enough.”

Together, these new forms of CRISPR could dramatically broaden the scope of gene-editing treatments—making them potentially available to many more people, and for a much broader range of disorders. The target diseases don’t even have to be caused by genetic mutations. In fact, even some of the older CRISPR approaches could be used to target diseases that aren’t necessarily the result of a rogue gene. Verve’s treatment to permanently lower cholesterol is a first example of a CRISPR treatment that could benefit the majority of adults, according to Musunuru.

Genetic vaccinations

Verve’s approach involves swapping a base letter in the gene that codes for the PCSK9 protein. This disables the gene, so much less protein is made. Because the PCSK9 protein plays an important role in maintaining levels of LDL cholesterol—the type associated with clogged arteries—cholesterol levels drop too. 

In experiments, when mice and monkeys were given the treatment, their blood cholesterol levels dropped by around 60 to 70% within a few days, says Musunuru. “And once it’s down, it stays down,” he adds. The company expects its first human clinical trial to run for a few years. If the trial is successful, the company will continue with larger trials. The treatment will have to be approved by the US Food and Drug Administration before it can be prescribed by doctors in the US. “It will be a while before any [CRISPR treatments] are actually approved for use,” says Musunuru. 

But in the future, he says, we might be able to use the same approach to protect people from high blood pressure and diabetes. 

Komor of UC San Diego says a CRISPR-based treatment to prevent Alzheimer’s might also be desirable. But she cautions that editing the genomes of healthy people is ethically ambiguous and could be an unnecessary gamble for people who are otherwise well. “If I was given the opportunity to do editing of my liver cells to reduce cholesterol potentially in the future, I would probably say no,” she says. “I want to keep my genome as is, unless there’s a problem.”

Any new treatment has to be at least as safe as what is already available, says Tania Bubela, who studies the legal and ethical implications of new technologies at Simon Fraser University in Burnaby, British Columbia. Plenty of drugs have side effects. “The difference is that with a drug, you can … change the person’s medication,” says Bubela. “With a gene therapy, I can’t see how you would do that.”

The price, as well as the safety, of any gene-editing treatment will determine whether it can really help the masses, Bubela says: “I find it difficult to believe that a gene-based therapy like CRISPR will ever be either safer or more cost-effective than a very simple cholesterol pill.” But she accepts that these treatments could become cheaper, and that the “one-shot” approach might appeal to some.

There’s a good reason the first trials of CRISPR have focused on people with rare disorders who have few options, says Komor: “Those are the people most in need.” While broadening the applications of CRISPR is exciting, she says, “we have an ethical obligation to help those people before we help the general masses.” 

This is where Tesla’s former CTO thinks battery recycling is headed

Battery recycling is one of MIT Technology Review’s 10 Breakthrough Technologies of 2023. Explore the rest of the list here.

As Tesla’s former chief technology officer, JB Straubel has been a major player in bringing electric vehicles to the world. He’s often credited with inventing key pieces of Tesla’s battery technology and establishing the company’s charging network. After leaving Tesla in 2019, Straubel began a new venture: Redwood Materials, a battery recycling company. 

Redwood has raised nearly $800 million in venture funding. It’s building a billion-dollar facility in Nevada and recently announced plans for a second campus outside Charleston, South Carolina. In these plants, Redwood plans to extract valuable metals such as cobalt, lithium, and nickel from used batteries and produce cathodes and anodes for new ones. 

I spoke to Straubel about the role he sees battery recycling playing in the transition to renewable energy, his plans for Redwood, and what’s next. You can read my full piece about battery recycling here.

Our conversation has been edited for clarity and length. (Note: I worked as an intern at Tesla in 2016, while Straubel was still CTO, though we didn’t work directly together.)

Why did you decide to leave Tesla, and why did you pick battery recycling as your next step? 

Certainly Tesla was an amazing adventure, but as it was succeeding, I think it was becoming more obvious that battery scaling would present the need to get so many more raw materials, components, and batteries themselves. That was this looming bottleneck and challenge for the whole industry, even way back then. And I think it’s even more clear today. 

The idea was pretty unconventional at the time. Even your question kind of hints at it—it’s like, why did you leave this glamorous, exciting high-performance car company to go work on garbage? I think entrepreneurship involves being a little bit contrarian. And I think to really make meaningful innovation, it’s often not very conventional.

Why do you see battery recycling as an important part of the energy transition? 

Increasingly, the solution to some of these sustainability problems is to electrify it and to add a battery to it, which is great, and I spent the majority of my career championing that and helping accelerate that. And if we don’t electrify everything, I think our climate goals are completely sunk. But at the same time, it’s a phenomenal amount of batteries. And I just think we really need to figure out a robust solution at the end of life. 

I think this entire new sustainable economy as we’re envisioning it, with everything electrified, simply can’t work unless you have a closed loop for the raw materials. There aren’t enough new raw materials to keep building and throwing them away; it would fundamentally be impossible. 

Battery recycling is an intuitive solution to those two issues, but tell me more about the technical challenge of pulling it off, and how it would work.

It’s more complicated than I think many people appreciate. There’s just a whole ton of chemistry, chemical engineering, and production engineering that has to happen to make and refine all of the components that go into a battery. It’s not just a sorting or garbage management problem. 

There’s a lot of room for innovation, and these things haven’t been well optimized, or even done at all in some cases. So that’s really the fun stuff as an engineer, where you get to invent and innovate things that haven’t been done two, three, four times already.  

But something that isn’t intuitive is just what a high level of reusability the metals inside of a battery have. All of those materials we put into a battery and into an EV don’t go anywhere. They’re all still there. They don’t get degraded, they don’t get compromised—99% of those metals, or perhaps more, can be reused again and again and again. Literally hundreds, perhaps thousands of times.

There are not going to be a lot of electric vehicles coming off the roads for a long time. How are you thinking about navigating that and facing shortages in your supply of used batteries? 

I really see our position as a sustainable battery materials company. One of our key objectives and goals is to look at the very long term and to make sure we’re architecting the most efficient systems for the long term, where recycled material content is the majority of supply. 

But in the meantime, we’re taking a pragmatic view. We have to blend in a certain amount of virgin material—whatever we can get in the most environmentally friendly way—to augment the ramp-up while we need to transition away from fossil fuels. 

Was that a clear decision to you, to supplement with mined material versus sticking to only using recycled material? 

I’d say it’s a very natural decision to make. Our goal is to help decarbonize batteries and reduce the energy impact and the embedded CO2. And I think it’s better for the world to remove a fossil-fuel vehicle than to say, “Well, we can’t build an electric vehicle because we don’t have enough recycled material.” 

When I visited, I definitely felt a sense of urgency. Do you feel like you’re moving fast enough, and do you feel like this industry is moving fast enough? 

I generally don’t think we’re going fast enough. I don’t think anyone is. You know, I do have this sense of paranoia and urgency and almost—not exactly—panic. That’s not helpful. 

But I guess it really derives from a deep feeling that I don’t believe we’re appropriately internalizing how bad climate change is going to be. So I guess I have this anxiety and fear that it’s going to get a whole lot worse than I think most people are expecting. 

And there’s such inertia to it, so now is our only time to really prepare and react. And the scale of all this is so big that even when we’re running flat out as fast as we can, with all that urgency that you felt and hopefully more, it’ll still take us decades.

Do you feel you can handle any battery chemistry that industry comes up with? What if everybody goes to cheaper chemistries like iron phosphate, or if everybody starts moving to really different technologies, like solid state?

I’m really genuinely pretty agnostic on this. I want to make sure that we are focused on the bigger picture, which is figuring out how we enable a transition to sustainability overall. And therefore, we really are rooting for whatever battery technology ends up having the best performance.

And I think it will be a mix. We’re going to see a bigger diversity of battery chemistries and technologies. 

So when we’re designing this circular system, we need to think about all the different technologies, and they have pros and cons. Some are more challenging in different ways. Obviously, iron phosphate has a lower total commodity metal value, but it’s certainly not zero. There’s a great opportunity to recycle lithium and copper from those. So I think each one has its own set of characteristics that we have to manage.

What do you see as Redwood’s biggest challenge in the next year, and then in the long term?

Over the next year, we’re just in an incredibly rapid growth and deployment phase. We are innovating across a whole bunch of different areas simultaneously. It’s really exciting and fun, but it’s also just quite challenging to manage all of the parallel threads as we’re doing it. It’s like a huge multiplayer game of chess or something. 

In the longer term, it’s increasingly going to be about scale and efficiency of scaling. This is just a huge, huge industry. The physical size of these facilities is massive, the amount of materials is massive, and the capital requirements are really massive as well. So I think over decades into the future, I’d say, where our focus and challenges will be is making sure we’re hyper-efficient about scaling up to terawatt-hour scale, literally.

How old batteries will help power tomorrow’s EVs

Battery recycling is one of MIT Technology Review’s 10 Breakthrough Technologies of 2023. Explore the rest of the list here.

To Redwood Materials, the rows of cardboard boxes in its gravel parking lot represent both the past and the future of electric vehicles. The makeshift storage space stretches for over 10 acres at Redwood’s new battery recycling site just outside Reno, Nevada. Most of the boxes are about the size of a washing machine and are wrapped in white plastic. But some lie open, revealing their contents: wireless keyboards, discarded toys, chunks of used Honda Civic batteries.

Far from trash, the battery materials in all these discarded items are a prize—the metals are valuable ingredients that could be critical to meeting exploding demand for electric vehicles.

Redwood Materials is one of a growing number of recycling companies working to provide an alternative to the landfill for lithium-ion batteries used in electronics and EVs. The company announced its plans for this $3.5 billion plant in Reno in mid-2022. The facility is expected to produce material for 1 million lithium-ion EV batteries by 2025, ramping up to 5 million by 2030. Redwood plans to start construction on an additional facility in the eastern US in 2023. 

Redwood runs a collection program for old phones, tablets, and other devices that use lithium-ion batteries.
REDWOOD MATERIALS

Meanwhile, the Canadian firm Li-Cycle currently operates four commercial facilities that can together recycle about 30,000 metric tons of batteries annually, with an additional three sites planned. Other US-based startups, like American Battery Technology Company, have also announced large commercial tests, joining an established recycling market in China and Europe.

While these new recycling ventures are better for the environment than burying metals in landfills, they’re also spurred by a booming market for electric vehicles. EV adoption is exploding in the US and around the world, bringing new demand for the metals that go into their batteries, especially lithium, nickel, and cobalt. EVs are expected to account for 13% of new vehicle sales in 2022, a share projected to climb to about 30% by 2030. Supplying all those cars with batteries will require far more metals than are currently available.

More than 200 new mines could be needed by 2035 just to supply the cobalt, lithium, and nickel for EV batteries. Lithium production will need to grow 20-fold to meet demand for EVs by 2050.

Recycling could represent a major new source of raw materials. Globally, there were more than 600,000 metric tons of recyclable lithium-ion batteries and related manufacturing scrap in 2021. That number is expected to top 1.6 million metric tons by 2030, according to the consulting firm Circular Energy Storage. And it could really take off after that, as the first generation of electric cars heads for the junkyards.
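As a back-of-the-envelope check on those figures, and assuming steady compounding between 2021 and 2030, the projection implies annual growth of roughly 11.5%:

```python
# 600,000 metric tons in 2021 growing to 1.6 million metric tons by 2030,
# compounded over the 9 intervening years
implied_growth = (1_600_000 / 600_000) ** (1 / (2030 - 2021)) - 1
print(f"{implied_growth:.1%}")  # 11.5%
```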

New advances in the recycling process for lithium-ion batteries are transforming the industry, allowing recyclers to separate and recover enough of these valuable metals to make the process economical. Recycling can’t address material shortages alone, because demand for the metals outstrips the amount circulating in batteries used today. But thanks to these advances, it could account for a significant fraction of supply in the coming decades.

When I visited in September, Redwood was preparing to ship its first product, a small sample of copper foil used in battery anodes. It’s sending the foil to the battery maker Panasonic to use in the Nevada Gigafactory, which produces battery cells for Tesla vehicles less than five miles away. 

On the way to Redwood’s factory, I saw tumbleweeds leap across the highway, and some of the area’s wild horses idled on a hillside. Later, I’d spot a coyote skittering across the parking lot. 

But down the dirt road at the site, the Old West vibes quickly fell away, replaced by a sense of urgency radiating from nearly everyone there. Several massive buildings were under construction, and engineers and construction workers in safety vests and hard hats hurried around the site, ducking between temporary trailers serving as makeshift offices, labs, and meeting rooms. 

When construction is finished, the Redwood site will produce two major products: the copper foil for anodes and a mixture of lithium, nickel, and cobalt known as cathode active material. These components account for over half the cost of battery cells. By 2025, Redwood projects, its facility will produce enough of them to make batteries for more than a million EVs every year. 

Down the hill from the trailers, the building for copper foil production was the furthest along, with a roof and walls; a machine for making the foil was tucked away in the corner. But the two other major buildings still looked far from completion—one was missing walls, and the other was only a foundation.

Redwood has big plans and plenty of construction ahead.

“A sense of paranoia”

Redwood Materials was founded by JB Straubel, who as Tesla’s chief technical officer during the early 2010s led many of the company’s battery breakthroughs, including the beginnings of its network of charging stations. But even as Tesla was transforming the way electric cars were manufactured and sold, Straubel was worried about how overwhelming the need for more battery materials would become. He began to think of ways to lower the cost of batteries and help reduce the carbon emissions associated with making them. 

Straubel started Redwood while still working at Tesla (he left in 2019); he wanted, as he puts it, to create a sustainable battery materials company. These days he talks about his mission with a breathless excitement coupled with the precision of an engineer, sometimes pausing in the middle of a thought to start over as he explains his vision for the future of battery production. 

“It simply can’t work unless you have a closed loop for the raw materials,” he says. “There aren’t enough new raw materials to keep building and throwing them away.” 

pile of copper scrap
close up of pile of metal sulfates

Redwood uses a process called hydrometallurgy to recover valuable metals such as cobalt, lithium, and nickel from the batteries it collects.

Creating a closed loop of materials, where old batteries become feedstock for new ones, sounds like an obvious idea, but executing it isn’t trivial. “It’s not just a sorting or a garbage management problem,” Straubel says. 

Chemically separating the crucial metals locked in batteries is an intricate task. Labs, startups, and established companies alike are all searching for the ideal process to recover the highest possible amounts of valuable materials in the purest possible form. 

The details of how Redwood solves this problem are closely held—they’re the company’s secret sauce. But its process is also very much a work in progress, and the urgency of figuring it out is clear.

“I do have this kind of sense of paranoia and urgency and almost—not exactly—panic, that’s not helpful. It really derives from a deep feeling that I don’t believe we’re appropriately internalizing how bad climate change is going to be,” Straubel says. 

“I generally don’t think we’re going fast enough. I don’t think anyone is.”

Recycling’s role

Most recycling facilities for lithium-ion batteries use a set of chemical processes called hydrometallurgy, where materials in the batteries are dissolved and separated using a range of acids and solvents. Hydrometallurgy recovers nickel, cobalt, and other materials like graphite and copper; recent advances have allowed it to recover lithium at high rates as well. 

After some additional processing, recovered materials can then be used in new products. Whereas some materials, such as plastics, can degrade over time with recycling, researchers have found that metals recovered from batteries work just as well as mined ones for charging and storing power. 

Many batteries arriving at Redwood need to be disassembled by hand before processing. This is the case for batteries that arrive as full EV packs, which are the size of a mattress and too large for Redwood's equipment, as well as for batteries still attached to their products, like laptops or power tools. All these battery types generally contain lithium, nickel, and cobalt, though the relative amounts vary; batteries in consumer electronic devices, for example, tend to be more cobalt-heavy than those in EVs.

One of Redwood’s first products is copper foil, which is used in lithium battery anodes. Here a Redwood technician inspects the product as it rolls off the manufacturing line.
Two employees at work in the Redwood facility
Redwood plans to produce copper foil at its new campus outside Reno, Nevada. Delivery to Panasonic was planned for December.

Redwood began construction on its battery materials campus in late 2021. The facility is expected to produce enough battery materials for 1 million EVs by 2025.
Two workers disassembling an energy storage unit
Large batteries, like these from an energy storage system, often need to be disassembled by hand before recycling.

Hand disassembly won’t be ideal once the company starts taking in more materials, says Andy Hamilton, Redwood’s VP of manufacturing. Eventually, Redwood hopes to automate more of this sorting process, though building automated systems that can deal with the variety of batteries the company takes in will likely be a challenge.  

After sorting and disassembly, the batteries that still hold charge can be loaded onto a conveyor belt and carried up into one of four massive chambers for a process called calcination, where batteries are cooked at high temperatures to discharge them and remove solvents.

The material is then crushed into powder before it enters the hydrometallurgical process to separate individual elements. 

Despite recent technical progress, recycling won’t meet demand for battery materials anytime soon, says Alissa Kendall, an energy systems researcher at the University of California, Davis. Since demand is still rising exponentially, recycled batteries will at best account for about half the nickel and lithium supply by 2050.

However, as battery chemistries evolve, that percentage could change, as is happening already with cobalt. Batteries in EVs contain less cobalt today than they used to, and cell makers are continuously finding ways to use even less of the expensive metal. As a result, recycled cobalt could make up 85% of the supply needed by 2040, Kendall says.
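The arithmetic behind this lag is simple: if demand grows exponentially, the batteries coming back for recycling today reflect the much smaller market of a decade or more ago. A minimal sketch of that dynamic, using illustrative numbers that are assumptions rather than figures from Kendall's analysis:

```python
# Back-of-envelope sketch of why recycling trails exponential demand.
# All three constants are illustrative assumptions, not published figures.

GROWTH = 0.15      # assumed annual growth in battery material demand
LIFETIME = 12      # assumed years before a battery returns for recycling
RECOVERY = 0.95    # assumed fraction of metal recovered from old cells

def recycled_share(growth: float, lifetime: int, recovery: float) -> float:
    """Fraction of this year's demand coverable by recycling the batteries
    sold `lifetime` years ago, assuming demand grows exponentially."""
    return recovery / (1 + growth) ** lifetime

share = recycled_share(GROWTH, LIFETIME, RECOVERY)
print(f"Recycled metal can cover roughly {share:.0%} of current demand")
```

Even with near-perfect recovery, a market growing 15% a year means today's returns cover less than a fifth of today's demand; only when growth flattens does the recycled share climb toward the recovery rate itself.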

Even if recycling can’t fully supplant mining, cutting the need for more mines could reduce the social and environmental burden of producing new batteries. Many metals for batteries are mined in Africa, Asia, and Central and South America. Mining in these regions is often associated with human rights violations, including forced and child labor, as well as significant air and water pollution, according to the International Energy Agency.  

Waiting for the battery tsunami

Some in the battery recycling business argue that the industry won’t need much policy support, since the materials in batteries will be valuable enough to justify recycling them. But recent policy moves in the US could give recyclers like Redwood a further boost. 

Since Redwood’s manufacturing plant is in the US, the company could be eligible for production tax credits in the recently passed Inflation Reduction Act. The IRA will also drive demand for raw materials from outfits like Redwood. For cars to qualify for $7,500 tax credits, automakers will need to source their materials and manufacture their batteries in the US or with free-trade partners. 

Critics have warned that industry may not be able to meet the timeline for these EV tax credits, especially for material sourcing, since it can take up to a decade to build new mines. A recycling facility, on the other hand, could be built more quickly, and some are pointing to recycling as a possible avenue for battery and car makers hoping to qualify for the credits. 

Other governments are considering additional regulations to boost battery recycling. In Europe, recently proposed legislation includes provisions like requiring the original manufacturers of a battery to be responsible for it at its end of life. The EU has also considered requiring new batteries to have a certain fraction of recycled content.

 

Still, there could be a short-term shortage of batteries for recycling. The wave of old EV batteries expected in the coming decades is for now just a trickle, since only a small number of EVs are coming off the roads.

About half of what Redwood accepts these days has never been used in a product. This material ranges from assembled and charged batteries that failed quality checks to what’s left of a sheet of metal when the desired pieces are cut out of it. Two semi trucks arrive at the Redwood facilities every day with manufacturing scrap from the Tesla/Panasonic Gigafactory.

Redwood has also made what Straubel calls a “pragmatic” choice to include freshly mined metals in its products for now. The nickel and lithium in its first batch of cathode active material will only be about 30% from recycled sources—the remainder will come from mining.

The goal is to be ready when the battery tsunami arrives, says Straubel, and that means optimizing the recycling process now. 

The path forward

While construction continued at the larger site, I walked through Redwood’s headquarters in Carson City, where its scientists are still experimenting with the hydrometallurgy process.

Researchers have been working to use chemistry to recover metals from lithium-ion battery materials since the late 1990s. Companies in China have moved fastest, building a widespread network of recycling centers with government support. 

But designing a system that can recover high levels of all the most expensive metals in batteries hasn't been easy. Lithium has proved especially difficult. Straubel says that of the four metals Redwood is most focused on, the company can recover close to 100% of the cobalt, copper, and nickel. For lithium, the figure is about 80%. 
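Those per-metal rates matter because they compound: in a true closed loop, every recycling pass loses another slice of each metal. A hypothetical sketch, using recovery rates that echo the figures above (the exact values and the loop count are assumptions for illustration):

```python
# Hypothetical sketch: per-metal recovery rates compound over repeated
# recycling loops. Rates approximate the figures cited in the text;
# the number of loops is purely illustrative.

RECOVERY = {"cobalt": 0.99, "copper": 0.99, "nickel": 0.99, "lithium": 0.80}

def retained_after(metal: str, loops: int) -> float:
    """Fraction of the original metal still in circulation after
    `loops` recycling passes, ignoring any mined make-up material."""
    return RECOVERY[metal] ** loops

for metal, rate in RECOVERY.items():
    print(f"{metal}: {retained_after(metal, 3):.0%} left after 3 loops")
```

At 80% recovery, roughly half the lithium is gone after three trips through the loop, which is why closing the gap on lithium is the pressing problem.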

Moving from the lab to real-world conditions complicates things further. 

Mary Lou Lindstrom, Redwood’s head of hydrometallurgy, showed me around the pilot lab space in Carson City, which resembled a craft beer operation, with stainless-steel equipment distributed around a cavernous room. Researchers were huddled around a computer and one of the large metal tanks.

overview of a warehouse filled with rows of cardboard boxes
Used batteries and assorted manufacturing scrap from battery producers are stored in one of Redwood’s massive warehouses as the company ramps up its recycling process.
REDWOOD MATERIALS

Lindstrom explained that they were working to produce the feedstock for the first batch of commercial copper foil; production would be starting up in the coming weeks. Delivery to Panasonic was scheduled to take place in December. 

A technicality still stands in the way of Straubel's vision for a closed-loop battery ecosystem. So far, the copper Redwood has used to make foil has come from industrial copper scrap, not batteries. The company hopes to use at least some battery-derived material in the copper foil that eventually gets delivered to Panasonic for use in new cells. But industrial copper scrap is a more predictable material to work with.

This transition speaks to one major potential challenge for battery recyclers moving forward: they’ll need to deal with unpredictable inputs while creating predictable, high-quality products. If battery recyclers are competing for material, this challenge will be magnified, since startups may have to accept less-ideal material to survive.

For now, Redwood can supplement its processes with manufacturing scrap, which is generally easier to work with, as well as mined material. But as volumes of old batteries grow and the supply of mined lithium stretches thin, challenges for recyclers will mount. 

“Increasingly, the solution to some of these sustainability problems is to electrify it and add a battery to it,” Straubel says. “I spent the majority of my career championing that and helping accelerate that.” 

“At the same time,” he says, “it’s a phenomenal amount of batteries.” 

EVs and other electrified transit options are becoming a practical choice. It’s already cheaper in many parts of the world to own and drive an EV than a conventional car. And that’s good news for the climate: in most cases, EVs will produce less in greenhouse-gas emissions over their lifetime than gas-powered vehicles. 

Practical, economical battery recycling is key to fulfilling the promise of EVs. While the wave of dead batteries may be slow to build, the recycling industry is preparing now for what’s coming, because executing this new vision will take decades of steady progress and innovation. Redwood’s parking lot full of discarded batteries is just the start.

The entrepreneur dreaming of a factory of unlimited organs

Organs on demand is one of MIT Technology Review’s 10 Breakthrough Technologies of 2023. Explore the rest of the list here.

I met the entrepreneur Martine Rothblatt for the first time at a meeting at West Point in 2015 that was dedicated to exploring how technology might expand the supply of organs for transplant. At any given time, the US transplant waiting list is about 100,000 people long. Even with a record 41,356 transplants last year in the US, 6,897 people died while waiting. Many thousands more never made the list at all. 

Rothblatt arrived at West Point by helicopter, powering down over the Hudson River. It was an arrival suitable for a president, but it also brought to mind the delivery of an organ packed in dry ice, arriving somewhere just in time to save a person’s life. I later learned that Rothblatt, an avid pilot with a flying exploit registered by Guinness World Records, had been at the controls herself. 

Rothblatt’s dramatic personal story was already well known. She had been a successful satellite entrepreneur, but after her daughter Jenesis was diagnosed with a fatal lung disease, she had started a biotechnology company, United Therapeutics. Drugs like the one that United developed are now keeping many patients like Jenesis alive. But she might eventually need a lung transplant. Rothblatt therefore had set out to solve that problem too, using technology to create what she calls an “unlimited supply of transplantable organs.”

Lawyer and entrepreneur Martine Rothblatt in a 2014 photo.
PETER HAPAK/TRUNK ARCHIVE

The entrepreneur explained her plans with the help of an architect’s rendering of an organ farm set on a lush green lawn, its tube-like sections connected whimsically in a snowflake pattern. Solar panels dotted the roofs, and there were landing pads for electric drones. The structure would house a herd of a thousand genetically modified pigs, living in strict germ-free conditions. There would be a surgical theater and veterinarians to put the pigs to sleep before cutting out their hearts, kidneys, and lungs. These lifesaving organs—designed to be compatible with human bodies—would be loaded into electric copters and whisked to transplant centers. 

Back then, Rothblatt’s vision seemed not only impossible but “phantasmagoric,” as she has called it. But in the last year it has come several steps closer to reality. In September 2021, a surgeon in New York connected a kidney from a genetically engineered pig developed by Rothblatt’s company to a brain-dead person—an experiment to see whether the kidney survived. It did. Since then, US doctors have attempted another six pig-to-human transplants.

The most dramatic of these, and the only one in a living person, was a 2022 case in Maryland, where a 57-year-old man with heart failure lived two months with a pig heart supplied by Rothblatt’s company. The surgeon, Bartley Griffith, said it was “quite amazing” to be able to converse with a man with a pig’s heart beating in his chest. The patient eventually died, but the experiment nonetheless demonstrated the first life-sustaining pig-to-human organ transplant. According to United, formal trials of pig organs could get underway in 2024.

At the center of all this is Rothblatt, a lawyer with a PhD in medical ethics whom New York magazine dubbed the “Trans-Everything CEO.” That isn’t only because she changed her gender from male to female in midlife, as she writes in her book From Transgender to Transhuman. She’s also a prolific philosopher on the ethics of the future who has advocated civil rights for computer programs, compared the traditional division of the sexes to racial apartheid, and founded a transhumanist religion, Terasem, which holds that “death is optional and God is technological.” She is a frank proponent of human immortality, whether it’s achieved by creating software versions of living people or, perhaps, by replacing their organs as they age.

Since the pig organ transplants garnered front-page headlines, Rothblatt has been on a tour of medical meetings, taking the podium to describe the work. But she has rebuffed calls from journalists, including me. The reason: “I promised myself no more interviews until I accomplished something I felt worthy of one,” she wrote in an email. She included a list of the further successes she is aiming for. These include keeping a pig heart beating for three months in a patient, saving a person’s life with a pig kidney, or keeping any animal alive with a 3D-printed lung, another technology United is developing.

The next big step for pig organs will be an organized clinical trial to prove they save lives consistently. United and two competitors, eGenesis and Makana Therapeutics, which have their own pigs, are all in consultation with the US Food and Drug Administration about how to conduct such a trial. Kidney transplants are likely to be first.

“Many people are not on the list because of the scarcity of organs. Only the most ideal patients get listed.”

Robert Montgomery

Before the larger human trials can begin, companies and doctors say, the FDA is asking them to perform one more series of experiments on monkeys. The agency is looking for “consistent” survival of animals for six months or more, and it is requiring that the pigs be raised in special germ-free facilities. “If you don’t have those two things, it’s going to be a hard stop,” says Joseph Tector, a surgeon at the University of Miami and the founder of Makana. 

Which company or hospital will start a trial first isn’t clear. Tector says the atmosphere of competition is kept in check by the risk of missteps. Just two or three failed transplants could doom a program. “Do we want to do the first trial? Sure we do. But it’s really, really, important that we don’t treat this like a race,” he says. “It’s not the America’s Cup.” 

Maybe not, but leading transplant centers are jockeying to be part of the trials and help make history. “It’s ‘Who will be the astronauts?’” says Robert Montgomery, the New York University surgeon who carried out the first transplant of a pig kidney. “We believe it’s going to work and that it’s going to change everything.” 

And that’s not because pig organs will replace human-to-human transplants. Those work so well—kidney transplants succeed 98% of the time and often last 10 or 20 years—that pig organs almost certainly won’t be as good. The difference is that if “unlimited organs” really become available, it’s going to vastly increase the number of people who might be eligible, uncorking needs currently masked by strict transplant rules and procedures. 

“Many people are not on the list because of the scarcity of organs. Only the most ideal patients get listed—the ones who have the highest likelihood of doing well,” says Montgomery. “There is a selection procedure that goes on. We don’t really talk about it, but if there were unlimited organs, you could replace dialysis, replace heart assist devices, even replace medicines that don’t work that well. I think there are a million people with heart failure, and how many get a transplant? Only 3,500.”

A sick child

Before becoming a biotech entrepreneur, Rothblatt had started a satellite company; she’d been early to see that with a powerful enough satellite in stationary orbit over the Earth, receivers could shrink to the size of a playing card, an idea that became SiriusXM Radio. But her plans took a turn in the early 1990s, when her young daughter was diagnosed with pulmonary arterial hypertension. That’s a rare disease in which the pressure in the artery between the lungs and the heart is too high. It is fatal within a few years. 

Martine Rothblatt, CEO of United Therapeutics, stands by a photograph of her daughter, Jenesis, at her office
Rothblatt started a biotechnology company, United Therapeutics, after learning that her daughter Jenesis (pictured in background) suffered from a deadly lung disease.
AP PHOTO/JACQUELYN MARTIN

“We had a problem: I was going to die,” Jenesis—who now works for United in a project leader role—recalled during a 2017 speech.

Rothblatt and her wife were shocked when doctors said there wasn't a cure. Rothblatt has compared her feelings at the time to everything going black, or to rolling on the floor in helpless pain. But instead of giving up, she began attacking the problem. She would duck out of the ICU where her daughter was and visit the hospital library, reading everything she could about the disease, she has recalled. 

Eventually she read about a drug that could lower arterial pressure but had been mothballed by the drug giant Glaxo. She badgered the company until it sold her the drug for $25,000 and a promise of a 10% royalty, she recalls. According to Rothblatt, she received in return one bag of the chemical, a patent, and declarations that the drug would never work.

The drug, treprostinil sodium, did work; it was approved in 2002. You might expect that with just a few thousand patients affected by the disease, it would never make money. Once the drug was available, though, patients started to live, not die, and they needed to keep taking it. A family of related drugs now generates $1.5 billion in sales each year for United. 

Though these drugs work well to ease symptoms, patients may eventually need new lungs. Rothblatt understood early on that the drugs were only a life-extending bridge to a lung transplant. Yet there aren’t nearly enough human lungs to help everyone. And that was the real problem. 

The most obvious place to get a lot of organs was from animals, but at the time “xenotransplantation”—moving organs between species—didn’t seem to have good prospects. Tests showed that organs from pigs would be viciously destroyed by the human immune system; this “hyper-acute” rejection takes just minutes or hours. In the US, some scientists called for a moratorium in the face of public panic over whether a pig virus could jump to humans and cause a pandemic. 

In 2011 United Therapeutics paid $7.6 million to purchase Revivicor, a struggling biotech company that, under its earlier name PPL Therapeutics, had funded the Scottish scientist Ian Wilmut’s cloning of Dolly the sheep in 1996. Using cloning techniques, Revivicor had already produced pigs lacking one sugar molecule, alpha-gal, whose presence everywhere on pig organs was known to cause organ rejection within minutes. Now Rothblatt convened experts to prioritize a further eight to 12 genes for modification and undertake “a moonshot to edit additional genes until we have an animal that could provide us with tolerable organs.” She gave herself 10 years to do it, keeping in mind that time was running out for patients like Jenesis. 

Getting into humans

By last year, United had settled on a list of 10 gene modifications. Three of these were “knockouts,” pig genes removed from the genome to eliminate molecules that alarm the human immune system. Another six were added human genes, which would give the organ a kind of stealth coating—helping to cover over differences between the pig and human immune systems that had developed since apes like us and pigs diverged from a common ancestor, 80 to 100 million years ago. A final touch: disabling a receptor that senses growth hormone. Pigs are bigger than we are; this change would keep the organ from growing too large. 

Rothblatt understood early on that the drugs were only a life-extending bridge to a lung transplant. 

Organs with these modifications, especially when combined with new types of immune suppression drugs, have been proving successful in monkeys. “I think the genetic modifications they have made to these organs have been incredible. I will tell you that we have primates going for a year with a [pig] kidney with good function,” says Leonardo Riella, director of kidney transplantation at Massachusetts General Hospital, in Boston.

By 2021, some transplant surgeons were ready to try the organs in humans—and so was Rothblatt. The obstacle was that before green-lighting a formal trial in humans, the FDA, in a meeting that fall, had asked for one further set of monkey experiments that would have all the planned procedures, drugs, and tests locked in and standardized. The FDA also wanted to see consistent evidence that the organs survive for a long time in monkeys—half a year or more, people briefed by the agency say.

Each experiment cost $750,000, according to Griffith, a transplant surgeon at the University of Maryland, and some doctors felt the monkeys could no longer tell them much more. “We left that meeting [thinking], ‘Does that mean we are sentenced for the next two years to keep doing what we were doing?’” Griffith remembers. What they really needed to see was how the organs fared in a human being—a question more monkeys wouldn’t answer. “We knew we hadn’t learned enough,” he says.

Montgomery, the NYU surgeon, recalls an hours-long conversation with Rothblatt after which United agreed he could try a kidney in a brain-dead person being kept alive on a ventilator. Because the individual was dead, no FDA approval would be needed. “The thing about a xenograft is that it’s far more complex than a drug. And that has been its Achilles’ heel. That is why it has remained in animal models,” he says. “So this was an attempt to do an intermediary step to get it into the target species.” That surgery occurred in September 2021, and the organ was attached to the subject for only 54 hours. 

In Maryland, Griffith, a heart surgeon, conceived a different strategy. He asked the FDA to approve a “compassionate use” study—essentially a Hail Mary attempt to save one life. To his surprise, the agency agreed, and in early 2022 he transplanted a pig heart into the chest of David Bennett Sr., a man with advanced heart failure who wasn’t eligible for a human heart transplant. According to Rothblatt, Bennett was interviewed by four psychologists before undergoing surgery. 

doctor's gloved hands holding a jar containing a heart
A genetically modified pig heart is prepared for transplantation at New York University in July 2022.
JOE CARROTTA/NYU LANGONE HEALTH

To observers like Arthur Caplan, a bioethicist at New York University, the use of one-off transplants to gain information raises an ethical question. “So are you thinking, ‘This guy is a goner—maybe we can learn something’? But the guy is thinking, ‘Maybe I can survive and get a bridge to a human heart,’” says Caplan. “I think there is a little bit of a back-door experiment being carried out.”

Bennett survived two months before his new heart gave out, making him the first person in the world to get a lifesaving transplant from a genetically engineered pig. To Rothblatt, it meant success—even on autopsy, there were no evident signs the organ had been rejected, exactly the result she had been working toward. “There is no way to know if we could have made a better heart in the allotted time … [but] this 10-gene heart seemed to work very well,” she told an audience of doctors last April. In Griffith’s view, the organ performed like a “rock star.”

But in the end Bennett died. And in Rothblatt’s lectures, she has elided a serious misstep, one that some doctors suspect is what actually killed the patient. When Bennett was still alive in the hospital, researchers monitoring his blood discovered that the transplanted heart was infected with a pig virus. The germ, called cytomegalovirus, is well known to cause transplants to fail. The Maryland team could have further hurt Bennett’s chances as they battled the infection, changing his drugs and giving him plasma.

Without the virus, would the heart have gone on beating? The closest Rothblatt has come to acknowledging the problem in public was telling a legal committee of the National Academy of Sciences that she didn’t put the blame on the pig heart. “If I were to put it in layman’s terms, I would say the heart did not fail the patient,” she said. 

The bigger problem with the infection, and with Rothblatt’s failure to own the error, is that United’s pigs were supposed to be tested and free of germs. United’s silence is unnerving, because if this virus could slip through, it’s possible other, more harmful germs could as well. Rothblatt did not answer our questions about the virus.

Printing lungs

United says that it is now building a new, germ-proof pig facility, which will be ready in 2023 and support a clinical trial starting the following year. It’s not the fantastical commercial pig factory shown in Rothblatt’s architectural rendering, but it is a stepping-stone toward it. Eventually, Rothblatt believes, a single facility could supply organs for the whole country, delivering them via all-electric air ambulances. Over the summer, she claims, an aeronautics company she invested in, Beta Technologies, flew a vertical-lift electric plane from North Carolina to Arkansas, more than 1,000 nautical miles. 

Ironically, pigs may never be a source of the lungs that Rothblatt’s daughter may need. That is because lungs are delicate and more susceptible to immune attack. By 2018, the results were becoming clear. Each time the company added a new gene edit to the pigs, hearts and kidneys transplanted into monkeys would last an extra few weeks or months. But the lungs weren’t improving. Time and again, after being transplanted into monkeys, the pig lungs would last two weeks and then suddenly fail. 

“I actually believe there is no part of the body that cannot be 3D-printed.” 

Martine Rothblatt

To create lungs, Rothblatt is betting on a different approach, establishing an “organ manufacturing” company that is trying to make lungs with 3D printers. That effort is now operating out of a former textile mill in Manchester, New Hampshire, where researchers print detailed models of lungs from biopolymers. The eventual idea is to seed these structures with human cells, including (in one version of the technology) cells grown from the tissue of specific patients. These would be perfect matches, without the risk of immune rejection. 

This past spring, Rothblatt unveiled a set of printed “lungs” that she called “the most complex 3D-printed object of any sort, anywhere, ever.” According to United, the spongy structure, about the size of a football, includes 4,000 kilometers of capillary channels, detailed spaces mimicking lung sacs, and a total of 44 trillion “voxels,” or individual printed locations. The printing was performed with a method called digital light processing, which works by aiming a projector into a vat of polymer that solidifies wherever the light beams touch. It takes a while—three weeks—to print a structure this detailed, but the method permits the creation of any shape, some no larger than a single cell. Rothblatt compared the precision of the printing process to driving across the US and never deviating more than the width of a human hair from the center line.

MICHAEL BYERS

“I actually believe there is no part of the body that cannot be 3D-printed … including colons and brain tissue,” Rothblatt said while presenting the printed lung scaffolds in June at a meeting in California. 

Some scientists say bioprinting remains a research project and question whether the lifeless polymers, no matter how detailed, should be compared to a real organ. “It’s a long way to go from that to a lung,” says Jennifer Lewis, who works with bioprinting at Harvard University. “I don’t want to rain on the parade, and there has been significant investment, so some smart minds see something there. But from my perspective, that has been pretty hyped. Again, it’s a scaffold. It’s a beautiful shape, but it’s not a lung.” Lewis and other researchers question how feasible it will be to breathe real life into the printed structures. Sticking human cells into a scaffold is no guarantee they will organize into working tissue with the complex functions of a lung.  

Rothblatt is aware of the doubters and knows how difficult the technology is. She knows that other people think it won’t ever work. That isn’t stopping her. Instead, she sees it as her next chance to solve problems other people can’t. During an address to surgeons this year, Rothblatt rattled off the list of challenges ahead—including growing the trillions of cells that will be needed. “What I do know is that doing so does not violate any laws of physics,” she said, predicting that the first manufactured lungs would be placed in a person’s chest cavity this decade. 

She closed her talk with a scene from 2001: A Space Odyssey, the one where an ape-man hurls a bone upward and it takes flight as a space station circling the Earth. Except Rothblatt substituted a photograph of herself piloting the zero-carbon electric plane she believes will someday deliver unlimited organs around the country.

Mass-market military drones: 10 Breakthrough Technologies 2023

For decades, high-end precision-strike American aircraft, such as the Predator and Reaper, dominated drone warfare. The war in Ukraine, however, has been defined by low-budget models made in China, Iran, or Turkey. Their widespread use has changed how drone combat is waged and who can wage it. 

Some of these new drones are off-the-shelf quadcopters, like those from DJI, used for both reconnaissance and close-range attacks. Others, such as the $30,000 Iranian-made exploding Shahed drones, which Russia has used to attack civilians in Kyiv, are capable of longer-range missions. But the most notable is the $5 million Bayraktar TB2, made by Turkey’s Baykar corporation.

The TB2 is a collection of good-enough parts put together in a slow-flying body. It travels at speeds up to 138 miles per hour and has a communication range of around 186 miles. Baykar says it can stay aloft for 27 hours. Combined with cameras that share video with ground stations, the TB2 becomes a powerful tool both for targeting the laser-guided bombs carried on its wings and for helping direct artillery barrages from the ground.

Most important is simply its availability. US-made drones like the Reaper are more capable but costlier and subject to stiff export controls. The TB2 is there for any country that wants it. 

Turkey’s military used the drones against Kurds in 2016. Since then, they’ve been used in Libya, Syria, and Ethiopia, and by Azerbaijan during its war against Armenia. Ukraine bought six in 2019 for military operations in the Donbas, but the drones caught the world’s attention in early 2022, when they helped thwart Russian invaders. 

The tactical advantages are clear. What’s also sadly clear is that these weapons will take an increasingly horrible toll on civilian populations around the world.

The inevitable EV: 10 Breakthrough Technologies 2023

Electric vehicles are transforming the auto industry.

While sales have slowly ticked up for years, they’re now soaring. The emissions-free cars and trucks will likely account for 13% of all new auto sales globally in 2022, up from 4% just two years earlier, according to the International Energy Agency. They’re on track to make up about 30% of those sales by the end of this decade.

A mix of forces has propelled the vehicles from a niche choice to a mainstream option. 

Governments have enacted policies compelling automakers to retool and incentivizing consumers to make the switch. Notably, California and New York will require all new cars, trucks, and SUVs to be zero-emissions by 2035, and the EU had nearly finalized a similar rule at press time. 

Auto companies, in turn, are setting up supply chains, building manufacturing capacity, and releasing more models with better performance, across price points and product types. 

The Hongguang Mini, a tiny car that starts a little below $5,000, has become the best-selling electric vehicle in the world, reinforcing China’s dominance as the largest manufacturer of EVs.

A growing lineup of two- and three-wheelers from Hero Electric, Ather, and other companies helped EV sales triple in India over the last year (though the total number is still only around 430,000). And models ranging in size and price from the Chevy Bolt to the Ford F-150 Lightning are bringing more Americans into the electric fold.

There are still big challenges ahead. Most of the vehicles must become cheaper. Charging options need to be more convenient. Clean electricity generation will have to increase dramatically to accommodate the surge in vehicle charging. And it will be a massive undertaking to make enough batteries. But it’s now clear that the heyday of the gas-guzzler is over.