Dennis Whyte’s fusion quest

Ever since nuclear fusion was discovered in the 1930s, scientists have wondered if we could somehow replicate and harness the phenomenon behind starlight—the smashing together of hydrogen atoms to form helium and a stupendous amount of clean energy. Fusing hydrogen would yield 200 million times more energy than simply burning it. Unlike nuclear fission, which powers the world’s 440 atomic reactors, hydrogen fusion emits no harmful radiation beyond neutrons, which can be captured and their energy fed back into the reaction. Instead of radioactive wastes with long, lethal half-lives, fusion’s by-product is helium, the most stable element—and a year’s worth from a fusion plant wouldn’t supply a party balloon business.

Dennis Whyte’s part in the fusion quest began in graduate school, in a lab belonging to the electric utility Hydro-Québec, just outside Montreal. There he was shown a device built to replicate stellar fusion on an earthly scale. It was a doughnut-shaped hollow chamber, big enough for a lanky physicist like him to stand inside, based on a design conceived in 1950 by the future Nobel Peace Prize laureate Andrei Sakharov, who also developed hydrogen bombs for the Soviet Union. It was called a tokamak, a word derived from a Russian phrase meaning “ring-shaped chamber with magnetic coils.”

Dennis Whyte, then director of the Plasma Science and Fusion Center, describes efforts to address climate change through carbon-free power at a conference in 2019.
GRETCHEN ERTL

The idea is straightforward: Fill the doughnut with hydrogen gas, and then heat that gas until it turns to electrically charged plasma. In this ionic state, plasma would be held in place by magnets positioned around the tokamak. Achieving fusion on Earth without the immense pressure of a star’s interior, scientists calculated, would require temperatures more than six times hotter than our sun’s 15-million-degree center—around 100 million degrees Celsius. So the trick would be to suspend the hot plasma so perfectly in a surrounding magnetic field that it wouldn’t touch the inner surfaces of the chamber. Such contact would instantly cool it, stopping the fusion reaction.

The good part about that was safety. In a failure, a fusion power plant wouldn’t melt down—just the opposite. The bad part was that gaseous plasma wasn’t very cooperative—any slight irregularity in the chamber walls could cause destabilizing turbulence. But the concept was so tantalizing that by the mid-1980s, 75 universities and governmental institutes around the world had tokamaks. If anyone could get fusion—the most energy-dense reaction in the universe—to work, the deuterium in a liter of seawater could meet one person’s electricity needs for a year. It would be, effectively, a limitless resource.

Besides turbulence, there were two other big obstacles. The magnets surrounding the plasma needed to be really powerful—meaning really big. In 1986, 35 nations representing half the world’s population—including the US, China, India, Japan, what is now the entire European Union, South Korea, and Russia—agreed to jointly build the International Thermonuclear Experimental Reactor, a $40 billion giant tokamak in southern France. Standing 100 feet tall on a 180-acre site, ITER (the acronym also forms the Latin word for “journey”) is equipped with 18 magnets weighing 360 tons apiece, made from the best superconductors then available. If it works, ITER will produce 500 megawatts of fusion power—but not before 2035, if then. It’s still under construction. The second obstacle is the biggest: Many tokamaks have briefly achieved fusion, but doing so always took more energy than they produced.

After earning his doctorate in 1992, Whyte worked on an ITER prototype at San Diego’s National Fusion Facility, taught at the University of Wisconsin, and in 2006 was hired by MIT. By then, he understood how huge the stakes were, and how life-changing commercial-scale fusion energy could be—if it could be sustained, and if it could be produced affordably.

MIT had been trying since 1969. The red brick buildings of its Plasma Science and Fusion Center, where Whyte came to work, had originally housed the National Biscuit Company. PSFC’s sixth tokamak, Alcator C-Mod, built in 1991, was in Nabisco’s old Oreo cookie factory. C-Mod’s magnets were coiled with copper to serve as a conductor (think of how copper wire wrapped around a nail and connected to a battery turns it into an electromagnet). Before C-Mod was finally decommissioned, its magnetic fields, 160,000 times stronger than Earth’s, set the world record for the highest plasma pressure in a tokamak.

As Ohm’s law describes, however, metals like copper have internal resistance, so C-Mod could run for only four seconds before overheating—and it needed more energy to ignite its fusion reactions than came out of them. Like the roughly 160 similar tokamaks now operating around the world, C-Mod was an interesting science experiment, but it mainly reinforced the joke that fusion energy was 20 years away and always would be.

Each year, Whyte had challenged PhD students in his fusion design classes to conjure something just as compact as C-Mod, one-800th the scale of ITER, that could achieve and sustain fusion—with an energy gain. But in 2013, as he neared 50, he increasingly had doubts. He’d devoted his career to the fusion dream, but unless something radically changed, he feared it wouldn’t happen in his lifetime.

The US Department of Energy decided to scale back on fusion. It informed MIT that funding for Alcator C-Mod would end in 2016. So Whyte decided he would either quit fusion and do something else or try something different to get there faster. 

There was a new generation of ceramic “high-temperature” superconductors, not available when ITER’s huge magnets were being wrapped in metallic superconducting cable, which has to be chilled to 4 kelvin above absolute zero (–452.47 °F) for its resistance to current to fall to zero. Discovered accidentally in 1986 in a Swiss lab, the new ceramic superconductors still needed cooling, but only to 20 K (–423.7 °F). That far smaller cooling requirement promised far greater performance—a discovery so significant that its discoverers won a Nobel Prize a year later.

The potential applications were limitless, but because ceramic is so brittle, coiling it around electromagnets wasn’t feasible. Then one day Whyte ran into research scientist Leslie Bromberg ’73, PhD ’77, in the hallway; Bromberg was holding a fistful of what resembled unspooled tape from a VCR cassette. “What’s all that?” he asked.

“Superconducting tape, new stuff.” The filmy strips were coated with ceramic crystals of rare-earth barium copper oxide. “It’s called ReBCO,” Bromberg said.

ReBCO’s rare-earth component, yttrium, is 400 times more common than silver. Could superconducting tape, Whyte immediately wondered, be wound like copper wire to make much smaller but far more powerful magnets?

He assigned his 2013 fusion design class to see. If the students managed to double the strength of a magnetic field surrounding hot plasma, he knew, they might multiply fusion’s power density sixteenfold—power density scales as the fourth power of field strength. They came up with an eye-opening design they called Vulcan. It yielded five peer-reviewed papers—but whether layers of wound ReBCO tape could stand the stress of the current needed to hold plasma suspended while being superheated to ignite a fusion reaction was unknown.

For two years, his classes refined Vulcan. By 2015, with ReBCO more consistent in quality and supply, he challenged his students—11 male and one female, including an Argentine, a Russian, and a Korean—to outdo what 35 nations had been attempting for nearly 30 years.

“Let’s see if ReBCO lets us build a 500-megawatt tokamak—the same as ITER, only way smaller.”

If superconducting tape could let them make a fusion reactor to fit the footprint of a decommissioned coal-fired plant, he told them, it could plug right into existing power lines. To then make enough carbon-free energy to stop pushing Earth’s climate past the edge, its components would have to be mass-producible, so any competent contractor could assemble and service them.

The class met in a windowless room in a former Nabisco cracker factory, surrounded by blackboards. Divided into teams, the students set about figuring out how thin-tape electromagnets could be made robust, and how to capture neutrons expelled from fusion reactions so their heat could be used for turning a turbine—and so they could be harnessed to breed more tritium for the plasma. That’s crucial, because natural tritium is exceedingly rare. Since ReBCO-wrapped magnets would be so much smaller, shrinking the dimensions of one component rippled through everything else. One team’s innovations fed another’s, and parts of the design started to link together. As excitement spread through PSFC, members of earlier classes, now postdocs or faculty members, pitched in. Whyte’s students, some with doctoral dissertations due, were putting in 50-hour weeks on this, reminding him of why he’d dreamed of fusion in the first place.

And then, at the semester’s end, out popped their design. Just over 10 feet in diameter, it actually looked like a prototype power plant. While ITER had massive shielding, their tokamak would be wrapped in a compact blanket containing a molten-salt mixture of lithium fluoride and beryllium fluoride to absorb the heat of the neutrons escaping from the fusion reaction. Those neutrons would also react with the lithium to breed more tritium.

The blanket’s heat would be tapped for electricity—except one-fifth of the heat energy would remain in the plasma, meaning the reaction was now heating itself and was self-sustaining, producing more energy than was needed to ignite it. Net fusion energy had been achieved.

The ReBCO magnets, although just a 40th the size of ITER’s, could deliver a magnetic field strength of 23 tesla (a hospital MRI machine typically operates at 1.5 tesla). That was more than enough to achieve a fusion reaction, yet it would require less electricity than its copper-clad C-Mod predecessor by a factor of 2,000. Everything was designed for easy maintenance, and parts could be replaced without having to dismantle the entire reactor.

Most important, the calculated energy output was more than 13 times the input.

Whyte looked it over for the thousandth time. He was pretty sure they hadn’t broken any laws of physics. He calculated the cost per watt and was astonished. Suddenly their goal wasn’t just building a much smaller ITER. It was being commercially competitive.

Stunned, he told his wife, “This can actually work.”


They called it ARC, for “affordable, robust, compact.” “Buildable in a decade,” Whyte predicted. The peer-reviewed article his 12 students published in Fusion Engineering and Design estimated it would cost around $5 billion. In 2015, that wasn’t much more than the cost of a comparably sized coal-fired plant, and one-eighth ITER’s price tag.

That May, Whyte gave a keynote about ARC at a fusion engineering symposium in Austin, Texas. Four of his students attended. When he described their plan for a workable reactor by 2025, in just 10 years, conferees were astounded—everyone else was talking decades. Afterward, the MIT contingent went to lunch at Stubb’s Bar-B-Q. It was clear that with the climate eroding and the Intergovernmental Panel on Climate Change warning that yet-uninvented technologies were needed to keep temperatures from soaring into dreaded realms, they had to do this. But since the DOE had pulled its funding, how could they?

On a napkin, Whyte started listing what they’d need to do and what each step might cost. Over ribs, they crafted a proposal to spin off a startup to raise venture capital to finance a SPARC (for “soon-as-possible ARC”) demo fusion reactor to show that this could really happen. Then they’d build a commercial-scale ARC.

In 2021, teams from MIT’s Plasma Science Fusion Center and MIT spinout Commonwealth Fusion Systems used just 30 watts of energy to produce a magnetic field strong enough to sustain a fusion reaction.
GRETCHEN ERTL

Forming a company would free them from academic and government funding cycles, but they were plasma physicists, most still in their 20s, without business backgrounds. Nevertheless, Whyte and Martin Greenwald, deputy director of the PSFC, agreed to join them, and in 2018 Commonwealth Fusion Systems, CFS, was born. Three of his former students would run the company, and three would remain at MIT’s Plasma Science and Fusion Center, which—in a profit-sharing agreement—would be CFS’s research arm.

They opened shop up the street, in The Engine, MIT’s “tough tech” startup incubator, and gained the attention of climate-concerned backers like Bill Gates, George Soros, and Jeff Bezos. But they weren’t the only ones competing for fusion funds, and it became a race to see who could make commercial-scale fusion first. 

The CFS team may have been young, but because of its partnership with MIT and its more than a hundred experienced fusion scientists, it had a running start.

By the end of 2021, Commonwealth Fusion Systems had raised more than $2 billion and was breaking ground on 47 acres outside Boston for a commercial fusion energy campus, to build SPARC by 2025—and commercial-scale, mass-producible ARC by 2030.


Gaining and actually sustaining net energy is perpetually called fusion’s yet-unreached “holy grail,” but by September 2021, the CFS team of CEO Bob Mumgaard, SM ’15, PhD ’15 (a coauthor of the Vulcan design), chief science officer Brandon Sorbom, PhD ’17 (lead author of the 2015 fusion design class’s breakthrough paper), Whyte, and their 200 CFS colleagues were confident they could do it—if their magnets held. For three years, straight through the pandemic, they’d worked in PSFC’s West Cell laboratory, the cavernous former Oreo factory that had housed Alcator C-Mod, furiously solving problems like how to solder thin-film ReBCO tape together into a structure strong enough to withstand 40,000 amps passing through it—enough to power a small town.

The completed SPARC would have 18 magnets encircling its plasma chamber, but for this test they’d built just one. It was composed of 16 layers, each a D-shaped, 10-foot-high steel disk grooved like an LP. On one side, the grooves held tight spirals of ReBCO film, 270 kilometers in all—the distance from Boston to Albany. “Yet all that ReBCO holds just a sprinkling of rare earth,” said Sorbom. “That’s the magic of superconductors: A tiny bit of material can carry so much current. By comparison, a wind turbine’s rare-earth neodymium magnets weigh tons.”

On each disk’s flip side, the grooves channeled liquid helium to cool the superconductor for zero resistance. (The design dates to history’s first high-field magnet, built at MIT in the 1930s, which used copper conductors and water for coolant.) Each layer was built on an automated assembly line. “The idea,” said Mumgaard, “is to make 100,000 magnets a year someday. This can’t be a scientific curiosity. This needs to be an energy source.”

Although covid-19 had waned, an outbreak could foil everything, so they maintained coronavirus protocols, moving computer terminals outside beneath a tent to avoid crowding within. Others worked virtually. For a month, dozens worked around the clock in eight-hour shifts. Some operated the electromagnetic coil, encased in stainless steel in the middle of the room, which over a week had to be gradually supercooled from room temperature (298 K) down to 20 K before slowly ramping up to full magnetic strength. Others constantly compared real-time data with redundant models. As the temperature dropped, the internal connections, welds, and valves contracted at different rates, so they watched for leaks.

On September 2, 2021, the Thursday before Labor Day, they started ramping up by a few kiloamps, stopping frequently to check what the current was revealing, how the cooling characteristics had changed, and how the stresses on the ReBCO coil increased as the magnetic field strengthened to record heights.

Two nights later, they cranked the amperage toward their goal: a 20-tesla magnetic field, powerful enough to lift 421 Boeing 747s or contain a continuous fusion reaction. They’d been aiming for 7:00 a.m. on Sunday, the 5th. At 3:30 a.m., the large screen in the design center showed that they’d reached 40 kiloamps, and the magnetic field had reached 19.56 tesla.

At 4:30 a.m., they were at 19.98 tesla. Things got very quiet. At 5:20 a.m., every redundant on-screen meter read 20 tesla, and nothing had leaked or exploded—except under the tent, where champagne corks were popping.

Five years earlier, on its final four-­second run, C-Mod’s copper-conducting magnet had consumed 200 million watts of energy to reach 5.7 tesla. This took 30 watts—less energy by a factor of around 10 million, Whyte told reporters—to produce a magnetic field strong enough to sustain a fusion reaction. The joints that transferred current from one layer to the next actually performed better than expected. That was the biggest unknown, because there was only one way to test them: in the magnet itself. They looked spectacular.

After five hours, the team ramped down the power. “It’s a Kitty Hawk moment,” Mumgaard said.

Adapted from Hope Dies Last: Visionary People Across the World, Fighting to Find Us a Future by Alan Weisman, published by Dutton, an imprint of Penguin Random House. © 2025 by Alan Weisman.

Hands-on engineering

Jaden Chizuruoke May ’29 worked with teammates Rihanna Arouna ’29 and Marian Akinsoji ’29 to design the chemically powered model car whose framework he is building in this scene from the Huang-Hobbs BioMaker Space, where students have a chance to work safely and independently with biological systems.

The assignment to build the car—and the layered electrochemical battery that powers it—came in a class called “Hands-On Engineering: Squishy Style Making with Biology and Chemistry” taught by the lab’s director, Justin Buck, PhD ’12. “It is definitely one of my favorite classes,” says May, who appreciates that after being trained, students are given the freedom to figure out how to tackle each task in a project.

Located in the basement of Building 26, the BioMaker Space welcomes novices and expert mentors alike, offering workshops in such things as bacterial photography, biobots, lateral flow assay, CRISPR, and DNA origami.

For May, the makerspace has been a hub for collaboration. “I could never have done anything in that lab without my peers and counselors helping me, and the emphasis placed on teamwork is what makes the class feel both welcoming and exciting,” he says, adding that he made some of his first friends at MIT there: “It has been a great introduction to campus.”

May says he’s thinking of double majoring in Course 10-ENG (energy) and Course 21W (writing)—but the class has gotten him interested in biology, too.

Investing in the promise of quantum

As MIT navigates a difficult and constantly changing higher education landscape, I believe our best response is not easy but simple: Keep doing our very best work. The presidential initiatives we’ve launched since fall 2024 are a vital part of our strategy to advance excellence within and across high-impact fields, from health care, climate, and education to AI and manufacturing—and now quantum. On December 8, we launched Quantum at MIT, or QMIT—the name rhymes with qubit, the basic unit of quantum information—to elevate MIT’s long-standing strengths in quantum science and engineering across computing, communication, and sensing.

More than 40 years ago, MIT helped kick off what is widely considered the second quantum revolution as host of the first Physics of Computation Conference at Endicott House, bringing together physics and computing researchers to explore the promise of quantum computing. Now we’re investing further in that promise.

Like all MIT’s strategic priorities, QMIT will help ensure that new technologies are used for the benefit of society. Faculty director Danna Freedman, the Frederick George Keyes Professor of Chemistry, is leading the initiative with a focus that extends beyond research and discovery to the way quantum technologies are developed and deployed. QMIT will enable scientists and engineers to co-develop quantum tools, generating unprecedented capabilities in science, technology, industry, and national security. 

Although QMIT is a new initiative, it grew naturally from the Center for Quantum Engineering (CQE), created in 2019 to help bridge the gap between principal investigators at MIT and Lincoln Laboratory. A key to QMIT’s success will be integration with Lincoln Lab, with its deep and broad expertise in scaling and deployment.

And CQE has already gotten us started with industry collaborations through its Quantum Science and Engineering Consortium (QSEC), which brings together companies—from startups to large multinationals—that can help us realize positive, practical impact. We’re even envisioning a physical home for quantum at the heart of campus, a space for academic, industry, and public engagement with quantum systems.

As we set out for this new frontier, QMIT will allow us to shape the future of quantum, with a focus on solving “MIT-hard” problems. We hope that as the initiative evolves, our alumni and friends will be inspired to join us in supporting this exciting new effort to build on MIT’s quantum legacy.

Secrets of the sleep-deprived brain

Nearly everyone has experienced it—after a night of poor sleep, your brain might seem foggy, and your mind drifts off when you should be paying attention. A new MIT study reveals what happens biologically as these momentary lapses occur: Your brain is performing essential maintenance that it usually takes care of while you sleep. 

During a normal night of sleep, the cerebrospinal fluid (CSF) that cushions the brain helps flush away metabolic waste that has built up during the day. In a 2019 study, MIT electrical engineering and computer science professor Laura Lewis, PhD ’14, and colleagues showed that the CSF flows rhythmically in and out in a way that’s linked to changes in brain waves.

To explore what might happen to this CSF flow in a sleep-deprived brain, Lewis, who is also a member of MIT’s Institute for Medical Engineering and Science, and her colleagues tested 26 volunteers on several cognitive tasks after they’d been kept awake in the lab and when they were well-rested. Using both electroencephalograms and functional magnetic resonance imaging, the researchers measured heart rate, breathing rate, pupil diameter, blood oxygenation in the brain, and flow of CSF in and out of the brain as participants tried to press a button when they heard a beep or saw a visual change on a screen.

Unsurprisingly, sleep-deprived participants performed much worse than well-rested ones. Their response times were slower, and in some cases the participants never noticed the stimulus at all.

The researchers identified several physiological changes during these lapses of attention. Most significant was a flow of CSF out of the brain just as a lapse occurred—and back in as it ended. The researchers hypothesize that when the brain is sleep-deprived, it “attempts to catch up on this process by initiating pulses of CSF flow,” as Lewis says, even at the cost of one’s ability to pay attention.

“One way to think about those events is because your brain is so in need of sleep, it tries its best to enter into a sleep-like state to restore some cognitive functions,” says Zinong Yang, a postdoctoral associate and lead author of a paper on the work. 

The researchers also found several other physiological events linked to attentional lapses, including decreases in breathing and heart rate, along with constriction of the pupils. They found that pupil constriction began about 12 seconds before CSF flowed out of the brain, and pupils dilated again after attention returned.

“When your attention fails, you might feel it perceptually and psychologically, but it’s also reflecting an event that’s happening throughout the brain and body,” Lewis says.

“These results suggest to us that there’s a unified circuit that’s governing both what we think of as very high-level functions of the brain—our attention, our ability to perceive and respond to the world—and then also really basic, fundamental physiological processes.” 

The researchers did not explore what this circuit might be, but one good candidate, they say, is the noradrenergic system, which regulates many cognitive and bodily functions through the neurotransmitter norepinephrine—and has recently been shown to oscillate during normal sleep.

Listening to battery failure

Lithium-ion batteries produce faint sounds as they charge, discharge, and degrade. But until now, nobody could interpret those sounds to detect when a battery might be about to lose power, fail, or burst into flames.

Now, MIT engineers have found a way to do that, even with noisy data. The findings could provide the basis for relatively simple, totally passive, and nondestructive devices that could continuously monitor the health of battery systems like those in electric vehicles or grid-scale storage facilities.

“Through some careful scientific work, our team has managed to decode the acoustic emissions,” says Martin Z. Bazant, a professor of chemical engineering and mathematics. They were able to classify them as coming from gas bubbles generated by side reactions or from fractures caused by expansion and contraction of the active material, two primary mechanisms of degradation and failure.

The team coupled electrochemical testing of working batteries with recordings of their acoustic emissions, using signal processing to correlate sound characteristics with voltage and current. Then they took the batteries apart and studied them under an electron microscope to detect fracturing.

With Oak Ridge National Laboratory researchers, the team has also shown that acoustic emissions can warn of gas generation before thermal runaway, which can lead to fires. As Bazant says, it’s “like seeing the first tiny bubbles in a pot of heated water, long before it boils.” 

Under 10% of an earthquake’s energy makes the ground shake

Earthquakes are driven by energy stored up in rocks over millennia—energy that, once released, we perceive mainly in the form of the ground’s shaking. But a quake also generates a flash of heat and fractures and damages underground rocks. And exactly how much energy goes into each of these three processes is exceedingly difficult to measure in the field.

Now, with the help of carefully controlled miniature “lab quakes,” MIT geophysicist Matěj Peč and colleagues have quantified this so-called energy budget. Only about 1% to 10% of a lab quake’s energy causes physical shaking, they found, while 1% to 30% goes into breaking up rock and creating new surfaces. The vast majority heats up the area around a quake’s epicenter, producing a temperature spike that can actually melt surrounding material.

The team also found that the fractions of quake energy producing heat, shaking, and rock fracturing can shift depending on the tectonic activity the region has experienced in the past. “The deformation history—essentially what the rock remembers—really influences how destructive an earthquake could be,” says postdoc Daniel Ortega-Arroyo, PhD ’25, lead author of a paper on the work. “That history affects a lot of the material properties in the rock, and it dictates to some degree how it is going to slip.”

The lab quakes—which involve subjecting specially prepared samples of powdered granite and magnetic particles to steadily increasing pressure in a custom-built apparatus—are a simplified analogue of what occurs during a natural earthquake. Down the road, if scientists have an idea of how much shaking a quake generated in the past, they might be able to estimate the degree to which the quake’s energy also affected rocks deep underground by melting or breaking them apart. This in turn could reveal how much more or less vulnerable that region is to future quakes.

Building materials are getting closer to doubling as batteries

Concrete already builds our world, and an MIT-invented variant known as electron-conducting carbon concrete (ec3, pronounced “e c cubed”) holds out the possibility of helping power it, too. Now that vision is one step closer.

Made by combining cement, water, ultra-fine carbon black, and electrolytes, ec3 creates a conductive “nanonetwork” that could enable walls, sidewalks, and bridges to store and release electrical energy like giant batteries. To date, the technology has been limited by low voltage and scalability challenges. But the latest work by the MIT team that invented ec3 has increased the energy storage capacity by an order of magnitude. With the improved technology, about five cubic meters of concrete—the volume of a typical basement wall—could store enough energy to meet the daily needs of the average home.

A weight-bearing arch made of electron-conducting carbon concrete (ec3) integrates supercapacitor electrodes to power a light.
MIT EC³ HUB

The researchers achieved this progress by using high-resolution 3D imaging to learn more about how the conductive carbon network—essentially, the electrode—functions and interacts with electrolytes. Equipped with their new understanding, the team experimented with different electrolytes and their concentrations. “We found that there is a wide range of electrolytes that could be viable candidates for ec3,” says Damian Stefaniuk, a research scientist at the MIT Electron-Conducting Carbon-Cement-Based Materials Hub, led by associate professor Admir Masic. “This even includes seawater, which could make this a good material for use in coastal and marine applications, perhaps as support structures for offshore wind farms.”

At the same time, the team streamlined the way electrolytes were added to the mix, making it possible to cast thicker electrodes that stored more energy.

While ec3 doesn’t rival conventional batteries in energy density, it can in principle be incorporated directly into architectural elements and last as long as the structure itself. To show how structural form and energy storage can work together, the team built a miniature arch that supported its own weight and an additional load while powering an LED light.

5 Content Marketing Ideas for February 2026

Between Valentine’s Day, Presidents’ Day, and the start of the spring buying cycle, ecommerce marketers have plenty of opportunities to publish relevant and timely content in February 2026.

Content marketing is the process of creating, publishing, and promoting articles, videos, and podcasts to attract, engage, and retain customers. For ecommerce businesses, content does more than inform. It differentiates, builds trust, and supports discovery when shoppers are researching rather than buying.

Content is also foundational for lifecycle and social-media marketing, and search-engine and genAI optimization.

What follows are five content marketing ideas your ecommerce business can use in February 2026.

Valentine How-to

Valentine-themed content could include something as simple as planning a dinner at home.

Valentine’s Day remains one of the most reliable seasonal content opportunities, especially when merchants focus on guidance as well as promotions.

Content that provides useful and actionable information can attract shoppers unfamiliar with a brand or its products.

The content should align closely with the products sold. A wine shop can explain pairings. A jewelry retailer can address how to choose materials or styles. A home goods boutique might describe how to set a table or create a Valentine’s Day dinner.

Consider the sample titles:

  • “How to Choose the Perfect Wine for Valentine’s Dinner,”
  • “Build a Thoughtful Valentine’s Gift Box,”
  • “How to Create a Romantic Valentine’s at Home,”
  • “Helpful Tips to Match Valentine’s Jewelry to Her Style.”

Presidents’ Day

Presidents’ Day content can focus on U.S. patriotism and domestic manufacturing.

Celebrated on February 16 in 2026, Presidents’ Day is a storytelling opportunity more than a sales event. Ecommerce marketers can publish articles or videos that explain the holiday’s origins, its meaning, or the historical figures it honors, e.g., George Washington.

Patriotic holidays can celebrate domestic companies, such as brands with made-in-America products.

Here’s an example. Origin is an apparel brand in Farmington, Maine. President Dwight Eisenhower visited the city in June 1955, passing close to what is now Origin’s manufacturing facility. The company could recount Eisenhower’s visit and retell its own story in the process.

In 2026, Presidents’ Day has extra relevance. The United States is entering its semiquincentennial year, marking 250 years since independence. Celebrations will peak in July, yet February is not too early to publish 250th-themed content.

A Complete Guide

A “complete” guide is akin to a store owner explaining her wares to an in-person shopper.

Content marketers are familiar with “complete guides” or “ultimate guides.” These are typically long, “pillar” articles that demonstrate topical authority.

The goal is usefulness, not brevity. A merchant that sells loose-leaf tea could publish a comprehensive guide to tea types, brewing methods, and storage. A cycling retailer could create a guide to bike maintenance or gear selection.

Over time, these guides become evergreen assets that support internal linking, featured snippets, and AI-generated summaries. They can be gold for optimizing for search engines, generative AI platforms, and answer engines, especially when updated annually.

Examples of guides include “Complete Guide to Loose-Leaf Tea” and “Ultimate Guide to Choosing Cookware.”

The idea is clear enough: Pick a product or category and be the authority.

Curated Newsletters

This idea aims to help businesses that struggle to produce content. Instead of composing or generating (and then editing) loads of articles, a company can mix product info with content from other publishers.

Put another way, curated newsletters allow ecommerce businesses to publish consistently without creating content from scratch. The idea is to select quality articles, videos, or social posts from trusted sources and add brief editorial context.

The newsletter for Better Kitchen Gear, an affiliate marketing site, links to external recipes.

Consider an example from Better Kitchen Gear, an affiliate marketing site. Its email newsletter blends curated recipes with links to affiliate content. A recent issue on sourdough bread included summaries and links to recipes from the King Arthur Baking Company and cookbook author Alexandra Stafford.

Another link was to an original article titled “The Tools Behind Great Sourdough,” which included six products on Amazon.

Merchants could do much the same. For example, a golf accessories seller could publish a weekly newsletter featuring curated golf news and links to products.

American Heart Month

American Heart Month is an opportunity for stores selling health or fitness products.

President Lyndon Johnson established American Heart Month in 1964 with a proclamation encouraging Americans to focus on cardiovascular health.

It occurs in February because of Valentine’s Day, reinforcing the symbolic connection between the heart and daily life. Since then, the month-long observance has promoted education about heart health, prevention, and sustainable lifestyle habits.

Ecommerce marketers promoting products in fitness, food, wellness, apparel, and home categories can focus content on everyday behaviors, routines, and product use that support an active, balanced lifestyle.

Imagine a content marketer for a fitness gear retailer. She wants to honor American Heart Month while promoting the company’s products. She decides on an article titled “5 Ways to Turn a Spare Room into a Cardio Studio.”

The Guardian: Google AI Overviews Gave Misleading Health Advice

The Guardian published an investigation claiming health experts found inaccurate or misleading guidance in some AI Overview responses for medical queries. Google disputes the reporting and says many examples were based on incomplete screenshots.

The Guardian said it tested health-related searches and shared AI Overview responses with charities, medical experts, and patient information groups. Google told The Guardian the “vast majority” of AI Overviews are factual and helpful.

What The Guardian Reported Finding

The Guardian said it tested a range of health queries and asked health organizations to review the AI-generated summaries. Several reviewers said the summaries included misleading or incorrect guidance.

One example involved pancreatic cancer. Anna Jewell, director of support, research and influencing at Pancreatic Cancer UK, said advising patients to avoid high-fat foods was “completely incorrect.” She added that following that guidance “could be really dangerous and jeopardise a person’s chances of being well enough to have treatment.”

The reporting also highlighted mental health queries. Stephen Buckley, head of information at Mind, said some AI summaries for conditions such as psychosis and eating disorders offered “very dangerous advice” and were “incorrect, harmful or could lead people to avoid seeking help.”

The Guardian cited a cancer screening example too. Athena Lamnisos, chief executive of the Eve Appeal cancer charity, said a pap test being listed as a test for vaginal cancer was “completely wrong information.”

Sophie Randall, director of the Patient Information Forum, said the examples showed “Google’s AI Overviews can put inaccurate health information at the top of online searches, presenting a risk to people’s health.”

The Guardian also reported that repeating the same search could produce different AI summaries at different times, pulling from different sources.

Google’s Response

Google disputed both the examples and the conclusions.

A spokesperson told The Guardian that many of the health examples shared were “incomplete screenshots,” but from what the company could assess they linked “to well-known, reputable sources and recommend seeking out expert advice.”

Google told The Guardian the “vast majority” of AI Overviews are “factual and helpful,” and that it “continuously” makes quality improvements. The company also argued that AI Overviews’ accuracy is “on a par” with other Search features, including featured snippets.

Google added that when AI Overviews misinterpret web content or miss context, it will take action under its policies.

The Broader Accuracy Context

This investigation lands in the middle of a debate that’s been running since AI Overviews expanded in 2024.

During the initial rollout, AI Overviews drew attention for bizarre results, including suggestions involving glue on pizza and eating rocks. Google later said it would reduce the scope of queries that trigger AI-written summaries and refine how the feature works.

I covered that launch, and the early accuracy problems quickly became part of the public narrative around AI summaries. The question then was whether the issues were edge cases or something more structural.

More recently, data from Ahrefs suggests medical YMYL queries are more likely than average to trigger AI Overviews. In its analysis of 146 million SERPs, Ahrefs reported that 44.1% of medical YMYL queries triggered an AI Overview. That’s more than double the overall baseline rate in the dataset.

Separate research on medical Q&A in LLMs has pointed to citation-support gaps in AI-generated answers. One evaluation framework, SourceCheckup, found that many responses were not fully supported by the sources they cited, even when systems provided links.

Why This Matters

AI Overviews appear above ranked results. When the topic is health, errors carry more weight.

Publishers have spent years investing in documented medical expertise to meet Google’s quality standards for health content. This investigation puts the same spotlight on Google’s own summaries when they appear at the top of results.

The Guardian’s reporting also highlights a practical problem. The same query can produce different summaries at different times, making it harder to verify what you saw by running the search again.

Looking Ahead

Google has previously adjusted AI Overviews after viral criticism. Its response to The Guardian indicates it expects AI Overviews to be judged like other Search features, not held to a separate standard.

State Of AI Search Optimization 2026

Every year, after the winter holidays, I spend a few days ramping up by gathering the context from last year and reminding myself of where my clients are at. I want to use the opportunity to share my understanding of where we are with AI Search, so you can quickly get back into the swing of things.

As a reminder, the vibe around ChatGPT turned a bit sour at the end of 2025:

  • Google released the superior Gemini 3, causing Sam Altman to announce a Code Red (ironically, three years after Google did the same at the launch of ChatGPT, then running GPT-3.5).
  • OpenAI made a series of circular investments that raised eyebrows and questions about how to finance them.
  • ChatGPT, which sends the majority of all LLM referral traffic, amounts to at most 4% of current organic (mostly Google) referral traffic.

Most of all, we still don’t know the value of a mention in an AI response. However, the topic of AI and LLMs couldn’t be more important because the Google user experience is turning from a list of results to a definitive answer.

A big “thank you” to Dan Petrovic and Andrea Volpini for reviewing my draft and adding meaningful concepts.

AI Search Optimization (Image Credit: Kevin Indig)

Retrieved → Cited → Trusted

Optimizing for AI search visibility follows a pipeline similar to the classic “crawl, index, rank” for search engines:

  1. Retrieval systems decide which pages enter the candidate set.
  2. The model selects which sources to cite.
  3. Users decide which citation to trust and act on.

Caveats:

  1. A lot of the recommendations overlap strongly with common SEO best practices. Same tactics, new game.
  2. I don’t pretend to have an exhaustive list of everything that works.
  3. Controversial factors like schema or llms.txt are not included.

Consideration: Getting Into The Candidate Pool

Before any content enters the model’s consideration (grounding) set, it must be crawled, indexed, and fetchable within milliseconds during real-time search.

The factors that drive consideration are:

  • Selection Rate and Primary Bias.
  • Server response time.
  • Metadata relevance.
  • Product feeds (in ecommerce).

1. Selection Rate And Primary Bias

  • Definition: Primary bias measures the brand-attribute associations a model holds before grounding in live search results. Selection Rate measures how frequently the model chooses your content from the retrieval candidate pool.
  • Why it matters: LLMs are biased by training data. Models develop confidence scores for brand-attribute relationships (e.g., “cheap,” “durable,” “fast”) independent of real-time retrieval. These pre-existing associations influence citation likelihood even when your content enters the candidate pool.
  • Goal: Understand which attributes the model associates with your brand and how confident it is in your brand as an entity. Systematically strengthen those associations through targeted on-page and off-page campaigns.

2. Server Response Time

  • Definition: The time between a crawler request and the server’s first byte of response data (TTFB = Time To First Byte).
  • Why it matters: When models need web results to ground their answers (RAG), they must retrieve the content much as a search engine crawler does. Even though retrieval is mostly index-based, faster servers help with rendering, agentic workflows, and freshness, and the benefits compound across query fan-out. LLM retrieval operates under tight latency budgets during real-time search. Slow responses prevent pages from entering the candidate pool because they miss the retrieval window. Consistently slow response times trigger crawl rate limiting.
  • Goal: Keep server response times under 200 ms; a quick spot check is sketched below. Sites with load times under 1 second receive 3x more Googlebot requests than sites slower than 3 seconds. For LLM crawlers (GPTBot, Google-Extended), retrieval windows are even tighter than in traditional search.
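
A minimal TTFB spot check, assuming Python and the third-party requests library; the URLs are placeholders, and response.elapsed only approximates time to first byte (it measures the interval until response headers arrive):

```python
# Minimal sketch: spot-check approximate TTFB for a few URLs.
# Assumes `pip install requests`; URLs below are placeholders.
import requests

URLS = [
    "https://example.com/",
    "https://example.com/guides/loose-leaf-tea",
]

for url in URLS:
    # stream=True returns as soon as headers arrive, so `elapsed`
    # approximates time-to-first-byte rather than full download time.
    r = requests.get(url, stream=True, timeout=10)
    ttfb_ms = r.elapsed.total_seconds() * 1000
    print(f"{'OK' if ttfb_ms < 200 else 'SLOW':4} {ttfb_ms:7.1f} ms  {url}")
    r.close()
```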

3. Metadata Relevance

  • Definition: Title tags, meta descriptions, and URL structure that LLMs parse when evaluating page relevance during live retrieval.
  • Why it matters: Before picking content to form AI answers, LLMs parse titles for topical relevance, descriptions as document summaries, and URLs as context clues for page relevance and trustworthiness.
  • Goal: Include target concepts in titles and descriptions (!) to match user prompt language. Create keyword-descriptive URLs, potentially even including the current year to signal freshness. A toy audit is sketched below.
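
As a toy illustration of matching prompt language, this sketch checks whether target concepts appear in a page’s title and meta description. The page data and concept list are hypothetical; a real audit would pull live pages:

```python
# Toy metadata audit: do target concepts appear in title + description?
# All values are illustrative placeholders.
page = {
    "url": "https://example.com/guides/loose-leaf-tea-2026",
    "title": "Loose-Leaf Tea: Types, Brewing, and Storage (2026 Guide)",
    "description": "How to choose, brew, and store loose-leaf tea, with steep times and water temperatures.",
}
concepts = ["loose-leaf tea", "brew", "storage"]

text = f"{page['title']} {page['description']}".lower()
for concept in concepts:
    status = "present" if concept in text else "MISSING"
    print(f"{status:8} {concept!r} in {page['url']}")
```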

4. Product Feed Availability (Ecommerce)

  • Definition: Structured product catalogs submitted directly to LLM platforms with real-time inventory, pricing, and attribute data.
  • Why it matters: Direct feeds bypass traditional retrieval constraints and enable LLMs to answer transactional shopping queries (“where can I buy,” “best price for”) with accurate, current information.
  • Goal: Submit merchant-controlled product feeds to ChatGPT’s merchant program (chatgpt.com/merchants) in JSON, CSV, TSV, or XML format with complete attributes (title, price, images, reviews, availability, specs). Implement ACP (Agentic Commerce Protocol) for agentic shopping. An illustrative JSON record follows.
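
To make the attribute list concrete, here is one product record serialized as JSON. The field names are generic illustrations, not the official schema of ChatGPT’s merchant program or any other platform; check each program’s documentation for its required format:

```python
# Illustrative product-feed record. Field names are generic examples,
# not any platform's official feed specification.
import json

item = {
    "title": "Stainless Steel Moka Pot, 6-Cup",
    "price": {"amount": 39.95, "currency": "USD"},
    "availability": "in_stock",
    "images": ["https://example.com/img/moka-pot-1.jpg"],
    "reviews": {"rating": 4.7, "count": 312},
    "specs": {"material": "stainless steel", "capacity_cups": 6},
}

print(json.dumps({"items": [item]}, indent=2))
```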

Relevance: Being Selected For Citation

“The Attribution Crisis in LLM Search Results” (Strauss et al., 2025) reports low citation rates even when models access relevant sources.

  • 24% of ChatGPT (4o) responses are generated without explicitly fetching any online content.
  • Gemini provides no clickable citation in 92% of answers.
  • Perplexity visits about 10 relevant pages per query but cites only three to four.

Models can only cite sources that enter the context window. Pre-training mentions often go unattributed. Live retrieval adds a URL, which enables attribution.

5. Content Structure

  • Definition: The semantic HTML hierarchy, formatting elements (tables, lists, FAQs), and fact density that make pages machine-readable.
  • Why it matters: LLMs extract and cite specific passages. Clear structure makes pages easier to parse and excerpt. Since prompts average 5x the length of keywords, structured content answering multi-part questions outperforms single-keyword pages.
  • Goal: Use semantic HTML with clear H-tag hierarchies, tables for comparisons, and lists for enumeration. Increase fact and concept density to maximize snippet contribution probability. A skeleton example follows.
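
For concreteness, here is a skeleton of that structure—one H1, a comparison table, an enumerated list—held in a Python string so it can be printed or fed to a template. All content is placeholder:

```python
# Skeleton of machine-readable page structure: heading hierarchy,
# a comparison table, and an enumerated list. Content is placeholder.
PAGE = """\
<article>
  <h1>Loose-Leaf Tea: Types, Brewing, and Storage</h1>
  <h2>Types compared</h2>
  <table>
    <tr><th>Type</th><th>Steep time</th><th>Water temp</th></tr>
    <tr><td>Green</td><td>2 min</td><td>80 °C</td></tr>
    <tr><td>Black</td><td>4 min</td><td>95 °C</td></tr>
  </table>
  <h2>Brewing steps</h2>
  <ol>
    <li>Heat water to the right temperature.</li>
    <li>Measure 2 g of leaves per 100 ml.</li>
    <li>Steep, strain, and serve.</li>
  </ol>
</article>
"""
print(PAGE)
```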

6. FAQ Coverage

  • Definition: Question-and-answer sections that mirror the conversational phrasing users employ in LLM prompts.
  • Why it matters: FAQ formats align with how users query LLMs (“How do I…,” “What’s the difference between…”). This structural and linguistic match increases citation and mention likelihood compared to keyword-optimized content.
  • Goal: Build FAQ libraries from real customer questions (support tickets, sales calls, community forums) that capture emerging prompt patterns. Monitor FAQ freshness through the lastReviewed or dateModified schema properties. A JSON-LD sketch follows this list.
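
A minimal sketch of FAQ structured data, emitting schema.org FAQPage JSON-LD with a dateModified property to signal freshness; the question, answer, and date are placeholders:

```python
# Emit FAQPage JSON-LD with a freshness signal (dateModified).
# schema.org types/properties are real; the content is placeholder.
import json

faq = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "dateModified": "2026-02-01",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "How do I store loose-leaf tea?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "Keep it in an airtight, opaque container away from heat and light.",
            },
        },
    ],
}

print('<script type="application/ld+json">')
print(json.dumps(faq, indent=2))
print("</script>")
```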

7. Content Freshness

  • Definition: Recency of content updates as measured by “last updated” timestamps and actual content changes.
  • Why it matters: LLMs parse last-updated metadata to assess source recency and prioritize recent information as more accurate and relevant.
  • Goal: Update content within the past three months for maximum performance. Over 70% of pages cited by ChatGPT were updated within 12 months, but content updated in the last three months performs best across all intents.

8. Third-Party Mentions (“Webutation”)

  • Definition: Brand mentions, reviews, and citations on external domains (publishers, review sites, news outlets) rather than owned properties.
  • Why it matters: LLMs weigh external validation more heavily than self-promotion the closer user intent comes to a purchase decision. Third-party content provides independent verification of claims and establishes category relevance through co-mentions with recognized authorities. Such mentions also strengthen a brand’s entityhood inside large context graphs.
  • Goal: Earn contextual backlinks from authoritative domains and maintain complete profiles on category review platforms; in AI search, 85% of brand mentions for high-purchase-intent prompts come from third-party sources.

9. Organic Search Position

  • Definition: Page ranking in traditional search engine results pages (SERPs) for relevant queries.
  • Why it matters: Many LLMs use search engines as retrieval sources. Higher organic rankings increase the probability of entering the LLM’s candidate pool and receiving citations.
  • Goal: Rank in Google’s top 10 for fan-out query variations around your core topics, not just head terms. Since LLM prompts are conversational and varied, pages ranking for many long-tail and question-based variations have higher citation probability. Pages in the top 10 show a strong correlation (~0.65) with LLM mentions, and 76% of AI Overview citations pull from these positions. Caveat: Correlation varies by LLM. For example, overlap is high for AI Overviews but low for ChatGPT.

User Selection: Earning Trust And Action

Trust is critical because we’re dealing with a single answer in AI search, not a list of search results. Optimizing for trust is similar to optimizing for click-through rates in classic search, except that it takes longer and is harder to measure.

10. Demonstrated Expertise

  • Definition: Visible credentials, certifications, bylines, and verifiable proof points that establish author and brand authority.
  • Why it matters: AI search delivers single answers rather than ranked lists. Users who click through require stronger trust signals before taking action because they’re validating a definitive claim.
  • Goal: Display author credentials, industry certifications, and verifiable proof (customer logos, case study metrics, third-party test results, awards) prominently. Support marketing claims with evidence.

11. User-Generated Content Presence

  • Definition: Brand representation in community-driven platforms (Reddit, YouTube, forums) where users share experiences and opinions.
  • Why it matters: Users validate synthetic AI answers against human experience. When AI Overviews appear, clicks on Reddit and YouTube grow from 18% to 30% because users seek social proof.
  • Goal: Build positive presence in category-relevant subreddits, YouTube, and forums. YouTube and Reddit are consistently in the top 3 most cited domains across LLMs.

From Choice To Conviction

Search is moving from abundance to synthesis. For two decades, Google’s ranked list gave users a choice. AI search delivers a single answer that compresses multiple sources into one definitive response.

The mechanics differ from early 2000s SEO:

  • Retrieval windows replace crawl budgets.
  • Selection rate replaces PageRank.
  • Third-party validation replaces anchor text.

The strategic imperative is identical: earn visibility in the interface where users search. Traditional SEO remains foundational, but AI visibility demands different content strategies:

  • Conversational query coverage matters more than head-term rankings.
  • External validation matters more than owned content.
  • Structure matters more than keyword density.

Brands that build systematic optimization programs now will compound advantages as LLM traffic scales. The shift from ranked lists to definitive answers is irreversible.

