What Japan’s “megaquake” warning really tells us

MIT Technology Review Explains: Let our writers untangle the complex, messy world of technology to help you understand what’s coming next. You can read more from the series here.

On August 8, at 16:42 local time, a magnitude-7.1 earthquake shook southern Japan. The temblor, originating off the shores of the mainland island of Kyūshū, was felt by nearly a million people across the region, and at first a tsunami seemed possible. But only a diminutive wave swept ashore, buildings remained upright, and nobody died. The crisis was over as quickly as it began.

But then, something new happened. The Japan Meteorological Agency, a government organization, issued a ‘megaquake advisory’ for the first time. This pair of words may appear disquieting—and to some extent, they are. There is a ticking bomb below Japanese waters, a giant crevasse where one tectonic plate dives below another. Stress has been accumulating across this boundary for quite some time, and inevitably, it will do what it has repeatedly done in the past: part of it will violently rupture, generating a devastating earthquake and a potentially huge tsunami.

The advisory was in part issued because it is possible that the magnitude-7.1 quake is a foreshock – a precursory quake – to a far larger one, a tsunami-making monster that could kill a quarter of a million people.

The good news, for now, is that scientists think it is very unlikely that the magnitude-7.1 quake is a prelude to a cataclysm. Nothing is certain, but “the chances that this actually is a foreshock are really quite low,” says Harold Tobin, the director of the Pacific Northwest Seismic Network.

The advisory, ultimately, isn’t prophetic. Its primary purpose is to let the public know that scientists are aware of what’s going on, that they are cognizant of the worst-case scenario—and that everyone else should be mindful of that grim possibility too. Evacuation routes should be memorized, and emergency supplies should be obtained, just in case.

“Even if the probability is low, the consequences are so high,” says Judith Hubbard, an earthquake scientist at Cornell University. “It makes sense to worry about some of these low probabilities.”

Japan, which sits atop a tectonic jigsaw, is no stranger to large earthquakes. Just this past New Year’s Day, a magnitude-7.6 temblor convulsed the Noto Peninsula, killing 230 people. But special attention is paid to certain quakes even when they cause no direct harm.

The August 8 event took place on the Nankai subduction zone: here, the Philippine Sea plate creeps below Japan, which is attached to the Eurasian plate. This type of plate boundary is the sort capable of producing ‘megaquakes’, those of magnitude 8.0 and higher. (The numerical difference may seem small, but the scale is logarithmic: a magnitude-8.0 quake unleashes 32 times more energy than a magnitude-7.0 quake.)
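That 32-fold figure follows from the standard relation between magnitude and released energy, in which energy grows by a factor of 10^1.5 for each whole unit of magnitude. A quick sketch of the arithmetic (the helper function here is illustrative, not from any seismology library):

```python
# Seismic energy scales as 10^(1.5 * magnitude), so the ratio between two
# quakes depends only on the difference in their magnitudes.
def energy_ratio(m_large, m_small):
    """Approximate ratio of seismic energy released between two quakes."""
    return 10 ** (1.5 * (m_large - m_small))

print(round(energy_ratio(8.0, 7.0)))  # about 32, as noted above
print(round(energy_ratio(9.1, 7.1)))  # about 1,000: 2011 Tohoku vs. the August 8 quake
```

The second line shows why the 2011 Tōhoku quake, two magnitude units larger, released roughly a thousand times the energy of the August 8 event.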

Consequently, the Nankai subduction zone (or Nankai Trough) has created several historical tragedies. A magnitude-7.9 quake in 1944 was followed by a magnitude-8.0 quake in 1946; both events were caused by part of the submarine trench jolting. The magnitude-8.6 quake of 1707, however, involved the rupture of the entire Nankai Trough. Thousands died on each occasion.

Predicting disaster

Predicting when and where the next major quake will happen anywhere on Earth is currently impossible. Nankai is no different: as recently noted by Hubbard on her blog Earthquake Insights – co-authored with geoscientist Kyle Bradley – there isn’t a set interval between Nankai’s major quakes; the gaps between them have ranged from days to several centuries.

But as stress is continually accumulating on that plate boundary, it’s certain that, one day, the Nankai Trough will let loose another great quake, one which could push a vast volume of seawater toward a large swath of western and central Japan, making a tsunami 100 feet tall. The darkest scenario suggests that 230,000 could perish, two million buildings would be damaged or destroyed, and the country would be left with a $1.4 trillion bill.

Naturally, a magnitude-7.1 quake on the Nankai Trough worries scientists. Aftershocks (a series of smaller-magnitude quakes) are a guaranteed feature of potent quakes. But there is a small chance that a large quake will be followed by an even larger one, retrospectively making the first a foreshock.

“The earthquake changes the stress in the surrounding crust a little bit,” says Hubbard. Using the energy released during the August 8 rupture, and decoding the seismic waves created during the quake, scientists can estimate how much stress gets shifted to surrounding faults.

The worry is that some of the stress released by one quake gets transferred to a big fault that hasn’t ruptured in a very long time but is ready to fold like an explosive house of cards. “You never know which increment of stress is gonna be the one that pushes it over the edge,” says Hubbard.

Scientists cannot tell whether a large quake is a foreshock until a larger quake occurs. But the possibility remains that the August 8 temblor is a foreshock to something considerably worse. Statistically, it’s unlikely. But there is additional context to why that megaquake advisory was issued: the specter of 2011’s magnitude-9.1 Tōhoku earthquake and tsunami, which killed 18,000 people, still haunts the Japanese government and the nation’s geoscientists. 

Hubbard explains that, two days before that quake struck off Japan’s eastern seaboard, there was a magnitude-7.2 event in the same area—now known to be a foreshock to the catastrophe. Reportedly, authorities in Japan regretted not highlighting that possibility in advance; doing so might have left people on the eastern seaboard better prepared, and more capable of escaping their fate.

A sign to get prepared

In response, Japan’s government created new protocols for signaling that foreshock possibility. Most magnitude-7.0-or-so quakes will not be followed by a ‘megaquake advisory’. Only those happening in tectonic settings able to trigger truly gigantic quakes will—and that includes the Nankai Trough.

Crucially, this advisory is not a warning that a megaquake is imminent. It means: “be ready for when the big earthquake comes,” says Hubbard. Nobody is mandated to evacuate, but they are asked to know their escape routes. Meanwhile, local news reports that nursing homes and hospitals in the region are tallying emergency supplies while moving immobile patients to higher floors or other locations. The high-speed Shinkansen railway trains are running at a reduced maximum speed, and certain flights are carrying more fuel than usual in case they need to divert.

Earthquake advisories aren’t new. “California has something similar, and has issued advisories before,” says Wendy Bohon, an independent earthquake geologist. In September 2016, for example, a swarm of hundreds of modest quakes caused the U.S. Geological Survey to publicly advise that, for a week, there was a 0.03 to 1% chance of a magnitude-7.0-or-greater quake rocking the Southern San Andreas Fault—an outcome that fortunately didn’t come to pass.

But this megaquake advisory is Japan’s first, and it will have both pros and cons. “There are economic and social consequences to this,” says Bohon. Some confusion about how to respond has been reported, and widespread cancellations of travel to the region will come with a price tag. 

But calm reactions to the advisory seem to be the norm, and (ideally) this advisory will result in an increased understanding of the threat of the Nankai Trough. “It really is about raising awareness,” says Adam Pascale, chief scientist at the Seismology Research Centre in Melbourne, Australia. “It’s got everyone talking. And that’s the point.”

Geoscientists are also increasingly optimistic that the August 8 quake isn’t a harbinger of a seismic pandemonium. “This thing is way off to the extreme margin of the actual Nankai rupture zone,” says Tobin—meaning it may not even count as being in the zone of tectonic concern. 

A blog post co-authored by Shinji Toda, a seismologist at Tōhoku University in Sendai, Japan, also estimates that any stress transfer to the dangerous parts of the Trough is negligible. There is no clear evidence that the plate boundary is acting weirdly. And with each day that goes by, the odds of the August 8 quake being a foreshock drop even further.

Tech defenses

But if a megaquake did suddenly emerge, Japan has a technological shield that may mitigate a decent portion of the disaster. 

Buildings are commonly fitted with dampers that allow them to withstand dramatic quake-triggered shaking. And like America’s West Coast, the entire archipelago has a sophisticated earthquake early-warning system: seismometers close to the quake’s origin listen to its seismic screams, and software makes a quick estimate of the magnitude and shaking intensity of the rupture before beaming it to people’s devices, giving them invaluable seconds to take cover. Automatic countermeasures also slow trains down and control machinery in factories, hospitals, and office buildings to minimize damage from the incoming shaking.
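Those invaluable seconds exist because an alert traveling at radio speed can outrun the seismic waves themselves. A back-of-envelope sketch (the wave speed and alert delay below are typical assumed values, not parameters of Japan’s actual system):

```python
S_WAVE_KM_S = 3.5  # assumed typical speed of the damaging shear waves

def warning_seconds(distance_km, alert_delay_s=8.0):
    """Rough early-warning lead time at a site `distance_km` from the
    epicenter: the alert arrives almost instantly once issued, but issuing
    it takes `alert_delay_s` seconds of detection and processing."""
    return max(0.0, distance_km / S_WAVE_KM_S - alert_delay_s)

for d in (30, 100, 300):
    print(f"{d} km away: ~{warning_seconds(d):.0f} s of warning")
```

A real system uses dense station networks and more sophisticated source estimation, but the basic geometry (shaking traveling at a few kilometers per second versus an alert at effectively light speed) is what buys the lead time, and why sites closest to the epicenter get the least warning.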

A tsunami early-warning system also kicks into gear if activated, beaming evacuation notices to phones, televisions, radios, sirens, and myriad specialized receivers in buildings in the afflicted region—giving people several minutes to flee. A megaquake advisory may be new, but for a population highly knowledgeable about earthquake and tsunami defense, it’s just another layer of protection.

The advisory has had other effects too: it’s caused those in another imperiled part of the world to take notice. The Cascadia Subduction Zone offshore from the US Pacific Northwest is also capable of producing both titanic quakes and prodigious tsunamis. Its last grand performance, in 1700, created a tsunami that not only inundated large sections of the North American coast, but it also swamped parts of Japan, all the way across the ocean.

Japan’s megaquake advisory has got Tobin thinking: “What would we do if our subduction zone starts acting weird?” he says—which includes a magnitude-7.0 quake in the Cascadian depths. “There is not a protocol in place the way there is in Japan.” Tobin speculates that a panel of experts would quickly assemble, and a statement – perhaps one not too dissimilar to Japan’s own advisory – would emerge from the U.S. Geological Survey. Like Japan, “we would have to be very forthright about the uncertainty,” he says.

Whether it’s Japan or the US or anywhere else, such advisories aren’t meant to engender panic. “You don’t want people to live their lives in fear,” says Hubbard. But it’s no bad thing to draw attention to the fact that Earth can sometimes be an unforgiving place to live.

Robin George Andrews is an award-winning science journalist and doctor of volcanoes based in London. He regularly writes about the Earth, space, and planetary sciences, and is the author of two critically acclaimed books: Super Volcanoes (2021) and How To Kill An Asteroid (October 2024).

This researcher wants to replace your brain, little by little

A US agency pursuing moonshot health breakthroughs has hired a researcher advocating an extremely radical plan for defeating death.

His idea? Replace your body parts. All of them. Even your brain. 

Jean Hébert, a new hire with the US Advanced Projects Agency for Health (ARPA-H), is expected to lead a major new initiative around “functional brain tissue replacement,” the idea of adding youthful tissue to people’s brains. 

President Joe Biden created ARPA-H in 2022, as an agency within the Department of Health and Human Services, to pursue what he called  “bold, urgent innovation” with transformative potential. 

The brain renewal concept could have applications such as treating stroke victims, who lose areas of brain function. But Hébert, a biologist at the Albert Einstein College of Medicine, has most often proposed total brain replacement, along with replacing other parts of our anatomy, as the only plausible means of avoiding death from old age.

As he described in his 2020 book, Replacing Aging, Hébert thinks that to live indefinitely people must find a way to substitute all their body parts with young ones, much like a high-mileage car is kept going with new struts and spark plugs.

The idea has a halo of plausibility since there are already liver transplants and titanium hips, artificial corneas and substitute heart valves. The trickiest part is your brain. That ages, too, shrinking dramatically in old age. But you don’t want to swap it out for another—because it is you.

And that’s where Hébert’s research comes in. He’s been exploring ways to “progressively” replace a brain by adding bits of youthful tissue made in a lab. The process would have to be done slowly enough, in steps, that your brain could adapt, relocating memories and your self-identity.  

During a visit this spring to his lab at Albert Einstein, Hébert showed MIT Technology Review how he has been carrying out initial experiments with mice, removing small sections of their brains and injecting slurries of embryonic cells. It’s a step toward proving whether such youthful tissue can survive and take over important functions.

To be sure, the strategy is not widely accepted, even among researchers in the aging field. “On the surface it sounds completely insane, but I was surprised how good a case he could make for it,” says Matthew Scholz, CEO of aging research company Oisín Biotechnologies, who met with Hébert this year. 

Scholz is still skeptical though. “A new brain is not going to be a popular item,” he says. “The surgical element of it is going to be very severe, no matter how you slice it.”

Now, though, Hébert’s ideas appear to have gotten a huge endorsement from the US government. Hébert told MIT Technology Review that he had proposed a $110 million project to ARPA-H to prove his ideas in monkeys and other animals, and that the government “didn’t blink” at the figure. 

ARPA-H confirmed this week that it had hired Hébert as a program manager. 

The agency, modeled on DARPA, the Department of Defense organization that developed stealth fighters, gives managers unprecedented leeway in awarding contracts to develop novel technologies. Among its first programs are efforts to develop at-home cancer tests and cure blindness with eye transplants.

It may be several months before details of the new project are announced, and it’s possible that ARPA-H will establish more conventional goals like treating stroke victims and Alzheimer’s patients, whose brains are damaged, rather than the more radical idea of extreme life extension. 

“If it can work, forget aging; it would be useful for all kinds of neurodegenerative disease,” says Justin Rebo, a longevity scientist and entrepreneur.

But defeating death is Hébert’s stated aim. “I was a weird kid and when I found out that we all fall apart and die, I was like, ‘Why is everybody okay with this?’ And that has pretty much guided everything I do,” he says. “I just prefer life over this slow degradation into nonexistence that biology has planned for all of us.”

Hébert, now 58, also recalls when he began thinking that the human form might not be set in stone. It was upon seeing the 1973 movie Westworld, in which the gun-slinging villain, played by Yul Brynner, turns out to be an android. “That really stuck with me,” Hébert said.

Lately, Hébert has become something of a star figure among immortalists, a fringe community devoted to never dying. That’s because he’s an established scientist who is willing to propose extreme steps to avoid death. “A lot of people want radical life extension without a radical approach. People want to take a pill, and that’s not going to happen,” says Kai Micah Mills, who runs a company, Cryopets, developing ways to deep-freeze cats and dogs for future reanimation.

The reason pharmaceuticals won’t ever stop aging, Hébert says, is that time affects all of our organs and cells and even degrades substances such as elastin, one of the molecular glues that holds our bodies together. So even if, say, gene therapy could rejuvenate the DNA inside cells, a concept some companies are exploring, Hébert believes we’re still doomed as the scaffolding around them comes undone.

One organization promoting Hébert’s ideas is the Longevity Biotech Fellowship (LBF), a self-described group of “hardcore” life extension enthusiasts, which this year published a technical roadmap for defeating aging altogether. In it, they used data from Hébert’s ARPA-H proposal to argue in favor of extending life with gradual brain replacement for elderly subjects, as well as transplant of their heads onto the bodies of “non-sentient” human clones, raised to lack a functioning brain of their own, a procedure they referred to as “body transplant.”

Such a startling feat would involve several technologies that don’t yet exist, including a means to attach a transplanted head to a spinal cord. Even so, the group rates “replacement” as the most likely way to conquer death, claiming it would take only 10 years and $3.6 billion to demonstrate.

“It doesn’t require you to understand aging,” says Mark Hamalainen, co-founder of the research and education group. “That is why Jean’s work is interesting.”

Hébert’s connections to such far-out concepts (he serves as a mentor in LBF’s training sessions) could make him an edgy choice for ARPA-H, a young agency whose budget is $1.5 billion a year.

For instance, Hébert recently said on a podcast with Hamalainen that human fetuses might be used as a potential source of life-extending parts for elderly people. That would be ethical to do, Hébert said during the program, if the fetus is young enough that there “are no neurons, no sentience, and no person.” And according to a meeting agenda viewed by MIT Technology Review, Hébert was also a featured speaker at an online pitch session held last year on full “body replacement,” which included biohackers and an expert in primate cloning.

Hébert declined to describe the session, which he said was not recorded “out of respect for those who preferred discretion.” But he’s in favor of growing non-sentient human bodies. “I am in conversation with all these groups because, you know, not only is my brain slowly deteriorating, but so is the rest of my body,” says Hébert. “I’m going to need other body parts as well.”

The focus of Hébert’s own scientific work is the neocortex, the outer part of the brain that looks like a pile of extra-thick noodles and which houses most of our senses, reasoning, and memory. The neocortex is “arguably the most important part of who we are as individuals,” says Hébert, as well as “maybe the most complex structure in the world.”

There are two reasons he believes the neocortex could be replaced, albeit only slowly. The first is evidence from rare cases of benign brain tumors, like a man described in the medical literature who developed a growth the size of an orange. Yet because it grew very slowly, the man’s brain was able to adjust, shifting memories elsewhere, and his behavior and speech never seemed to change—even when the tumor was removed. 

That’s proof, Hébert thinks, that replacing the neocortex little by little could be achieved “without losing the information encoded in it” such as a person’s self-identity.

The second source of hope, he says, is experiments showing that fetal-stage cells can survive, and even function, when transplanted into the brains of adults. For instance, medical tests underway are showing that young neurons can integrate into the brains of people who have epilepsy and stop their seizures.

“It was these two things together—the plastic nature of brains and the ability to add new tissue—that, to me, were like, ‘Ah, now there has got to be a way,’” says Hébert.

One challenge ahead is how to manufacture the replacement brain bits, or what Hébert has called “facsimiles” of neocortical tissue. During a visit to his lab at Albert Einstein, Hébert described plans to manually assemble chunks of youthful brain tissue using stem cells. These parts, he says, would not be fully developed, but instead be similar to what’s found in a still-developing fetal brain. That way, upon transplant, they’d be able to finish maturing, integrate into your brain, and be “ready to absorb and learn your information.”

To design the youthful bits of neocortex, Hébert has been studying brains of aborted human fetuses 5 to 8 weeks of age. He’s been measuring what cells are present, and in what numbers and locations, to try to guide the manufacture of similar structures in the lab.

“What we’re engineering is a fetal-like neocortical tissue that has all the cell types and structure needed to develop into normal tissue on its own,” says Hébert. 

Part of the work has been carried out by a startup company, BE Therapeutics (it stands for Brain Engineering), located in a suite on Einstein’s campus and funded by Apollo Health Ventures and VitaDAO, with contributions from a New York State development fund. The company had only two employees when MIT Technology Review visited this spring, and its future is uncertain, says Hébert, now that he’s joining ARPA-H and closing his lab at Einstein.

Because it’s often challenging to manufacture even a single cell type from stem cells, making a facsimile of the neocortex involving a dozen cell types isn’t an easy project. In fact, it’s just one of several scientific problems standing between you and a younger brain, some of which might never have practical solutions. “There is a saying in engineering. You are allowed one miracle, but if you need more than one, find another plan,” says Scholz.

Maybe the crucial unknown is whether young bits of neocortex will ever correctly function inside an elderly person’s brain, for example by establishing connections or storing and sending electrochemical information. Despite evidence that the brain can incorporate individual transplanted cells, that’s never been robustly proven for larger bits of tissue, says Rusty Gage, a biologist at the Salk Institute in La Jolla, California, who is considered a pioneer of neural transplants. He says researchers for years have tried to transplant larger parts of fetal animal brains into adult animals, but with inconclusive results. “If it worked, we’d all be doing more of it,” he says.

The problem, says Gage, isn’t whether the tissue can survive, but whether it can participate in the workings of an existing brain. “I am not dissing his hypothesis. But that’s all it is,” says Gage. “Yes, fetal or embryonic tissue can mature in the adult brain. But whether it replaces the function of the dysfunctional area is an experiment he needs to do, if he wants to convince the world he has actually replaced an aged section with a new section.”

In his new role at ARPA-H, it’s expected that Hébert will have a large budget to fund scientists to try and prove his ideas can work. He agrees it won’t be easy. “We’re, you know, a couple steps away from reversing brain aging,” says Hébert. “A couple of big steps away, I should say.”

What’s next for drones

MIT Technology Review’s What’s Next series looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of them here.

Drones have been a mainstay technology among militaries, hobbyists, and first responders alike for more than a decade, and in that time the variety of systems available has skyrocketed. No longer limited to small quadcopters with insufficient battery life, drones are aiding search and rescue efforts, reshaping wars in Ukraine and Gaza, and delivering time-sensitive packages of medical supplies. And billions of dollars are being plowed into building the next generation of fully autonomous systems.

These developments raise a number of questions: Are drones safe enough to be flown in dense neighborhoods and cities? Is it a violation of people’s privacy for police to fly drones overhead at an event or protest? Who decides what level of drone autonomy is acceptable in a war zone?

Those questions are no longer hypothetical. Advancements in drone technology and sensors, falling prices, and easing regulations are making drones cheaper, faster, and more capable than ever. Here’s a look at four of the biggest changes coming to drone technology in the near future.

Police drone fleets

Today more than 1,500 US police departments have drone programs, according to tracking conducted by the Atlas of Surveillance. Trained police pilots use drones for search and rescue operations, monitoring events and crowds, and other purposes. The Scottsdale Police Department in Arizona, for example, successfully used a drone to locate a lost elderly man with dementia, says Rich Slavin, Scottsdale’s assistant chief of police. He says the department has had useful but limited experiences with drones to date, but its pilots have often been hamstrung by the “line of sight” rule from the Federal Aviation Administration (FAA). The rule stipulates that pilots must be able to see their drones at all times, which severely limits the drone’s range.

Soon, that will change. On a rooftop somewhere in the city, Scottsdale police will in the coming months install a new police drone capable of autonomous takeoff, flight, and landing. Slavin says the department is seeking a waiver from the FAA to be able to fly its drone past the line of sight. (Hundreds of police agencies have received a waiver from the FAA since the first was granted in 2019.) The drone, which can fly up to 57 miles per hour, will go on missions as far as three miles from its docking station, and the department says it will be used for things like tracking suspects or providing a visual feed of an officer at a traffic stop who is waiting for backup. 

“The FAA has been much more progressive in how we’re moving into this space,” Slavin says. That could mean that around the country, the sight (and sound) of a police drone soaring overhead will become much more common. 

The Scottsdale department says the drone, which it is purchasing from Aerodome, will kick off its drone-as-first-responder program and will play a role in the department’s new “real-time crime center.” These sorts of centers are becoming increasingly common in US policing, and allow cities to connect cameras, license plate readers, drones, and other monitoring methods to track situations on the fly. The rise of the centers, and their associated reliance on drones, has drawn criticism from privacy advocates who say they conduct a great deal of surveillance with little transparency about how footage from drones and other sources will be used or shared. 

In 2019, the police department in Chula Vista, California, was the first to receive a waiver from the FAA to fly beyond line of sight. The program sparked criticism from members of the community who alleged the department was not transparent about the footage it collected or how it would be used. 

Jay Stanley, a senior policy analyst at the American Civil Liberties Union’s Speech, Privacy, and Technology Project, says the waivers exacerbate existing privacy issues related to drones. If the FAA continues to grant them, police departments will be able to cover far more of a city with drones than ever, all while the legal landscape is murky about whether this would constitute an invasion of privacy. 

“If there’s an accumulation of different uses of this technology, we’re going to end up in a world where from the moment you step out of your front door, you’re going to feel as though you’re under the constant eye of law enforcement from the sky,” he says. “It may have some real benefits, but it is also in dire need of strong checks and balances.”

Scottsdale police say the drone could be used in a variety of scenarios, such as responding to a burglary in progress or tracking a driver with suspected connection to a kidnapping. But the real benefit, Slavin says, will come from pairing it with other existing technologies, like automatic license plate readers and hundreds of cameras placed around the city. “It can get to places very, very quickly,” he says. “It gives us real-time intelligence and helps us respond faster and smarter.”

While police departments might indeed benefit from drones in those situations, Stanley says the ACLU has found that many deploy them for far more ordinary cases, like reports of a kid throwing a ball against a garage or of “suspicious persons” in an area.

“It raises the question about whether these programs will just end up being another way in which vulnerable communities are over-policed and nickeled and dimed by law enforcement agencies coming down on people for all kinds of minor transgressions,” he says.

Drone deliveries, again

Perhaps no drone technology is more overhyped than home deliveries. For years, tech companies have teased futuristic renderings of a drone dropping off a package on your doorstep just hours after you ordered it. But they’ve never managed to expand them much beyond small-scale pilot projects, at least in the US, again largely due to the FAA’s line of sight rules. 

But this year, regulatory changes are coming. Like police departments, Amazon’s Prime Air program was previously limited to flying its drones within the pilot’s line of sight. That’s because drone pilots don’t have radar, air traffic controllers, or any of the other systems commercial flight relies on to monitor airways and keep them safe. To compensate, Amazon spent years developing an onboard system that would allow its drones to detect nearby objects and avoid collisions. The company says it showed the FAA in demonstrations that its drones could fly safely in the same airspace as helicopters, planes, and hot air balloons. 

In May, Amazon announced the FAA had granted the company a waiver and permission to expand operations in Texas, more than a decade after the Prime Air project started. And in July, the FAA cleared one more roadblock by allowing two companies—Zipline as well as Google’s Wing Aviation—to fly in the same airspace simultaneously without the need for visual observers. 

While all this means your chances of receiving a package via drone have ticked up ever so slightly, the more compelling use case might be medical deliveries. Shakiba Enayati, an assistant professor of supply chains at the University of Missouri–St. Louis, has spent years researching how drones could conduct last-mile deliveries of vaccines, antivenom, organs, and blood in remote places. She says her studies have found drones to be game changers for getting medical supplies to underserved populations, and if the FAA extends these regulatory changes, it could have a real impact. 

That’s especially true in the steps leading up to an organ transplant, she says. Before an organ can be transported to a recipient, a number of blood tests must be sent back and forth to make sure the recipient can accept it, which takes time if the blood is being transferred by car or even helicopter. “In these cases, the clock is ticking,” Enayati says. If drones were allowed to be used in this step at scale, it would be a significant improvement.

“If the technology is supporting the needs of organ delivery, it’s going to make a big change in such an important arena,” she says.

That development could come sooner than using drones for delivery of the actual organs, which have to be transported under very tightly controlled conditions to preserve them.

Domesticating the drone supply chain

Signed into law last December, the American Security Drone Act bars federal agencies from buying drones from countries thought to pose a threat to US national security, such as Russia and China. That’s significant. China is the undisputed leader when it comes to manufacturing drones and drone parts, with over 90% of law enforcement drones in the US made by Shenzhen-based DJI, and many of the drones used by both sides in the war in Ukraine also made by Chinese companies.

The American Security Drone Act is part of an effort to curb that reliance on China. (Meanwhile, China is stepping up export restrictions on drones with military uses.) As part of the act, the US Department of Defense’s Defense Innovation Unit has created the Blue UAS Cleared List, a list of drones and parts the agency has investigated and approved for purchase. The list applies to federal agencies as well as programs that receive federal funding, which often means state police departments or other non-federal agencies. 

Since the US is set to spend such significant sums on drones—with $1 billion earmarked for the Department of Defense’s Replicator initiative alone—getting on the Blue List is a big deal. It means those federal agencies can make large purchases with little red tape. 

Allan Evans, CEO of US-based drone part maker Unusual Machine, says the list has sparked a significant rush of drone companies attempting to conform to the US standards. His company manufactures a first-person view flight controller that he hopes will become the first of its kind to be approved for the Blue List.

The American Security Drone Act is unlikely to affect private purchases in the US of drones used by videographers, drone racers, or hobbyists, which will overwhelmingly still be made by China-based companies like DJI. That means US-based drone companies, at least in the short term, will survive only by catering to the US defense market.  

“Basically any US company that isn’t willing to have ancillary involvement in defense work will lose,” Evans says. 

The coming months will show the law’s true impact: Because the US fiscal year ends in September, Evans says he expects to see a host of agencies spending their use-it-or-lose-it funding on US-made drones and drone components in the next month. “That will indicate whether the marketplace is real or not, and how much money is actually being put toward it,” he says.

Autonomous weapons in Ukraine

The drone war in Ukraine has largely been one of attrition. Drones have been used extensively for surveying damage, finding and tracking targets, or dropping weapons since the war began, but on average these quadcopter drones last just three flights before being shot down or rendered unnavigable by GPS jamming. As a result, both Ukraine and Russia prioritized accumulating high volumes of drones with the expectation that they wouldn’t last long in battle. 

Now they’re having to rethink that approach, according to Andriy Dovbenko, founder of the UK-Ukraine Tech Exchange, a nonprofit that helps startups involved in Ukraine’s war effort and eventual reconstruction raise capital. While working with drone makers in Ukraine, he says, he has seen the demand for technology shift from big shipments of simple commercial drones to a pressing need for drones that can navigate autonomously in an environment where GPS has been jammed. With 70% of the front lines suffering from jamming, according to Dovbenko, both Russian and Ukrainian drone investment is now focused on autonomous systems. 

That’s no small feat. Drone pilots usually rely on video feeds from the drone as well as GPS technology, neither of which is available in a jammed environment. Instead, autonomous drones operate with various types of sensors like LiDAR to navigate, though this can be tricky in fog or other inclement weather. Autonomous drones are a new and rapidly changing technology, still being tested by US-based companies like Shield AI. The evolving war in Ukraine is raising the stakes and the pressure to deploy affordable and reliable autonomous drones.  

The transition toward autonomous weapons also raises serious yet largely unanswered questions about how much humans should be taken out of the loop in decision-making. As the war rages on and the need for more capable weaponry rises, Ukraine will likely be the testing ground for whether and how that moral line is drawn. But Dovbenko says stopping to find that line during an ongoing war is impossible. 

“There is a moral question about how much autonomy you can give to the killing machine,” Dovbenko says. “This question is not being asked right now in Ukraine because it’s more of a matter of survival.”

Flywheels Ease Content, Social Media Marketing

Content and social media are distinct marketing disciplines with a shared problem. Both must produce engaging material despite limited time and money.

Social media is essential for consistent and quick engagement and brand building. Content marketing is vital for search engine optimization and long-term relationships, leading to repeat sales.

Both are necessary and resource-intensive.

Combining the two into a single workflow can reduce the demands of each.

Deadline Chaos

An ecommerce business might need to post upward of six times daily to establish a following on X and Threads and perhaps four videos on TikTok and Instagram.

Assuming they maintain this schedule every day, the company’s marketers face roughly 140 weekly social media deadlines.
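The deadline arithmetic above can be sketched in a few lines. The per-platform counts below are the illustrative assumptions from this scenario (six posts each on X and Threads, four videos each on TikTok and Instagram), not benchmarks.

```python
# Hypothetical posting schedule: platform -> posts per day.
# These counts are assumptions drawn from the scenario above.
daily_posts = {
    "X": 6,
    "Threads": 6,
    "TikTok": 4,
    "Instagram": 4,
}

posts_per_day = sum(daily_posts.values())  # 20 deadlines per day
posts_per_week = posts_per_day * 7         # 140 deadlines per week

print(posts_per_week)  # prints 140
```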

Meanwhile, the team must attract shoppers elsewhere, boost search engine rankings, and develop lasting customer relationships through blog posts, articles, podcasts, videos, and landing pages.

That’s a lot of content.

The most difficult parts are developing content ideas, producing them, and measuring the results.

A Flywheel

A business flywheel is a circular process wherein each step leads to the next.

The concept has been around for decades. Jeff Bezos famously used a flywheel to describe Amazon’s business model. Author Jim Collins wrote a book about the topic, prompting many businesses to adopt it for routine processes.

Applying a flywheel to social media and content marketing, we can focus on three steps:

  • Content ideas,
  • Content creation,
  • Measurement.

To illustrate, let’s develop a flywheel for articles and social media posts. I’ll focus on two of our three steps: content ideas and measurement.

Let’s assume we work for a content-then-commerce business that sells licensed, science-fiction-themed products. The company attracts potential customers to its website via content that contains related products for purchase.

Here are the steps.

1. Publish a post on X

Take a topic idea and compose an X post. Give the post a measurable call to action, such as “Subscribe to email list,” “Request a sample,” or “Leave a comment.” Record the post in a spreadsheet.

Repeat this step six times per day.

Screenshot of a spreadsheet showing Star Trek-related X posts and their performance.

Log X posts in a spreadsheet and record their performance. Here the most successful posts were about transporter failures.

2. Measure performance

Seven days after it’s published, measure each post’s results to identify popular topics for humans and X’s algorithm. Add the metric to the spreadsheet.

3. Expand successful X posts into articles

Repurpose top-performing X posts or topics into on-site, long-form articles. Optimize each with organic search keywords.

For example, a successful X post about death in a Star Trek transporter might lead to an article titled “Death and Other Problems with Star Trek’s Transporters.”

Record the articles in the spreadsheet and set a goal for each, such as site traffic or email subscriptions.

The topics of successful X posts are expanded into articles.

4. Measure article performance

Thirty days after publication, track the article’s performance against its goal. The aim is to identify the best performers.

5. Splinter and branch successful articles

For each successful article, identify at least five “splinter” and five “branch” topics. A splinter topic may derive from a sub-heading, while a branch could be a parallel concept.

A splinter topic for the article “Death and Other Problems with Star Trek’s Transporters” could be something like “Star Trek’s Transporters Create an Existential Identity Crisis.” Use each splinter or branch idea for an X post.

This closes the flywheel. We started with an idea, created an X post, and repurposed it into blog articles, which spawned new X posts.
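The loop above can be sketched as a small data pipeline. Everything here is illustrative: the field names, the engagement numbers, and the topic strings stand in for a real spreadsheet, and the thresholds are assumptions, not recommendations.

```python
# A minimal sketch of the flywheel: log posts, rank them, expand
# winners into articles, then splinter articles into new post ideas.

def top_performers(posts, metric="engagement", top_n=2):
    """Step 2: rank logged posts by a chosen metric."""
    return sorted(posts, key=lambda p: p[metric], reverse=True)[:top_n]

def expand_to_article(post):
    """Step 3: turn a winning post topic into a long-form article stub."""
    return {"title": f"Deep dive: {post['topic']}", "source_post": post["topic"]}

def splinter_and_branch(article, n=5):
    """Step 5: derive new post ideas from a successful article."""
    return [f"{article['source_post']}: angle {i + 1}" for i in range(n)]

# Step 1: posts logged in a "spreadsheet" (here, a list of dicts).
log = [
    {"topic": "transporter failures", "engagement": 410},
    {"topic": "warp drive physics", "engagement": 120},
    {"topic": "replicator recipes", "engagement": 95},
]

winners = top_performers(log, top_n=1)
articles = [expand_to_article(p) for p in winners]
new_ideas = splinter_and_branch(articles[0])
print(len(new_ideas))  # prints 5: five new X post ideas close the loop
```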

Our example omits the content creation step, although the pattern would be similar. We could add content creation at “Publish a post on X” and “Expand successful X posts into articles.”

The flywheel starts with an idea for an X post and expands it into articles, which spawns new ideas for X posts.

BigCommerce Rising: Q2 2024 Recap

One could argue BigCommerce needs an alternative revenue source. Amazon’s AWS, its cloud computing platform, drives most of that company’s profit. Shopify Payments, a credit card processor, produces nearly two-thirds of Shopify’s overall revenue, far more than core subscriptions to the platform.

Yet core platform subscriptions account for roughly 75% of BigCommerce’s revenue. The rest comes from “partners,” mostly third-party app developers who share their proceeds.

Regardless, BigCommerce is by any measure an ecommerce pioneer.

We first crossed paths in 2008 when Armando Roggio, our longtime senior contributor, profiled Interspire, then a four-year-old Australia-based licensed cart provider. The CEO lamented not having a hosted version.

By 2010, in our next profile, the hosted cart was up and running. The founders called it BigCommerce.

In 2013, BigCommerce caught the attention of Steve Case, the AOL visionary, who invested $40 million of equity via his Revolution Growth fund, essentially controlling the company and installing a new CEO, Brent Bellm.

In August 2020, BigCommerce went public, raising $216 million on the Nasdaq exchange under the “BIGC” symbol.

Q2 2024 Financials

Post IPO, BigCommerce’s share price quickly rose, reaching $120 in September 2020. It now trades at roughly $6 and has long been a favorite of short sellers, investors who bet on the price decreasing.

The company has posted net losses every year since the public offering. Its 2023 revenue, $309 million, was a fraction of Shopify’s $7 billion.

But better days are coming. BigCommerce’s Q2 2024 revenue, $81.8 million, is 8% higher than the same period last year. Net losses have steadily shrunk to $11.3 million in Q2.

Notably, the company is cash flow positive. Included in the $11.3 million net loss are non-cash expenses that, when added back, equate to nearly $12 million of positive cash flow for the quarter.

Investors have noticed. Multiple analysts now project net profits for all of 2024 and a target share price of nearly $10, a 67% increase.
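The figures cited above can be sanity-checked with a little arithmetic. The non-cash add-back total below is an assumption implied by the reported net loss and the “nearly $12 million” of positive cash flow; the rest of the numbers come from the text.

```python
# Sanity-checking the cited figures (millions of US dollars,
# except share prices; rounding is approximate).
q2_2024_revenue = 81.8
yoy_growth = 0.08
q2_2023_revenue = q2_2024_revenue / (1 + yoy_growth)  # about 75.7

net_loss = -11.3
non_cash_addbacks = 23.0  # assumed figure implied by the ~$12M cash flow
cash_flow = net_loss + non_cash_addbacks              # about 11.7, "nearly $12M"

current_price = 6.0
target_price = 10.0
upside = (target_price - current_price) / current_price  # about 0.67, i.e. 67%
```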

BigCommerce At a Glance

Website: BigCommerce.com

Headquarters: Austin, Texas

Number of employees: Approx. 1,300

Year founded: 2004 (Interspire)

Number of live stores: Approx. 43,000

Number of enterprise (large) customers: Approx. 6,000

What Are Display Ads: A Complete Guide for Digital Marketers

Imagine browsing your favorite blog and spotting a visually engaging ad that seamlessly fits the content and stands out just enough to grab your attention.

That’s the power of display advertising at work.

If you’re a digital marketing professional, you’ve likely heard of display networks as part of PPC advertising.

But are you using this channel to the best of your abilities?

In a world where advertising changes daily, it can be difficult to keep up with the best ways to optimize display ads.

In this in-depth guide, we’ll explain display advertising, its different types, and how it differs from search. We’ll also provide strategies and tools to help you take your display ads to the next level.

What Is Display Advertising?

Display advertising is a type of online advertising that typically uses images or videos to showcase your brand.

Thanks to responsive display ads, this ad format becomes much more personalized and can include elements like:

  • Text.
  • Images.
  • Videos.
  • Logos.

Potential customers see these ads while browsing the internet, using mobile apps or social media platforms, or even watching connected TV devices.

Display ads are meant to capture the user’s attention in a way that doesn’t disrupt their experience. At the same time, they also encourage them to take action.

While display ads are typically associated with top-of-funnel marketing, advertisers use these ads across the entire buyer’s journey. Brands can use display ads for:

  • Brand awareness.
  • Product-specific marketing.
  • Promotional sales.
  • Promoting specific content or services.
  • And much more.

Types Of Display Ads

By understanding the various types of display ads available, you can choose the right format to align with your marketing goals and effectively reach your target audience.

Each type offers unique advantages and can be used strategically to maximize engagement and conversions.

Responsive Display Ads

Unique to the Google Display Network, responsive display ads automatically adjust their size, appearance, and format to fit available ad spaces.

Advertisers provide assets such as images, headlines, logos, and descriptions, and Google uses machine learning to create the best possible combinations for different placements, unique to each user.

This flexibility allows responsive display ads to reach a broader audience and perform well across a wide range of devices and websites.

Banner Ads

Banner ads are considered a more traditional type of display advertising.

Banner ads appear across websites and apps and are placed at the top, bottom, or sides of webpages.

They’re typically static in format but can also use animation to catch the user’s eye without being too disruptive to their experience.

Interstitial Ads

Interstitial ads are full-screen ads that cover the whole screen of a webpage or an app.

They typically show up during natural transition points of a web session, like waiting for content to load or going between app screens.

They’re meant to be highly engaging but should be used strategically and sparingly to not overwhelm or annoy the user.

Rich Media Ads

Rich media display ads offer a more interactive experience with a potential customer.

What makes them interactive compared to the other display advertising types?

The beauty of this ad type is the combination of video, image, audio, and clickable elements to engage a user more fully.

Native Ads

The opposite of rich media ads would be native ads. This ad type is meant to blend seamlessly with the content and overall design of a webpage.

Native ads are meant to be non-disruptive to the user experience because they can match the look and feel of the content surrounding the ad.

By blending in more cohesively, native ads can help increase engagement rates.

Retargeting Ads

Retargeting display ads are intended to re-engage past website or app users who haven’t taken the desired action.

This ad type can look like any of the above-mentioned ad formats, or it could show dynamic content based on the user’s previous browsing history.

Unlike standard display ads, retargeting ads aren’t meant to scale broadly. They have a specific intended audience to invite them back to make a purchase.

Display Advertising Vs. Search Advertising

Display ads and search ads are both essential components of a sound digital marketing strategy.

However, they both serve different purposes and are meant to complement each other – not compete.

Below are the key main differentiators between display and search ads:

  • Targeting. Display ads typically target users by demographics, interests, and browsing behavior, while search ads are primarily keyword-based, targeting what users actively search for.
  • Intent. Display ads can help create demand by focusing on awareness and product consideration. Search ads, on the other hand, are intended to capture existing demand.
  • Ad format. Display ads are more visual in nature and utilize elements like images, videos, text, and logos. Search ads are primarily text-based with headlines and descriptions.
  • Reach. Display ads can reach broader audiences across the internet and are easier to scale. Search ads are limited to the specific search engines and their search partner networks, if applicable.

Display Advertising Examples

Display Ads come in many different shapes and sizes. Below are a few examples of ads found across the web in a variety of sizes.

Example: Leaderboard Display Ad

The example below was taken as I was browsing People.com. This ad for US Bank appeared at the top of the page before the hero content.

A leaderboard ad example at the top of a desktop site. Source: People.com, screenshot taken by author.

Example: Skyscraper Display Ad

This example was taken as I was browsing Business Insider. An ad for Oracle Netsuite showed on the right-hand side of the page on a desktop device.

A skyscraper ad example on a desktop site. Source: Businessinsider.com, screenshot taken by author, July 2024.

Example: Mobile Display Ad

I found this ad when reading a blog post on Southern Living on my mobile device. A display ad for Best Buy was inserted between paragraphs of the blog post.

A mobile display ad example on a phone. Source: Southernliving.com, screenshot taken by author, July 2024.

Display Advertising Strategy

Just like any other campaign type, display advertising should be driven by a sound strategy.

Let’s take a look at some of the key components of crafting a display advertising strategy.

1. Define Clear Goals

It’s important to establish the objective of each display campaign, such as brand awareness, lead generation, or sales.

If you’re not sure where to start, take a step back and consider your overarching business needs and what you’re trying to achieve.

For example, are you looking to gain new customers or re-engage existing customers? Is brand awareness more important, or are you looking to drive sales of a new product?

In Google Ads, you’ll start campaign creation by selecting one of the following objectives, then choose the ‘Display’ campaign type:

Google Ads objectives in new campaign creation. Screenshot taken by author, July 2024.

2. Choose Your Budget, Bidding Strategy, And Audience

Budgets and bid strategies are set at the campaign level.

The typical pricing models for display ad bid strategies are cost per click (CPC) and cost per 1,000 impressions (CPM). You’ll want to choose the one that aligns with the campaign goal and your overall budget.
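To see how the two pricing models compare, here is a quick sketch. The click-through rate and the CPC/CPM prices below are illustrative assumptions, not benchmarks for any real account.

```python
# Comparing display pricing models: cost per click (CPC)
# vs. cost per 1,000 impressions (CPM).

def cost_cpc(clicks, cpc):
    """Total spend when billed per click."""
    return clicks * cpc

def cost_cpm(impressions, cpm):
    """Total spend when billed per 1,000 impressions."""
    return impressions / 1000 * cpm

impressions = 100_000
ctr = 0.005                 # 0.5% click-through rate (assumed)
clicks = impressions * ctr  # 500 clicks

print(cost_cpc(clicks, cpc=0.60))       # $300 under CPC pricing
print(cost_cpm(impressions, cpm=2.50))  # $250 under CPM pricing
```

Under these assumptions the CPM model is cheaper for the same reach, but if the CTR were higher, the CPC model would cost more while also signaling stronger engagement, which is why the right choice depends on the campaign goal.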

In this example, I chose “Awareness” as the campaign objective, so Google Ads recommends a Viewable impressions bid strategy.

Choosing bid strategies in Google Ads for a display campaign. Screenshot taken by author, July 2024.

Next is to refine the audience targeting for your campaigns.

If the goal is to attract new customers, you can use your own data on existing customers to build audience profiles to target.

Keep in mind the demographics, interests, and overall browsing behavior when putting together your target audience.

Display ads targeting options. Screenshot taken by author, July 2024.

3. Choose Display Ad Type, Format, And Placements

The nice part about Google Ads is the ability to target (or exclude) specific website placements or apps to ensure your ads show up in the right place.

You may be tempted to choose a short list of very specific websites, but by doing so, you could end up limiting your reach immensely. It’s also not guaranteed that your ads will show on those placements if your budget or bid is not competitive enough.

At the beginning, use negative placements to your advantage to exclude sites where your content would be inappropriate.

Now, as for ad size and format, there are two options in Google Ads:

  • Uploaded display ads.
  • Responsive display ads (RDAs).

The main benefit of using uploaded display ads is that you have full control over the design. However, not all websites utilize these formats, and you may be missing out on additional reach if you opt not to use RDAs.

The most typical banner sizes for uploaded display ads include:

  • 728×90 (leaderboard).
  • 300×250 (medium rectangle).
  • 336×280 (large rectangle).
  • 300×50 (mobile banner).
  • 160×600 (skyscraper).
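The standard sizes above can be kept in a simple lookup, with a helper to check whether a creative matches a known slot. This is an illustrative sketch, not part of the Google Ads API; the function name is hypothetical.

```python
# Standard banner sizes, keyed by (width, height) in pixels.
BANNER_SIZES = {
    (728, 90): "leaderboard",
    (300, 250): "medium rectangle",
    (336, 280): "large rectangle",
    (300, 50): "mobile banner",
    (160, 600): "skyscraper",
}

def slot_name(width, height):
    """Return the standard slot name for a creative, if any."""
    return BANNER_SIZES.get((width, height), "non-standard size")

print(slot_name(728, 90))   # prints "leaderboard"
print(slot_name(500, 500))  # prints "non-standard size"
```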

If you opt to use responsive display ads, Google takes the guesswork out of ad sizes for you.

Essentially, you’ll provide the basic elements, and Google will mix and match that content to create personalized ads for each user based on when and where they’re browsing.

Be sure to provide these essentials for a well-formatted ad:

  • Images.
  • Logos.
  • Brand name.
  • Headlines.
  • Descriptions.
  • Custom colors.
  • Call-to-action (CTA) text.

4. Focus On Creating Compelling Ad Content

Expanding on point #3 above, the visual design is your chance to capture the user’s attention.

A boring ad won’t stand out and can turn customers away. When designing ads, make sure to design visually appealing ads that align with your brand.

Additionally, make sure to test different elements and rotate out poor-performing elements.

It’s especially important to remain visually consistent if you’re marketing across different channels like social media. Consistent brand recognition across platforms can pay dividends over time.

5. Track And Optimize Performance

Once your display campaign is launched, you’ll want to monitor the key metrics chosen for the campaign objectives.

It may be tempting to make changes immediately, but it’s important to give the algorithm time to learn before making any major changes.

Unless something serious goes awry, like showing up on inappropriate placements, give the campaign time to run and then make tweaks based on the data coming in.

For example, if an ad shows a lot of impressions but few clicks, you may need to change the creative elements to capture the user’s attention more. Or, it could be the placements that need tweaking.

Or, if an ad is getting a ton of clicks but very few conversions, it may not be the ad itself; it could mean the landing page needs to be optimized. Try segmenting the ads by device to identify if the majority of clicks are coming from mobile and if the corresponding landing page is optimized for mobile delivery.
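Those two troubleshooting heuristics can be expressed as a simple diagnostic: a low click-through rate suggests a creative or placement problem, while a healthy click-through rate with a low conversion rate points at the landing page. The thresholds below are illustrative assumptions; tune them to your account’s baselines.

```python
# Encode the two heuristics: low CTR -> creative/placement problem;
# healthy CTR but low conversion rate -> landing page problem.

def diagnose(impressions, clicks, conversions,
             min_ctr=0.003, min_cvr=0.01):
    ctr = clicks / impressions if impressions else 0.0
    cvr = conversions / clicks if clicks else 0.0
    if ctr < min_ctr:
        return "revisit creative elements or placements"
    if cvr < min_cvr:
        return "revisit the landing page"
    return "performing within thresholds"

print(diagnose(impressions=50_000, clicks=60, conversions=2))
# prints "revisit creative elements or placements" (CTR 0.12%)
print(diagnose(impressions=50_000, clicks=400, conversions=1))
# prints "revisit the landing page" (CTR 0.8%, CVR 0.25%)
```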

Ongoing campaign monitoring and optimization are vital for delivering optimal ROI to your display ads.


Top Display Advertising Networks

Believe it or not, there are tons of different advertising networks to choose from as an alternative to Google.

Depending on your goal and usage of Display ads, you may need a different platform.

Some of the top Display ad network platforms include:

  • AdRoll
  • Amazon
  • StackAdapt
  • AirNow Media
  • Yahoo Ad Tech

You can find a full recommended list of Display Ad networks here.


Display Advertising Tools

Depending on which stage you’re in for creating or running display ads, there are multiple tools to help take your display ads to the next level.

Ad Creation And Design Tools

If you’re looking to create display ads where you have full control, there are many user-friendly tools to help guide the ad creation process.

  • Google Web Designer: This is a free tool from Google that allows you to create HTML5 ads and motion graphics.
  • Canva: A more user-friendly option that has tons of templates to start from or the ability to create from scratch.
  • Bannersnack: This tool is specifically for creating banner ads, but it simplifies the design process with drag-and-drop components.

Ad Analysis And Optimization Tools

Analyzing display campaigns can be half the battle, and you need reliable tools to help optimize these campaigns to the fullest.

  • Google Analytics: This tool is essential for tracking and analyzing the performance of your campaigns. It can help marry the metrics like impressions and clicks to user purchase behavior to help you determine where to optimize further.
  • Google Ads Performance Planner: If you need help forecasting potential campaign changes, this tool is for you. It takes historical data and trends into considerations to help provide budget and bidding recommendations.
  • Hotjar: This is a user behavior tool that can provide session recordings, heatmaps, and more to understand how real users interact with your landing page and website.

Ad Management And Automation Tools

  • Google Ads Editor: This tool is great for managing multiple Google Ads campaigns offline, allowing for bulk changes and uploading changes on your own time.
  • Optmyzr: This platform offers more automation and streamlined workflows for PPC campaigns, including display ads.
  • Semrush: This platform can help with competitive analysis for display ads, which can help you refine your strategies.

Summary

Display ads are part of any comprehensive digital marketing strategy.

Because of their scalability and reach, display ads can cast a wide net to make potential customers aware of your brand and increase engagement and, ultimately, sales.

From traditional banner ads to innovative, responsive display ads, each type serves a unique purpose in capturing user attention and driving conversions.

By understanding the differences between display and search advertising, leveraging effective strategies, and utilizing various tools for ad creation, design, analysis, and optimization, you can maximize the impact of your display advertising campaigns.


Google Ranking Glitch: Live Updates (Unrelated to August Core Update)

Google is currently addressing a separate issue affecting search rankings, unrelated to the August 2024 core update.

Aging hits us in our 40s and 60s. But well-being doesn’t have to fall off a cliff.

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

This week I came across research that suggests aging hits us in waves. You might feel like you’re on a slow, gradual decline, but, at the molecular level, you’re likely to be hit by two waves of changes, according to the scientists behind the work. The first one comes in your 40s. Eek.

For the study, Michael Snyder at Stanford University and his colleagues collected a vast amount of biological data from 108 volunteers aged 25 to 75, all of whom were living in California. Their approach was to gather as much information as they could and look for age-related patterns afterward.

This approach can lead to some startling revelations, including the one about the impacts of age on 40-year-olds (who, I was horrified to learn this week, are generally considered “middle-aged”). It can help us answer some big questions about aging, and even potentially help us find drugs to counter some of the most unpleasant aspects of the process.

But it’s not as simple as it sounds. And midlife needn’t involve falling off a cliff in terms of your well-being. Let’s explore why.

First, the study, which was published in the journal Nature Aging on August 14. Snyder and his colleagues collected a real trove of data on their volunteers, including on gene expression, proteins, metabolites, and various other chemical markers. The team also swabbed volunteers’ skin, stool, mouths, and noses to get an idea of the microbial communities that might be living there.

Each volunteer gave up these samples every few months for a median period of 1.7 years, and the team ended up with a total of 5,405 samples, which included over 135,000 biological features. “The idea is to get a very complete picture of people’s health,” says Snyder.

When he and his colleagues analyzed the data, they found that around 7% of the molecules and microbes measured changed gradually over time, in a linear way. On the other hand, 81% of them changed at specific life stages. Two of those stages seem to be particularly important: one at around the age of 44, and another around the age of 60.

Some of the dramatic changes at age 60 seem to be linked to kidney and heart function, and diseases like atherosclerosis, which narrows the arteries. That makes sense, given that our risks of developing cardiovascular diseases increase dramatically as we age—around 40% of 40- to 59-year-olds have such disorders, and this figure rises to 75% for 60- to 79-year-olds.

But the changes that occur around the age of 40 came as a surprise to Snyder. He says that, on reflection, they make intuitive sense. Many of us start to feel a bit creakier once we hit 40, and it can take longer to recover from injuries, for example.

Other changes suggest that our ability to metabolize lipids and alcohol shifts when we reach our 40s, though it’s hard to say why, for a few reasons. 

First, it’s not clear if a change in alcohol metabolism, for example, means that we are less able to break down alcohol, or if people are just consuming less of it when they’re older.

This gets us to a central question about aging: Is it an inbuilt program that sets us on a course of deterioration, or is it merely a consequence of living?

We don’t have an answer to that one yet. It’s probably a combination of both. Our bodies are exposed to various environmental stressors over time. But also, as our cells age, they become less able to divide and less able to clear out the molecular garbage they accumulate.

It’s also hard to tell what’s happening in this study, because the research team didn’t measure more physiological markers of aging, such as muscle strength or frailty, says Colin Selman, a biogerontologist at the University of Glasgow in Scotland.

There’s another, perhaps less scientific, question that comes to mind. How worried should we be about these kinds of molecular changes? I’m approaching 40—should I panic? I asked Sara Hägg, who studies the molecular epidemiology of aging at the Karolinska Institute in Stockholm, Sweden. “No,” was her immediate answer.

While Snyder’s team collected a vast amount of data, it was from a relatively small number of people over a relatively short period of time. None of them were tracked for the two or three decades you’d need to see the two waves of molecular changes occur in a person.

“This is an observational study, and they compare different people,” Hägg told me. “There is absolutely no evidence that this is going to happen to you.” After all, there’s a lot that can happen in a person’s life over 20 or 30 years. They might take up a sport. They might quit smoking or stop eating meat.  

However, the findings do support the idea that aging is not a linear process.

“People have always suggested that you’re on this decline in your life from [around the age of] 40, depressingly,” says Selman. “But it’s not quite as simple as that.”

Snyder hopes that studies like his will help reveal potential new targets for therapies that help counteract some of the harmful molecular shifts associated with aging. “People’s healthspan is 11 to 15 years shorter than their lifespan,” he says. “Ideally you’d want to live for as long as possible [in good health], and then die.”

We don’t have any such drugs yet. For now, it all comes down to the age-old advice about eating well, sleeping well, getting enough exercise, and avoiding the big no-nos like smoking and alcohol.

I happened to speak to Selman at the end of what had been a particularly difficult day, and I confessed that I was looking forward to enjoying an evening glass of wine. That’s despite the fact that research suggests that there is “no safe level” of alcohol consumption.

“A little bit of alcohol is actually quite nice,” Selman agreed. He told me about an experience he’d had once at a conference on aging. Some of the attendees were members of a society that practiced caloric restriction—the idea being that cutting your calories can boost your lifespan (we don’t yet know if this works for people). “There was a big banquet… and these people all had little scales, and were weighing their salads on the scales,” he told me. “To me, that seems like a rather miserable way to live your life.”

I’m all for finding balance between healthy lifestyle choices and those that bring me joy. And it’s worth remembering that no amount of deprivation is going to radically extend our lifespans. As Selman puts it: “We can do certain things, but ultimately, when your time’s up, your time’s up.”


Now read the rest of the Checkup

Read more from MIT Technology Review’s archive

We don’t yet have a drug that targets aging. But that hasn’t stopped a bunch of longevity clinics from cropping up, offering a range of purported healthspan-extending services for the mega-rich. Now, they’re on a quest to legitimize longevity medicine.

Speaking of the uber wealthy, I also tagged along to an event for longevity enthusiasts ready to pump millions of dollars into the search for an anti-aging therapy. It was a fascinating, albeit slightly strange, experience.

There are plenty of potential rejuvenation strategies being explored right now. But the one that has received some of the most attention—and the most investment—is cellular reprogramming. My colleague Antonio Regalado looked at the promise of the field in this feature.

Scientists are working on new ways to measure how old a person is. Not just the number of birthdays they’ve had, but how aged or close to death they are. I took one of these biological aging tests. And I wasn’t all that pleased with the result.

Is there a limit to human life? Is old age a disease? Find out in the Mortality issue of MIT Technology Review’s magazine. 

You can of course read all of these stories and many more on our new app, which can be downloaded here (for Android users) or here (for Apple users).

From around the web

Mpox, the disease that has been surging in the Democratic Republic of the Congo and nearby countries, now constitutes a public health emergency of international concern, according to the World Health Organization. 

“The detection and rapid spread of a new clade [subgroup] of mpox in Eastern DRC, its detection in neighboring countries that had not previously reported mpox, and the potential for further spread within Africa and beyond is very worrying,” WHO director general Tedros Adhanom Ghebreyesus said in a briefing shared on X. “It’s clear that a coordinated international response is essential to stop these outbreaks and save lives.” (WHO)

Prosthetic limbs are often branded with company logos. For users of the technology, it can feel like a tattoo you didn’t ask for. (The Atlantic)

A testing facility in India submitted fraudulent data for more than 400 drugs to the FDA. But these drugs have not been withdrawn from the US market. That needs to be remedied, says the founder and president of a nonprofit focused on researching drug side effects. (STAT)

Antibiotics can impact our gut microbiomes. But the antibiotics given to people who undergo c-sections don’t have much of an impact on the baby’s microbiome. The way the baby is fed seems to be much more influential. (Cell Host & Microbe)

When unexpected infectious diseases show up in people, it’s not just physicians who are crucial. Veterinarian “disease detectives” can play a vital role in tracking how infections pass from animals to people, and the other way around. (New Yorker)

‘AI Snake Oil’ Sorts Promise from Hype

The hype surrounding artificial intelligence is everywhere, from get-rich-quick schemes to fears of sentient robots replacing humans. A quick Amazon search retrieves more than a thousand “books on ChatGPT.” At least three on the first results page include the word “millionaire” in the title. Others are entirely AI-written with bogus claims of legitimate authorship.

Yet AI offers much promise to merchants — content tools, productivity, search engine optimization, you name it.

Cover of “AI Snake Oil”

A new book, “AI Snake Oil: What AI Can Do, What It Can’t, and How to Tell the Difference,” coming September 24 from Princeton University Press, aims to help non-experts separate reality from hype. The authors are two of “Time” magazine’s “100 Most Influential People in AI.” Arvind Narayanan is a professor of computer science and director of Princeton’s Center for Information Technology Policy. Sayash Kapoor formerly engineered content-moderation software at Facebook and is now a PhD candidate in computer science at Princeton.

They explain what artificial intelligence is, how it works, what it can and can’t do presently, and its likely direction.

AI “snake oil,” per Narayanan and Kapoor, is “AI that does not and cannot work as advertised.”

The book focuses on three AI technologies — predictive, generative, and content moderation — and outlines the capabilities and shortcomings of each, with plenty of real-world examples.

Predictive AI, already popular in business, education, and criminal justice, deserves the “snake oil” label. The book discusses the unverifiable claims made by companies selling these products, problems with their use (such as implicit bias and users who game the system), and the inherent difficulty of forecasting.

They see more potential for generative AI, suggesting when it’s useful and discussing controversies such as academic cheating, copyright infringement, and its likely impact on work.

The authors also detail why AI can’t completely replace human judgment in moderating content, giving examples of shocking failures and concluding that “whether or not a piece of content is objectionable often depends on the context. The inability to discern that context remains a major limitation of AI.” The book’s analysis of social media moderation is enlightening, especially for those of us who have had seemingly innocuous posts banned for no apparent reason.

A chapter titled “Is Advanced AI an Existential Threat?” evaluates “the dire view that AI threatens the future of humanity.” They concede that artificial general intelligence — AI that matches human capabilities — may someday be possible. But they contend “society already has the tools to address its risks calmly,” pointing out that “unlike chatbots, advanced AI can’t be trained on text from the internet and then let loose. That would be like expecting to read a book about biking and then get on a bike and ride.”

The final two chapters, “Why Do Myths about AI Persist?” and “Where Do We Go from Here?” explore the aspects of AI that make it susceptible to hype, suggesting regulations, practices for mitigating negative effects, and best- and worst-case scenarios.

“AI Snake Oil” covers the technology’s key facets in just 285 pages. The explanations are easily understood without being oversimplified.

The authors admirably differentiate fact from opinion, draw from personal experience, give sensible reasons for their views (including copious references), and don’t hesitate to call for action. They also publish a newsletter to monitor developments.

If you’re curious about AI or deciding how to implement it, “AI Snake Oil” offers clear writing and level-headed thinking. The book’s straightforward analysis will help you reap AI’s benefits while remaining alert to its drawbacks.

Google Revises Core Update Guidance: What’s Changed? via @sejournal, @MattGSouthern

Google has updated its guidance on core algorithm updates, providing more detailed recommendations for impacted websites.

The revised document, published alongside the August core update rollout, includes several additions and removals.

New Sections Added

The most significant change is the addition of two new sections: “Check if there’s a traffic drop in Search Console” and “Assessing a large drop in position.”

The “Check if there’s a traffic drop in Search Console” section provides step-by-step instructions for using Search Console to determine if a core update has affected a website.

The process involves:

  1. Confirming the completion of the core update by checking the Search Status Dashboard
  2. Waiting at least a week after the update finishes before analyzing Search Console data
  3. Comparing search performance from before and after the update to identify ranking changes
  4. Analyzing different search types (web, image, video, news) separately

The “Assessing a large drop in position” section offers guidance for websites that have experienced a significant ranking decline following a core update.

It recommends thoroughly evaluating the site’s content against Google’s quality guidelines, focusing on the pages most impacted by the update.

Other Additions

The updated document also includes a “Things to keep in mind when making changes” section, encouraging website owners to prioritize substantive, user-centric improvements rather than quick fixes.

It recommends treating content deletion as a last resort, since removing content implies it was created for search engines rather than users.

Another new section, “How long does it take to see an effect in Search results,” sets expectations for the time required to see ranking changes after making content improvements.

Google states that it may take several months for the full impact to be reflected, possibly requiring waiting until a future core update.

The document adds a closing paragraph noting that rankings can change even without website updates as new content emerges on the web.

Removed Content

Several sections from the previous version of the document have been removed or replaced in the update.

The paragraph stating that pages impacted by a core update “haven’t violated our spam policies” and comparing core updates to refreshing a movie list has been removed.

The “Assessing your own content” section has been replaced by the new “Assessing a large drop in position.”

The “How long does it take to recover from a core update?” section no longer contains specific details about the timing and cadence of core updates and the factors influencing recovery time.

Shift In Tone & Focus

There’s a noticeable shift in tone and focus with this update.

While the previous guide explained the nature and purpose of core updates, the revised edition offers more actionable guidance.

For example, the new sections related to Search Console provide clearer direction for identifying and addressing ranking drops.

In Summary

Here’s a list of added and removed items in Google’s updated Core Algorithm Update Guidance.

Added:

  • “Check if there’s a traffic drop in Search Console” section:
    • Step-by-step instructions for using Search Console to identify ranking changes.
  • “Assessing a large drop in position” section:
    • Guidance for websites experiencing significant ranking declines after a core update.
  • “Things to keep in mind when making changes” section:
    • Encourages substantive improvements over quick fixes.
    • Suggests content deletion as a last resort.
  • “How long does it take to see an effect in Search results” section:
    • Sets expectations for the time to see ranking changes after content improvements.
    • States that full impact may take several months and require a future core update.
  • Closing paragraph:
    • Notes that rankings can change even without website updates as new content emerges.

Removed:

  • A paragraph stating pages impacted by a core update “haven’t violated our spam policies.”
  • Comparing core updates to refreshing a list of best movies.
  • The “Assessing your own content” section from the previous version was replaced by the new “Assessing a large drop in position” section.
  • Specific details about the timing of core updates and factors influencing recovery time.

An archived version of Google’s previous core update guidance can be accessed via the Wayback Machine.
