It’s officially summer, and the grid is stressed

It’s crunch time for the grid this week. As I’m writing this newsletter, it’s 100 °F (nearly 38 °C) here in New Jersey, and I’m huddled in the smallest room in my apartment with the shades drawn and a single window air conditioner working overtime.  

Large swaths of the US have seen brutal heat this week, with multiple days in a row of temperatures nearing or breaking records. Spain recently went through a dramatic heat wave too, as did the UK, which is unfortunately bracing for another one soon. As I’ve been trying to stay cool, I’ve had my eyes on a website tracking electricity demand, which is also hitting record highs. 

We rely on electricity to keep ourselves comfortable, and more to the point, safe. These are the moments we design the grid for: when need is at its very highest. The key to keeping everything running smoothly during these times might be just a little bit of flexibility. 

While heat waves happen all over the world, let’s take my local grid as an example. I’m one of the roughly 65 million people covered by PJM Interconnection, the largest grid operator in the US. PJM covers Virginia, West Virginia, Ohio, Pennsylvania, and New Jersey, as well as bits of a couple of neighboring states.

Earlier this year, PJM forecast that electricity demand would peak at 154 gigawatts (GW) this summer. On Monday, just a few days past the official start of the season, the grid blew past that, averaging over 160 GW between 5 p.m. and 6 p.m. 

The fact that we’ve already passed both last year’s peak and this year’s forecasted one isn’t necessarily a disaster (PJM says the system’s total capacity is over 179 GW this year). But it is a good reason to be a little nervous. Usually, PJM sees its peak in July or August. As a reminder, it’s June. So we shouldn’t be surprised if we see electricity demand creep to even higher levels later in the summer.

It’s not just PJM, either. MISO, the grid that covers most of the Midwest and part of the US South, put out a notice that it expected to be close to its peak demand this week. And the US Department of Energy released an emergency order for parts of the Southeast, which allows the local utility to boost generation and skirt air pollution limits while demand is high.

This pattern of maxing out the grid is only going to continue. That’s because climate change is pushing temperatures higher, and electricity demand is simultaneously swelling (in part because of data centers like those that power AI). PJM’s forecasts show that the summer peak in 2035 could reach nearly 210 GW, well beyond the 179 GW it can provide today. 

Of course, we need more power plants to be built and connected to the grid in the coming years (at least if we don’t want to keep ancient, inefficient, expensive coal plants running, as we covered last week). But there’s a quiet strategy that could limit the new construction needed: flexibility.

The power grid has to be built for moments of the absolute highest demand we can predict, like this heat wave. But most of the time, a decent chunk of capacity that exists to get us through these peaks sits idle—it only has to come online when demand surges. Another way to look at that, however, is that by shaving off demand during the peak, we can reduce the total infrastructure required to run the grid. 

If you live somewhere that’s seen a demand crunch during a heat wave, you might have gotten an email from your utility asking you to hold off on running the dishwasher in the early evening or to set your air conditioner a few degrees higher. These are called demand response programs. Some utilities run more organized versions, in which they pay customers to ramp down their usage during periods of peak demand.

PJM’s demand response programs add up to almost eight gigawatts of power—enough to power over 6 million homes. With these programs, PJM basically avoids having to fire up the equivalent of multiple massive nuclear power plants. (It did activate these programs on Monday afternoon during the hottest part of the day.)

As electricity demand goes up, building in and automating this sort of flexibility could go a long way to reducing the amount of new generation needed. One report published earlier this year found that if data centers agreed to have their power curtailed for just 0.5% of the time (around 40 hours out of a year of continuous operation), the grid could handle about 18 GW of new power demand in the PJM region without adding generation capacity. 
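
If you want to check those back-of-the-envelope numbers yourself, here is a tiny sketch using only figures already quoted above (the 0.5% curtailment window and PJM’s roughly eight gigawatts of demand response spread across more than 6 million homes); the per-home figure is just a rough average, not a description of any specific program.

```python
# Quick arithmetic check of the flexibility figures quoted above.
# All inputs come from the article; this is not a grid model.

HOURS_PER_YEAR = 365 * 24      # 8,760 hours of continuous operation
curtailment_share = 0.005      # data centers curtailed 0.5% of the time

curtailed_hours = curtailment_share * HOURS_PER_YEAR
print(f"Curtailed hours per year: {curtailed_hours:.0f}")  # ~44, i.e. "around 40 hours"

# PJM demand response: ~8 GW across more than 6 million homes
dr_capacity_watts = 8e9
homes = 6_000_000
print(f"Rough average per home: {dr_capacity_watts / homes / 1e3:.1f} kW")  # ~1.3 kW
```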

For the whole US, this level of flexibility would allow the grid to take on an additional 98 gigawatts of new demand without building any new power plants to meet it. To give you a sense of just how significant that would be, all the nuclear reactors in the US add up to 97 gigawatts of capacity.

Tweaking the thermostat and ramping down data centers during hot summer days won’t solve the demand crunch on their own, but a little more flexibility certainly won’t hurt.

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

3 things Rhiannon Williams is into right now

The last good Instagram account

It’s a truth universally acknowledged that social media is a Bad Vibe. Thankfully, there is still one Instagram account worth following that’s just as incisive, funny, and scathing today as when it was founded back in 2016: Every Outfit (@everyoutfitonsatc). Originally conceived as an homage to Sex and the City’s iconic fashion, Every Outfit has since evolved into a wider cultural critique and spawned a podcast of the same name that I love listening to while running. Sex and the City may be over, but Every Outfit is forever.

Glorious Exploits, by Ferdia Lennon

Glorious Exploits is one of those rare books that manage to pull off being both laugh-out-loud funny and deeply moving, which is no mean feat. Set in ancient Sicily, it tells the story of unemployed potters Lampo and Gelon’s grand plan to stage the Greek tragedy Medea with a cast of defeated Athenian soldiers who’ve been imprisoned in quarries on the outskirts of Syracuse. The ancient backdrop combined with the characters’ contemporary Irish dialogue (the author was born in Dublin) makes it unlike anything I’ve ever read before; it’s so ambitious it’s hard to believe it’s Lennon’s debut novel. Completely engrossing.

Life drawing

The depressing wave of AI-generated art that’s flooded the internet in recent years has inspired me to explore the exact opposite and make art the old-fashioned way. My art teacher in college always said the best way to learn the correct proportions of the human body was to draw it in person, so I’ve started attending classes near where I live in London. Pencil and paper are generally my medium of choice. Spending a few hours interpreting what’s in front of you in your own artistic style is really rewarding—and has the added bonus of being completely screen-free. I can’t recommend it enough.

Job titles of the future: Pandemic oracle

Officially, Conor Browne is a biorisk consultant. Based in Belfast, Northern Ireland, he has advanced degrees in security studies and medical and business ethics, along with United Nations certifications in counterterrorism and conflict resolution. He’s worked on teams with NATO’s Science for Peace and Security Programme and with the UN High Commissioner for Refugees, analyzing how diseases affect migration and border security.

Early in the emergence of SARS-CoV-2, international energy conglomerates seeking expert guidance on navigating the potential turmoil in markets and transportation became his main clients. Having studied the 2002 SARS outbreak, he predicted the exponential spread of the new airborne virus. He forecast the epidemic’s broadscale impact and its implications for business so accurately that he has come to be seen as a pandemic oracle. 

Browne produces independent research reports and works directly with companies of all sizes. One of his niches is consulting on new diagnostic tools, as in his work with RAIsonance, a startup using machine learning to analyze cough sounds correlated with tuberculosis and covid-19. For multinational corporations, he models threats such as the possibility of avian influenza spreading from human to human. He builds most- and least-likely scenarios for how the global business community might react to an H5N1 outbreak in China or the US. “I never want to be right,” he says of worst-case predictions. 

Navigating uncertainty

Biorisk consultants are often trained in fields related to epidemiology, security, and counterterrorism. Browne also studied psychology to understand how humans respond to disaster. In times of increasing geopolitical volatility, he says, biomedical risk assessment must include sociopolitical forecasting.

Demand for this type of crisis planning exploded in the corporate world in the aftermath of 9/11. Executives learned to create contingency plans for loss of personnel and infrastructure as a result of terrorism, pandemics, and natural disasters. And resilience planning proved crucial early in the covid-19 pandemic, as business leaders were forced to adjust to supply chain disruptions and the realities of remote work. 

Network effects

By adding nuanced qualitative analysis to hard data, Browne creates proprietary guidance that clients can act on. “I give businesses an idea of what is coming, and what they do with that information is up to them,” he says. “I basically tell the future.”

Britta Shoot is a freelance journalist focusing on pandemics, protests, and how people occupy space. 

The AI Hype Index: AI-powered toys are coming

Separating AI reality from hyped-up fiction isn’t always easy. That’s why we’ve created the AI Hype Index—a simple, at-a-glance summary of everything you need to know about the state of the industry.

AI agents might be the toast of the AI industry, but they’re still not that reliable. That’s why Yoshua Bengio, one of the world’s leading AI experts, is creating his own nonprofit dedicated to guarding against deceptive agents. Not only can they mislead you, but new research suggests that the weaker an AI model powering an agent is, the less likely it is to be able to negotiate you a good deal online. Elsewhere, OpenAI has inked a deal with toymaker Mattel to develop “age-appropriate” AI-infused products. What could possibly go wrong?

The Bank Secrecy Act is failing everyone. It’s time to rethink financial surveillance.

The US is on the brink of enacting rules for digital assets, with growing bipartisan momentum to modernize our financial system. But amid all the talk about innovation and global competitiveness, one issue has been glaringly absent: financial privacy. As we build the digital infrastructure of the 21st century, we need to talk about not just what’s possible but what’s acceptable. That means confronting the expanding surveillance powers quietly embedded in our financial system, which today can track nearly every transaction without a warrant.

Many Americans may associate financial surveillance with authoritarian regimes. Yet because of a Nixon-era law called the Bank Secrecy Act (BSA) and the digitization of finance over the past half-century, financial privacy is under increasingly serious threat here at home. Most Americans don’t realize they live under an expansive surveillance regime that likely violates their constitutional rights. Every purchase, deposit, and transaction, from the smallest Venmo payment for a coffee to a large hospital bill, creates a data point in a system that watches you—even if you’ve done nothing wrong.

As a former federal prosecutor, I care deeply about giving law enforcement the tools it needs to keep us safe. But the status quo doesn’t make us safer. It creates a false sense of security while quietly and permanently eroding the constitutional rights of millions of Americans.

When Congress enacted the BSA in 1970, cash was king and organized crime was the target. The law created a scheme whereby, ever since, banks have been required to keep certain records on their customers and turn them over to law enforcement upon request. Unlike a search warrant, which must be issued by a judge or magistrate upon a showing of probable cause that a crime was committed and that specific evidence of that crime exists in the place to be searched, this power is exercised with no checks or balances. A prosecutor can “cut a subpoena”—demanding all your bank records for the past 10 years—with no judicial oversight or limitation on scope, and at no cost to the government. The burden falls entirely on the bank. In contrast, a proper search warrant must be narrowly tailored, with probable cause and judicial authorization.

In United States v. Miller (1976), the Supreme Court upheld the BSA, reasoning that citizens have no “legitimate expectation of privacy” about information shared with third parties, like banks. Thus began the third-party doctrine, enabling law enforcement to access financial records without a warrant. The BSA has been amended several times over the years (most notoriously in 2001 as a part of the Patriot Act), imposing an ever-growing list of recordkeeping obligations on an ever-growing list of financial institutions. Today, it is virtually inescapable for everyday Americans.

In the 1970s, when the BSA was enacted, banking and noncash payments were conducted predominantly through physical means: writing checks, visiting bank branches, and using passbooks. For cash transactions, the BSA required reporting of transactions over the kingly sum of $10,000, a figure that was not pegged to inflation and remains the same today. And given the nature of banking services and the technology available at the time, individuals conducted just a handful of noncash payments per month. Today, consumers make at least one payment or banking transaction a day, and just an estimated 16% of those are in cash.

Meanwhile, emerging technologies further expand the footprint of financial data. Add to this the massive pools of personal information already collected by technology platforms—location history, search activity, communications metadata—and you create a world where financial surveillance can be linked to virtually every aspect of your identity, movement, and behavior.

Nor does the BSA actually appear to be effective at achieving its aims. In fiscal year 2024, financial institutions filed about 4.7 million Suspicious Activity Reports (SARs) and over 20 million currency transaction reports. Instead of stopping major crime, the system floods law enforcement with low-value information, overwhelming agents and obscuring real threats. Mass surveillance often reduces effectiveness by drowning law enforcement in noise. But while it doesn’t stop hackers, the BSA creates a trove of permanent info on everyone.

Worse still, the incentives are misaligned and asymmetrical. To avoid liability, financial institutions are required to report anything remotely suspicious. If they fail to file a SAR, they risk serious penalties—even indictment. But they face no consequences for overreporting. The vast overcollection of data is the unsurprising result. These practices, developed under regulations with few clear guardrails, have effectively allowed executive branch actors to outsource surveillance duties to private institutions.

But courts have recognized that constitutional privacy must evolve alongside technology. In 2012, the Supreme Court ruled in United States v. Jones that attaching a GPS tracker to a vehicle for prolonged surveillance constituted a search under the Fourth Amendment. Justice Sonia Sotomayor, in a notable concurrence, argued that the third-party doctrine was ill suited to an era when individuals “reveal a great deal of information about themselves to third parties” merely by participating in daily life.

This legal evolution continued in 2018, when the Supreme Court held in Carpenter v. United States that accessing historical cell-phone location records held by a third party required a warrant, recognizing that “seismic shifts in digital technology” necessitate stronger protections and warning that “the fact that such information is gathered by a third party does not make it any less deserving of Fourth Amendment protection.”

The logic of Carpenter applies directly to the mass of financial records being collected today. Just as tracking a person’s phone over time reveals the “whole of their physical movements,” tracking a person’s financial life exposes travel, daily patterns, medical treatments, political affiliations, and personal associations. In many ways, because of the velocity and digital nature of today’s payments, financial data is among the most personal and revealing data there is—and therefore deserves the highest level of constitutional protection.

Though Miller remains formally intact, the writing is on the wall: Indiscriminate financial surveillance such as what we have today is fundamentally at odds with the Fourth Amendment in the digital age.

Technological innovations over the past several decades have brought incredible convenience to economic life. Now our privacy standards must catch up. With Congress considering landmark legislation on digital assets, it’s an important moment to consider what kind of financial system we want—not just in terms of efficiency and access, but in terms of freedom. Rather than striking down the BSA in its entirety, policymakers should narrow its reach, particularly around the bulk collection and warrantless sharing of Americans’ financial data.

Financial surveillance shouldn’t be the price of participation in modern life. The systems we build now will shape what freedom looks like for the next century. It’s time to treat financial privacy like what it is: a cornerstone of democracy, and a right worth fighting for.

Katie Haun is the CEO and founder of Haun Ventures, a venture capital firm focused on frontier technologies. She is a former federal prosecutor who created the US Justice Department’s first cryptocurrency task force. She led investigations into the Mt. Gox hack and the corrupt agents on the Silk Road task force. She clerked for US Supreme Court Justice Anthony Kennedy and is an honors graduate of Stanford Law School.

Google’s new AI will help researchers understand how our genes work

When scientists first sequenced the human genome in 2003, they revealed the full set of DNA instructions that make a person. But we still didn’t know what all those 3 billion genetic letters actually do. 

Now Google’s DeepMind division says it’s made a leap in trying to understand the code with AlphaGenome, an AI model that predicts what effects small changes in DNA will have on an array of molecular processes, such as whether a gene’s activity will go up or down. It’s just the sort of question biologists regularly assess in lab experiments.

“We have, for the first time, created a single model that unifies many different challenges that come with understanding the genome,” says Pushmeet Kohli, a vice president for research at DeepMind.

Five years ago, the Google AI division released AlphaFold, a technology for predicting the 3D shape of proteins. That work was honored with a Nobel Prize last year and spawned a drug-discovery spinout, Isomorphic Labs, and a boom of companies that hope AI will be able to propose new drugs.

AlphaGenome is an attempt to further smooth biologists’ work by answering basic questions about how changing DNA letters alters gene activity and, eventually, how genetic mutations affect our health. 

“We have these 3 billion letters of DNA that make up a human genome, but every person is slightly different, and we don’t fully understand what those differences do,” says Caleb Lareau, a computational biologist at Memorial Sloan Kettering Cancer Center who has had early access to AlphaGenome. “This is the most powerful tool to date to model that.”

Google says AlphaGenome will be free for noncommercial users and plans to release full details of the model in the future. According to Kohli, the company is exploring ways to “enable use of this model by commercial entities” such as biotech companies. 

Lareau says AlphaGenome will allow certain types of experiments now done in the lab to be carried out virtually, on a computer. For instance, studies of people who’ve donated their DNA for research often turn up thousands of genetic differences, each slightly raising or lowering the chance a person gets a disease such as Alzheimer’s.

Lareau says DeepMind’s software could be used to quickly make predictions about how each of those variants works at a molecular level, something that would otherwise require time-consuming lab experiments. “You’ll get this list of gene variants, but then I want to understand which of those are actually doing something, and where can I intervene,” he says. “This system pushes us closer to a good first guess about what any variant will be doing when we observe it in a human.”

Don’t expect AlphaGenome to predict very much about individual people, however. It offers clues to nitty-gritty molecular details of gene activity, not 23andMe-type revelations of a person’s traits or ancestry. 

“We haven’t designed or validated AlphaGenome for personal genome prediction, a known challenge for AI models,” Google said in a statement.

Underlying the AI system is the so-called transformer architecture invented at Google that also powers large language models like GPT-4. This one was trained on troves of experimental data produced by public scientific projects.

Lareau says the system will not broadly change how his lab works day to day but could permit new types of research. For instance, sometimes doctors encounter patients with ultra-rare cancers, bristling with unfamiliar mutations. AlphaGenome could suggest which of those mutations are really causing the root problem, possibly pointing to a treatment.

“A hallmark of cancer is that specific mutations in DNA make the wrong genes express in the wrong context,” says Julien Gagneur, a professor of computational medicine at the Technical University of Munich. “This type of tool is instrumental in narrowing down which ones mess up proper gene expression.” 

The same approach could apply to patients with rare genetic diseases, many of whom never learn the source of their condition, even if their DNA has been decoded. “We can obtain their genomes, but we are clueless as to which genetic alterations cause the disease,” says Gagneur. He thinks AlphaGenome could give medical scientists a new way to diagnose such cases. 

Eventually, some researchers aspire to use AI to design entire genomes from the ground up and create new life forms. Others think the models will be used to create a fully virtual laboratory for drug studies. “My dream would be to simulate a virtual cell,” Demis Hassabis, CEO of Google DeepMind, said this year. 

Kohli calls AlphaGenome a “milestone” on the road to that kind of system. “AlphaGenome may not model the whole cell in its entirety … but it’s starting to sort of shed light on the broader semantics of DNA,” he says.

See the stunning first images from the Vera C. Rubin Observatory

The first spectacular images taken by the Vera C. Rubin Observatory have been released for the world to peruse: a panoply of iridescent galaxies and shimmering nebulas. “This is the dawn of the Rubin Observatory,” says Meg Schwamb, a planetary scientist and astronomer at Queen’s University Belfast in Northern Ireland.

Much has been written about the observatory’s grand promise: to revolutionize our understanding of the cosmos by revealing a once-hidden population of far-flung galaxies, erupting stars, interstellar objects, and elusive planets. And thanks to its unparalleled technical prowess, few doubted its ability to make good on that. But over the past decade, during its lengthy construction period, “everything’s been in the abstract,” says Schwamb.

Today, that promise has become a staggeringly beautiful reality. 

Rubin’s view of the universe is unlike any that preceded it—an expansive vision of the night sky replete with detail, including hazy envelopes of matter coursing around galaxies and star-paved bridges arching between them. “These images are truly stunning,” says Pedro Bernardinelli, an astronomer at the University of Washington.

During its brief perusal of the night sky, Rubin even managed to spy more than 2,000 never-before-seen asteroids, demonstrating that it should be able to spotlight even the sneakiest denizens, and darkest corners, of our own solar system.

A small section of the Vera C. Rubin Observatory’s view of the Virgo Cluster. Three merging galaxies can be seen on the upper right. The view also includes two striking spiral galaxies (lower right), distant galaxies, and many Milky Way stars.
NSF-DOE VERA C. RUBIN OBSERVATORY

Today’s reveal is a mere amuse-bouche compared with what’s to come: Rubin, funded by the US National Science Foundation and the Department of Energy, is set for at least 10 years of planned observations. But this moment, and these glorious inaugural images, are worth celebrating for what they represent: the culmination of over a decade of painstaking work. 

“This is a direct demonstration that Rubin is no longer in the future,” says Bernardinelli. “It’s the present.”

The observatory is named after the late Vera Rubin, an astronomer who uncovered strong evidence for dark matter, a mysterious and as-yet-undetected something that’s binding galaxies together more strongly than the gravity of ordinary, visible matter alone can explain. Trying to make sense of dark matter—and its equally mysterious, universe-stretching cousin, dubbed dark energy—is a monumental task, one that cannot be addressed by just one line of study or scrutiny of one type of cosmic object.

That’s why Rubin was designed to document anything and everything that shifts or sparkles in the night sky. Sitting atop Chile’s Cerro Pachón mountain range, it boasts a 7,000-pound, 3,200-megapixel digital camera that can take detailed snapshots of a large patch of the night sky; a house-size cradle of mirrors that can drink up extremely distant and faint starlight; and a maze of joints and pistons that allow it to swivel about with incredible speed and precision. A multinational computer network permits its sky surveys to be largely automated, its images speedily processed, any new objects easily detected, and the relevant groups of astronomers quickly alerted.

All that technical wizardry allows Rubin to take a picture of the entire visible night sky once every few days, filling in the shadowed gaps and unseen activity between galaxies. “The sky [isn’t] static. There are asteroids zipping by, and supernovas exploding,” says Yusra AlSayyad, Rubin’s overseer of image processing. By conducting a continuous survey over the next decade, the facility will create a three-dimensional movie of the universe’s ever-changing chaos that could help address all sorts of astronomic queries. What were the very first galaxies like? How did the Milky Way form? Are there planets hidden in our own solar system’s backyard?

Rubin’s first glimpse of the firmament is predictably bursting with galaxies and stars. But the resolution, breadth, and depth of the images have taken astronomers aback. “I’m very impressed with these images. They’re really incredible,” says Christopher Conselice, an extragalactic astronomer at the University of Manchester in England.

One shot, created from 678 individual exposures, showcases the Trifid and Lagoon nebulas—two oceans of luminescent gas and dust where stars are born. Others depict a tiny portion of Rubin’s view of the Virgo Cluster, a zoo of galaxies. Hues of blue are coming from relatively nearby whirlpools of stars, while red tints emanate from remarkably distant and primeval galaxies. 

The rich detail in these images is already proving to be illuminating. “As galaxies merge and interact, the galaxies are pulling stars away from each other,” says Conselice. This behavior can be seen in plumes of diffuse light erupting from several galaxies, creating halos around them or illuminated bridges between them—records of these ancient galaxies’ pasts.

Images like these are also likely to contain several supernovas, the explosive final moments of sizable stars. Not only do supernovas seed the cosmos with all the heavy elements that planets—and life—rely on, but they can also hint at how the universe has expanded over time. 

Anais Möller, an astrophysicist at the Swinburne University of Technology in Melbourne, Australia, is a supernova hunter. “I search for exploding stars in very far away galaxies,” she says. Older sky surveys have found plenty, but they can lack context: You can see the explosion, but not what galaxy it’s from. Thanks to Rubin’s resolution—amply demonstrated by the Virgo Cluster set of images—astronomers can now “find where those exploding stars live,” says Möller.

Another small section of the observatory’s view of the Virgo Cluster. The image includes many distant galaxies along with stars from our own Milky Way galaxy.
NSF-DOE VERA C. RUBIN OBSERVATORY

While taking these images of the distant universe, Rubin also discovered 2,104 asteroids flitting about in our own solar system—including seven whose orbits hew close to Earth’s own. This number may sound impressive, but it’s just par for the course for Rubin. In just a few months, it will find over a million new asteroids—doubling the current known tally. And over the course of its decadal survey, Rubin is projected to identify 89,000 near-Earth asteroids, 3.7 million asteroids in the belt between Mars and Jupiter, and 32,000 icy objects beyond Neptune. 

Finding more than 2,000 previously hidden asteroids in just a few hours of observations, then, “wasn’t even hard” for Rubin, says Mario Jurić, an astronomer at the University of Washington. “The asteroids really popped out.”

Rubin’s comprehensive inventorying of the solar system has two benefits. The first is scientific: All those lumps of rocks and ice are the remnants of the solar system’s formative days, which means astronomers can use them to understand how everything around us was pieced together. 

The second benefit is security. Somewhere out there, there could be an asteroid on an Earthbound trajectory—one whose impact could devastate an entire city or even several countries. Engineers are working on defensive tech designed to either deflect or obliterate such asteroids, but if astronomers don’t know where they are, those defenses are useless. In quickly finding so many asteroids, Rubin has clearly shown that it will bolster Earth’s planetary defense capabilities like no other ground-based telescope.

Altogether, Rubin’s debut has validated the hopes of countless astronomers: The observatory won’t just be an incremental improvement on what’s come before. “I think it’s a generational leap,” says Möller. It is a ruthlessly efficient, discovery-making behemoth—and a firehose of astronomic delights is about to inundate the scientific community. “It’s very scary,” says Möller. “But very exciting at the same time.”

It’s going to be a very hectic decade. As Schwamb puts it, “The roller-coaster starts now.”

Book review: Surveillance & privacy

Privacy only matters to those with something to hide. So goes one of the more inane and disingenuous justifications for mass government and corporate surveillance. There are others, of course, but the “nothing to hide” argument remains a popular way to rationalize or excuse what’s become standard practice in our digital age: the widespread and invasive collection of vast amounts of personal data.

One common response to this line of reasoning is that everyone, in fact, has something to hide, whether they realize it or not. If you’re unsure of whether this holds true for you, I encourage you to read Means of Control by Byron Tau. 

Means of Control: How the Hidden Alliance of Tech and Government Is Creating a New American Surveillance State
Byron Tau
CROWN, 2024

Midway through his book, Tau, an investigative journalist, recalls meeting with a disgruntled former employee of a data broker—a shady company that collects, bundles, and sells your personal data to other (often shadier) third parties, including the government. This ex-employee had managed to make off with several gigabytes of location data representing the precise movements of tens of thousands of people over the course of a few weeks. “What could I learn with this [data]—theoretically?” Tau asks the former employee. The answer includes a laundry list of possibilities that I suspect would make even the most enthusiastic oversharer uncomfortable.

Did someone in this group recently visit an abortion clinic? That would be easy to figure out, says the ex-employee. Anyone attend an AA meeting or check into inpatient drug rehab? Again, pretty simple to discern. Is someone being treated for erectile dysfunction at a sexual health clinic? If so, that would probably be gleanable from the data too. Tau never opts to go down that road, but as Means of Control makes very clear, others certainly have done so and will.

While most of us are at least vaguely aware that our phones and apps are a vector for data collection and tracking, both the way in which this is accomplished and the extent to which it happens often remain murky. Purposely so, argues Tau. In fact, one of the great myths Means of Control takes aim at is the very idea that what we do with our devices can ever truly be anonymized. Each of us has habits and routines that are completely unique, he says, and if an advertiser knows you only as an alphanumeric string provided by your phone as you move about the world, and not by your real name, that still offers you virtually no real privacy protection. (You’ll perhaps not be surprised to learn that such “anonymized ad IDs” are relatively easy to crack.)

“I’m here to tell you if you’ve ever been on a dating app that wanted your location, or if you ever granted a weather app permission to know where you are 24/7, there’s a good chance a detailed log of your precise movement patterns has been vacuumed up and saved in some data bank somewhere that tens of thousands of total strangers have access to,” writes Tau.

Unraveling the story of how these strangers—everyone from government intelligence agents and local law enforcement officers to private investigators and employees of ad tech companies—gained access to our personal information is the ambitious task Tau sets for himself, and he begins where you might expect: the immediate aftermath of 9/11.

At no other point in US history was the government’s appetite for data more voracious than in the days after the attacks, says Tau. It was a hunger that just so happened to coincide with the advent of new technologies, devices, and platforms that excelled at harvesting and serving up personal information that had zero legal privacy protections. 

Over the course of 22 chapters, Tau gives readers a rare glimpse inside the shadowy industry, “built by corporate America and blessed by government lawyers,” that emerged in the years and decades following the 9/11 attacks. In the hands of a less skilled reporter, this labyrinthine world of shell companies, data vendors, and intelligence agencies could easily become overwhelming or incomprehensible. But Tau goes to great lengths to connect dots and plots, explaining how a perfect storm of business motivations, technological breakthroughs, government paranoia, and lax or nonexistent privacy laws combined to produce the “digital panopticon” we are all now living in.

Means of Control doesn’t offer much comfort or reassurance for privacy­-minded readers, but that’s arguably the point. As Tau notes repeatedly throughout his book, this now massive system of persistent and ubiquitous surveillance works only because the public is largely unaware of it. “If information is power, and America is a society that’s still interested in the guarantee of liberty, personal dignity, and the individual freedom of its citizens, a serious conversation is needed,” he writes. 

As another new book makes clear, this conversation also needs to include student data. Lindsay Weinberg’s Smart University: Student Surveillance in the Digital Age reveals how the motivations and interests of Big Tech are transforming higher education in ways that are increasingly detrimental to student privacy and, arguably, education as a whole.

Smart University: Student Surveillance in the Digital Age
Lindsay Weinberg
JOHNS HOPKINS UNIVERSITY PRESS, 2024

By “smart university,” Weinberg means the growing number of public universities across the country that are being restructured around “the production and capture of digital data.” Similar in vision and application to so-called “smart cities,” these big-data-pilled institutions are increasingly turning to technologies that can track students’ movements around campus, monitor how much time they spend on learning management systems, flag those who seem to need special “advising,” and “nudge” others toward specific courses and majors. “What makes these digital technologies so seductive to higher education administrators, in addition to promises of cost cutting, individualized student services, and improved school rankings, is the notion that the integration of digital technology on their campuses will position universities to keep pace with technological innovation,” Weinberg writes. 

Readers of Smart University will likely recognize a familiar logic at play here. Driving many of these academic tracking and data-gathering initiatives is a growing obsession with efficiency, productivity, and convenience. The result is a kind of Silicon Valley optimization mindset, but applied to higher education at scale. Get students in and out of university as fast as possible, minimize attrition, relentlessly track performance, and do it all under the guise of campus modernization and increased personalization. 

Under this emerging system, students are viewed less as self-empowered individuals and more as “consumers to be courted, future workers to be made employable for increasingly smart workplaces, sources of user-generated content for marketing and outreach, and resources to be mined for making campuses even smarter,” writes Weinberg. 

At the heart of Smart University seems to be a relatively straightforward question: What is an education for? Although Weinberg doesn’t provide a direct answer, she shows that how a university (or society) decides to answer that question can have profound impacts on how it treats its students and teachers. Indeed, as the goal of education becomes less to produce well-rounded humans capable of thinking critically and more to produce “data subjects capable of being managed and who can fill roles in the digital economy,” it’s no wonder we’re increasingly turning to the dumb idea of smart universities to get the job done.  

If books like Means of Control and Smart University do an excellent job exposing the extent to which our privacy has been compromised, commodified, and weaponized (which they undoubtedly do), they can also start to feel a bit predictable in their final chapters. Familiar codas include calls for collective action, buttressed by a hopeful anecdote or two detailing previously successful pro-privacy wins; nods toward a bipartisan privacy bill in the works or other pieces of legislation that could potentially close some glaring surveillance loophole; and, most often, technical guides that explain how each of us, individually, might better secure or otherwise take control and “ownership” of our personal data.

The motivations behind these exhortations and privacy-centric how-to guides are understandable. After all, it’s natural for readers to want answers, advice, or at least some suggestion that things could be different—especially after reading about the growing list of degradations suffered under surveillance capitalism. But it doesn’t take a skeptic to start to wonder if they’re actually advancing the fight for privacy in the way that its advocates truly want.

For one thing, technology tends to move much faster than any one smartphone privacy guide or individual law could ever hope to keep up with. Similarly, framing rampant privacy abuses as a problem we each have to be responsible for addressing individually seems a lot like framing the plastic pollution crisis as something Americans could have somehow solved by recycling. It’s both a misdirection and a misunderstanding of the problem.     

It’s to his credit, then, that Lowry Pressly doesn’t include a “What is to be done” section at the end of The Right to Oblivion: Privacy and the Good Life. In lieu of offering up any concrete technical or political solutions, he simply reiterates an argument he has carefully and convincingly built over the course of his book: that privacy is important “not because it empowers us to exercise control over our information, but because it protects against the creation of such information in the first place.” 

The Right to Oblivion: Privacy and the Good Life
Lowry Pressly
HARVARD UNIVERSITY PRESS, 2024

For Pressly, a Stanford instructor, the way we currently understand and value privacy has been tainted by what he calls “the ideology of information.” “This is the idea that information has a natural existence in human affairs,” he writes, “and that there are no aspects of human life which cannot be translated somehow into data.” This way of thinking not only leads to an impoverished sense of our own humanity—it also forces us into the conceptual trap of debating privacy’s value using a framework (control, consent, access) established by the companies whose business model is to exploit it.

The way out of this trap is to embrace what Pressly calls “oblivion,” a kind of state of unknowing, ambiguity, and potential—or, as he puts it, a realm “where there is no information or knowledge one way or the other.” While he understands that it’s impossible to fully escape a modern world intent on turning us into data subjects, Pressly’s book suggests we can and should support the idea that certain aspects of our (and others’) subjective interior lives can never be captured by information. Privacy is important because it helps to both protect and produce these ineffable parts of our lives, which in turn gives them a sense of dignity, depth, and the possibility for change and surprise. 

Reserving or cultivating a space for oblivion in our own lives means resisting the logic that drives much of the modern world. Our inclination to “join the conversation,” share our thoughts, and do whatever it is we do when we create and curate a personal brand has become so normalized that it’s practically invisible to us. According to Pressly, all that effort has only made our lives and relationships shallower, less meaningful, and less trusting.

Calls for putting our screens down and stepping away from the internet are certainly nothing new. And while The Right to Oblivion isn’t necessarily prescriptive about such things, Pressly does offer a beautiful and compelling vision of what can be gained when we retreat not just from the digital world but from the idea that we are somehow knowable to that world in any authentic or meaningful way. 

If all this sounds a bit philosophical, well, it is. But it would be a mistake to think of The Right to Oblivion as a mere thought exercise on privacy. Part of what makes the book so engaging and persuasive is the way in which Pressly combines a philosopher’s knack for uncovering hidden assumptions with a historian’s interest in and sensitivity to older (often abandoned) ways of thinking, and how they can often enlighten and inform modern problems.

Pressly isn’t against efforts to pass more robust privacy legislation, or even to learn how to better protect our devices against surveillance. His argument is that in order to guide such efforts, you have to both ask the right questions and frame the problem in a way that gives you and others the moral clarity and urgency to act. Your phone’s privacy settings are important, but so is understanding what you’re protecting when you change them. 

Bryan Gardiner is a writer based in Oakland, California. 

A Chinese firm has just launched a constantly changing set of AI benchmarks

When testing an AI model, it’s hard to tell whether it is reasoning or just regurgitating answers from its training data. Xbench, a new benchmark developed by the Chinese venture capital firm HSG, or HongShan Capital Group, might help to sidestep that issue. That’s because it evaluates models not only on their ability to pass arbitrary tests, as most benchmarks do, but also on their ability to execute real-world tasks, which is more unusual. It will be updated on a regular basis to try to keep it evergreen. 

This week the company is making part of its question set open-source and letting anyone use it for free. The team has also released a leaderboard comparing how mainstream AI models stack up when tested on Xbench. (ChatGPT o3 ranked first across all categories, though ByteDance’s Doubao, Gemini 2.5 Pro, and Grok all still did pretty well, as did Claude Sonnet.) 

Development of the benchmark at HongShan began in 2022, following ChatGPT’s breakout success, as an internal tool for assessing which models are worth investing in. Since then, led by partner Gong Yuan, the team has steadily expanded the system, bringing in outside researchers and professionals to help refine it. As the project grew more sophisticated, they decided to release it to the public.

Xbench approaches the problem with two different systems. One is similar to traditional benchmarking: an academic test that gauges a model’s aptitude on various subjects. The other is more like a technical interview round for a job, assessing how much real-world economic value a model might deliver.

Xbench’s methods for assessing raw intelligence currently include two components: Xbench-ScienceQA and Xbench-DeepResearch. ScienceQA isn’t a radical departure from existing postgraduate-level STEM benchmarks like GPQA and SuperGPQA. It includes questions spanning fields from biochemistry to orbital mechanics, drafted by graduate students and double-checked by professors. Scoring rewards not only the right answer but also the reasoning chain that leads to it.
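
The article doesn’t show HongShan’s scoring code, but to make the idea of rewarding both the final answer and the reasoning chain concrete, here is a minimal, hypothetical grader sketch; the question format, field names, weights, and example question are all assumptions for illustration, not part of Xbench.

```python
# Hypothetical sketch of an Xbench-style grader that gives partial credit for
# the reasoning chain. Field names, weights, and the example are assumptions.
from dataclasses import dataclass

@dataclass
class Question:
    prompt: str
    answer: str                # expected final answer
    required_steps: list[str]  # key facts the reasoning chain should mention

def grade(q: Question, model_answer: str, model_reasoning: str) -> float:
    """Return a score in [0, 1]: 60% for the final answer, 40% for the reasoning chain."""
    answer_score = 1.0 if q.answer.lower() in model_answer.lower() else 0.0
    if q.required_steps:
        hits = sum(step.lower() in model_reasoning.lower() for step in q.required_steps)
        reasoning_score = hits / len(q.required_steps)
    else:
        reasoning_score = 0.0
    return 0.6 * answer_score + 0.4 * reasoning_score

# Toy example (not an Xbench question): Mercury has the shortest year in the solar system.
q = Question(
    prompt="Which planet in the solar system has the shortest year?",
    answer="Mercury",
    required_steps=["orbital period", "88 days"],
)
print(grade(q, "Mercury, with a year of about 88 days.",
            "Mercury has the shortest orbital period, roughly 88 days."))  # 1.0
```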

DeepResearch, by contrast, focuses on a model’s ability to navigate the Chinese-language web. Ten subject-matter experts created 100 questions in music, history, finance, and literature—questions that can’t just be googled but require significant research to answer. Scoring favors breadth of sources, factual consistency, and a model’s willingness to admit when there isn’t enough data. A question in the publicly released collection is “How many Chinese cities in the three northwestern provinces border a foreign country?” (It’s 12, and only 33% of models tested got it right, if you are wondering.)

On the company’s website, the researchers said they want to add more dimensions to the test—for example, aspects like how creative a model is in its problem solving, how collaborative it is when working with other models, and how reliable it is.

The team has committed to updating the test questions once a quarter and to maintaining a half-public, half-private data set.

To assess models’ real-world readiness, the team worked with experts to develop tasks modeled on actual workflows, initially in recruitment and marketing. For example, one task asks a model to source five qualified battery engineer candidates and justify each pick. Another asks it to match advertisers with appropriate short-video creators from a pool of over 800 influencers.

The website also teases upcoming categories, including finance, legal, accounting, and design. The question sets for these categories have not yet been open-sourced.

ChatGPT o3 again ranks first in both of the current professional categories. For recruiting, Perplexity Search and Claude 3.5 Sonnet take second and third place, respectively. For marketing, Claude, Grok, and Gemini all perform well.

“It is really difficult for benchmarks to include things that are so hard to quantify,” says Zihan Zheng, the lead researcher on a new benchmark called LiveCodeBench Pro and a student at NYU. “But Xbench represents a promising start.”

Calorie restriction can help animals live longer. What about humans?

Living comes with a side effect: aging. Despite what you might hear on social media or in advertisements, there are no drugs that are known to slow or reverse human aging. But there’s some evidence to support another approach: cutting back on calories.

Caloric restriction (reducing your intake of calories) and intermittent fasting (switching between fasting and eating normally on a fixed schedule) can help with weight loss. But they may also offer protection against some health conditions. And some believe such diets might even help you live longer—a finding supported by new research out this week. (Longevity enthusiast Bryan Johnson famously claims to eat his last meal of the day at 12 p.m.)

But the full picture is not so simple. Weight loss isn’t always healthy, and neither is restricting your calorie intake, especially if your BMI is low to begin with. Some scientists warn that, based on evidence in animals, it could negatively impact wound healing, metabolism, and bone density. This week let’s take a closer look at the benefits—and risks—of caloric restriction.

Eating less can make animals live longer. This remarkable finding has been published in scientific journals for the last 100 years. It seems to work in almost every animal studied—everything from tiny nematode worms and fruit flies to mice, rats, and even monkeys. It can extend the lifespan of rodents by between 15% and 60%, depending on which study you look at.

The effect of caloric restriction is more reliable than the leading contenders for an “anti-aging” drug. Both rapamycin (an immunosuppressive drug used in organ transplants) and metformin (a diabetes drug) have been touted as potential longevity therapeutics. And both have been found to increase the lifespans of animals in some studies.

But when scientists looked at 167 published studies of those three interventions in research animals, they found that caloric restriction was the most “robust.” According to their research, published in the journal Aging Cell on Wednesday, the effect of rapamycin was somewhat comparable, but metformin was nowhere near as effective.

“That is a pity for the many people now taking off-label metformin for lifespan extension,” David Clancy, lecturer in biogerontology at Lancaster University, said in a statement. “Let’s hope it doesn’t have any or many adverse effects.” Still, for caloric restriction, so far so good.

At least it’s good news for lab animals. What about people? Also on Wednesday, another team of scientists published a separate review of research investigating the effects of caloric restriction and fasting on humans. That review assessed 99 clinical trials, involving over 6,500 adults. (As I said, caloric restriction has been an active area of research for a long time.)

Those researchers found that, across all those trials, fasting and caloric restriction did seem to aid weight loss. There were other benefits, too—but they depended on the specific approach to dieting. Fasting every other day seemed to help lower cholesterol, for example. Time-restricted eating, where you only eat within a specific period each day (à la Bryan Johnson), by comparison, seemed to increase cholesterol, the researchers write in the BMJ. Given that elevated cholesterol in the blood can lead to heart disease, it’s not great news for the time-restricted eaters.

Cutting calories could also carry broader risks. Dietary restriction seems to impair wound healing in mice and rats, for example. Caloric restriction also seems to affect bone density. In some studies, the biggest effects on lifespan extension are seen when rats are put on calorie-restricted diets early in life. But this approach can affect bone development and reduce bone density by 9% to 30%.

It’s also really hard for most people to cut their caloric intake. When researchers ran a two-year trial to measure the impact of a 25% reduction in caloric intake, they found that the most their volunteers could cut was 12%. (That study found that caloric restriction reduces markers of inflammation, which can be harmful when it’s chronic, and had only a small impact on bone density.)

Unfortunately, there’s a lot we still don’t really understand about caloric restriction. It doesn’t seem to help all animals live longer—it seems to shorten the lifespan of animals with certain genetic backgrounds. And we don’t know whether it extends the lifespan of people. It isn’t possible to conduct a randomized clinical trial in which you deprive people of food from childhood and then wait their entire lives to see when they die.

It is notoriously difficult to track or change your diet. And given the unknowns surrounding caloric restriction, it’s too soon to make sweeping recommendations, particularly given that your own personal biology will play a role in any benefits or risks you’ll experience. Roll on the next round of research.

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.