What does it mean for an algorithm to be “fair”?

Back in February, I flew to Amsterdam to report on a high-stakes experiment the city had recently conducted: a pilot program for what it called Smart Check, which was its attempt to create an effective, fair, and unbiased predictive algorithm to try to detect welfare fraud. But the city fell short of its lofty goals—and, with our partners at Lighthouse Reports and the Dutch newspaper Trouw, we tried to get to the bottom of why. You can read about it in our deep dive published last week.

For an American reporter, it’s been an interesting time to write a story on “responsible AI” in a progressive European city—just as ethical considerations in AI deployments appear to be disappearing in the United States, at least at the national level. 

For example, a few weeks before my trip, the Trump administration rescinded Biden’s executive order on AI safety and DOGE began turning to AI to decide which federal programs to cut. Then, more recently, House Republicans passed a 10-year moratorium on US states’ ability to regulate AI (though it has yet to pass the Senate). 

What all this points to is a new reality in the United States where responsible AI is no longer a priority (if it ever genuinely was). 

But this has also made me think more deeply about the stakes of deploying AI in situations that directly affect human lives, and about what success would even look like. 

When Amsterdam’s welfare department began developing the algorithm that became Smart Check, the municipality followed virtually every recommendation in the responsible-AI playbook: consulting external experts, running bias tests, implementing technical safeguards, and seeking stakeholder feedback. City officials hoped the resulting algorithm could avoid the worst types of harm that discriminatory AI systems had inflicted elsewhere over nearly a decade. 

After talking to many people involved in the project, others who stood to be affected by it, and experts who did not work on it, I find it hard not to wonder whether the city could ever have succeeded in its goals when neither “fairness” nor even “bias” has a universally agreed-upon definition. The city was treating these issues as technical ones that could be answered by reweighting numbers and figures—rather than as political and philosophical questions that society as a whole has to grapple with.

On the afternoon that I arrived in Amsterdam, I sat down with Anke van der Vliet, a longtime advocate for welfare beneficiaries who served on what’s called the Participation Council, a 15-member citizen body that represents benefits recipients and their advocates.

The city had consulted the council during Smart Check’s development, but van der Vliet was blunt in sharing the committee’s criticisms of the plans. Its members simply didn’t want the program. They had well-founded fears of discrimination and disproportionate impact, given that fraud is found in only 3% of applications.

To the city’s credit, it did respond to some of their concerns and make changes to the algorithm’s design—for instance, removing factors, such as age, whose inclusion could have had a discriminatory impact. But the city ignored the Participation Council’s main feedback: its recommendation to stop development altogether. 

Van der Vliet and other welfare advocates I met on my trip, like representatives from the Amsterdam Welfare Union, described what they see as a number of challenges faced by the city’s some 35,000 benefits recipients: the indignities of having to constantly re-prove the need for benefits, the increases in cost of living that benefits payments do not reflect, and the general feeling of distrust between recipients and the government. 

City welfare officials themselves recognize the flaws of the system, which “is held together by rubber bands and staples,” as Harry Bodaar, a senior policy advisor to the city who focuses on welfare fraud enforcement, told us. “And if you’re at the bottom of that system, you’re the first to fall through the cracks.”

So the Participation Council didn’t want Smart Check at all, even as Bodaar and others working in the department hoped that it could fix the system. It’s a classic example of a “wicked problem,” a social or cultural issue with no one clear answer and many potential consequences. 

After the story was published, I heard from Suresh Venkatasubramanian, a former tech advisor to the White House Office of Science and Technology Policy who co-wrote Biden’s AI Bill of Rights (now rescinded by Trump). “We need participation early on from communities,” he said, but he added that it also matters what officials do with the feedback—and whether there is “a willingness to reframe the intervention based on what people actually want.” 

Had the city started with a different question—what people actually want—it might have developed a different algorithm entirely. As the Dutch digital rights advocate Hans De Zwart put it to us, “We are being seduced by technological solutions for the wrong problems … why doesn’t the municipality build an algorithm that searches for people who do not apply for social assistance but are entitled to it?” 

These are the kinds of fundamental questions AI developers will need to consider, or they run the risk of repeating (or ignoring) the same mistakes over and over again.

Venkatasubramanian told me he found the story to be “affirming” in highlighting the need for “those in charge of governing these systems” to “ask hard questions … starting with whether they should be used at all.”

But he also called the story “humbling”: “Even with good intentions, and a desire to benefit from all the research on responsible AI, it’s still possible to build systems that are fundamentally flawed, for reasons that go well beyond the details of the system constructions.” 

To better understand this debate, read our full story here. And if you want more detail on how we ran our own bias tests after the city gave us unprecedented access to the Smart Check algorithm, check out the methodology over at Lighthouse. (For any Dutch speakers out there, here’s the companion story in Trouw.) Thanks to the Pulitzer Center for supporting our reporting. 

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

Puerto Rico’s power struggles

At first glance, it seems as if life teems around Carmen Suárez Vázquez’s little teal-painted house in the municipality of Guayama, on Puerto Rico’s southeastern coast.

The edge of the Aguirre State Forest, home to manatees, reptiles, as many as 184 species of birds, and at least three types of mangrove trees, is just a few feet south of the property line. A feral pig roams the neighborhood, trailed by her bumbling piglets. Bougainvillea blossoms ring brightly painted houses soaked in Caribbean sun.

Yet fine particles of black dust coat the windowpanes and the leaves of the blooming vines. Because of this, Suárez Vázquez feels she is stalked by death. The dust is in the air, so she seals her windows with plastic to reduce the time she spends wheezing—a sound that has grown as natural in this place as the whistling croak of Puerto Rico’s ubiquitous coquí frog. It’s in the taps, so a watercooler and extra bottles take up prime real estate in her kitchen. She doesn’t know exactly how the coal pollution got there, but she is certain it ended up in her youngest son, Edgardo, who died of a rare form of cancer.

And she believes she knows where it came from. Just a few minutes’ drive down the road is Puerto Rico’s only coal-fired power station, flanked by a mountain of toxic ash.

The plant, owned by the utility giant AES, has long plagued this part of Puerto Rico with air and water pollution. During Hurricane Maria in 2017, powerful winds and rain swept the unsecured pile—towering more than 12 stories high—out into the ocean and the surrounding area. Though the company had moved millions of tons of ash around Puerto Rico to be used in construction and landfill, much of it had stayed in Guayama, according to a 2018 investigation by the Centro de Periodismo Investigativo, a nonprofit investigative newsroom. Last October, AES settled with the US Environmental Protection Agency over alleged violations of groundwater rules, including failure to properly monitor wells and notify the public about significant pollution levels. 


Between 1990 and 2000—before the coal plant opened—Guayama had on average just over 103 cancer cases per year. In 2003, the year after the plant opened, the number of cancer cases in the municipality surged by 50%, to 167. In 2022, the most recent year with available data in Puerto Rico’s central cancer registry, cases hit a new high of 209—a more than 88% increase from the year AES started burning coal. A study by University of Puerto Rico researchers found cancer, heart disease, and respiratory illnesses on the rise in the area. They suggested that proximity to the coal plant may be to blame, describing the “operation, emissions, and handling of coal ash from the company” as “a case of environmental injustice.”

Seemingly everyone Suárez Vázquez knows has some kind of health problem. Nearly every house on her street has someone who’s sick, she told me. Her best friend, who grew up down the block, died of cancer a year ago, aged 55. Her mother has survived 15 heart attacks. Her own lungs are so damaged she requires a breathing machine to sleep at night, and she was forced to quit her job at a nearby pharmaceutical factory because she could no longer make it up and down the stairs without gasping for air. 

When we met in her living room one sunny March afternoon, she had just returned from two weeks in the hospital, where doctors were treating her for lung inflammation.

“In one community, we have so many cases of cancer, respiratory problems, and heart disease,” she said, her voice cracking as tears filled her eyes and she clutched a pillow on which a photo of Edgardo’s face was printed. “It’s disgraceful.”

Neighbors helped her install solar panels and batteries on the roof of her home, offsetting the cost of running her air conditioner, purifier, and breathing machine. The panels and batteries also keep those devices running even when the grid goes down—as it still does multiple times a week, nearly eight years after Hurricane Maria laid waste to Puerto Rico’s electrical infrastructure.

Carmen Suárez Vázquez clutches a pillow with portraits of her daughter and late son Edgardo. When this photograph was taken, she had just been released from the hospital, where she underwent treatment for lung inflammation.
ALEXANDER C. KAUFMAN

Suárez Vázquez had hoped that relief would be on the way by now. That the billions of dollars Congress designated for fixing the island’s infrastructure would have made solar panels ubiquitous. That AES’s coal plant, which for nearly a quarter century has supplied up to 20% of the old, faulty electrical grid’s power, would be near its end—its closure had been set for late 2027. That the Caribbean’s first virtual power plant—a decentralized network of solar panels and batteries that could be remotely tapped into and used to balance the grid like a centralized fuel-burning station—would be well on its way to establishing a new model for the troubled island. 

Puerto Rico once seemed to be on that path. In 2019, two years after Hurricane Maria sent the island into the second-longest blackout in world history, the Puerto Rican government set out to make its energy system cheaper, more resilient, and less dependent on imported fossil fuels, passing a law that set a target of 100% renewable energy by 2050. Under the Biden administration, a gas company took charge of Puerto Rico’s power plants and started importing liquefied natural gas (LNG), while the federal government funded major new solar farms and programs to install panels and batteries on rooftops across the island. 

Now, with Donald Trump back in the White House and his close ally Jenniffer González-Colón serving as Puerto Rico’s governor, America’s largest unincorporated territory is on track for a fossil-fuel resurgence. The island quietly approved a new gas power plant in 2024, and earlier this year it laid out plans for a second one. Arguing that it was the only way to avoid massive blackouts, the governor signed legislation to keep Puerto Rico’s lone coal plant open for at least another seven years and potentially more. The new law also rolls back the island’s clean-energy statute, completely eliminating its initial goals of 40% renewables by 2025 and 60% by 2040, though it preserves the goal of reaching 100% by 2050. At the start of April, González-Colón issued an executive order fast-tracking permits for new fossil-fuel plants. 

In May the new US energy secretary, Chris Wright, redirected $365 million in federal funds the Biden administration had committed to solar panels and batteries to instead pay for “practical fixes and emergency activities” to improve the grid.

It’s all part of a desperate effort to shore up Puerto Rico’s grid before what’s forecast to be a hotter-than-average summer—and highlights the thorny bramble of bureaucracy and business deals that prevents the territory’s elected government from making progress on the most basic demand from voters to restore some semblance of modern American living standards.

Puerto Ricans already pay higher electricity prices than most other American citizens, and Luma Energy, the private company put in charge of selling and distributing power from the territory’s state-owned generating stations four years ago, keeps raising rates despite ongoing outages. In April González-Colón moved to crack down on Luma, whose contract she pledged to cancel on the campaign trail, though it remains unclear how she will find a suitable replacement. 

Alberto Colón, a retired public school administrator who lives across the street from Suárez Vázquez, helped install her solar panels. Here, he poses next to his own batteries.
ALEXANDER C. KAUFMAN
Colón shows some of the soot wiped from the side of his house.
ALEXANDER C. KAUFMAN

At the same time, she’s trying to enforce a separate contract with New Fortress Energy, the New York–based natural-gas company that gained control of Puerto Rico’s state-owned power plants in a hotly criticized privatization deal in 2023—all while the company is pushing to build more gas-fired generating stations to increase the island’s demand for liquefied natural gas. Just weeks before the coal plant won its extension, New Fortress secured a deal to sell even more LNG to Puerto Rico—despite the company’s failure to win federal permits for a controversial import terminal in San Juan Bay, already in operation, that critics fear puts the most densely populated part of the island at major risk, with no real plan for what to do if something goes wrong.

Those contracts infamously offered Luma and New Fortress plenty of carrots in the form of decades-long deals and access to billions of dollars in federal reconstruction money, but few sticks the Puerto Rican government could wield against them when ratepayers’ lights went out and prices went up. In a sign of how dim the prospects for improvement look, New Fortress even opted in March to forgo nearly $1 billion in performance bonuses over the next decade in favor of getting $110 million in cash up front. Spending any money to fix the problems Puerto Rico faces, meanwhile, requires approval from an unelected fiscal control board that Congress put in charge of the territory’s finances during a government debt crisis nearly a decade ago, further reducing voters’ ability to steer their own fate. 

AES declined an interview with MIT Technology Review and did not respond to a detailed list of emailed questions. Neither New Fortress nor a spokesperson for González-Colón responded to repeated requests for comment. 

“I was born on Puerto Rico’s Emancipation Day, but I’m not liberated because that coal plant is still operating,” says Alberto Colón, 75, a retired public school administrator who lives across the street from Suárez Vázquez, referring to the holiday that celebrates the abolition of slavery in what was then a Spanish colony. “I have sinus problems, and I’m lucky. My wife has many, many health problems. It’s gotten really bad in the last few years. Even with screens in the windows, the dust gets into the house.”

El problema es la colonia

What’s happening today in Puerto Rico began long before Hurricane Maria made landfall over the territory, mangling its aging power lines like a metal Slinky in a blender. 

The question for anyone who visits this place and tries to understand why things are the way they are is: How did it get this bad? 

The complicated answer is a story about colonialism, corruption, and the challenges of rebuilding an island that was smothered by debt—a direct consequence of federal policy changes in the 1990s. Although they are citizens, Puerto Ricans don’t have votes that count in US presidential elections. They don’t typically pay US federal income taxes, but they also don’t benefit fully from federal programs, receiving capped block grants that frequently run out. Today the island has even less control over its fate than in years past and is entirely beholden to a government—the US federal government—that its 3.2 million citizens had no part in choosing.


A phrase that’s ubiquitous in graffiti on transmission poles and concrete walls in the towns around Guayama and in the artsy parts of San Juan places the blame deep in history: El problema es la colonia—the problem is the colony.

By some measures, Puerto Rico is the world’s oldest colony, officially established under the Spanish crown in 1508. The US seized the island as a trophy in 1898 following its victory in the Spanish-American War. In the grips of an expansionist quest to place itself on par with European empires, Washington pried Puerto Rico, Guam, and the Philippines away from Madrid, granting each territory the same status then afforded to the newly annexed formerly independent kingdom of Hawaii. Acolytes of President William McKinley saw themselves as accepting what the Indian-born British poet Rudyard Kipling called “the white man’s burden”—the duty to civilize his subjects.

Although direct military rule lasted just two years, Puerto Ricans had virtually no say over the civil government that came to power in 1900, in which the White House appointed the governor. That explicitly colonial arrangement ended only in 1948 with the first island-wide elections for governor. Even then, just months before the election, the US instituted a gag law making agitation for independence illegal, which remained in effect for nearly a decade. Still, the following decades were a period of relative prosperity for Puerto Rico. Money from President Franklin D. Roosevelt’s New Deal had modernized the island’s infrastructure, and rural farmers flocked to bustling cities like Ponce and San Juan for jobs in the burgeoning manufacturing sector. The pharmaceutical industry in particular became a major employer. By the start of the 21st century, Pfizer’s plant in the Puerto Rican town of Barceloneta was the largest Viagra manufacturer in the world.

But in 1996, Republicans in Congress struck a deal with President Bill Clinton to phase out federal tax breaks that had helped draw those manufacturers to Puerto Rico. As factories closed, the jobs that had built up the island’s middle class disappeared. To compensate, the government hired more workers as teachers and police officers, borrowing money on the bond market to pay their salaries and make up for the drop in local tax revenue. Puerto Rico’s territorial status meant it could not legally declare bankruptcy, and lenders assumed the island enjoyed the full backing of the US Treasury. Before long, it was known on Wall Street as the “belle of the bond markets.” By the mid-2010s, however, the bond debt had grown to $74 billion, and a $49 billion chasm had opened between the amount the government needed to pay public pensions and the money it had available. It began shedding more and more of its payroll. 

The Puerto Rico Electric Power Authority (PREPA), the government-owned utility, had racked up $9 billion in debt. Unlike US states, which can buy electricity from neighboring grids and benefit from interstate gas pipelines, Puerto Rico needed to import fuel to run its power plants. The majority of that power came from burning oil, since petroleum was easier to store for long periods of time. But oil, and diesel in particular, was expensive and pushed the utility further and further into the red.

By 2016, Puerto Rico could no longer afford to pay its bills. Since the law that gave the US jurisdiction over nonstate territories made Puerto Rico a “possession” of Congress, it fell on the federal legislature—in which the island’s elected delegate had no vote—to decide what to do. Congress passed the Puerto Rico Oversight, Management, and Economic Stability Act—shortened to PROMESA, or “promise” in Spanish. It established a fiscal control board appointed by the White House, with veto power over all spending by the island’s elected government. The board had authority over how the money the territorial government collected in taxes and utility bills could be used. It amounted to a significant loss of autonomy for the island. 

“The United States cannot continue its state of denial by failing to accept that its relationship with its citizens who reside in Puerto Rico is an egregious violation of their civil rights,” Juan R. Torruella, the late federal appeals court judge, wrote in a landmark paper in the Harvard Law Review in 2018, excoriating the legislation as yet another “colonial experiment.” “The democratic deficits inherent in this relationship cast doubt on its legitimacy, and require that it be frontally attacked and corrected ‘with all deliberate speed.’” 

Hurricane Maria struck a little over a year after PROMESA passed and, according to official figures, killed dozens. That proved to be just the start, however. As months ground on without any electricity and more people were forced to go without medicine or clean water, the death toll rose to the thousands. It would be 11 months before the grid was fully restored, and even then, outages and appliance-destroying electrical surges were distressingly common.

The spotty service wasn’t the only defining characteristic of the new era after Puerto Rico’s great blackout. The fiscal control board—which critics pejoratively referred to as “la junta,” using a term typically reserved for Latin America’s most notorious military dictatorships—saw privatization as the best path to solvency for the troubled state utility.

In 2020, the board approved a deal for Luma Energy—a joint venture between Quanta Services, a Texas-based energy infrastructure company, and its Canadian rival ATCO—to take over the distribution and sale of electricity in Puerto Rico. The contract was awarded through a process that clean-energy and anticorruption advocates said lacked transparency and delivered an agreement with few penalties for poor service. It was almost immediately mired in controversy.

A deadly diagnosis

Until that point, life was looking up for Suárez Vázquez. Her family had emerged from the aftermath of Maria without any loss of life. In 2019, her children were out of the house, and her youngest son, Edgardo, was studying at an aviation school in Ceiba, roughly two hours northeast of Guayama. He excelled. During regular health checks at the school, Edgardo was deemed fit. Gift bags started showing up at the house from American Airlines and JetBlue.

“They were courting him,” Suárez Vázquez says. “He was going to graduate with a great job.”

That summer of 2019, however, Edgardo began complaining of abdominal pain. He ignored it for a few months but promised his mother he would go to the doctor to get it checked out. On September 23, she got a call from her godson, a radiologist at the hospital. Not wanting to burden his anxious mother, Edgardo had gone to the hospital alone at 3 a.m., and tests had revealed three tumors entwined in his intestines.

So began a two-year battle with a form of cancer so rare that doctors said Edgardo’s case was one of only a few hundred worldwide. He gave up on flight school and took a job at the pharmaceutical factory with his parents. Coworkers raised money to help the family afford flights and stays to see specialists in other parts of Puerto Rico and then in Florida. Edgardo suspected the cause was something in the water. Doctors gave him inconclusive answers; they just wanted to study him to understand the unusual tumors. He got water-testing kits and discovered that the taps in their home were laden with high amounts of heavy metals typically found in coal ash. 

Ewing’s sarcoma tumors occur at a rate of about one in one million cancer diagnoses in the US each year. What Edgardo had—extraskeletal Ewing’s sarcoma, in which tumors form in soft tissue rather than bone—is even rarer. 

As a result, there’s scant research on what causes that kind of cancer. While the National Institutes of Health have found “no well-established association between Ewing sarcoma and environmental risk factors,” researchers cautioned in a 2024 paper that findings have been limited to “small, retrospective, case-control studies.”

Dependable sun

The push to give control over the territory’s power system to private companies with fossil-fuel interests ignored the reality that for many Puerto Ricans, rooftop solar panels and batteries were among the most dependable options for generating power after the hurricane. Solar power was relatively affordable, especially as Luma jacked up what were already some of the highest electricity rates in the US. It also didn’t lead to sudden surges that fried refrigerators and microwaves. Its output was as predictable as Caribbean sunshine.

But rooftop panels could generate only so much electricity for the island’s residents. Last year, when the Biden administration’s Department of Energy conducted its PR100 study into how Puerto Rico could meet its legally mandated goals of 100% renewable power by the middle of the century, the research showed that the bulk of the work would need to be done by big, utility-scale solar farms. 

Nearly 160,000 households—roughly 13% of the population—have solar panels, and 135,000 of them also have batteries. Of those, just 8,500 have enrolled in a pilot project aimed at providing backup power to the grid.
GDA VIA AP IMAGES

With its flat lands once used to grow sugarcane, the southeastern part of Puerto Rico proved perfect for devoting acres to solar production. Several enormous solar farms with enough panels to generate hundreds of megawatts of electricity were planned for the area, including one owned by AES. But early efforts to get the projects off the ground stumbled once the fiscal oversight board got involved. The solar farms that Puerto Rico’s energy regulators approved were ultimately rejected by federal overseers, who argued that projects in areas near Guayama could be built even more cheaply.

In a September 2023 letter to PREPA vetoing the projects, the oversight board’s lawyer chastised the Puerto Rico Energy Bureau, a government regulatory body whose five commissioners are appointed by the governor, for allowing the solar developers to update contracts to account for surging costs from inflation that year. The renegotiations, the lawyer wrote, created “a precedent that bids will be renegotiated, distorting market pricing and creating litigation risk.” In another letter to PREPA, in January 2024, the board agreed to allow projects generating up to 150 megawatts of power to move forward, acknowledging “the importance of developing renewable energy projects.”


But that was hardly enough power to provide what the island needed, and critics said the agreement was guilty of the very thing the board accused Puerto Rican regulators of doing: discrediting the permitting process in the eyes of investors.

The Puerto Rico Energy Bureau “negotiated down to the bone to very inexpensive prices” on a handful of projects, says Javier Rúa-Jovet, the chief policy officer at the Solar & Energy Storage Association of Puerto Rico. “Then the fiscal board—in my opinion arbitrarily—canceled 450 megawatts of projects, saying they were expensive. That action by the fiscal board was a major factor in predetermining the failure of all future large-scale procurements,” he says.

When the independence of the Puerto Rican regulator responsible for issuing and judging the requests for proposals is overruled, project developers no longer believe that anything coming from the government’s local experts will be final. “There’s no trust,” says Rúa-Jovet. “That creates risk. Risk means more money. Things get more expensive. It’s disappointing, but that’s why we weren’t able to build large things.”

That isn’t to say the board alone bears all responsibility. An investigation released in January by the Energy Bureau blamed PREPA and Luma for causing “deep structural inefficiencies” that “ultimately delayed progress” toward Puerto Rico’s renewables goals.

The finding only further reinforced the idea that the most trustworthy path to steady power would be one Puerto Ricans built themselves. At the residential scale, Rúa-Jovet says, solar and batteries continue to be popular. Nearly 160,000 households—roughly 13% of the population—have solar panels, and 135,000 of them also have batteries. Of those, just 8,500 households are enrolled in the pilot virtual power plant, a network of small-scale energy resources aggregated and coordinated with grid operations. During blackouts, he says, Luma can tap into the network of panels and batteries to back up the grid. The total generation capacity on a sunny day is nearly 600 megawatts—eclipsing the 500 megawatts that the coal plant generates. But the project is just at the pilot stage. 

The share of renewables on Puerto Rico’s power grid hit 7% last year, up one percentage point from 2023. That increase was driven primarily by rooftop solar. Despite the growth and dependability of solar, in December Puerto Rican regulators approved New Fortress’s request to build an even bigger gas power station in San Juan, which is currently scheduled to come online in 2028.

“There’s been a strong grassroots push for a decentralized grid,” says Cathy Kunkel, a consultant who researches Puerto Rico for the Institute for Energy Economics and Financial Analysis and lived in San Juan until recently. She’d be more interested, she adds, if the proposals focused on “smaller-scale natural-gas plants” that could be used to back up renewables, but “what they’re talking about doing instead are these giant gas plants in the San Juan metro area.” She says, “That’s just not going to provide the kind of household level of resilience that people are demanding.”

What’s more, New Fortress has taken a somewhat unusual approach to storing its natural gas. The company has built a makeshift import terminal next to a power plant in a corner of San Juan Bay by semipermanently mooring an LNG tanker, a vessel specifically designed for transport. Since Puerto Rico has no connections to an interstate pipeline network, New Fortress argued that the project didn’t require federal permits under the law that governs most natural-gas facilities in the US. As a result, the import terminal did not get federal approval for a safety plan in case of an accident like the ones that recently rocked Texas and Louisiana.

Skipping the permitting process also meant skirting public hearings, spurring outrage from Catholic clergy such as Lissette Avilés-Ríos, an activist nun who lives in the neighborhood next to the import terminal and who led protests to halt gas shipments. “Imagine what a hurricane like Maria could do to a natural-gas station like that,” she told me last summer, standing on the shoreline in front of her parish and peering out on San Juan Bay. “The pollution impact alone would be horrible.”

The shipments ultimately did stop for a few months—but not because of any regulatory enforcement. In fact, New Fortress itself abruptly cut off shipments, in violation of its contract, when the price of natural gas skyrocketed globally in late 2021. When other buyers overseas said they’d pay higher prices for LNG than the contract in Puerto Rico guaranteed, New Fortress announced with little notice that it would cease deliveries for six months while upgrading its terminal.


The missed shipments exemplified the challenges in enforcing Puerto Rico’s contracts with the private companies that control its energy system and highlighted what Gretchen Sierra-Zorita, former president Joe Biden’s senior advisor on Puerto Rico and the territories, called the “troubling” fact that the same company operating the power plants is selling itself the fuel on which they run—disincentivizing any transition to alternatives.

“Territories want to diversify their energy sources and maximize the use of abundant solar energy,” she told me. “The Trump administration’s emphasis on domestic production of fossil fuels and defunding climate and clean-­energy initiatives will not provide the territories with affordable energy options they need to grow their economies, increase their self-sufficiency, and take care of their people.”

Puerto Rico’s other energy prospects are limited. The Energy Department study determined that offshore wind would be too expensive. Nuclear is also unlikely; the small modular reactors that would be the most realistic way to deliver nuclear energy here are still years away from commercialization and would likely cost too much for PREPA to purchase. Moreover, nuclear power would almost certainly face fierce opposition from residents in a disaster-prone place that has already seen how willing the federal government is to tolerate high casualty rates in a catastrophe. That leaves little option, the federal researchers concluded, beyond the type of utility-scale solar projects the fiscal oversight board has made impossible to build.

“Puerto Rico has been unsuccessful in building large-scale solar and large-scale batteries that could have substituted [for] the coal plant’s generation. Without that new, clean generation, you just can’t turn off the coal plant without causing a perennial blackout,” Rúa-Jovet says. “That’s just a physical fact.”

The lowest-cost energy, depending on who’s paying the price

The AES coal plant does produce some of the least expensive large-scale electricity currently available in Puerto Rico, says Cate Long, the founder of Puerto Rico Clearinghouse, a financial research service targeted at the island’s bondholders. “From a bondholder perspective, [it’s] the lowest cost,” she explains. “From the client and user perspective, it’s the lowest cost. It’s always been the cheapest form of energy down there.” 

The issue is that the price never factors in the cost to the health of people near the plant. 

“The government justifies extending coal plants because they say it’s the cheapest form of energy,” says Aldwin José Colón, 51, who lives across the street from Suárez Vázquez. He says he’s had cancer twice already.

On an island where nearly half the population relies on health-care programs paid for by frequently depleted Medicaid block grants, he says, “the government ends up paying the expense of people’s asthma and heart attacks, and the people just suffer.” 

On December 2, 2021, at 9:15 p.m., Edgardo died in the hospital. He was 25 years old. “So many people have died,” Suárez Vázquez told me, choking back tears. “They contaminated the water. The soil. The fish. The coast is black. My son’s insides were black. This never ends.” 

Customers sit inside a restaurant lit by battery-powered lanterns.
AP PHOTO/ALEJANDRO GRANADILLO

Nor do the blackouts. At 12:38 p.m. on April 16, as this story was being edited, all of Puerto Rico’s power plants went down in an island-wide outage triggered by a transmission line failure. As officials warned that the blackout would persist well into the next day, Casa Pueblo, a community group that advocates for rooftop solar, posted an invitation on X to charge phones and go online under its outdoor solar array near its headquarters in a town in the western part of Puerto Rico’s central mountain range.

“Come to the Solar Forest and the Energy Independence Plaza in Adjuntas,” the group beckoned, “where we have electricity and internet.” 

Alexander C. Kaufman is a reporter who has covered energy, climate change, pollution, business, and geopolitics for more than a decade.

AI copyright anxiety will hold back creativity

Last fall, while attending a board meeting in Amsterdam, I had a few free hours and made an impromptu visit to the Van Gogh Museum. I often steal time for visits like this—a perk of global business travel for which I am grateful. Wandering the galleries, I found myself before The Courtesan (after Eisen), painted in 1887. Van Gogh had based it on a Japanese woodblock print by Keisai Eisen, which he encountered in the magazine Paris Illustré. He explicitly copied and reinterpreted Eisen’s composition, adding his own vivid border of frogs, cranes, and bamboo.

As I stood there, I imagined the painting as the product of a generative AI model prompted with the query How would van Gogh reinterpret a Japanese woodblock in the style of Keisai Eisen? And I wondered: If van Gogh had used such an AI tool to stimulate his imagination, would Eisen—or his heirs—have had a strong legal claim?  If van Gogh were working today, that might be the case. Two years ago, the US Supreme Court found that Andy Warhol had infringed upon the photographer Lynn Goldsmith’s copyright by using her photo of the musician Prince for a series of silkscreens. The court said the works were not sufficiently transformative to constitute fair use—a provision in the law that allows for others to make limited use of copyrighted material.

A few months later, at the Museum of Fine Arts in Boston, I visited a Salvador Dalí exhibition. I had always thought of Dalí as a true original genius who conjured surreal visions out of thin air. But the show included several Dutch engravings, including Pieter Bruegel the Elder’s Seven Deadly Sins (1558), that clearly influenced Dalí’s 8 Mortal Sins Suite (1966). The stylistic differences are significant, but the lineage is undeniable. Dalí himself cited Bruegel as a surrealist forerunner, someone who tapped into the same dream logic and bizarre forms that Dalí celebrated. Suddenly, I was seeing Dalí not just as an original but also as a reinterpreter. Should Bruegel have been flattered that Dalí built on his work—or should he have sued him for making it so “grotesque”?

During a later visit to a Picasso exhibit in Milan, I came across a famous informational diagram by the art historian Alfred Barr, mapping how modernist movements like Cubism evolved from earlier artistic traditions. Picasso is often held up as one of modern art’s most original and influential figures, but Barr’s chart made plain the many artists he drew from—Goya, El Greco, Cézanne, African sculptors. This made me wonder: If a generative AI model had been fed all those inputs, might it have produced Cubism? Could it have generated the next great artistic “breakthrough”?

These experiences—spread across three cities and centered on three iconic artists—coalesced into a broader reflection I’d already begun. I had recently spoken with Daniel Ek, the founder of Spotify, about how restrictive copyright laws are in music. Song arrangements and lyrics enjoy longer protection than many pharmaceutical patents. Ek sits at the leading edge of this debate, and he observed that generative AI already produces an astonishing range of music. Some of it is good. Much of it is terrible. But nearly all of it borrows from the patterns and structures of existing work.

Musicians already routinely sue one another for borrowing from previous works. How will the law adapt to a form of artistry that’s driven by prompts and precedent, built entirely on a corpus of existing material?

And the questions don’t stop there. Who, exactly, owns the outputs of a generative model? The user who crafted the prompt? The developer who built the model? The artists whose works were ingested to train it? Will the social forces that shape artistic standing—critics, curators, tastemakers—still hold sway? Or will a new, AI-era hierarchy emerge? If every artist has always borrowed from others, is AI’s generative recombination really so different? And in such a litigious culture, how long can copyright law hold its current form? The US Copyright Office has begun to tackle the thorny issues of ownership and says that generative outputs can be copyrighted if they are sufficiently human-authored. But it is playing catch-up in a rapidly evolving field. 

Different industries are responding in different ways. The Academy of Motion Picture Arts and Sciences recently announced that filmmakers’ use of generative AI would not disqualify them from Oscar contention—and that they wouldn’t be required to disclose when they’d used the technology. Several acclaimed films, including Oscar winner The Brutalist, incorporated AI into their production processes.

The music world, meanwhile, continues to wrestle with its definitions of originality. Consider the recent lawsuit against Ed Sheeran. In 2016, he was sued by the heirs of Ed Townsend, co-writer of Marvin Gaye’s “Let’s Get It On,” who claimed that Sheeran’s “Thinking Out Loud” copied the earlier song’s melody, harmony, and rhythm. When the case finally went to trial in 2023, Sheeran brought a guitar to the stand. He played the disputed four-chord progression—I–iii–IV–V—and wove together a mash-up of songs built on the same foundation. The point was clear: These are the elemental units of songwriting. After a brief deliberation, the jury found Sheeran not liable.

Reflecting after the trial, Sheeran said: “These chords are common building blocks … No one owns them or the way they’re played, in the same way no one owns the colour blue.”

Exactly. Whether it’s expressed with a guitar, a paintbrush, or a generative algorithm, creativity has always been built on what came before.

I don’t consider this essay to be great art. But I should be transparent: I relied extensively on ChatGPT while drafting it. I began with a rough outline, notes typed on my phone in museum galleries, and transcripts from conversations with colleagues. I uploaded older writing samples to give the model a sense of my voice. Then I used the tool to shape a draft, which I revised repeatedly—by hand and with help from an editor—over several weeks.

There may still be phrases or sentences in here that came directly from the model. But I’ve iterated so much that I no longer know which ones. Nor, I suspect, could any reader—or any AI detector. (In fact, Grammarly found that 0% of this text appeared to be AI-generated.)

Many people today remain uneasy about using these tools. They worry it’s cheating, or feel embarrassed to admit that they’ve sought such help. I’ve moved past that. I assume all my students at Harvard Business School are using AI. I assume most academic research begins with literature scanned and synthesized by these models. And I assume that many of the essays I now read in leading publications were shaped, at least in part, by generative tools.

Why? Because we are professionals. And professionals adopt efficiency tools early. Generative AI joins a long lineage that includes the word processor, the search engine, and editing tools like Grammarly. The question is no longer Who’s using AI? but Why wouldn’t you?

I recognize the counterargument, notably put forward by Nicholas Thompson, CEO of the Atlantic: that content produced with AI assistance should not be eligible for copyright protection, because it blurs the boundaries of authorship. I understand the instinct. AI recombines vast corpora of preexisting work, and the results can feel derivative or machine-like.

But when I reflect on the history of creativity—van Gogh reworking Eisen, Dalí channeling Bruegel, Sheeran defending common musical DNA—I’m reminded that recombination has always been central to creation. The economist Joseph Schumpeter famously wrote that innovation is less about invention than “the novel reassembly of existing ideas.” If we tried to trace and assign ownership to every prior influence, we’d grind creativity to a halt.

From the outset, I knew the tools had transformative potential. What I underestimated was how quickly they would become ubiquitous across industries and in my own daily work.

Our copyright system has never required total originality. It demands meaningful human input. That standard should apply in the age of AI as well. When people thoughtfully engage with these models—choosing prompts, curating inputs, shaping the results—they are creating. The medium has changed, but the impulse remains the same: to build something new from the materials we inherit.


Nitin Nohria is the George F. Baker Jr. Professor at Harvard Business School and its former dean. He is also the chair of Thrive Capital, an early investor in several prominent AI firms, including OpenAI.

MIT Technology Review’s editorial guidelines state that generative AI should not be used to draft articles unless the article is meant to illustrate the capabilities of such tools and its use is clearly disclosed. 

The Download: power in Puerto Rico, and the pitfalls of AI agents

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Puerto Rico’s power struggles

On the southeastern coast of Puerto Rico lies the country’s only coal-fired power station, flanked by a mountain of toxic ash. The plant, owned by the utility giant AES, has long plagued this part of Puerto Rico with air and water pollution.

Before the coal plant opened, Guayama had on average just over 103 cancer cases per year. In 2003, the year after the plant opened, the number of cancer cases in the municipality surged by 50%, to 167. In 2022, the most recent year with available data, cases hit a new high of 209. The question is: How did it get this bad? Read the full story.

—Alexander C. Kaufman

This story is from the next print edition of MIT Technology Review, which explores power—who has it, and who wants it. It’s set to go live on Wednesday, June 25, so subscribe & save 25% to read it and get a copy of the issue when it lands!

When AIs bargain, a less advanced agent could cost you

The race to build ever larger AI models is slowing down. The industry’s focus is shifting toward agents—systems that can act autonomously, make decisions, and negotiate on users’ behalf.

But what would happen if both a customer and a seller were using an AI agent? A recent study put agent-to-agent negotiations to the test and found that stronger agents can exploit weaker ones to get a better deal. It’s a bit like entering court with a seasoned attorney versus a rookie: You’re technically playing the same game, but the odds are skewed from the start. Read the full story.

—Caiwei Chen

AI copyright anxiety will hold back creativity

—Nitin Nohria is the George F. Baker Jr. Professor at Harvard Business School and its former dean. 

Last fall, during a visit to the Van Gogh Museum in Amsterdam, I found myself imagining the painting in front of me as the product of a generative AI model prompted with the query How would van Gogh reinterpret a Japanese woodblock in the style of Keisai Eisen?

And I wondered: If van Gogh had used such an AI tool to stimulate his imagination, would Eisen—or his heirs—have had a strong legal claim?

And the questions don’t stop there. Who, exactly, owns the outputs of a generative model? The user who crafted the prompt? The developer who built the model? The artists whose works were ingested to train it?

The US Copyright Office has begun to tackle the thorny issues of ownership and says that generative outputs can be copyrighted if they are sufficiently human-authored. But it is playing catch-up in a rapidly evolving field. Read the full story.

What does it mean for an algorithm to be “fair”?

—Eileen Guo

Back in February, I flew to Amsterdam to report on a high-stakes experiment the city had recently conducted. Officials had tried to create an effective, fair, and unbiased predictive algorithm to try to detect welfare fraud. But the city fell short of its lofty goals—and, with our partners at Lighthouse Reports and the Dutch newspaper Trouw, we tried to get to the bottom of why.

For an American reporter, it’s been an interesting time to write a story on “responsible AI” in a progressive European city—just as ethical considerations in AI deployments appear to be disappearing in the United States, at least at the national level. 

It has also made me think more deeply about the stakes of deploying AI in situations that directly affect human lives, and about what success would even look like. Read the full story.

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 OpenAI is going to build tools for the US Defense Department
For a chunky $200 million. (CNBC)
+ Spotify founder Daniel Ek is sinking €600 million into a German drone firm. (FT $)
+ OpenAI’s new defense contract completes its military pivot. (MIT Technology Review)

2 Trump has fired the US nuclear regulator
The administration wants to speed up reactor approvals at any cost. (WP $)
+ Can nuclear power really fuel the rise of AI? (MIT Technology Review)

3 Complaints about tariff evasion in the US are rising sharply
Tipsters are sounding the alarm about alleged duty dodging. (Wired $)
+ Sweeping tariffs could threaten the US manufacturing rebound. (MIT Technology Review)

4 Trump’s smartphone plans may be a little too ambitious
It doesn’t seem all that likely it could be made in the USA by August for just $499. (WSJ $)
+ Conflicts of interest, anyone? (Bloomberg $)
+ Even ordering the handset is a massive ordeal. (404 Media $)

5 AI won’t just replace jobs. It’ll create new ones too
Some will be better than others. (NYT $)
+ People are worried that AI will take everyone’s jobs. We’ve been here before. (MIT Technology Review)

6 Ads are coming to WhatsApp
It was only ever a matter of time. (The Information $)
+ It’s all part of Meta’s grand plan. (BBC)
+ Thankfully, unless you’re an Updates obsessive, you may never see them. (Ars Technica)

7 AI bots are hammering libraries and museums
Their servers are swamped, and even knocked offline in some cases. (404 Media)

8 Venetians aren’t happy about Jeff Bezos’ upcoming nuptials
They’re protesting around the town square and along the Rialto Bridge. (Insider $)
+ ‘No space for Bezos’ is a pretty snappy slogan. (NBC News)

9 Tinder is resurrecting its Double Date feature
In an effort to let Gen Z daters bring a friend along for emotional support. (Insider $)
+ Double the rejection? No thanks. (TechCrunch)

10 Threads is experimenting with a spoiler feature
Well, that could be one reason to start using it. (The Verge)

Quote of the day

“No one who has been paying attention could miss that President Trump considers the presidency a vehicle to grow his family’s wealth. Maybe this example will help more come to see this undeniable truth.”

—Lawrence Lessig, a law professor at Harvard, tells Reuters why people should be concerned about Trump’s plans to launch a smartphone.

One more thing



Is this the end of animal testing?

Animal studies are notoriously bad at identifying human treatments. Around 95% of the drugs developed through animal research fail in people, but until recently there was no other option.

Now organs on chips, also known as microphysiological systems, may offer a truly viable alternative. It’s only early days, but if they work as hoped, they could solve one of the biggest problems in medicine today. Read the full story.

—Harriet Brown

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Brace yourself: there are still plenty of great video games coming later this year.
+ Tradwives are out, radwives are in.
+ I fully endorse this handy guide to sleeping in airports.
+ How artist Andy Vella came up with the beautiful artwork for The Cure’s latest album.

New Books on Classic Brands, Growth, Change

This roundup of compelling new business titles includes inspirational lessons from Sonic diners and Rolex, as well as perspectives on mentorship, data, hiring, transformation, startups, and more.

The Making of a Status Symbol: A Business History of Rolex


by Pierre-Yves Donzé

The author, a professor of business, explores the power of branding and the evolution of consumer culture through the engaging, well-researched story of how a small Swiss watch company became “a global emblem of success, wealth, and prestige” through strategic partnerships and a “genius for storytelling.”

Wealthy and Well-Known: Build Your Personal Brand and Turn Reputation into Revenue


by Rory Vaden and AJ Vaden

A renowned duo of brand strategists and entrepreneurs share their playbook for cutting through the glut of “influencers” and information overload to stand out and make money as a unique expert and compelling thought leader.

The Little Book of Data


by Justin Evans

Evans, a tech innovator and acclaimed novelist, aims to demystify data and empower readers by illustrating core principles in entertaining stories of how experts have used data to solve problems. From adtech to epidemiology, data is key to improving business and society, he says.

Fired Up: How to Turn Your Spark into a Flame and Come Alive at Any Age


by Shannon Watts

Watts is the founder of Moms Demand Action, the largest grassroots organization against gun violence in the United States. Her new book on breaking free of limiting beliefs and releasing inner potential has garnered accolades from leaders such as authors Elizabeth Gilbert and Tara Mohr, as well as Kennedy scion Maria Shriver.

The Multicultural Mindset: Driving Business Growth in a Borderless Era


by Joycelyn David

David, CEO of AV Communications, a top Canadian marketing agency, and a “most influential Filipina” in 2022, provides case studies and practical methods for developing the cultural intelligence that is an essential competitive advantage in the global marketplace.

Give First: The Power of Mentorship


by Brad Feld

This slim, easy-to-read book packs a wealth of insight on business and life. Feld founded or co-founded several businesses and venture funds, as well as Techstars, a startup accelerator that matches founders with mentors. He explains how to apply the guiding principles set forth in the “Techstars Mentorship Manifesto” and shows how prioritizing generosity has contributed to his phenomenal success.

After the Idea: What It Really Takes to Create and Scale a Startup


by Julia Austin

What’s next after starting a company, joining a startup, having a great idea, or building a prototype? How do you manage and grow your new venture? Austin offers strategies for meeting startup challenges based on her experience at firms such as Akamai, DigitalOcean, and VMware, as well as advising numerous others.

The Growth Dilemma


by Annie Wilson and Ryan Hamilton

Everyone wants brand growth, but targeting wider market segments often means conflict among customers. How do you create a growth strategy that successfully engages new customers without making loyal ones feel left behind? The authors use real-world cases from industries such as skateboarding, tech, and fashion to illustrate practical ways of targeting the right markets and managing multiple customer segments.

Bricks and Clicks: How We Drove Sonic into the Digital Age


by Clifford Hudson and Craig Miller

The authors revitalized Sonic, a nostalgic restaurant chain, for the twenty-first century. In this business memoir, they share lessons and insights, offering a roadmap for transforming traditional brick-and-mortar businesses into resilient digital enterprises.

The Hiring Handbook


by Kasey Harboe Guentert and Mollie Berke

Hiring the right people to build high-performing teams is a key component of success for any business. Drawing on their experience in talent management at leading global companies, the authors provide practical guidance for managers and owners in all aspects of the hiring process, from writing compelling job ads to effective interviewing and evaluating applicants.

See What AI Sees: AI Mode Killed the Old SEO Playbook — Here’s the New One

This post was sponsored by MarketBrew. The opinions expressed in this article are the sponsor’s own.

Is Google using AI to censor thousands of independent websites?

Wondering why your traffic has suddenly dropped, even though you’re doing SEO properly?

From letters to the FTC describing a systematic dismantling of the open web by Google, to SEO professionals who may not realize their strategies no longer have an impact, these changes amount to a re-architecting of the web’s entire incentive structure.

It’s time to adapt.

While some were warning about AI passage retrieval and vector scoring, the industry largely stuck to legacy thinking. SEOs continued to focus on E-E-A-T, backlinks, and content refresh cycles, assuming that if they simply improved quality, recovery would come.

But the rules had changed.

Google’s Silent Pivot: From Keywords to Embedding Vectors

In late 2023 and early 2024, Google began rolling out what it now refers to as AI Mode.

What Is Google’s AI Mode?

AI Mode breaks content into passages, embeds those passages into a multi-dimensional vector space, and compares them directly to queries using cosine similarity.

In this new model, relevance is determined geometrically rather than lexically. Instead of ranking entire pages, Google evaluates individual passages. The most relevant passages are then surfaced in a ChatGPT-like interface, often without any need for users to click through to the source.

Beneath this visible change is a deeper shift: content scoring has become embedding-first.

What Are Embedding Vectors?

Embedding vectors are mathematical representations of meaning. When Google processes a passage of content, it converts that passage into a vector, a list of numbers that captures the semantic context of the text. These vectors exist in a multi-dimensional space where the distance between vectors reflects how similar the meanings are.

Instead of relying on exact keywords or matching phrases, Google compares the embedding vector of a search query to the embedding vectors of individual passages. This allows it to identify relevance based on deeper context, implied meaning, and overall intent.

Traditional SEO practices like keyword targeting and topical coverage do not carry the same weight in this system. A passage does not need to use specific words to be considered relevant. What matters is whether its vector lands close to the query vector in this semantic space.

How Are Embedding Vectors Different From Keywords?

Keywords focus on exact matches. Embedding vectors focus on meaning.

Traditional SEO relied on placing target terms throughout a page. But Google’s AI Mode now compares the semantic meaning of a query and a passage using embedding vectors. A passage can rank well even if it doesn’t use the same words, as long as its meaning aligns closely with the query.

This shift has made many SEO strategies outdated. Pages may be well-written and keyword-rich, yet still underperform if their embedded meaning doesn’t match search intent.

What SEO Got Wrong & What Comes Next

The story isn’t just about Google changing the game; it’s also about how the SEO industry failed to notice that the rules had already shifted.

Don’t: Misread the Signals

As rankings dropped, many teams assumed they’d been hit by a quality update or core algorithm tweak. They doubled down on familiar tactics: improving E-E-A-T signals, updating titles, and refreshing content. They pruned thin pages, boosted internal links, and ran audits.

But these efforts were based on outdated models. They treated the symptom (visibility loss), not the cause: semantic drift.

Semantic drift happens when your content’s vector no longer aligns with the evolving vector of search intent. It’s invisible to traditional SEO tools because it occurs in latent space, not your HTML.

No amount of backlinks or content tweaks can fix that.

This wasn’t just platform abuse. It was also a strategic oversight.

Many SEO teams believed that doing what Google said (improving helpfulness, pruning content, and writing for humans) would be enough.

That promise collapsed under AI scrutiny.

But we’re not powerless.

Don’t: Fall Into The Trap of Compliance

Google told the industry to “focus on helpful content,” and SEOs listened, through a lexical lens. They optimized for tone, readability, and FAQs.

But “helpfulness” was being determined mathematically by whether your vectors aligned with the AI’s interpretation of the query.

Thousands of reworked sites still dropped in visibility. Why? Because while polishing copy, they never asked: Does this content geometrically align with search intent?

Do: Optimize For Data, Not Keywords

The new SEO playbook begins with a simple truth: you are optimizing for math, not words.

The New SEO Playbook: How To Optimize For AI-Powered SERPs

Here’s what we now know:

  1. AI Mode is real and measurable.
    You can calculate embedding similarity.
    You can test passages against queries.
    You can visualize how Google ranks.
  2. Content must align semantically, not just topically.
    Two pages about “best hiking trails” may be lexically similar, but if one focuses on family hikes and the other on extreme terrain, their vectors diverge.
  3. Authority still matters, but only after similarity.
    The AI Mode fan-out selects relevant passages first. Authority reranking comes later.
    If you don’t pass the similarity threshold, your authority won’t matter.
  4. Passage-level optimization is the new frontier.
    Optimizing entire pages isn’t enough. Each chunk of content must pull semantic weight.
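Point 3 — similarity gate first, authority second — can be sketched as a two-stage filter. The `similarity` and `authority` fields and the threshold value below are hypothetical illustrations, not Google’s actual scoring:

```python
def rerank(passages, threshold=0.75):
    """Fan-out selection sketch: a similarity gate, then authority reranking.

    `passages` is a list of dicts with hypothetical 'similarity' and
    'authority' scores in [0, 1]; the 0.75 threshold is made up.
    """
    # Stage 1: only passages that clear the similarity threshold survive.
    relevant = [p for p in passages if p["similarity"] >= threshold]
    # Stage 2: authority only reorders the survivors.
    return sorted(relevant, key=lambda p: p["authority"], reverse=True)

candidates = [
    {"id": "A", "similarity": 0.91, "authority": 0.40},
    {"id": "B", "similarity": 0.88, "authority": 0.95},
    {"id": "C", "similarity": 0.50, "authority": 0.99},  # authoritative but off-topic
]
```

Here passage C, despite the highest authority, is dropped at the similarity gate, while B outranks A among the survivors — which is the point: authority never gets a vote on passages that fail the relevance check.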

How Do I Track Google AI Mode Data To Improve SERP Visibility?

It depends on your goals. For success in the SERPs, you need tools that not only show you visibility data but also show you how to get there.

Profound was one of the first tools to measure whether content appeared inside large language models, essentially offering a visibility check for LLM inclusion. It gave SEOs early signals that AI systems were beginning to treat search results differently, sometimes surfacing pages that never ranked traditionally. Profound made it clear: LLMs were not relying on the same scoring systems that SEOs had spent decades trying to influence.

But Profound stopped short of offering explanations. It told you if your content was chosen, but not why. It didn’t simulate the algorithmic behavior of AI Mode or reveal what changes would lead to better inclusion.

That’s where simulation-based platforms came in.

Market Brew approached the challenge differently. Instead of auditing what was visible inside an AI system, it reconstructed the inner logic of those systems, building search engine models that mirrored Google’s evolution toward embeddings and vector-based scoring. These platforms didn’t just observe the effects of AI Mode; they recreated its mechanisms.

As early as 2023, Market Brew had already implemented:

  • Passage segmentation that divides page content into consistent ~700-character blocks.
  • Embedding generation using Sentence-BERT to capture the semantic fingerprint of each passage.
  • Cosine similarity calculations to simulate how queries match specific blocks of content, not just the page as a whole.
  • Thematic clustering algorithms, like Top Cluster Similarity, to determine which groupings of passages best aligned with a search intent.
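The first three steps above can be sketched end to end. This is a toy illustration: the hash-based `embed` function below stands in for Sentence-BERT, and all names and inputs are mine, not Market Brew’s; only the ~700-character chunking and cosine scoring mirror the listed steps.

```python
import hashlib
import math

def chunk(text, size=700):
    """Split page content into consistent ~700-character blocks."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def embed(text, dims=64):
    """Toy bag-of-words embedding: hash each word into a fixed-size vector.
    A crude stand-in for Sentence-BERT, which emits learned dense vectors."""
    vec = [0.0] * dims
    for word in text.lower().split():
        idx = int(hashlib.md5(word.encode()).hexdigest(), 16) % dims
        vec[idx] += 1.0
    return vec

def cos_sim(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a)) or 1.0
    nb = math.sqrt(sum(x * x for x in b)) or 1.0
    return dot / (na * nb)

page = "Easy family hikes for kids with gentle slopes. " * 55  # ~2,600 chars
query = "easy hikes for kids"
q_vec = embed(query)

# Score every block against the query, the way passage-level retrieval does.
scores = [(i, cos_sim(q_vec, embed(block))) for i, block in enumerate(chunk(page))]
best = max(scores, key=lambda s: s[1])
print(f"{len(scores)} blocks; best match is block {best[0]} at {best[1]:.2f}")
```

The key point: the query is matched against individual blocks, not the page as a whole, so one weak passage can sit beside one strong one on the same URL.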

🔍 Market Brew Tutorial: Mastering the Top Cluster Similarity Ranking Factor | First Principles SEO

This meant users could test a set of prompts against their content and watch the algorithm think, block by block, similarity score by score.

Where Profound offered visibility, Market Brew offered agency.

Instead of asking “Did I show up in an AI overview?”, simulation tools helped SEOs ask, “Why didn’t I?” and, more importantly, “What can I change to improve my chances?”

By visualizing AI Mode behavior before Google ever acknowledged it publicly, these platforms gave early adopters a critical edge. The SEOs using them didn’t wait for traffic to drop before acting; they were already optimizing for vector alignment and semantic coverage long before most of the industry knew it mattered.

And in an era where rankings hinge on how well your embeddings match a user’s intent, that head start has made all the difference.

Visualize AI Mode Coverage. For Free.

SEO didn’t die. It transformed from art into applied geometry.

AI Mode Visualizer Tutorial

To help SEOs adapt to this AI-driven landscape, Market Brew has just announced the AI Mode Visualizer, a free tool that simulates how Google’s AI Overviews evaluate your content:

  • Enter a page URL.
  • Input up to 10 search prompts or generate them automatically from a single master query using LLM-style prompt expansion.
  • See a cosine similarity matrix showing how each ~700-character content chunk of your page aligns with each intent.
  • Click any score to view exactly which passage matched, and why.

🔗 Try the AI Mode Visualizer

This is the only tool that lets you watch AI Mode think.

Two Truths, One Future

Nate Hake is right: Google restructured the game. The data reflects an industry still catching up to the new playbook.

Because two things can be true:

  • Google may be clearing space for its own services, ad products, and AI monopolies.
  • And many SEOs are still chasing ghosts in a world governed by geometry.

It’s time to move beyond guesses.

If AI Mode is the new architecture of search, we need tools that expose how it works, not just theories about what changed.

We brought you this story back in early 2024, before AI Overviews had a name, explaining how embeddings and vector scoring would reshape SEO.

Tools like the AI Mode Visualizer offer a rare chance to see behind the curtain.

Use it. Test your assumptions. Map the space between your content and modern relevance.

Search didn’t end.

But the way forward demands new eyes.

________________________________________________________________________________________________

Image Credits

Featured Image: Image by MarketBrew. Used with permission.

The New Normal via @sejournal, @Kevin_Indig

Today’s Memo is a download straight from my brain about the current state of Search and AI. So much happened in the last few weeks, and I haven’t had a chance to sort out my thoughts.

Until now.

I’m finishing this Memo with exclusive insight into the KPIs I measure for search right now, for premium subscribers.

In this issue, we’re looking at:

  • The role of clicks in the future of SEO.
  • How our work “fans out” into many channels.
  • AI Mode and agentic search.
  • The hot battle between Google and ChatGPT.

Let’s dive in!

Boost your skills with Growth Memo’s weekly expert insights. Subscribe for free!

On April 16 at 9 a.m., OpenAI dropped ChatGPT o3.

By noon, I’d already scrapped the slide deck I had finished the night before.

That whiplash has become routine. Each new model triggers the same loop: panic that it’s smarter than I am, relief when I find the edges, then fresh panic as the cycle restarts.

When my coach, Heather, heard me vent, she dropped a killer quote that has stuck with me since: “Kevin, constant change is the new normal.”

She’s right.

Releases land weekly, search interfaces mutate overnight, and the ground under every SEO strategy keeps sliding.

As we cross the midpoint of 2025, I want to freeze-frame what’s happening to search right now – and what it means for you.

Here’s the short version:

  • Google’s AI Overviews (AIOs) inflate impressions while suffocating clicks.
  • The clicks that survive carry more purchase intent than ever.
  • “Performance” SEO is morphing into an “influence” play that spans Google, LLMs, and every social feed your customers consult for a second opinion.

Let’s unpack each shift, starting with the calories we’ve been counting all wrong.

Empty Calories

Since Google widened AI Overviews (AIOs) in March, one pattern rules them all: impressions up, clicks down.

Image Credit: Kevin Indig

Why the gap?

Two reasons:

  1. People run more searches due to AIOs.
  2. Google now records an “impression” the moment someone expands the overview, and every cited source is logged as position 1.

The result: visibility inflation without visitors.

2024 was the year of peak traffic.

And looking at how few people clicked on links (a few percent) in the AIO usability study makes me think it’s entirely possible clicks drop to 10% or less of what we were used to in 2024. And that’s okay.

Clicks have always been empty calories anyway. They were useful as a leading indicator for conversions/revenue/pipeline/sales/etc. (But that’s about it. Clicks didn’t mean dollars, and they didn’t mean real business growth.)

Of course, to us SEO folk, losing clicks sounds grim until you look closer at user behavior:

  • We thought pogo sticking was bad, but it’s just normal search behavior.
  • The only click that matters is the one that ends the journey.
  • In our study, 80% of those “final answer” clicks still land on organic results, not the AIO.
  • When people do click, it’s to validate, compare, or buy – high-intent actions that convert.

So, yes, raw clicks are vanishing, but the ones that survive are pure protein, not empty calories.

From Performance To Influence

Clicks are collapsing, but the ones that remain are loaded with intent.

That flips SEO’s value prop on its head.

For 20 years, we sold SEO as a performance channel, whether we wanted to or not.

The standard calculation was: Search volume ✕ CTR ✕ CVR = Projected dollars.

When a keyword couldn’t survive that spreadsheet, it died in committee.
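With hypothetical numbers, that spreadsheet logic looks like this (every figure below is illustrative, not from any real forecast):

```python
# The classic performance-channel forecast: Search volume x CTR x CVR = dollars.
search_volume = 10_000   # monthly searches for the keyword (hypothetical)
ctr = 0.03               # expected organic click-through rate
cvr = 0.02               # expected conversion rate on the landing page
avg_order_value = 150.0  # dollars per conversion

projected_dollars = search_volume * ctr * cvr * avg_order_value
print(f"${projected_dollars:,.2f} projected monthly revenue")  # $900.00
```

Six conversions a month. That is the number a keyword had to defend in committee, and why so many influence-building topics never got funded.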

Meanwhile, those same executives drop seven figures to get a logo the size of a postage stamp on an F1 car – no attribution model in sight.

Why? Influence.

The belief that persistent visibility bends preference.

SEO is crossing the same Rubicon. In an AIO-and-LLM world, you’re not just fighting for traffic; you’re fighting for mindshare wherever prospects ask questions:

  • Google’s AI Overviews.
  • ChatGPT.
  • Reddit threads, YouTube comments, Discord chats.

Your brand needs to echo across all of them.

That means new yardsticks (i.e., KPIs, which I laid out in the premium section at the end of the article).

In short, SEO is graduating from direct-response to influence.

Treat it – and budget for it – like any other brand channel that shapes preference long before the buy button.

Channel Fan-Out

AI Mode turns a single prompt into dozens of behind-the-scenes queries – a process engineers call “fan-out.”

The same thing is happening at the channel level: Search itself is fanning out, escaping the browser and popping up in every feed, app, and device.

Although SEO pros have been talking about it for years, in 2025 it finally, actually matters – for three big reasons:

1. LLMs have injected search into every app. Want a cookie recipe breakdown in Microsoft Excel? You can have it. Meta shipped a standalone Meta AI and wove it into WhatsApp, IG, and FB. YouTube and Netflix are testing AI Overviews so you can “search” for the perfect video without ever leaving their walls.1

Translation: discovery no longer begins – or ends – on Google.com. Each walled garden is now its own mini-SERP, and Google has to fight a thousand little AI search engines, not just ChatGPT.

2. People cross-check AI with humans: Our AIO usability study showed a consistent pattern: Users read the AI answer, then hop to Reddit threads, YouTube comments, or Discord chats to see whether real people agree.

Credibility now comes from echoing across both machine answers and human conversations. If you’re invisible on social or community platforms, you’re invisible in the final decision loop.

3. The pie is somehow getting bigger. TikTok, Facebook, Instagram, Threads, Bluesky, YouTube, Google, ChatGPT, Perplexity, Claude, Snapchat – the list keeps growing, and so do their daily active users.

Where’s the extra time coming from? Mostly legacy media: linear TV, radio, even mainstream news sites. Attention is being reallocated, not reinvented.

What it means:

  • Your brand’s “search” footprint is now the sum of every place people ask questions.
  • Monitoring only Google rankings is like checking the weather on one street corner.
  • To win budget, tie each additional platform back to concrete customer insight – ideally gathered from, you guessed it, talking to customers and using tools like Sparktoro.
Sparktoro’s channel overview

AI Mode

AI Mode is the “final boss” of search.

Sundar Pichai told Lex Fridman that “the results page is just one possible UI,” and VP of Search Liz Reid called it “a construct.”

In other words, Google’s happy to toss the classic SERP the moment the math works.

Similarweb data shows AI Mode adoption is a bit over 1% – for now (Image Credit: Kevin Indig)

But right now, the math doesn’t.

Similarweb shows AI Mode in barely 1% of queries, by design.

A single AI Mode answer can swallow 20-50 follow-up searches, erasing the ad slots those pages used to carry.

Until Google finds a new way to charge (embedded ads, pay-per-chat, who knows), rollout will stay throttled.

When that business model lands, AI Mode becomes paradise for anyone who understands user intent.

Behind each prompt, Google fans out dozens of micro-queries – price, specs, comparisons, nearby, reviews – and stitches the answers together.

Those micro-queries are the very same long-tails you optimize for today; they’re just fired in parallel and reassembled into a narrative.

How to prep while the gate is still half-closed:

  • Map the likely fan-out set for every core topic (look at People-Also-Ask, Related Searches, Reddit threads, etc. – more in a future Memo).
  • Track rankings for each micro-query; gaps there equal lost citations in AI Mode.
  • Structure content so it’s easy to quote: tight answers, clear sub-heads, rich schema.
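A sketch of that prep work as a coverage check: the micro-queries and rank numbers below are invented placeholders, but the gap logic (unranked, or ranked too low to be cited) is the core idea.

```python
# Hypothetical fan-out coverage check. The query list and rank data are
# illustrative only; a real workflow would pull both from your rank tracker.
core_topic = "best trail running shoes"
fan_out = [
    "best trail running shoes price",
    "trail running shoe specs",
    "trail running shoes comparison",
    "trail running shoes near me",
    "trail running shoes reviews",
]

# Pretend rank-tracker output: micro-query -> best ranking (None = unranked).
rankings = {
    "best trail running shoes price": 4,
    "trail running shoe specs": None,
    "trail running shoes comparison": 12,
    "trail running shoes near me": None,
    "trail running shoes reviews": 7,
}

# A gap is any micro-query where we don't rank, or rank below the top 10.
gaps = [q for q in fan_out if rankings.get(q) is None or rankings[q] > 10]
print(f"Coverage gaps for '{core_topic}':")
for q in gaps:
    print(" -", q)
```

Each gap is a micro-query where AI Mode has nothing of yours to quote, which in aggregate means a lost citation when the answers get stitched together.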

Do the homework now and you’ll be ready when AI Mode graduates from beta to default – at least until the next boss fight, fully agentic search, shows up.

ChatGPT Vs. Google

The twist of 2025 is that Google is meeting ChatGPT on its own turf.

AI Mode lifts Google’s results page into the same chat-first UI that OpenAI popularized – proof that Google is willing to “level down” from its ad-optimized SERP if that’s what users expect.

Last year, I shared this graphic for the launch of ChatGPT Search and got lots of questions:

Image Credit: Kevin Indig

Two Takeaways From The Latest Projection (Chart Below):

  1. If you extrapolate the entire data set, ChatGPT overtakes Google in October 2030.
  2. If you extrapolate only the last 12 months, the crossover happens mid-2026.
Image Credit: Kevin Indig

Important Caveats:

  • Growth is not destiny. Google still owns distribution (Android, Chrome, Safari deals) and can slow ChatGPT by matching its features inside AI Mode and Gemini.
  • The projection measures query share, not revenue share. Even if ChatGPT wins usage, Google’s ads can keep the cash register ringing longer.
  • A single platform tweak (bundling, default settings, carrier deals) can bend either curve overnight – think of how Microsoft pushed Bing Chat via Windows updates.
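To see how such a projection works mechanically, here is a sketch with invented query-share numbers (the chart above uses real data that isn’t reproduced here): two linear trends cross where the shares become equal.

```python
# Hypothetical query-share trends, in percent of global queries.
# Both the starting shares and the slopes are illustrative placeholders.
google_share, google_slope = 88.0, -0.25   # share today, change per month
chatgpt_share, chatgpt_slope = 6.0, 1.10

# Solve google_share + google_slope*t = chatgpt_share + chatgpt_slope*t for t.
t_months = (google_share - chatgpt_share) / (chatgpt_slope - google_slope)
print(f"Crossover in roughly {t_months:.0f} months")
```

This also shows why the two extrapolations in the takeaways disagree: change either slope (fit the whole data set vs. only the last 12 months) and the crossover date moves by years.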

What To Watch Next:

  • Pay-per-chat or embedded-ad experiments: Whichever company nails monetization without wrecking UX will sprint ahead.
  • Default-search contracts (Apple, Samsung, Mozilla) renewing in 2026–27. Losing any of those would be a body blow for Google.
  • Mobile latency and offline mode: If ChatGPT can run acceptably on-device, Google’s web moat shrinks fast.

Bottom line: treat the Google-ChatGPT battle as a live A/B test for the future of search.

Your job is to be visible in both ecosystems until a clear winner emerges – and that may take years.

Conductor Mode

Image Credit: Kevin Indig

So, where does all of this leave SEO (leaders)?

Less in the weeds, more on the podium.

Your job is no longer to fine-tune a single channel; it’s to keep an entire orchestra in time as search fragments across AI Overviews, chatbots, and social feeds.

No other role sits at the intersection of so much (intent) data – and that gives you license (and responsibility) to conduct.

1. Paid Media

  • Pipe impression, click, and conversion data from classic SERPs, AIOs, and AI Mode back into one shared Looker dashboard.
  • Swap keywords and creative weekly; AI churn demands shorter feedback loops.

2. Social & Community

  • Mine Reddit threads, TikTok comments, and Discord chats to surface the “why” behind queries.
  • Feed those insights straight to content so every article answers a real objection.

3. Product Marketing

  • Hand them the exact language users copy-paste into prompts; that’s gold for positioning.
  • Return the favor by baking the latest differentiators into every meta description, schema tag, and featured snippet answer.

4. Content/GTM

  • Package what you learn into data stories, interactive tools, and expert POVs – assets worth citing by both humans and LLMs.
  • Structure it so agents can lift answers wholesale: tight headers, clear claims, evidence links.

What’s Next?

Search will get even more agentic.

We could soon optimize not just for people but for the AI helpers who act on their behalf.

That means:

  • Higher insight density per paragraph.
  • Structured outputs (tables, JSON, how-to checklists) ready for zero-click consumption.
  • APIs or embeddings that let agents pull your data directly.
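As one example of a structured, agent-liftable output, a page could expose its core claim, evidence link, and data as JSON. Every field name and figure below is hypothetical, sketched to show the shape, not a standard.

```python
import json

# A hypothetical zero-click answer payload: structured so an agent can lift
# the claim, the evidence link, and the data without parsing prose.
answer = {
    "question": "Do AI Overviews reduce organic clicks?",
    "claim": "Impressions rise while clicks fall after AIO expansion.",
    "evidence_url": "https://example.com/aio-study",  # placeholder URL
    "data": [
        {"month": "2025-03", "impressions": 120_000, "clicks": 2_400},
        {"month": "2025-04", "impressions": 145_000, "clicks": 1_900},
    ],
}
print(json.dumps(answer, indent=2))
```

The design choice is insight density: one claim, one evidence link, machine-readable numbers, nothing an agent has to infer.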

We’re not there yet, but the runway is short.

Shift from tactician to conductor now, and you’ll have the score in hand when the orchestra changes instruments again.

Baton up.


1 YouTube Tests AI Overviews In Search Results; Netflix Tests New AI Search Engine to Recommend Shows, Movies


Featured Image: Paulo Bobita/Search Engine Journal

Ask An SEO: How AI Is Changing Affiliate Strategies via @sejournal, @rollerblader

This week’s Ask an SEO question about affiliate strategies comes from Mike R:

“How is AI changing affiliate marketing strategy in 2025? I’m concerned my current approach will become obsolete but don’t know which new techniques are actually worth adopting.”

Great question, Mike. I’m seeing a few trends and strategies that are changing, for the better and for the worse.

When AI is used properly in the affiliate marketing channel, it can help businesses and brands grow.

If any of the three types of businesses (defined below) in affiliate marketing use it in a way that AI and large language models are not ready for “yet,” it can backfire.

I’m answering this question in three parts, as I’m unsure which side of the industry you’re on.

For the record: The affiliate channel is not at risk (i.e., affiliate marketing is not dead), because affiliate marketing is more than content websites that create lists or write reviews and coupon sites that intercept the end of the sale.

Affiliate marketing is a mix of all marketing channels, including email, SMS, online and offline communities, PPC, media buying, and even print media.

It is not going to be as impacted by AI as SEO and content marketing – and in many ways, it will likely grow and scale from it.

1. Affiliates (Content Creators, Publishers, Media Houses, Etc.)

Affiliates are the party that promotes another brand in hopes of earning a commission.

Here’s some of what I’m seeing regarding the use of AI and its impact on affiliate revenue.

Programmatic SEO And Content Creation

Programmatic SEO is not new, and using LLMs to create content or lists is burning once-quality sites to the ground.

It is almost never a good idea; it doesn’t matter if AI can spin up content and get it publish-ready in minutes.

In the early 2000s, affiliates and SEO professionals would use pre-AI article spinners to create massive quantities of content from one or two professionally written and fact-checked articles, then publish them to blogs and third-party publishing platforms like Squidoo.

This is equivalent to affiliates publishing their content on Reddit or LinkedIn Pulse to rank it.

The algorithms caught up and penalized the affiliate websites. Squidoo and some of the third-party platforms managed to stay afloat as they had trust and a strong user base for a while.

Next, PHP became the go-to for programmatic SEO, and affiliates would generate shopping lists or pages with unique mixes of products and descriptions via merchant data feeds and network-provided tools. Then, these got penalized. Again, nothing new.

Media companies have been getting penalized and devalued for years for this, and plenty of content creators, too.

If an affiliate manager is telling you to use LLMs to create content, or someone is using LLMs and AI to do programmatic SEO, look for advice elsewhere.

I’ve watched multiple quality sites fall since ChatGPT, Perplexity, and others began writing and spinning their content.

Content And Creator Value

In traditional affiliate marketing, if an affiliate is not making sales, even if they send quality traffic, they get ignored. LLMs have changed this 100%.

I’ve seen affiliates, including bloggers, YouTubers, forums, and social media influencers, being sourced and cited by AI systems.

If a brand is not in the content being used for fact-checking (grounding) and sourcing, it begins to disappear from outputs and results. I’m seeing this firsthand.

Not getting traffic or sales, or being number seven to 10 on a list, now has value. The citations and mentions from the resources that LLMs trust can help your brand gain visibility in AI.

Affiliates can and should begin charging extra fees for these placements until the LLMs begin penalizing or ignoring pay-to-play content.

We’re likely a couple of years away from their algorithms being anywhere near that advanced, so it is a prime opportunity while Google is reducing traffic to publishers via AI Overviews.

Coupon Sites For Top And End-Of-Sale Touchpoints

I think coupon sites are going to take a substantial hit, as AI is starting to create its own lists of coupons that work.

Those lists also include where and how to save, where to shop, and current deals on specific products. For example: “I want to buy a pair of Asics Kayano 32 men’s running shoes and get them on sale. Where can I find a deal?”

Right now, Google’s AI Overviews are populating lists of where to find deals, and it is showing the coupon sites as the sources to the right. These sites are likely getting clicks now.

I’ve seen ChatGPT pull the codes directly, eliminating the need to click through to the coupon website and set its affiliate tracking. It does show the website the code came from, though – there’s just no reason to click since you get the code in the output.

One interesting thing is that ChatGPT may pull in vanity codes.

The output from ChatGPT featuring these codes could give the influencer who was sourced for the code, or a coupon site, credit for the sale, throwing attribution off: it was the coupon that triggered the commission, even though the user was in the LLM the whole time.

The influencer did not have anything to do with this transaction, but they’ll be getting credit.

The brand may now pay more money to the influencer, when, in reality, it should be ChatGPT – that is where the customers are, not the influencer.

By showing where to find the deals and which deals are available by product (not brand), AI eliminates one of the deal and coupon sites’ top-funnel traffic strategies.

The biggest hit I see coupon sites taking is ranking in search engines for “brand + coupon” for the last-second click from someone who is already in the brand’s shopping cart.

If Google AI Overviews creates its own coupon lists as the output, as ChatGPT is doing, there is no reason to visit a coupon website and click its affiliate links.

But, don’t count deal and coupon sites out. They still have email lists and social media accounts that can drive top-funnel traffic, and they can reintroduce customers who have forgotten about you by utilizing their own internal databases of shoppers.

2. Affiliate Manager And Affiliate Management Agencies

These are the people who manage programs by recruiting affiliates into the program, giving the affiliates the tools they need, and ensuring the data on the network is tracked and accurate so the brands being promoted have the sales and touchpoints they’re looking for.

Content Sites That Lost Traffic

Some managers hit the panic button because they relied on content sites and publishers with strong SEO rankings, but AI Overviews is now using affiliate and publisher content without sending the same amount of traffic to those publishers.

This reduces the clicks and traffic the channel records. The publishers are still reaching users, but the traffic arrives via Google and is not attributed to the affiliate channel.

With that said, affiliate managers can shift their focus to channels not as impacted by AI Overviews, including:

  • Discord.
  • Platforms like Skool.
  • Social media groups.
  • YouTube channels.
  • Influencers.

Fraud Sign Ups

From speaking to others, it appears that high-quality publisher accounts are being created en masse as fronts for fraud and fake affiliate accounts.

I’ve had conversations with people hired by the fake affiliate accounts and paid to talk to the affiliate manager, which makes these sites look even more legit. We’ll have back-and-forth emails and, in some cases, a call.

Once the traffic and sales start, it turns out to be stolen credit cards or program violations. In some instances, the person or websites they applied with no longer exist.

Interestingly, the site magically reappears when the account activates a year later, when they think you’ve forgotten about them and won’t be checking.

Always evaluate a site, and if the content is being generated by LLMs or AI, it may be best to reject it and reduce the risk of a fake account.

AI content may rank temporarily, but this is not a long-term strategy. If your brand is being written about by AI and spun out to a site via programmatic SEO, there is a reasonable chance that the details won’t be as factual or as on-brand as they should be.

An affiliate who cannot take the time to create good content and use AI to edit, versus using AI to create and then edit, should not be trusted in your affiliate program.

Non-Factual Information And False Claims

When your affiliates are generating content or fact-checking via LLMs and AI, they’re not doing their jobs as your partners to promote your program factually, with correct talking points, and following brand guidelines.

There’s a reasonable chance that incorrect claims about financial products, medical treatments, or even books to buy and read will be in the content you, as a brand, are paying to have made.

Even if you’re paying on a performance basis, you are approving this content to be live and represent your brand. This is why affiliates in your program using AI to create content are at high risk.

Set rules and enforce them so that your brand cannot be included in any AI-created content, or remove the affiliate from your program until they’re ready to treat your brand or your clients’ brands with the same care as you do.

Partner Matching And Approvals

One interesting use of AI for affiliate management is merchant and affiliate matching using machine learning and AI by agencies and larger brands.

Just because a partner does well in one vertical, or with one affiliate program that has a similar audience, does not mean it is a good match for others.

  • One program may allow end-of-sale touchpoints while the other does not. Top partners that rely on low-value clicks should not be allowed into a similar program that does not (or will not) permit them. If programs are on auto-approve, or use AI to approve affiliates that do well in specific verticals, the TOS is likely no longer being enforced.
  • A partner may make a ton of T-shirt sales in one program, but their audience may not respond to the colors, social causes, or price points of another merchant. If the affiliate is part of AI matching and starts to lose money because they got matched to new T-shirt shops, they may start to move on from the affiliate or focus less because they’re making less money and getting bad recommendations from the agencies and managers.
  • If the program trusts AI to do matching, but has restrictions like requiring advertising disclosures or using factual information, the machine learning likely won’t be able to check for this, and partners that are not a fit can get in.
  • Automating approvals because they pass an AI review or scan is risky, as AI will miss things that an experienced affiliate manager will find, like advertising disclosures in the wrong space and false claims in the industry or space in content.

One exception to using AI for matching is building a list of potential partners from a database. But automatically approving that list just because the tool produced it is problematic.

Each affiliate that is recommended still needs to be vetted by hand to make sure they meet the requirements of the new program.

Recruitment And List Building

Some of the best uses of AI, especially LLMs, have been building lists of potential partners.

You can train GPTs to validate the lists, remove current partners so you don’t accidentally email or call them, do a gap analysis, and even customize the recruitment email to a very strong degree.
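The list-hygiene step (removing current partners before outreach) is simple set filtering. A minimal sketch with made-up domains; a real workflow would pull both lists from your network export:

```python
# Hypothetical recruitment-list hygiene: remove current partners before
# outreach so you don't accidentally email or call them.
candidates = ["runnerblog.com", "trailgear.net", "hikeandseek.io", "fitreviews.co"]
current_partners = {"trailgear.net", "fitreviews.co"}  # already in the program

outreach_list = [site for site in candidates if site not in current_partners]
print(outreach_list)  # ['runnerblog.com', 'hikeandseek.io']
```

The validation, gap analysis, and email personalization layered on top of this are where the LLM earns its keep; the filtering itself is deterministic and cheap.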

No, it isn’t perfect, but you can save hours each week from the manual tasks of discovery, validation, and outreach.

The recruitment emails still need to be reviewed and sent manually, but it is a massive time-saver.

We manually review every email before it goes out and have to do a decent chunk of rewriting, but we’re saving large amounts of time, too.

We also pre-schedule the emails using a database tool, but we’ve slowly begun implementing new discovery and drafting methods, and they’re turning out to be fantastic.

I was a non-believer in AI for this at first, but now I’m about ready to double down, especially as the systems advance.

3. Affiliate Networks

These are the tracking and payment platforms that power the affiliate programs.

Affiliates rely on them to accurately record sales and release payments.

Affiliate managers use them to track progress, simplify paying partners around the world, and generate reports based on the key performance indicators (KPIs) their company uses.

Better Controls

All of the networks we’re working on have an influx of AI-generated sites. I’ve talked to agencies and managers on the ones we don’t work on, and they’re seeing the same.

The networks would be wise to add filters and alerts that tell affiliate managers whether an affiliate site is human-made or AI-generated, since an AI-generated site amounts to a promotional method without quality control.

There are no advanced controls in place on any networks that I’ve seen specifically for AI affiliates. But most networks do have compliance teams to which you can report fake accounts.

From the networks I’ve talked to, they’re working on solutions to help detect and reject these sites, but it is a massive problem because they’re being generated at high volumes, and some are really hard to detect.

The spammers and scammers are getting smarter, and AI has given them a new advantage.

Partnership Matching

This is a double-edged sword. Networks have more data than any affiliate agency, and they may be best suited to try partner and program matching algorithms.

They can create a list of programs that an affiliate may want to test, or a list of partners a program manager can pay to recruit based on program goals and dimensions.

The downside is that programs spend countless hours recruiting partners for their programs. Networks doing matching and recruitment take that work and give it for free to that program’s competitors.

A second downside is that affiliates already get bombarded with program requests, and network-driven matching can cause that volume to skyrocket, making it harder to get them to open emails, including program updates and newsletters.

Once they start ignoring emails because there are too many, you may not get compliance issues fixed, and promotions that would normally have benefited both parties may go unseen.

Reporting

One of the most beneficial things a network can do, but few are currently doing at scale (some are starting to, and it’s looking promising), is use AI to create custom reports for affiliate programs. These could be charts and graphs of trends over XYZ years.

Another is a gap analysis of products that get bundled together by type of affiliate, and then which similar affiliates already in the program don’t have a specific SKU in their orders.

The manager can recommend pre-selling the SKU within the content that drives the sale, or adding that specific SKU as an upsell to any customer who came from that affiliate’s link, based on the affiliate ID passed in the URL.

Reporting can also show trends in cross-channel (SEO, email, PPC, SMS, etc.) touchpoints: how they shift seasonally and annually, and whether a given goal creates more or fewer sales for the affiliate channel or the company as a whole.

One important thing to remember is that not all affiliate networks offer true cross-channel reporting. Many only offer it once the user has clicked an affiliate link.

Final Thoughts

AI is going to be amazing and horrible for each of the three entities above that make up the affiliate marketing channel.

If used correctly, it can save time, increase efficiency, and create more meaningful strategies.

At the same time, it could result in violations of a program’s Terms of Service (TOS), steal traffic from publishers, and harm multiple types of businesses.


Featured Image: Paulo Bobita/Search Engine Journal

How AI can help make cities work better for residents

In recent decades, cities have become increasingly adept at amassing all sorts of data. But that data can have limited impact when government officials are unable to communicate, let alone analyze or put to use, all the information they have access to.

This dynamic has always bothered Sarah Williams, a professor of urban planning and technology at MIT. “We do a lot of spatial and data analytics. We sit on academic papers and research that could have a huge impact on the way we plan and design our cities,” she says of her profession. “It wasn’t getting communicated.”

Shortly after joining MIT in 2012, Williams created the Civic Data Design Lab to bridge that divide. Over the years, she and her colleagues have pushed the narrative and expository bounds of urban planning data using the latest technologies available—making numbers vivid and accessible through human stories and striking graphics. One project she was involved in, on rates of incarceration in New York City by neighborhood, is now in the permanent collection of the Museum of Modern Art in New York. Williams’s other projects have tracked the spread and impact of air pollution in Beijing using air quality monitors and mapped the daily commutes of Nairobi residents using geographic information systems.

In recent years, as AI became more accessible, Williams was intrigued by what it could reveal about cities. “I really started thinking, ‘What are the implications for urban planning?’” she says. These tools have the potential to organize and illustrate vast amounts of data instantaneously. But having more information also increases the risks of misinformation and manipulation. “I wanted to help guide cities in thinking about the positives and negatives of these tools,” she says. 

In 2024, that inquiry led to a collaboration with the city of Boston, which was exploring how and whether to apply AI in various government functions through its Office of Emerging Technology. Over the course of the year, Williams and her team followed along as Boston experimented with several new applications for AI in government and gathered feedback at community meetings.

On the basis of these findings, Williams and the Civic Data Design Lab published the Generative AI Playbook for Civic Engagement in the spring. It’s a publicly available document that helps city governments take advantage of AI’s capabilities and navigate its attendant risks. This kind of guidance is especially important as the federal government takes an increasingly laissez-faire approach to AI regulation.

“That gray zone is where nonprofits and academia can create research to help guide states and private institutions,” Williams says. 

The lab’s playbook and academic papers touch on a wide range of emerging applications, from virtual assistants for Boston’s procurement division to optimization of traffic signals to chatbots for the 311 nonemergency services hotline. But Williams’s primary focus is how to use this technology for civic engagement. AI could help make the membrane between the government and the public more porous, allowing each side to understand the other a little better. 

Right now, civic engagement is mostly limited to “social media, websites, and community meetings,” she says. “If we can create more tools to help close that gap, that’s really important.”

One of Boston’s AI-powered experiments moves in that direction. The city used a large language model to summarize every vote of the Boston City Council for the past 16 years, creating simple and straightforward descriptions of each measure. This easily searchable database “will help you find what you’re looking for a lot more quickly,” says Michael Lawrence Evans, head of the Office of Emerging Technology.  A quick search for “housing” shows the city council’s recent actions to create a new housing accelerator fund and to expand the capacity of migrant shelters. Though not every summary has been double-checked by a human, the tool’s accuracy was confirmed through “a really robust evaluation,” Evans says. 

AI tools may also help governments understand the needs and desires of residents. The community is “already inputting a lot of its knowledge” through community meetings, public surveys, 311 tickets, and other channels, Williams says. Boston, for instance, recorded nearly 300,000 311 requests in 2024 (most were complaints related to parking). New York City recorded 35 million 311 contacts in 2023. It can be difficult for government workers to spot trends in all that noise. “Now they have a more structured way to analyze that data that didn’t really exist before,” she says.

AI can help paint a clearer picture of how these sorts of resident complaints are distributed geographically. At a community meeting in Boston last year, city staff used generative AI to instantly produce a map of pothole complaints from the previous month. 

AI also has the potential to illuminate more abstract data on residents’ desires. One mechanism Williams cites in her research is Polis, an open-source polling platform used by several national governments around the world and a handful of cities and media companies in the US. A recent update allows poll hosts to categorize and summarize responses using AI. It’s something of an experiment in how AI can help facilitate direct democracy—an issue that tool creator Colin Megill has worked on with both OpenAI and Anthropic. 

But even as Megill explores these frontiers, he is proceeding cautiously. The goal is to “enhance human agency,” he says, and to avoid “manipulation” at all costs: “You want to give the model very specific and discrete tasks that augment human authors but don’t replace them.”

Misinformation is another concern as local governments figure out how best to work with AI. Though they’re increasingly common, 311 chatbots have a mixed record on this front. New York City’s chatbot made headlines last year for providing inaccurate and, at times, bizarre information. When an Associated Press reporter asked if it was legal for a restaurant to serve cheese that had been nibbled on by a rat, the chatbot responded, “Yes, you can still serve the cheese to customers if it has rat bites.” (The New York chatbot appears to have improved since then. When asked by this reporter, it responded firmly in the negative to the nibbling rat question.)

These AI mishaps can reduce trust in government—precisely the opposite of the outcome that Williams is pursuing in her work. 

“Currently, we don’t have a lot of trust in AI systems,” she says. “That’s why having that human facilitator is really important.” Cities should be transparent in how they’re using AI and what its limitations are, she says. In doing so, they have an opportunity to model more ethical and responsive ways of using this technology. 

Next on Williams’s agenda is exploring how cities can develop their own AI systems rather than relying on tech giants, which often have a different set of priorities. This technology could be open-source; not only would communities be able to better understand the data they produce, but they would own it. 

“One of the biggest criticisms of AI right now is that the people who are doing the labor are not paid for the work that they do [to train the systems],” she says. “I’m super excited about how communities can own their large language models. Then communities can own the data that’s inside them and allow people to have access to it.”  

Benjamin Schneider is a freelance writer covering housing, transportation, and urban policy.

The Download: how AI can improve a city, and inside OpenAI’s empire

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

How AI can help make cities work better

In recent decades, cities have become increasingly adept at amassing all sorts of data. But that data can have limited impact when government officials are unable to communicate, let alone analyze or put to use, all the information they have access to.

This dynamic has always bothered Sarah Williams, a professor of urban planning and technology at MIT. Shortly after joining MIT in 2012, Williams created the Civic Data Design Lab to bridge that divide. Over the years, she and her colleagues have made urban planning data more vivid and accessible through human stories and striking graphics. Read the full story.

—Ben Schneider

This story is from the next print edition of MIT Technology Review, which explores power—who has it, and who wants it. It’s set to go live on Wednesday June 25, so subscribe & save 25% to read it and get a copy of the issue when it lands!

Inside OpenAI’s empire with Karen Hao

AI journalist Karen Hao’s newly released book, Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI, tells the story of OpenAI’s rise to power and its far-reaching impact all over the world.

Hao, a former MIT Technology Review senior editor, will join our executive editor Niall Firth in an intimate subscriber-exclusive Roundtable conversation exploring the AI arms race, what it means for all of us, and where it’s headed. Register here to join us at 9am ET on Monday June 30.

Special giveaway: Attendees will have the chance to receive a free copy of Hao’s book. See registration form for details.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The White House is sharing tasteless deportation memes
Its digital strategy revolves around boosting policies for cheap laughs. (WP $)
+ Trump’s immigration raids are a rapid escalation of his deportation tactics. (Vox)
+ The administration is revelling in the outraged reaction to its actions. (The Atlantic $)
+ But New Yorkers are fighting back. (New Yorker $)

2 New York is asking companies to disclose when AI contributes to layoffs
It’s the first official step towards measuring AI’s impact on the labor market. (Bloomberg $)
+ People are worried that AI will take everyone’s jobs. We’ve been here before. (MIT Technology Review)

3 Regeneron isn’t buying 23andMe after all
A non-profit controlled by its cofounder has made a higher bid. (WSJ $)
+ Anne Wojcicki says she has the backing of a Fortune 500 company. (FT $)
+ How to… delete your 23andMe data. (MIT Technology Review)

4 RFK Jr has filled the CDC’s vaccine committee with allies
Robert Malone, one of the appointees, has encouraged the public to embrace the term anti-vax. (The Atlantic $)
+ Here’s what food and drug regulation might look like under the Trump administration. (MIT Technology Review)

5 Americans are commissioning animal torture videos
The US government has revealed details of residents accused of paying people in Indonesia to abuse helpless monkeys. (Ars Technica)

6 China has conducted its first brain implant clinical trial
Making it only the second country to do so, after the US. (Bloomberg $)
+ Brain-computer interfaces face a critical test. (MIT Technology Review)

7 The US Navy wants your startup
It’s more open to partnerships than ever before, apparently. (TechCrunch)
+ China is stockpiling intercontinental ballistic missiles. (Insider $)
+ Generative AI is learning to spy for the US military. (MIT Technology Review)

8 The UK is working on a chemotherapy-free approach to treating leukaemia
Combining two targeted drugs appears to perform better. (The Guardian)

9 Brace yourself for AI sponcon
Just when you thought product placement couldn’t get any worse. (The Verge)

10 Zines are staging a comeback
Creatives are turning their backs on social media in favor of good old-fashioned booklets. (Wired $)

Quote of the day

“Being a highly ‘online’ person is a very embarrassing thing and should be relegated to basement losers.”

—Derek Guy, aka The Menswear Guy on X, explains to Wired why he thinks a significant proportion of the Republican coalition needs to step away from their keyboards.

One more thing

Bright LEDs could spell the end of dark skies

Scientists have known for years that light pollution is growing and can harm both humans and wildlife. In people, increased exposure to light at night disrupts sleep cycles and has been linked to cancer and cardiovascular disease, while in wildlife it interrupts reproductive patterns and increases danger.

Astronomers, policymakers, and lighting professionals are all working to find ways to reduce light pollution. Many of them advocate installing light-emitting diodes, or LEDs, in outdoor fixtures such as city streetlights, mainly for their ability to direct light to a targeted area.

But the high initial investment and durability of modern LEDs mean cities need to get the transition right the first time or potentially face decades of consequences. Read the full story.

—Shel Evergreen

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ As commencement speeches go, Steve Jobs’ is definitely one of the best.
+ I love this iconic Homer moment recreated in Lego.
+ The remains of a beautiful Byzantine tomb complex have been uncovered between Aleppo and Damascus.
+ I want to believe: check out this short, bizarre history of alien abductions in America 👽