How the federal government is tracking changes in the supply of street drugs

In 2021, the Maryland Department of Health and the state police were confronting a crisis: Fatal drug overdoses in the state were at an all-time high, and authorities didn’t know why. There was a general sense that it had something to do with changes in the supply of illicit drugs—and specifically of the synthetic opioid fentanyl, which has caused overdose deaths in the US to roughly double over the past decade, to more than 100,000 per year. 

But Maryland officials were flying blind when it came to understanding these fluctuations in anything close to real time. The US Drug Enforcement Administration reported on the purity of drugs recovered in enforcement operations, but the DEA’s data offered limited detail and typically came back six to nine months after the seizures. By then, the actual drugs on the street had morphed many times over. Part of the investigative challenge was that fentanyl can be some 50 times more potent than heroin, and inhaling even a small amount can be deadly. This made conventional methods of analysis, which required handling the contents of drug packages directly, incredibly risky. 

Seeking answers, Maryland officials turned to scientists at the National Institute of Standards and Technology, the national metrology institute for the United States, which defines and maintains standards of measurement essential to a wide range of industrial sectors and health and security applications.

There, a research chemist named Ed Sisco and his team had developed methods for detecting trace amounts of drugs, explosives, and other dangerous materials—techniques that could protect law enforcement officials and others who had to collect these samples. Essentially, Sisco’s lab had fine-tuned a technology called DART (for “direct analysis in real time”) mass spectrometry—which the US Transportation Security Administration uses to test for explosives by swiping your hand—to enable the detection of even tiny traces of chemicals collected from an investigation site. This meant that nobody had to open a bag or handle unidentified powders; a usable residue sample could be obtained by simply swiping the outside of the bag.  

Sisco realized that first responders or volunteers at needle exchange sites could use these same methods to safely collect drug residue from bags, drug paraphernalia, or used test strips—which also meant they would no longer need to wait for law enforcement to seize drugs for testing. They could then safely mail the samples to NIST’s lab in Maryland and get results back in as little as 24 hours, thanks to innovations in Sisco’s lab that shaved the time to generate a complete report from 10 to 30 minutes to just one or two. This was partly enabled by algorithms that allowed them to skip the time-consuming step of separating the compounds in a sample before running an analysis.

The Rapid Drug Analysis and Research (RaDAR) program launched as a pilot in October 2021 and uncovered new, critical information almost immediately. Early analysis found xylazine—a veterinary sedative that’s been associated with gruesome wounds in users—in about 80% of the opioid samples collected. 

This was a significant finding, Sisco says: “Forensic labs care about things that are illegal, not things that are not illegal but do potentially cause harm. Xylazine is not a scheduled compound, but it leads to wounds that can lead to amputation, and it makes the other drugs more dangerous.” In addition to the compounds that are known to appear in high concentrations in street drugs—xylazine, fentanyl, and the veterinary sedative medetomidine—NIST’s technology can pick out trace amounts of dozens of adulterants that swirl through the street-drug supply and can make it more dangerous, including acetaminophen, rat poison, and local anesthetics like lidocaine.

What’s more, the exact chemical formulation of fentanyl on the street is always changing, and differences in molecular structure can make the drugs deadlier. So Sisco’s team has developed new methods for spotting these “analogues”—­compounds that resemble known chemical structures of fentanyl and related drugs.

Ed Sisco’s lab at NIST developed a test that gives law enforcement and public health officials vital information about what substances are present in street drugs.
B. HAYES/NIST

The RaDAR program has expanded to work with partners in public health, city and state law enforcement, forensic science, and customs agencies at about 65 sites in 14 states. Sisco’s lab processes 700 to 1,000 samples a month. About 85% come from public health organizations that focus on harm reduction (an approach to minimizing negative impacts of drug use for people who are not ready to quit). Results are shared at these collection points, which also collect survey data about the effects of the drugs.

Jason Bienert, a wound-care nurse at Johns Hopkins who formerly volunteered with a nonprofit harm reduction organization in rural northern Maryland, started participating in the RaDAR program in spring 2024. “Xylazine hit like a storm here,” he says. “Everyone I took care of wanted to know what was in their drugs because they wanted to know if there was xylazine in it.” When the data started coming back, he says, “it almost became a race to see how many samples we could collect.” Bienert sent in about 14 samples weekly and created a chart on a dry-erase board, with drugs identified by the logos on their bags, sorted into columns according to the compounds found in them: ­heroin, fentanyl, xylazine, and everything else.

“It was a super useful tool,” Bienert says. “Everyone accepted the validity of it.” As people came back to check on the results of testing, he was able to build rapport and offer additional support, including providing wound care for about 50 people a week.

The breadth and depth of testing under the RaDAR program allow an eagle’s-eye view of the national street-drug landscape—and insights about drug trafficking. “We’re seeing distinct fingerprints from different states,” says Sisco. NIST’s analysis shows that fentanyl has taken over the opioid market—except for pockets in the Southwest, there is very little heroin on the streets anymore. But the fentanyl supply varies dramatically as you cross the US. “If you drill down in the states,” says Sisco, “you also see different fingerprints in different areas.” Maryland, for example, has two distinct fentanyl supplies—one with xylazine and one without.

In summer 2024, RaDAR analysis detected something truly unusual: the sudden appearance of an industrial-grade chemical called BTMPS, which is used to preserve plastic, in drug samples nationwide. In the human body, BTMPS acts as a calcium channel blocker, which lowers blood pressure, and mixed with xylazine or medetomidine, it can make overdoses harder to treat. Exactly why and how BTMPS showed up in the drug supply isn’t clear, but it has continued to appear in fentanyl samples at a sustained level since it was first detected. “This was an example of a compound we would have never thought to look for,” says Sisco. 

To Sisco, Bienert, and others working on the public health front of the drug crisis, the ever-shifting chemical composition of the street-drug supply speaks to the futility of the “war on drugs.” They point out that a crackdown on heroin smuggling is what gave rise to fentanyl. And NIST’s data shows how in June 2024—the month after Pennsylvania governor Josh Shapiro signed a bill to make possession of xylazine illegal in his state—it was almost entirely replaced on the East Coast by the next veterinary drug, medetomidine. 

Over the past year, for reasons that are not fully understood, drug overdose deaths nationally have been falling for the first time in decades. One theory is that xylazine has longer-lasting effects than fentanyl, which means people using drugs are taking them less often. Or it could be that more and better information about the drugs themselves is helping people make safer decisions.

“It’s difficult to say the program prevents overdoses and saves lives,” says Sisco. “But it increases the likelihood of people coming in to needle exchange centers and getting more linkages to wound care, other services, other education.” Working with public health partners “has humanized this entire area for me,” he says. “There’s a lot more gray than you think—it’s not black and white. And it’s a matter of life or death for some of these people.” 

Adam Bluestein writes about innovation in business, science, and technology.

Phase two of military AI has arrived

Last week, I spoke with two US Marines who spent much of last year deployed in the Pacific, conducting training exercises from South Korea to the Philippines. Both were responsible for analyzing surveillance to warn their superiors about possible threats to the unit. But this deployment was unique: For the first time, they were using generative AI to scour intelligence, through a chatbot interface similar to ChatGPT. 

As I wrote in my new story, this experiment is the latest evidence of the Pentagon’s push to use generative AI—tools that can engage in humanlike conversation—throughout its ranks, for tasks including surveillance. Consider this phase two of the US military’s AI push; phase one began back in 2017 with older types of AI, like computer vision to analyze drone imagery. Though this newest phase began under the Biden administration, there’s fresh urgency as Elon Musk’s DOGE and Secretary of Defense Pete Hegseth push loudly for AI-fueled efficiency. 

As I also write in my story, this push raises alarms from some AI safety experts about whether large language models are fit to analyze subtle pieces of intelligence in situations with high geopolitical stakes. It also accelerates the US toward a world where AI is not just analyzing military data but suggesting actions—for example, generating lists of targets. Proponents say this promises greater accuracy and fewer civilian deaths, but many human rights groups argue the opposite. 

With that in mind, here are three open questions to keep your eye on as the US military, and others around the world, bring generative AI to more parts of the so-called “kill chain.”

What are the limits of “human in the loop”?

Talk to as many defense-tech companies as I have and you’ll hear one phrase repeated quite often: “human in the loop.” It means that the AI is responsible for particular tasks, and humans are there to check its work. It’s meant to be a safeguard against the most dismal scenarios—AI wrongfully ordering a deadly strike, for example—but also against more trivial mishaps. Implicit in this idea is an admission that AI will make mistakes, and a promise that humans will catch them.

But the complexity of AI systems, which pull from thousands of pieces of data, makes that a herculean task for humans, says Heidy Khlaaf, who is chief AI scientist at the AI Now Institute, a research organization, and previously led safety audits for AI-powered systems.

“‘Human in the loop’ is not always a meaningful mitigation,” she says. When an AI model relies on thousands of data points to draw conclusions, “it wouldn’t really be possible for a human to sift through that amount of information to determine if the AI output was erroneous.” As AI systems rely on more and more data, this problem scales up. 

Is AI making it easier or harder to know what should be classified?

In the Cold War era of US military intelligence, information was captured through covert means, written up into reports by experts in Washington, and then stamped “Top Secret,” with access restricted to those with proper clearances. The age of big data, and now the advent of generative AI to analyze that data, is upending the old paradigm in lots of ways.

One specific problem is called classification by compilation. Imagine that hundreds of unclassified documents all contain separate details of a military system. Someone who managed to piece those together could reveal important information that on its own would be classified. For years, it was reasonable to assume that no human could connect the dots, but this is exactly the sort of thing that large language models excel at. 

With the mountain of data growing each day, and then AI constantly creating new analyses, “I don’t think anyone’s come up with great answers for what the appropriate classification of all these products should be,” says Chris Mouton, a senior engineer for RAND, who recently tested how well suited generative AI is for intelligence and analysis. Underclassifying is a US security concern, but lawmakers have also criticized the Pentagon for overclassifying information. 

The defense giant Palantir is positioning itself to help, by offering its AI tools to determine whether a piece of data should be classified or not. It’s also working with Microsoft on AI models that would train on classified data. 

How high up the decision chain should AI go?

Zooming out for a moment, it’s worth noting that the US military’s adoption of AI has in many ways followed consumer patterns. Back in 2017, when apps on our phones were getting good at recognizing our friends in photos, the Pentagon launched its own computer vision effort, called Project Maven, to analyze drone footage and identify targets.

Now, as large language models enter our work and personal lives through interfaces such as ChatGPT, the Pentagon is tapping some of these models to analyze surveillance. 

So what’s next? For consumers, it’s agentic AI, or models that can not just converse with you and analyze information but go out onto the internet and perform actions on your behalf. It’s also personalized AI, or models that learn from your private data to be more helpful. 

All signs point to the prospect that military AI models will follow this trajectory as well. A report published in March from Georgetown’s Center for Security and Emerging Technology found a surge in military adoption of AI to assist in decision-making. “Military commanders are interested in AI’s potential to improve decision-making, especially at the operational level of war,” the authors wrote.

In October, the Biden administration released its national security memorandum on AI, which provided some safeguards for these scenarios. This memo hasn’t been formally repealed by the Trump administration, but President Trump has indicated that the race for competitive AI in the US needs more innovation and less oversight. Regardless, it’s clear that AI is quickly moving up the chain not just to handle administrative grunt work, but to assist in the most high-stakes, time-sensitive decisions. 

I’ll be following these three questions closely. If you have information on how the Pentagon might be handling these questions, please reach out via Signal at jamesodonnell.22. 

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

This architect wants to build cities out of lava

Arnhildur Pálmadóttir was around three years old when she saw a red sky from her living room window. A volcano was erupting about 25 miles away from where she lived on the northeastern coast of Iceland. Though it posed no immediate threat, its ominous presence seeped into her subconscious, populating her dreams with streaks of light in the night sky.

Fifty years later, these “gloomy, strange dreams,” as Pálmadóttir now describes them, have led to a career as an architect with an extraordinary mission: to harness molten lava and build cities out of it.

Pálmadóttir today lives in Reykjavik, where she runs her own architecture studio, S.AP Arkitektar, and the Icelandic branch of the Danish architecture company Lendager, which specializes in reusing building materials.

The architect believes the lava that flows from a single eruption could yield enough building material to lay the foundations of an entire city. She has been researching this possibility for more than five years as part of a project she calls Lavaforming. Together with her son and colleague Arnar Skarphéðinsson, she has identified three potential techniques: drill straight into magma pockets and extract the lava; channel molten lava into pre-dug trenches that could form a city’s foundations; or 3D-print bricks from molten lava in a technique similar to the way objects can be printed out of molten glass.

Pálmadóttir and Skarphéðinsson first presented the concept during a talk at Reykjavik’s DesignMarch festival in 2022. This year they are producing a speculative film set in 2150, in an imaginary city called Eldborg. Their film, titled Lavaforming, follows the lives of Eldborg’s residents and looks back on how they learned to use molten lava as a building material. It will be presented at the Venice Biennale, a leading architecture festival, in May. 

Set in 2150, her speculative film Lavaforming presents a fictional city built from molten lava.
COURTESY OF S.AP ARKITEKTAR

Buildings and construction materials like concrete and steel currently contribute a staggering 37% of the world’s annual carbon dioxide emissions. Many architects are advocating for the use of natural or preexisting materials, but mixing earth and water into a mold is one thing; tinkering with 2,000 °F lava is another. 

Still, Pálmadóttir is piggybacking on research already being done in Iceland, which has 30 active volcanoes. Since 2021, eruptions have intensified in the Reykjanes Peninsula, which is close to the capital and to tourist hot spots like the Blue Lagoon. In 2024 alone, there were six volcanic eruptions in that area. This frequency has given volcanologists opportunities to study how lava behaves after a volcano erupts. “We try to follow this beast,” says Gro Birkefeldt M. Pedersen, a volcanologist at the Icelandic Meteorological Office (IMO), who has consulted with Pálmadóttir on a few occasions. “There is so much going on, and we’re just trying to catch up and be prepared.”

Pálmadóttir’s concept assumes that many years from now, volcanologists will be able to forecast lava flow accurately enough for cities to plan on using it in building. They will know when and where to dig trenches so that when a volcano erupts, the lava will flow into them and solidify into either walls or foundations.

Today, forecasting lava flows is a complex science that requires remote sensing technology and tremendous amounts of computational power to run simulations on supercomputers. The IMO typically runs two simulations for every new eruption—one based on data from previous eruptions, and another based on additional data acquired shortly after the eruption (from various sources like specially outfitted planes). With every event, the team accumulates more data, which makes the simulations of lava flow more accurate. Pedersen says there is much research yet to be done, but she expects “a lot of advancement” in the next 10 years or so. 

To design the speculative city of Eldborg for their film, Pálmadóttir and Skarphéðinsson used 3D-modeling software similar to what Pedersen uses for her simulations. The city is primarily built on a network of trenches that were filled with lava over the course of several eruptions, while buildings are constructed out of lava bricks. “We’re going to let nature design the buildings that will pop up,” says Pálmadóttir. 

The aesthetic of the city they envision will be less modernist and more fantastical—a bit “like [Gaudi’s] Sagrada Familia,” says Pálmadóttir. But the aesthetic output is not really the point; the architects’ goal is to galvanize architects today and spark an urgent discussion about the impact of climate change on our cities. She stresses the value of what can only be described as moonshot thinking. “I think it is important for architects not to be only in the present,” she told me. “Because if we are only in the present, working inside the system, we won’t change anything.”

Pálmadóttir was born in 1972 in Húsavik, a town known as the whale-watching capital of Iceland. But she was more interested in space and technology and spent a lot of time flying with her father, a construction engineer who owned a small plane. She credits his job for the curiosity she developed about science and “how things were put together”—an inclination that proved useful later, when she started researching volcanoes. So was the fact that Icelanders “learn to live with volcanoes from birth.” At 21, she moved to Norway, where she spent seven years working in 3D visualization before returning to Reykjavik and enrolling in an architecture program at the Iceland University of the Arts. But things didn’t click until she moved to Barcelona for a master’s degree at the Institute for Advanced Architecture of Catalonia. “I remember being there and feeling, finally, like I was in the exact right place,” she says. 

Before, architecture had seemed like a commodity and architects like “slaves to investment companies,” she says. Now, it felt like a path with potential. 
She returned to Reykjavik in 2009 and worked as an architect until she founded S.AP (for “studio Arnhildur Pálmadóttir”) Arkitektar in 2018; her son started working with her in 2019 and officially joined her as an architect this year, after graduating from the Southern California Institute of Architecture. 

In 2021, the pair witnessed their first eruption up close, near the Fagradalsfjall volcano on the Reykjanes Peninsula. It was there that Pálmadóttir became aware of the sheer quantity of material coursing through the planet’s veins, and the potential to divert it into channels. 

Lava has already proved to be a strong, long-lasting building material—at least in its solid state. When it cools, it solidifies into volcanic rock like basalt or rhyolite. The type of rock depends on the composition of the lava, but basaltic lava—like the kind found in Iceland and Hawaii—forms one of the hardest rocks on Earth, which means that structures built from this type of lava would be durable and resilient. 

For years, architects in Mexico, Iceland, and Hawaii (where lava is widely available) have built structures out of volcanic rock. But quarrying that rock is an energy-intensive process that requires heavy machines to extract, cut, and haul it, often across long distances, leaving a big carbon footprint. Harnessing lava in its molten state, however, could unlock new methods for sustainable construction. Jeffrey Karson, a professor emeritus at Syracuse University who specializes in volcanic activity and who cofounded the Syracuse University Lava Project, agrees that lava is abundant enough to warrant interest as a building material. To understand how it behaves, Karson has spent the past 15 years performing over a thousand controlled lava pours from giant furnaces. If we figure out how to build up its strength as it cools, he says, “that stuff has a lot of potential.” 

In his research, Karson found that inserting metal rods into the lava flow helps reduce the kind of uneven cooling that would lead to thermal cracking—and therefore makes the material stronger (a bit like rebar in concrete). Like glass and other molten materials, lava behaves differently depending on how fast it cools. When glass or lava cools slowly, crystals start forming, strengthening the material. Replicating this process—perhaps in a kiln—could slow down the rate of cooling and let the lava become stronger. This kind of controlled cooling is “easy to do on small things like bricks,” says Karson, so “it’s not impossible to make a wall.” 

Pálmadóttir is clear-eyed about the challenges before her. She knows the techniques she and Skarphéðinsson are exploring may not lead to anything tangible in their lifetimes, but they still believe that the ripple effect the projects could create in the architecture community is worth pursuing.

Both Karson and Pedersen caution that more experiments are necessary to study this material’s potential. For Skarphéðinsson, that potential transcends the building industry. More than 12 years ago, Icelanders voted that the island’s natural resources, like its volcanoes and fishing waters, should be declared national property. That means any city built from lava flowing out of these volcanoes would be controlled not by deep-pocketed individuals or companies, but by the nation itself. (The referendum was considered illegal almost as soon as it was approved by voters and has since stalled.) 

For Skarphéðinsson, the Lavaforming project is less about the material than about the “political implications that get brought to the surface with this material.” “That is the change I want to see in the world,” he says. “It could force us to make radical changes and be a catalyst for something”—perhaps a social megalopolis where citizens have more say in how resources are used and profits are shared more evenly.

Cynics might dismiss the idea of harnessing lava as pure folly. But the more I spoke with Pálmadóttir, the more convinced I became. It wouldn’t be the first time in modern history that a seemingly dangerous idea (for example, drilling into scalding pockets of underground hot springs) proved revolutionary. Once entirely dependent on oil, Iceland today obtains 85% of its electricity and heat from renewable sources. “[My friends] probably think I’m pretty crazy, but they think maybe we could be clever geniuses,” she told me with a laugh. Maybe she is a little bit of both.

Elissaveta M. Brandon is a regular contributor to Fast Company and Wired.

The Download: tracking the evolution of street drugs, and the next wave of military AI

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

How the federal government is tracking changes in the supply of street drugs

In 2021, the Maryland Department of Health and the state police were confronting a crisis: Fatal drug overdoses in the state were at an all-time high, and authorities didn’t know why.

Seeking answers, Maryland officials turned to scientists at the National Institute of Standards and Technology, the national metrology institute for the United States, which defines and maintains standards of measurement essential to a wide range of industrial sectors and health and security applications.

There, a research chemist named Ed Sisco and his team had developed methods for detecting trace amounts of drugs, explosives, and other dangerous materials—techniques that could protect law enforcement officials and others who had to collect these samples. And a pilot uncovered new, critical information almost immediately. Read the full story.

—Adam Bluestein

This story is from the next edition of our print magazine. Subscribe now to read it and get a copy of the magazine when it lands!

Phase two of military AI has arrived

—James O’Donnell

Last week, I spoke with two US Marines who spent much of last year deployed in the Pacific, conducting training exercises from South Korea to the Philippines. Both were responsible for analyzing surveillance to warn their superiors about possible threats to the unit. But this deployment was unique: For the first time, they were using generative AI to scour intelligence, through a chatbot interface similar to ChatGPT. 

As I wrote in my new story, this experiment is the latest evidence of the Pentagon’s push to use generative AI—tools that can engage in humanlike conversation—throughout its ranks, for tasks including surveillance. This push raises alarms from some AI safety experts about whether large language models are fit to analyze subtle pieces of intelligence in situations with high geopolitical stakes.

Here are three open questions to keep your eye on as the US military, and others around the world, bring generative AI to more parts of the so-called “kill chain.” Read the full story.

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The FCC wants Europe to choose between US and Chinese technology
Trump official Brendan Carr has urged Western allies to pick Elon Musk’s Starlink over rival Chinese satellite firms. (FT $)
+ China may look like a less erratic choice right now. (NY Mag $)

2 Nvidia wants to build its AI supercomputers entirely in the US
It’s a decision the Trump administration has claimed credit for. (WP $)
+ That said, Nvidia hasn’t said how much gear it plans to make in America. (WSJ $)
+ Production of its latest chip has already begun in Arizona. (Bloomberg $)

3 Mark Zuckerberg defended Meta on the first day of its antitrust trial
He downplayed the company’s decision to purchase Instagram and WhatsApp. (Politico)
+ The government claims he bought the firms to stifle competition. (The Verge)
+ Zuckerberg has previously denied that his purchases had hurt competition. (NYT $)

4 OpenAI’s new models are designed to excel at coding
The three models have been optimized to follow complex instructions. (Wired $)
+ We’re still waiting for confirmation of GPT-5. (The Verge)
+ The second wave of AI coding is here. (MIT Technology Review)

5 Apple has increased its iPhone shipments by 10%
It’s part of a pre-emptive plan to mitigate tariff disruptions. (Bloomberg $)
+ The tariff chaos has played havoc with Apple stocks. (Insider $)

6 We’re learning more about the link between long covid and cognitive impairment
Studies suggest that a patient’s age when they contracted covid may be a key factor. (WSJ $)

7 Can’t be bothered to call your elderly parents? Get AI to do it 📞
How thoroughly depressing. (404 Media)

8 This video app hopes to capitalize on TikTok’s uncertain future
But unlike TikTok, Neptune allows creators to hide their likes. (TechCrunch)

9 Meet the tech bros who want to live underwater
Colonizing the sea is one of the final frontiers. (NYT $)
+ Meet the divers trying to figure out how deep humans can go. (MIT Technology Review)

10 Google’s new AI model can decipher dolphin sounds 🐬
If they’re squawking, back away. (Ars Technica)
+ The way whales communicate is closer to human language than we realized. (MIT Technology Review)

Quote of the day

“If you don’t like an ad, you scroll past it. It takes about a second.”

—Mark Hansen, Meta’s lead lawyer, makes light of the Federal Trade Commission’s assertion that users of its platforms are inundated with ads during the first day of Meta’s monopoly trial, Ars Technica reports.

The big story

Recapturing early internet whimsy with HTML

Websites weren’t always slick digital experiences.

There was a time when surfing the web involved opening tabs that played music against your will and sifting through walls of text on a colored background. In the 2000s, before Squarespace and social media, websites were manifestations of individuality—built from scratch using HTML, by users who had some knowledge of code.

Scattered across the web are communities of programmers working to revive this seemingly outdated approach. And the movement is anything but a superficial appeal to retro aesthetics—it’s about celebrating the human touch in digital experiences. Read the full story.

—Tiffany Ng

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Who doesn’t love a good stroll?
+ All hail Shenmue, the recently crowned most influential game of all time.
+ This Wikipedia-powered museum is really quite something (thanks Amy!)
+ This spring’s hottest accessory is a conical princess crown. No, really.

A small US city experiments with AI to find out what residents want

Bowling Green, Kentucky, is home to 75,000 residents who recently wrapped up an experiment in using AI for democracy: Can an online polling platform, powered by machine learning, capture what residents want to see happen in their city?

When Doug Gorman, elected leader of the county that includes Bowling Green, took office in 2023, the city was the fastest-growing in the state and projected to double in size by 2050, but it lacked a plan for how that growth would unfold. Gorman had a meeting with Sam Ford, a local consultant who had worked with the surveying platform Pol.is, which uses machine learning to gather opinions from large groups of people. 

They “needed a vision” for the anticipated growth, Ford says. The two convened a group of volunteers with experience in eight areas: economic development, talent, housing, public health, quality of life, tourism, storytelling, and infrastructure. They decided to use Pol.is to help write a 25-year plan for the city. The platform is just one of several new technologies used in Europe and increasingly in the US to help make sure that local governance is informed by public opinion.

After a month of advertising, the Pol.is portal launched in February. Residents could go to the website and anonymously submit an idea (in less than 140 characters) for what the 25-year plan should include. They could also vote on whether they agreed or disagreed with other ideas. The tool could be translated into a participant’s preferred language, and human moderators worked to make sure the traffic was coming from the Bowling Green area. 

Over the month that it was live, 7,890 residents participated, and 2,000 people submitted their own ideas. An AI-powered tool from Google Jigsaw then analyzed the data to find what people agreed and disagreed on. 

Experts on democracy technologies who were not involved in the project say this level of participation—about 10% of the city’s residents—was impressive.

“That is a lot,” says Archon Fung, director of the Ash Center for Innovation and Democratic Governance at the Harvard Kennedy School. A local election might see a 25% turnout, he says, and that requires nothing more than filling out a ballot. 

“Here, it’s a more demanding kind of participation, right? You’re actually voting on or considering some substantive things, and 2,000 people are contributing ideas,” he says. “So I think that’s a lot of people who are engaged.”

The plans that received the most attention in the Bowling Green experiment were hyperlocal. The ideas with the broadest support were increasing the number of local health-care specialists so residents wouldn’t have to travel to nearby Nashville for medical care, enticing more restaurants and grocery stores to open on the city’s north side, and preserving historic buildings. 

More contentious ideas included approving recreational marijuana, adding sexual orientation and gender identity to the city’s nondiscrimination clause, and providing more options for private education. Out of 3,940 unique ideas, 2,370 received more than 80% agreement, including initiatives like investing in stormwater infrastructure and expanding local opportunities for children and adults with autism.  

The volunteers running the experiment were not completely hands-off. Submitted ideas were screened according to a moderation policy, and redundant ideas were not posted. Ford says that 51% of ideas were published, and 31% were deemed redundant. About 6% of ideas were not posted because they were either completely off-topic or contained a personal attack.

But some researchers who study the technologies that can make democracy more effective question whether soliciting input in this manner is a reliable way to understand what a community wants.

One problem is self-selection—for example, certain kinds of people tend to show up to in-person forums like town halls. Research shows that seniors, homeowners, and people with high levels of education are the most likely to attend, Fung says. It’s possible that similar dynamics are at play among the residents of Bowling Green who decided to participate in the project.

“Self-selection is not an adequate way to represent the opinions of a public,” says James Fishkin, a political scientist at Stanford who’s known for developing a process he calls deliberative polling, in which a representative sample of a population’s residents are brought together for a weekend, paid about $300 each for their participation, and asked to deliberate in small groups. Other methods, used in some European governments, use jury-style groups of residents to make public policy decisions. 

What’s clear to everyone who studies the effectiveness of these tools is that they promise to move a city in a more democratic direction. But we won’t know whether Bowling Green’s experiment worked until residents see what the city does with the ideas they raised.

“You can’t make policy based on a tweet,” says Beth Simone Noveck, who directs a lab that studies democracy and technology at Northeastern University. As she points out, residents were voting on 140-character ideas, and those now need to be formed into real policies. 

“What comes next,” she says, “is the conversation between the city and residents to develop a short proposal into something that can actually be implemented.” For residents to trust that their voice actually matters, the city must be clear on why it’s implementing some ideas and not others. 

For now, the organizers have made the results public, and they will make recommendations to the Warren County leadership later this year. 

Spring Books on B2B, Nvidia, Bill Gates, More

Seven new and upcoming books offer practical advice on bold marketing, global branding, and growing from a startup to a multi-million-dollar company, including honest portrayals of lessons learned by brilliant business leaders.

Courageous Marketing: The B2B Marketer’s Playbook for Career Success


by Udi Ledergor

Author Udi Ledergor is the chief evangelist and former CMO at Gong, an AI SaaS platform for analyzing sales conversations that has grown to a $7 billion valuation in just 10 years. His just-published book advocates making bold and risky moves to grab attention and create loyal fans. It garnered blurbs from prominent authors Daniel Pink, Robert Cialdini, and Nir Eyal. It is already in the top 10 for three Amazon book categories.

Build a Business You Love


by Dave Ramsey

Ramsey built a one-man consulting business into a $250 million empire and authored eight books, notably the New York Times bestseller “The Total Money Makeover.” This new title aims to be a “road map that takes the guesswork out of growth for business owners.” Ramsey breaks growth into five stages — Treadmill Operator, Pathfinder, Trailblazer, Peak Performer, and Legacy Builder — and advises on the unique challenges of each.

How Not to Invest


by Barry Ritholtz

Asserting that “avoiding errors is much more important than scoring wins,” Ritholtz, co-founder of a prominent wealth management firm, aims to help readers evade the most common mistakes people make with their money. “Shark Tank” investor Mark Cuban and Nobel-winning economist Richard Thaler call it a fun read.

The Thinking Machine: Jensen Huang, Nvidia, and the World’s Most Coveted Microchip


by Stephen Witt

Hot on the heels of February’s “The Nvidia Way” comes a new biography of Nvidia founder Jensen Huang, “a determined entrepreneur who defied Wall Street to push his radical vision for computing.” Read it to learn how the company morphed from a video-game chipmaker into a leader in AI.

Source Code: My Beginnings


by Bill Gates

With its black-and-white cover photo of Gates in his youth, this memoir by Microsoft cofounder and philanthropist Bill Gates isn’t the usual portrait of an entrepreneur’s path to success. Instead, it recounts the early life experiences that shaped his character before he started that journey.

Shoveling $h!t: A Love Story About the Entrepreneur’s Messy Path to Success


by Kass and Mike Lazerow

As the irreverent title suggests, the serial entrepreneur power couple who founded Golf.com and Buddy Media (acquired by Salesforce) promise a “brutally honest take” in their forthcoming book. Admitting that entrepreneurship is hard, they share personal stories and the strategies they’ve learned.

Brand Global, Adapt Local: How to Build Brand Value Across Cultures


by Katherine Melchior Ray and Nataly Kelly

Two experts share their global experiences with Nestlé, Nike, and others on how to build an international marketing and localization mindset. They explore how companies balance preserving brand identity with exploring new markets.

LinkedIn Study Finds Adding Links Boosts Engagement By 13% via @sejournal, @MattGSouthern

A new study of over 577,000 LinkedIn posts challenges common marketing advice. It finds that posts with links get 13.57% more interactions and 4.90% more views than posts without links.

The LinkedIn study by Metricool analyzed nearly 48,000 company pages over three years. The findings give marketers solid data to rethink their LinkedIn strategies.

Link Performance Contradicts Common Advice

For years, social media experts have warned against adding links in LinkedIn posts.

Many claimed the platform would show these posts to fewer people to keep users on LinkedIn.

This new research says that’s wrong.

The data shows that about 31% of LinkedIn posts contained links to other websites. These posts consistently did better than posts without links.

Image Credit: Metricool LinkedIn Study 2025.

Content Format Performance Reveals Unexpected Winners

The study also found big differences in how content types perform.

Carousels (document posts) work best for engagement, with the highest engagement rate (45.85%) of any format. People on LinkedIn are willing to spend time clicking through multiple slides.

Polls are a missed opportunity. They make up only 0.00034% of all posts analyzed but got 206.33% more reach than average posts. Almost no one uses them, but they perform well.

Text-only posts performed worse than visual content across all metrics. Despite being common, they received the fewest interactions.

Video Content Shows Remarkable Growth

LinkedIn video content grew by 53% last year, with engagement up by 87.32%. This growth is faster than on TikTok, Reels, and YouTube.

The report states:

“Video posting may have increased by 13.77%, but the real story is in the rise of impressions (+73.39%) and views (+52.17%). Users are engaging more with video content, which indicates that LinkedIn is prioritizing this format in its algorithm.”

Industry-Specific Insights

The research broke down performance by industry. Surprisingly, sectors with smaller followings often get better engagement.

Manufacturing and utilities companies had fewer followers than education or retail companies, yet they received more engagement per post.

This challenges the idea that having more followers automatically means better results.

Practical Tips for Marketers

Based on these findings, here’s what LinkedIn marketers should do:

  • Don’t avoid links: Include links when they add value. They help, not hurt, your posts.
  • Mix up your content: Use more carousels and polls. They perform much better than other formats.
  • Send more traffic through LinkedIn: With clicks up 28.13% year-over-year, LinkedIn is better than many think for driving website traffic.
  • Be realistic about follower growth: Only 17.68% of accounts gained followers in 2024. Growing a LinkedIn following is harder than on other platforms.

Looking Ahead

The Metricool report challenges fundamental LinkedIn marketing beliefs with solid data. The most useful finding for SEO and content marketers is that adding links helps rather than hurts your posts.

Marketers should regularly test old advice against real performance data. What worked on LinkedIn in the past might not work in 2025.


Featured Image: Jartee/Shutterstock

Microsoft Monetize Gets A Major AI Upgrade via @sejournal, @brookeosmundson

Microsoft’s Monetize platform just received one of its biggest updates to date, and this one is all about working smarter, not harder.

Launched April 14, the new Monetize experience introduces AI-powered tools, a revamped homepage, and much-needed platform enhancements that give both publishers and advertisers more visibility and control.

This isn’t just a design refresh. With Microsoft Copilot now integrated, a new centralized dashboard, and a detailed history log, the platform is being positioned as a smarter command center for digital monetization.

Here’s what’s new and how it impacts your bottom line.

Copilot Is Now Built Into Monetize

Microsoft’s Copilot is now officially integrated into Monetize and available to all clients.

Copilot acts like a real-time AI assistant built directly into your monetization workflow. Instead of sifting through reports and data tables to figure out what’s wrong, Copilot surfaces insights automatically.

Think: “Why is my fill rate down?” or “Which line items are underperforming this week?”

Now, you’re able to ask and get answers without leaving the platform.

It’s designed to proactively alert users to revenue-impacting issues, like creatives that haven’t served, line items that didn’t deliver as expected, or unexpected dips in CPM.

For publishers who manage large volumes of inventory and multiple demand sources, this type of AI support can dramatically reduce troubleshooting time and help get campaigns back on track faster.

This allows monetization teams to shift their focus to revenue strategy, not just diagnostics.

A Smarter, Centralized Homepage

The new Monetize homepage is more than a cosmetic update; it’s now the nerve center of the platform, built around clarity and action.

Instead of bouncing between multiple tabs or reports, users now land on a central dashboard that shows performance highlights, revenue trends, system notifications, and even troubleshooting insights.

It’s designed to cut down the time spent navigating the platform and ramp up how quickly you can make revenue-driving decisions.

Microsoft Monetize homepage performance highlights example. Image credit: Microsoft Ads blog, April 2025

Some of the key features of the new homepage include:

  • Performance highlights: Get a high-level summary of revenue trends and your most important KPIs at the top of the screen.
  • Revenue and troubleshooting insights: What was originally in the Monetize Insights tool is now integrated into the homepage.
  • Brand unblock and authorized sellers insights: Brings visibility to commonly overlooked revenue blocks.

In short: you no longer need to click into five different tabs to piece together what’s going on. The homepage is designed to give a high-level pulse on your monetization performance, with quick pathways to dig deeper when needed.

It’s particularly helpful for teams managing multiple properties, as you can prioritize where to intervene based on the highest revenue impact.

A Simplified Navigation Experience

Another welcome change is the platform’s redesigned navigation. Microsoft has moved to a cleaner left-hand panel layout, consistent with its broader product ecosystem.

It may seem like a small thing, but this update removes a lot of the friction users previously experienced when trying to find specific tools or data. Now, when you hover over a section like “Line Items” or “Reporting,” all related sub-navigation options appear instantly, helping users get where they need to go faster.

For publishers who jump between Microsoft Ads, Monetize, and other tools like Microsoft’s Analytics offerings, this consistency in layout creates a smoother experience overall.

History Log Adds Transparency

One of the more functional (but underrated) updates is the new history change log.

This feature gives users the ability to view a running history of platform changes, whether it’s edits to ad units, campaign-level changes, or adjustments made by different team members.

You can now:

  • Filter changes by user, object type, or date range
  • View a summary of all edits made to a specific item over time
  • Compare and search up to five different objects at once
  • Spot which changes may have inadvertently affected revenue or delivery

This is such a time-saver for teams managing complex account structures or operating across multiple internal stakeholders.

Why Advertisers and Brands Should Care

While most of these updates are tailored to publishers, advertisers and brands also stand to benefit, especially those buying programmatically within Microsoft’s ecosystem.

Here are a few examples of how brands and advertisers can benefit:

  • Cleaner inventory = better delivery. Copilot helps publishers resolve issues like broken creatives or poor match rates faster. That means your ads are more likely to show where and when they should.
  • More consistent pricing. With publishers better able to manage and optimize their inventory, the fluctuations in floor pricing and bid dynamics can become more predictable.
  • Better campaign outcomes. When ad operations run more smoothly, campaign metrics tend to improve.
  • Reduced latency. The homepage’s new alert system flags latency issues immediately, helping prevent delayed or missed ad requests that impact advertiser performance.

In short: a more efficient supply side leads to fewer wasted impressions and stronger results for advertisers across Microsoft inventory.

Looking Ahead

With this revamp, Microsoft is signaling that Monetize is no longer just an ad server: it’s becoming an intelligence hub for publishers.

Between the Copilot integration, the centralized homepage, and detailed change logs, the platform gives monetization teams tools to act faster, stay informed, and optimize proactively.

By improving the infrastructure on the publisher side, Microsoft is also improving the health and quality of its programmatic marketplace. That’s a win for everyone involved, whether you’re selling impressions or buying them.

If you’re a publisher already using Monetize, now’s the time to explore these new features. If you’re an advertiser, these updates may mean more reliable inventory and smarter campaign performance across Microsoft’s supply chain.

Google AI Overview Study: 90% Of B2B Buyers Click On Citations via @sejournal, @MattGSouthern

Google’s AI Overviews have changed how search works. A TrustRadius report shows that 72% of B2B buyers see AI Overviews during research.

The study found something interesting: 90% of its respondents said they click on the cited sources to check information.

This finding differs from previous reports about declining click rates.

AI Overviews Are Affecting Search Patterns in Complex Ways

When AI summaries first appeared in search results, many publishers worried about “zero-click searches” reducing traffic. Many still see evidence of fewer clicks across different industries.

This research suggests B2B tech searches work differently. The study shows that while traffic patterns are changing, many users in their sample don’t fully trust AI content. They often check sources to verify what they read.

The report states:

“These overviews cite sources, and 90% of buyers surveyed said that they click through the sources cited in AI Overviews for fact-checking purposes. Buyers are clearly wanting to fact-check. They also want to consult with their peers, which we’ll get into later.”

If this behavior extends beyond this study’s sample, being cited in these overviews might offer real visibility for specific queries.

From Traffic Goals to Citation Considerations

While marketers should still optimize for organic clicks, becoming a citation source for AI Overviews is increasingly valuable.

The report notes:

“Vendors can fill the gap in these tools’ capabilities by providing buyers with content that answers their later-stage buying questions, including use case-specific content or detailed pricing information.”

This might mean creating clear, authoritative content that AI systems could cite. This applies especially to category-level searches where AI Overviews often appear.

The Ungated Content Advantage in AI Training

The research identified a common misconception about how AI works. Some vendors think AI models can access their gated content (behind forms) for training.

They can’t. AI models generally only use publicly available content.

The report suggests:

“Vendors must find the right balance between gated and ungated content to maintain discoverability in the age of AI.”

This creates a challenge for B2B marketers who put valuable content behind forms. Making more quality information public could influence AI systems. You can still keep some premium content gated for lead generation.

Potential Implications For SEO Professionals

For search marketers, consider these points:

  • Google’s emphasis on Experience, Expertise, Authoritativeness, and Trustworthiness seems even more critical for AI evaluation.
  • The research notes that “AI tools aren’t just training on vendor sites… Many AI Overviews cite third-party technology sites as sources.”
  • As organic traffic patterns change, “AI Overviews are reshaping brand discoverability” and possibly “increasing the use of paid search.”

Evolving SEO Success Metrics

Traditional SEO metrics like organic traffic still matter. But this research suggests we should also monitor other factors, like how often AI Overviews cite you and the quality of that traffic.

Kevin Indig is quoted in the report stating:

“The era of volume traffic is over… What’s going away are clicks from the super early stage of the buyer journey. But people will click through to visit sites eventually.”

He adds:

“I think we’ll see a lot less traffic, but the traffic that still arrives will be of higher quality.”

This offers search marketers one view on handling the changing landscape. Like with all significant changes, the best approach likely involves:

  • Testing different strategies
  • Measuring what works for your specific audience
  • Adapting as you learn more

This research doesn’t suggest AI is making SEO obsolete. Instead, it invites us to consider how SEO might change as search behaviors evolve.


Featured Image: PeopleImages.com – Yuri A/Shutterstock

Beyond ROAS: Aligning Google Ads With Your True Business Objectives [Webinar] via @sejournal, @hethr_campbell

Are your paid campaigns delivering the results that really matter?

If your ad strategy is focused only on cost-per-acquisition, you might be leaving long-term growth on the table. It’s time to rethink how you measure success in Google Ads.

In this upcoming webinar, you’ll get:

  • Smarter ways to measure PPC success.
  • Tested, powerful bidding strategies.
  • Real, bigger business impact.

Why This Webinar Is a Must-Attend Event

This session is designed to help you move beyond ROAS and align your ad performance with actual business goals.

Join live to learn how to make that shift.

Expert Insights From Justin Covington

Justin Covington, Director of Paid Channels Solutions at iQuanti, will walk you through the latest updates in Google Ads and how to use them to drive stronger results. You’ll leave with practical, ready-to-use strategies you can apply immediately.

From campaign structure to audience strategy, you’ll get practical steps to start optimizing your paid ads immediately.

Don’t Miss Out!

Save your spot now for clear, tactical guidance that helps your ad dollars go further.

Can’t Make It Live?

Register anyway, and we’ll send the full recording straight to your inbox.