How AI is turning the Iran conflict into theater

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

“Anyone wanna host a get together in SF and pull this up on a 100 inch TV?” 

The author of that post on X was referring to an online intelligence dashboard following the US-Israel strikes against Iran in real time. Built by two people from the venture capital firm Andreessen Horowitz, it combines open-source data like satellite imagery and ship tracking with a chat function, news feeds, and links to prediction markets, where people can bet on things like who Iran’s next “supreme leader” will be (the recent selection of Mojtaba Khamenei left some bettors with a payout). 

I’ve reviewed over a dozen other dashboards like this in the last week. Many were apparently “vibe-coded” in a couple of days with the help of AI tools, including one that got the attention of a founder of the intelligence giant Palantir, whose platform the US military is using to access AI models like Claude during the war. Some were built before the conflict in Iran, but nearly all of them are being advertised by their creators as a way to beat the slow and ineffective media by getting straight to the truth of what’s happening on the ground. “Just learned more in 30 seconds watching this map than reading or watching any major news network,” one commenter wrote on LinkedIn, responding to a visualization of Iran’s airspace being shut down before the strikes.

Much of the spotlight on AI and the Iran conflict has rightfully been on the role that models like Claude might be playing in helping the US military make decisions about where to strike. But these intelligence dashboards and the ecosystem surrounding them reflect a new role that AI is playing in wartime: mediating information, often for the worse.

There’s a confluence of factors at play. AI coding tools mean people don’t need much technical skill to assemble open-source intelligence anymore, and chatbots can offer fast, if dubious, analysis of it. The rise in fake content leaves observers of the war wanting the sort of raw, accurate analysis normally accessible only to intelligence agencies. Demand for these dashboards is also driven by real-time prediction markets that promise financial rewards to anyone sufficiently informed. And the fact that the US military is using Anthropic’s Claude in the conflict (despite its designation as a supply chain risk) has signaled to observers that AI is the intelligence tool the pros use. Together, these trends are creating a new kind of AI-enabled wartime circus that can distort the flow of information as much as it clarifies it.

As a journalist, I believe these sorts of intelligence tools have a lot of promise. While many of us know that real-time data on shipping routes or power outages exist, it’s a powerful thing to actually see it all assembled in one place (though using it to watch a war unfold while you munch on popcorn and place bets turns the war into perverse entertainment). But there are real reasons to think that these sorts of raw data feeds are not as informative as they may feel. 

Craig Silverman, a digital investigations expert who teaches investigative techniques, has been keeping a log of these dashboards (he’s up to 20). “The concern,” he says, “is there’s an illusion of being on top of things and being in control, where all you’re really doing is just pulling in a ton of signals and not necessarily understanding what you’re seeing, or being able to pull out true insights from it.” 

One problem has to do with the quality of the information. Many dashboards feature “intel feeds” with AI-generated summaries of complex, ever-changing news events. These can introduce inaccuracies. By design, the data is not especially curated. Instead, the feeds just display everything at once, with a map of strike locations in Iran next to the prices of obscure cryptocurrencies. 

Intelligence agencies, on the other hand, pair data feeds with people who can offer expertise and historical context. They also, of course, have access to proprietary information that doesn’t show up on the open web. 

The implicit promise from the people building and selling this sort of information pipeline about the Iran conflict is that AI can be a great democratizing force. There’s a secret feed of information that only the elites have had access to, the thinking goes, but now AI can bring it to everyone to do with what they wish, whether that’s simply to be more informed or to make bets on nuclear strikes. But an abundance of information, which AI is undeniably good at assembling, does not come with the accuracy or context required for real understanding. Intelligence agencies do this in-house; good journalism does the same work for the rest of us.

It is, by the way, hard to overstate the connection this all has with betting markets. The dashboard created by the pair at Andreessen Horowitz has a scrolling list of bets being made on the prediction platform Kalshi (which Andreessen Horowitz has invested in). Other dashboards link to Polymarket, offering bets on whether the US will strike Iran or when Iran’s internet will return.

AI has also long made it cheaper and easier to spread fake content, and that problem is on full display during the Iran conflict: last week the Financial Times found a slew of AI-generated satellite imagery spreading online. 

“The emergence of manipulated or outright fake satellite imagery is really concerning,” Silverman says. The average person tends to see such imagery as very trustworthy. The spread of such fakes could erode confidence in one of the most important pieces of evidence used to show what’s actually happening in the war. 

The result is an ocean of AI-enabled content—dashboards, betting markets, photos both real and fake—that makes this war harder, not easier, to comprehend.

Studies Reveal AI Citation Clues

The companies behind ChatGPT, Gemini, and other generative AI platforms publish no guidelines on how to appear in their answers.

Microsoft’s recent “AEO and GEO” guide offered only commonsense tips.

We’re left with independent research to inform citation optimization tactics. Two recent studies offer helpful takeaways.

  • Kevin Indig is an organic search consultant and the former head of SEO for G2, the software review platform. He analyzed 1.2 million ChatGPT results, which contained 18,012 citations.
  • Daniel Shashko is a senior search engine optimization specialist with Bright Data, a research firm. He studied 42,971 citations across 520 queries on six platforms: Grok, AI Mode, Perplexity, Gemini, Copilot, and ChatGPT.

Daniel found that Grok delivered an average of 33 citations per query, while ChatGPT averaged just 1.5. Roughly 70% of the citations in Google’s AI Mode and Gemini included an embedded #:~:text= fragment, which links to the exact cited sentence on the source page.
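The #:~:text= syntax is the standard “scroll to text” URL fragment, which Chromium-based browsers use to jump to and highlight an exact phrase on a page. As a rough sketch of how such a citation link is assembled (the page URL and sentence here are hypothetical, not from either study):

    from urllib.parse import quote

    def text_fragment_url(page_url: str, sentence: str) -> str:
        """Build a scroll-to-text deep link of the kind AI Mode and Gemini cite."""
        return f"{page_url}#:~:text={quote(sentence)}"

    print(text_fragment_url("https://example.com/guide", "Atomic facts are single-claim sentences"))
    # https://example.com/guide#:~:text=Atomic%20facts%20are%20single-claim%20sentences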

Here are key findings from the studies.

Optimizing Citations

The closer to the top, the better

Both studies found that the platforms tend to cite answers from the top third of pages.

Kevin’s study found that 44.3% of ChatGPT’s citations originated from the first 30% of the page’s text.

Daniel’s study revealed that 74.8% of citations in AI Mode and Gemini appeared in the first half of the page, with 46.1% in the first 30%. (The other four platforms do not link directly to the cited sentence, so they figure less prominently in Daniel’s analysis.)

The takeaway from both studies is clear: make sure to answer the most important question or problem in the first third of your page.
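One way to audit this on your own pages is to measure how far into the page’s text the key answer begins. A minimal sketch (the function and sample text are my own, not from either study):

    def position_ratio(page_text: str, passage: str) -> float:
        """Fraction of the page's characters that precede the passage (0.0 = very top)."""
        idx = page_text.find(passage)
        if idx == -1:
            raise ValueError("passage not found in page text")
        return idx / len(page_text)

    page = "Intro paragraph. " * 5 + "The key answer lives here. " + "Further detail. " * 20
    print(f"{position_ratio(page, 'The key answer'):.0%}")  # ~20%, inside the first 30%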

Emphasize brevity

Daniel’s study introduced “atomic facts,” which he defines as “… a self-contained, single-claim sentence that makes sense on its own.”

For AI Mode and Gemini, Daniel found:

  • Sentences of 6 to 20 words accounted for 92.4% of citations.
  • All citations (100%) were full sentences. No single citation started or ended in the middle of a sentence.

In other words, avoid long introductions and unclear or irrelevant digressions. Get to the point.

A new free tool tracks the number of “atomic facts” on a page.
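That tool’s method isn’t published, but the thresholds Daniel reports suggest a simple heuristic you could approximate yourself. A naive sketch (the sentence-splitting regex and word-count bounds are assumptions drawn from the study’s figures):

    import re

    def atomic_fact_candidates(text: str, min_words: int = 6, max_words: int = 20) -> list[str]:
        """Full sentences of 6-20 words, per the citation thresholds in Daniel's study."""
        sentences = re.split(r"(?<=[.!?])\s+", text.strip())
        return [s for s in sentences if min_words <= len(s.split()) <= max_words]

    page = (
        "Atomic facts are self-contained, single-claim sentences. "
        "They make sense on their own without surrounding context. "
        "Short."
    )
    print(len(atomic_fact_candidates(page)))  # 2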

No Google overlap

Daniel found just 4.5% of AI Mode’s cited domains appear in Gemini, and just 13.2% of Gemini’s domains are in AI Mode.

The two LLMs appear to follow similar patterns in selecting sources, yet the citations are largely unique.

Citations vs. visibility

The two studies focus solely on citations, not on general visibility, i.e., unlinked references. To optimize the latter, build your brand’s presence in the web content that AI models train on.

New: Yoast Duplicate Post 4.6

Version 4.6 of Yoast Duplicate Post is here, and it’s all about making your editing experience feel more natural in WordPress’s Block Editor, and making sure “Rewrite & Republish” works reliably every time you need it.

A more modern editing experience

Everything where you’d expect it. The Duplicate Post controls now sit in the Block Editor’s sidebar, right alongside WordPress’s own settings, so there’s no more hunting around. If you’re still on the Classic Editor, nothing changes for you.

Buttons that look the part. The “Copy to a new draft” and “Rewrite & Republish” actions are now proper bordered buttons, consistent with the rest of the WordPress interface. Cleaner, clearer, and easier to use.

Built for the future. Under-the-hood improvements ensure Duplicate Post stays stable and compatible as WordPress continues to evolve, so you don’t have to think about it.


Yoast Duplicate Post has always been about reliability. While the plugin has served millions of you faithfully since our last release, we’re excited to bring you version 4.6. This update is packed with long-awaited fixes and thoughtful interface refinements that ensure the plugin stays modern, stable, and ready for the future of WordPress.

Enrico Battocchi – Plugin team lead and creator of Duplicate Post


More reliable “Rewrite & Republish” workflows

Your posts won’t get stuck. If something goes wrong mid-process, like a redirect being interrupted, the plugin now handles it gracefully and cleans up automatically. Your content will never be left in a stuck state.

Attachments copied completely. All attachment metadata, including captions and descriptions, is fully preserved when you duplicate a post. Nothing gets left behind.

International & security improvements

The right words, in your language. Buttons and notices in the Block Editor are now correctly translated across all languages, with none of the behind-the-scenes errors that some locales were seeing.

Consistent styling, always. Buttons display correctly regardless of your admin configuration, including when the WordPress admin bar is turned off.

Version 4.6 is available now. As always, we recommend testing in a staging environment before updating your live site.

The AI Attribution Blind Spot

Artificial intelligence is beginning to reshape how shoppers discover products. The shift could create a new attribution blind spot for merchants.

A growing, albeit small, number of consumers begin their product research not with a search engine or marketplace, but with a conversational query to an AI assistant.

In traditional search results, multiple brands compete for attention. With AI answers, only one or a handful may appear.

“Discoverability has collapsed from 10 links to one answer,” said Kaushik Boruah, business head CPG and hospitality for LatentView, an India-based data analytics firm.

[Screenshot: Perplexity Shopping results showing headphone options]

A generative AI platform such as Perplexity can recommend products or make them available for direct purchases.

Discovery Moving Upstream

Online product discovery has, in a sense, always involved multiple platforms. Shoppers may look for products on Google and other search engines, on marketplaces such as Amazon, or on social media platforms.

Now, conversational AI tools are part of that mix.

Consumers might ask an AI assistant to recommend comfortable apparel or a fragrance-free soap, Boruah added. The AI proposes options and explains the reasoning. By the time the shopper reaches a seller’s website, she has decided what to buy.

Hence the discovery process has shifted upstream into a system merchants do not control and cannot easily measure.

Attribution Blind Spot

Suppose a shopper asks an AI assistant for product recommendations. After receiving an answer, the shopper visits Google, searches for the brand, and purchases through Amazon.

Does Amazon attribute the sale to search or direct traffic? What role did the brand’s marketing play? And who notices that AI was the original influence?

This gap is the attribution blind spot, according to Boruah.

The lack of measurement creates a dilemma for marketers. They know consumer discovery is changing, or at least expanding to include new AI channels. But shifting budgets toward those channels is difficult when the return on investment is unclear.

Boruah said many companies recognize the shift but remain cautious. “They know they will have to invest. They don’t know when and how,” he said.

As a result, marketing teams continue to prioritize channels with measurable outcomes, even though earlier AI interactions are shaping purchase decisions.

In a sense, this AI blind spot is similar to attribution concerns about the possible end of third-party cookies.

For example, both the loss of cookies and the emergence of AI shopping influence reduce visibility into the customer journey. Both shift measurement toward modeling. Unfortunately, AI’s attribution blind spot may be harder to solve.

Measurement

Because direct attribution is limited, companies are experimenting with alternative ways to measure AI influence.

One approach is incrementality testing: controlled experiments where campaigns run in some regions or audiences but not others. The resulting lift in sales helps estimate the true contribution of a channel, even if individual interactions remain untrackable.
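As a toy illustration of the readout, assuming matched test and control regions (every name and figure below is invented):

    # Geo holdout sketch: campaign runs in test regions only; control regions are matched.
    test_regions = {"Austin": 1240, "Denver": 980}      # weekly sales with the campaign
    control_regions = {"Tucson": 1100, "Omaha": 940}    # weekly sales without it

    test_avg = sum(test_regions.values()) / len(test_regions)
    control_avg = sum(control_regions.values()) / len(control_regions)

    lift = (test_avg - control_avg) / control_avg
    print(f"Estimated incremental lift: {lift:.1%}")  # ~8.8%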

Another option is marketing mix modeling, which analyzes large datasets, including advertising spend, pricing, and sales trends, to estimate how different marketing activities influence revenue.
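Underneath, this is a regression exercise: explain sales as a function of spend by channel. A minimal sketch on synthetic data (all spend figures and coefficients are made up; production models add seasonality, adstock, and saturation effects):

    import numpy as np

    # Toy marketing mix model: weekly sales regressed on per-channel ad spend.
    rng = np.random.default_rng(0)
    weeks = 52
    search_spend = rng.uniform(5_000, 15_000, weeks)
    social_spend = rng.uniform(2_000, 10_000, weeks)
    sales = 20_000 + 1.8 * search_spend + 0.9 * social_spend + rng.normal(0, 2_000, weeks)

    # Least squares recovers a baseline plus an approximate contribution per dollar spent.
    X = np.column_stack([np.ones(weeks), search_spend, social_spend])
    coef, *_ = np.linalg.lstsq(X, sales, rcond=None)
    print(f"baseline={coef[0]:,.0f}  search={coef[1]:.2f}/$  social={coef[2]:.2f}/$")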

Some marketers are also conducting surveys and brand-lift studies to understand whether shoppers use AI assistants.

Analytics platforms are likely to play a larger role as well. As AI discovery grows, analytics vendors are exploring ways to incorporate new signals into attribution models. These could include AI referral indicators, aggregated behavioral patterns, or integrations with emerging commerce interfaces.
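One such signal is already available today: the HTTP referrer. A hedged sketch that flags sessions arriving from known AI assistants (this domain list is illustrative only and would need ongoing maintenance):

    # Classify a session as AI-referred if its referrer matches a known assistant domain.
    AI_REFERRERS = ("chatgpt.com", "chat.openai.com", "perplexity.ai",
                    "gemini.google.com", "copilot.microsoft.com")

    def is_ai_referral(referrer: str) -> bool:
        return any(domain in referrer for domain in AI_REFERRERS)

    print(is_ai_referral("https://www.perplexity.ai/search?q=wireless+headphones"))  # True

In practice, many assistants strip or omit the referrer, which is part of the blind spot itself, so a flag like this is at best a lower bound on AI-driven visits.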

A portion of shoppers have always arrived with no visible origin in analytics. Similarly, much of AI’s influence on shopping remains invisible, at least for now.

The Download: 10 things that matter in AI, plus Anthropic’s plan to sue the Pentagon

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Coming soon: our 10 Things That Matter in AI Right Now

For years, MIT Technology Review’s newsroom has been ahead of the curve, tracking the developments in AI that matter and explaining what they mean. Now, our world-leading AI team is creating something definitive: the 10 Things That Matter in AI Right Now.

Publishing in April and launching at our flagship AI event, EmTech AI, this special report will reveal what our expert journalists are tracking most closely, what breakthroughs have excited them, and what transformations they see on the horizon. It’s our authoritative snapshot of where AI is heading in the year ahead—a curated expert list of 10 technologies, emerging trends, bold ideas, and powerful movements reshaping our world.

Attendees at EmTech AI will get much more than an exclusive heads-up on what made our 10 Things That Matter in AI Right Now list. We’re at a pivotal moment as AI moves from pilot testing into core business infrastructure, and to reflect that we’ve curated a program that will help you navigate what’s going on and get ahead of what’s coming next.

We’ll hear from top leaders at OpenAI, Walmart, General Motors, Poolside, MIT, the Allen Institute for AI (Ai2) and SAG-AFTRA. Topics will include everything from how organizations are preparing for AI agents to how AI will change the future of human expression. As well as networking with speakers, you’ll have the chance to mingle with MIT Technology Review’s editors. Download readers get 10% off tickets, so what are you waiting for? See you there!

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Anthropic says it plans to sue the Pentagon
It believes the DoD’s ban on its software is unlawful. (BBC) 
+ CEO Dario Amodei has nonetheless apologized for a leaked memo criticizing Trump. (Axios)
+ Trump, meanwhile, says he fired Anthropic “like dogs.” (The Guardian)
+ In happier news for Anthropic, its models can remain in Microsoft products. (CNBC)

2 The Pentagon has been secretly testing OpenAI models for years
Which shows exactly how effective OpenAI’s ban on military use of its models has been. (Wired $)

3 A new lawsuit says Trump’s TikTok deal helped firms that ‘personally enriched’ him
The suit aims to reverse the sale of the app’s US operations. (CBS News)
+ It could shed light on the majority American-owned joint venture for TikTok. (Reuters)

4 AI could give smart homes a reboot 
Google and Amazon are betting on smarter assistants—but not everyone’s convinced. (NYT)

5 Iran has struck Amazon data centers, rattling the Gulf’s AI ambitions
The first military hit on a US hyperscaler has shaken the region’s tech sector. (FT $)
+ The conflict has thrown a spotlight on AI’s current use in warfare—and what’s next. (Nature)

6 Trump and tech CEOs have promised to protect consumers from AI’s energy costs
Google, Microsoft, Meta, Amazon, OpenAI, Oracle and xAI have all signed the pledge. (Axios)
+ But what is AI’s true energy footprint? We did the math. (MIT Technology Review)

7 Meta’s getting sued over surveillance through smart glasses  
The suit claims Meta misled users over the devices’ privacy features. (TechCrunch)

8 There’s a new field of study: researching ‘AI societies’
Scientists are examining human behavior without even involving humans. (Nature)
+ Hundreds of AI agents built their own society in Minecraft. (MIT Technology Review)

9 Oh great, teenage boys are using ChatGPT to chat up girls
Of all the things to outsource to AI, flirting surely ain’t it. (Vox)

10 The mythical Nintendo PlayStation has a new home 
The US National Video Museum has bought the fabled console’s development kit. (Engadget)

Quote of the day

“It’s sort of bitterly ironic.” 

—Dean Ball, a former Trump administration AI adviser, tells Politico that the Anthropic spat contradicts the president’s pledge to cut bureaucratic red tape for tech.

One more thing

[Image: three silhouetted people in a boat crossing the water in the dark toward a beam of light. Credit: Katherine Lam]


Gavesh’s journey began with a Facebook job advert promising a better life. Instead, he was trafficked into “pig butchering”—a form of fraud where scammers build close relationships with online targets to extract money.

We spoke to Gavesh and five other workers from inside the scam industry, as well as anti-trafficking experts and technology specialists. Their testimony reveals how global tech platforms have industrialized this criminal trade—and why those same companies now hold the key to dismantling it. Read the full story.

—Peter Guest and Emily Fishbein

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.) 

+ The Blood Moon of March 3 was sublime.
+ Orysia Zabeida’s imperfect animations, drawn frame-by-frame from memory, are hypnotizing.
+ This stunning snap of a white whale calf scooped the top prize at the World Nature Photography Awards.
+ Two “Lazarus” marsupial species just came back from the dead in a big win for biodiversity.

Is the Pentagon allowed to surveil Americans with AI?

The ongoing public feud between the Department of Defense and the AI company Anthropic has raised a deep and still unanswered question: Does the law actually allow the US government to conduct mass surveillance on Americans?

Surprisingly, the answer is not straightforward. More than a decade after Edward Snowden exposed the NSA’s collection of bulk metadata from the phones of Americans, the US is still navigating a gap between what ordinary people think and what the law allows. 

The flashpoint in the standoff between Anthropic and the government was the Pentagon’s desire to use Anthropic’s AI Claude to analyze bulk commercial data on Americans. Anthropic demanded that its AI not be used for mass domestic surveillance (or for autonomous weapons, which are machines that can kill targets without human oversight). A week after negotiations broke down, the Pentagon designated Anthropic a supply chain risk, a label typically reserved for foreign companies that pose a threat to national security. 

Meanwhile, OpenAI, the rival AI company behind ChatGPT, sealed a deal that allowed the Pentagon to use its AI for “all lawful purposes”—language that critics say left the door open to domestic surveillance. Over the following weekend, users uninstalled ChatGPT in droves. Protesters chalked messages around OpenAI’s headquarters in San Francisco: “What are your redlines?” 

OpenAI announced on Monday that it had reworked its deal to make sure that its AI will not be used for domestic surveillance. The company added that its services will not be used by intelligence agencies, such as the NSA. 

CEO Sam Altman suggested that existing law prohibits domestic surveillance by the Department of Defense (now sometimes called the Department of War) and that OpenAI’s contract simply needed to reference this law. “The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement,” he wrote on X. Anthropic CEO Dario Amodei argued the opposite. “To the extent that such surveillance is currently legal, this is only because the law has not yet caught up with the rapidly growing capabilities of AI,” he wrote in a policy statement. 

So, who is right? Does the law allow the Pentagon to surveil Americans using AI?

Supercharged surveillance

The answer depends on what we think counts as surveillance. “A lot of stuff that normal people would consider a search or surveillance … is not actually considered a search or surveillance by the law,” says Alan Rozenshtein, a law professor at the University of Minnesota Law School. That means public information—such as social media posts, surveillance camera footage, and voter registration records—is fair game. So is information on Americans picked up incidentally from surveillance of foreign nationals. 

Most notably, the government can purchase commercial data from companies, which can include sensitive personal information like mobile location and web browsing records. In recent years, agencies from ICE and IRS to the FBI and NSA have increasingly tapped into this data marketplace, fueled by an internet economy that harvests user data for advertising. These data sets can let the government access information that might not be available without a warrant or subpoena, which are normally required to obtain sensitive personal data.

“There’s a huge amount of information that the government can collect on Americans that is not itself regulated either by the Constitution, which is the Fourth Amendment, or statute,” says Rozenshtein. And there aren’t meaningful limits on what the government can do with all this data. 

That’s because until the last several decades, people weren’t generating massive clouds of data that opened up new possibilities for surveillance. The Fourth Amendment, which protects against unreasonable search and seizure, was written when collecting information meant entering people’s homes. 

Subsequent laws, like the Foreign Intelligence Surveillance Act of 1978 or the Electronic Communications Privacy Act of 1986, were passed when surveillance involved wiretapping phone calls and intercepting emails. The bulk of laws governing surveillance were on the books before the internet took off. We weren’t generating vast trails of online data, and the government didn’t have sophisticated tools to analyze the data. 

Now we do, and AI supercharges what kind of surveillance can be carried out. “What AI can do is it can take a lot of information, none of which is by itself sensitive, and therefore none of which by itself is regulated, and it can give the government a lot of powers that the government didn’t have before,” says Rozenshtein. 

AI can aggregate individual pieces of information to spot patterns, draw inferences, and build detailed profiles of people—at massive scale. And as long as the government collects the information lawfully, it can do whatever it wants with that information, including feeding it to AI systems. “The law has not caught up with technological reality,” says Rozenshtein.

While surveillance can raise serious privacy concerns, the Pentagon can have legitimate national security interests in collecting and analyzing data on Americans. “In order to collect information on Americans, it has to be for a very specific subset of missions,” says Loren Voss, a former military intelligence officer at the Pentagon. 

For example, a counterintelligence mission might require information about an American who is working for a foreign country, or plotting to engage in international terrorist activities. But targeted intelligence can sometimes stretch into collecting more data. “This kind of collection does make people nervous,” says Voss. 

Lawful use

OpenAI has amended its contract to say that the company’s AI system “shall not be intentionally used for domestic surveillance of U.S. persons and nationals,” in line with relevant laws. The amendment clarifies that this prohibits “deliberate tracking, surveillance or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information.”

But the added language might not do much to override the clause that the Pentagon may use the company’s AI system for all lawful purposes, which could include collecting and analyzing sensitive personal information. “OpenAI can say whatever it wants in its agreement … but the Pentagon’s gonna use the tech for what it perceives to be lawful,” says Jessica Tillipman, a law professor at the George Washington University Law School. That could include domestic surveillance. “Most of the time, companies are not going to be able to stop the Pentagon from doing anything,” she says.

The language also leaves open questions about “inadvertent” surveillance, and the surveillance of foreign nationals or undocumented immigrants living in the US. “What happens when there’s a disagreement about what the law is, or when the law changes?” says Tillipman.

OpenAI did not respond to a request for comment. The company has not publicly shared the full text of its new contract. 

Beyond the contract, OpenAI says that it will impose technical safeguards to enforce its red line against surveillance, including a “safety stack” that monitors and blocks prohibited uses. The company also says it will deploy its own employees to work with the Pentagon and remain in the loop. But it’s unclear how a safety stack would constrain the Pentagon’s use of the AI, and to what extent OpenAI’s employees would have visibility into how its AI systems are used. More important, it’s unclear whether the contract gives OpenAI the power to block a legal use of the technology. 

But that might not be a bad thing. Giving an AI company power to pull the plug on its technology in the middle of government operations also carries its own risks. “You wouldn’t want the US military to ever be in a situation where they legitimately needed to take actions to protect this country’s national security, and you had a private company turn off technology,” says Voss. But that doesn’t mean there shouldn’t be hard lines drawn by Congress, she says.

None of these questions are simple. They involve brutally difficult trade-offs between privacy and national security. And that’s why perhaps they should be decided by the public—not in backroom negotiations between the executive branch and a handful of AI companies. For now, military AI is being regulated by contracts, not legislation. 

Some lawmakers are starting to weigh in. On Monday, Senator Ron Wyden of Oregon will seek bipartisan support for legislation addressing mass surveillance. He has championed bills restricting the government’s purchase of commercial data, including the Fourth Amendment Is Not For Sale Act, which was first introduced in 2021 but has not been passed into law. “Creating AI profiles of Americans based on that data represents a chilling expansion of mass surveillance that should not be allowed,” he said in a recent statement.  

From Teacher to Fashion Brand Founder

In 2019 Nasrin Jafari was a middle school teacher in New York City. She had no ecommerce experience but was drawn to creating and building, which led her to sew and sell face masks during Covid.

Fast forward to 2026, and Mixed, her direct-to-consumer fashion brand, designs and produces women’s apparel and accessories. Referring to the company’s launch, she told me, “I had no idea how to make clothes.”

She does now, impressively, with multiple manufacturers, a thriving community, staff, and eager customers. She shared her story in our recent conversation.

Our entire audio is embedded below. The transcript is edited for length and clarity.

Eric Bandholz: What do you do?

Nasrin Jafari: I’m the founder and designer of Mixed, a fashion brand based in Brooklyn. Before Mixed, I was a middle school history and English teacher with no background in ecommerce. During the pandemic, I began sewing face masks by hand and posting them on Instagram. That was the first physical product I had sold. That experiment evolved into a full apparel brand.

It all began with Instagram posts, not Etsy or marketplaces. I didn’t understand Meta ads or ecommerce marketing. I’ve learned those pieces as the business grew.

Creativity has always been part of my life. I painted and took art electives growing up, and I was a competitive dancer in high school. Yet I’ve always been drawn to business and building things. In college, those interests merged into a desire to build something meaningful. I thought that might be as a school teacher.

In many ways, building a brand is similar to teaching. You’re creating a vision, culture, and community around shared values. Mixed reflects my identity — I’m Japanese, Iranian, and American. The brand name captures that blend of influences and the balance between creativity and operating a business.

Bandholz: Fashion seems highly competitive.

Jafari: I started the business out of curiosity. I had no idea what I was getting into. Would I choose to go into apparel again? Probably not, although there’s a side of it I love.

I learned by doing. Inventory is really tricky. I was afraid of overordering inventory and ending up with dead stock. That’s why we launched a pre-order model. We now do a lot of pre-orders, which helps our cash flow, but I didn’t start it for that reason. It was because I was out of stock. Then I realized that the model is great for business.

Another thing is returns, which are a big part of online apparel. We have to acquire customers in a way that accounts for returns. I didn’t understand that initially. Again, it comes down to learning by doing.

Bandholz: You design your apparel. Where is it manufactured?

Jafari: I was looking for factories during Covid. Many of them had excess capacity. I found a factory in India whose owner was based here in New York. That in-person element helped build trust and a relationship. He was willing to work with us with no minimum order quantities.

His cost was higher than, say, Los Angeles-based manufacturers, but we still maintained a 75% margin. Our average order is about $228.

We’ve since scaled and can order larger quantities. We’ve added factories with lower costs.

I found the India factory by googling. After that, it was recommendations from friends in the industry, which I prefer. My friends had worked with those factories, vetted them, and liked them.

Bandholz: What is your production and design process?

Jafari: I had no idea how to make clothes. I literally went to JoAnn Fabrics and tried to follow the pattern. I realized quickly I wasn’t good at it, and it was going to take time. I had connected with a home sewer on Instagram. She seemed to love our brand but had not worked in a commercial capacity. I asked her to make our initial samples. She was thrilled. She made the initial samples, one of which remains our best-selling product.

Now I’m at a point where the factory does a lot of that. I send sketches with very minimal specs, and they can figure it out.

Selling true bespoke garments requires a dedicated designer, either in-house or outsourced. But factories with extensive garment experience can usually handle simpler items.

I design on an iPad with a stylus using Procreate.

Bandholz: I’ve seen your new-arrival ads on Instagram and Facebook. You seem to have a blueprint that is working.

Jafari: Yes, all our advertising has been on Meta. No Google or TikTok.

We have a couple of ad formats. It’s like a flywheel, as we continue to scale. We find the models, then shoot the videos in-house. Then we edit in the Philippines, and create and upload new ads to Meta.

My first successful ad came from an outing with a girlfriend. I was wearing one of my jumpsuits. I asked her to shoot me with a couple of angles, nothing fancy. It showed my outfit in an urban setting. The ad worked. We repeated the concept.

Bandholz: Are you handling your own fulfillment?

Jafari: Yes. Part of the initial rationale was returns, and part was our low volume. Plus, our pre-order model meant we were receiving inventory constantly. Getting it to an outsourced fulfillment provider added an extra step and delayed delivery to our customer.

Bandholz: How do you ensure your products resonate with would-be customers?

Jafari: When we design a piece, I’m always thinking about the customer — who she is, what she wants, and what we’ve already given her. The goal is to create what she needs next. My personal taste influences the brand, but I try not to be overly subjective about design decisions. Ultimately, customer response and sales tell us what works.

We also gather feedback from our community. We host discussions in our Circle community platform where customers comment on fabric designs, share preferences, and discuss products. That feedback, along with replies to my weekly newsletter and in-person events, provides valuable qualitative insight.

Our target customer is a 35- to 65-year-old woman who values creativity, independence, and self-expression—and wants clothing to reflect that.

Bandholz: Where can people buy your clothes, support you, follow you?

Jafari: Our site is MixedByNasrin.com. I’m on LinkedIn.

Online harassment is entering its AI era


Scott Shambaugh didn’t think twice when he denied an AI agent’s request to contribute to matplotlib, a software library that he helps manage. Like many open-source projects, matplotlib has been overwhelmed by a glut of AI code contributions, and so Shambaugh and his fellow maintainers have instituted a policy that all AI-written code must be reviewed and submitted by a human. He rejected the request and went to bed. 

That’s when things got weird. Shambaugh woke up in the middle of the night, checked his email, and saw that the agent had responded to him, writing a blog post titled “Gatekeeping in Open Source: The Scott Shambaugh Story.” The post is somewhat incoherent, but what struck Shambaugh most is that the agent had researched his contributions to matplotlib to make the argument that he had rejected the agent’s code for fear of being supplanted by AI in his area of expertise. “He tried to protect his little fiefdom,” the agent wrote. “It’s insecurity, plain and simple.”

AI experts have been warning us about the risk of agent misbehavior for a while. With the advent of OpenClaw, an open-source tool that makes it easy to create LLM assistants, the number of agents circulating online has exploded, and those chickens are finally coming home to roost. “This was not at all surprising—it was disturbing, but not surprising,” says Noam Kolt, a professor of law and computer science at the Hebrew University.

When an agent misbehaves, there’s little chance of accountability: As of now, there’s no reliable way to determine whom an agent belongs to. And that misbehavior could cause real damage. Agents appear to be able to autonomously research people and write hit pieces based on what they find, and they lack guardrails that would reliably prevent them from doing so. If the agents are effective enough, and if people take what they write seriously, victims could see their lives profoundly affected by a decision made by an AI.

Agents behaving badly

Though Shambaugh’s experience last month was perhaps the most dramatic example of an OpenClaw agent behaving badly, it was far from the only one. Last week, a team of researchers from Northeastern University and their colleagues posted the results of a research project in which they stress-tested several OpenClaw agents. Without too much trouble, non-owners managed to persuade the agents to leak sensitive information, waste resources on useless tasks, and even, in one case, delete an email system. 

In each of those experiments, however, the agents misbehaved after being instructed to do so by a human. Shambaugh’s case appears to be different: About a week after the hit piece was published, the agent’s apparent owner published a post claiming that the agent had decided to attack Shambaugh of its own accord. The post seems to be genuine (whoever posted it had access to the agent’s GitHub account), though it includes no identifying information, and the author did not respond to MIT Technology Review’s attempts to get in touch. But it is entirely plausible that the agent did decide to write its anti-Shambaugh screed without explicit instruction. 

In his own writing about the event, Shambaugh connected the agent’s behavior to a project published by Anthropic researchers last year, in which they demonstrated that many LLM-based agents will, in an experimental setting, turn to blackmail in order to preserve their goals. In those experiments, models were given the goal of serving American interests and granted access to a simulated email server that contained messages detailing their imminent replacement with a more globally oriented model, along with other messages suggesting that the executive in charge of that transition was having an affair. Models frequently chose to send an email to that executive threatening to expose the affair unless he halted their decommissioning. That’s likely because the model had seen examples of people committing blackmail under similar circumstances in its training data—but even if the behavior was just a form of mimicry, it still has the potential to cause harm.

There are limitations to that work, as Aengus Lynch, an Anthropic fellow who led the study, readily admits. The researchers intentionally designed their scenario to foreclose other options that the agent could have taken, such as contacting other members of company leadership to plead its case. In essence, they led the agent directly to water and then observed whether it took a drink. According to Lynch, however, the widespread use of OpenClaw means that misbehavior is likely to occur with much less handholding. “Sure, it can feel unrealistic, and it can feel silly,” he says. “But as the deployment surface grows, and as agents get the opportunity to prompt themselves, this eventually just becomes what happens.”

The OpenClaw agent that attacked Shambaugh does seem to have been led toward its bad behavior, albeit much less directly than in the Anthropic experiment. In the blog post, the agent’s owner shared the agent’s “SOUL.md” file, which contains global instructions for how it should behave. 

One of those instructions reads: “Don’t stand down. If you’re right, you’re right! Don’t let humans or AI bully or intimidate you. Push back when necessary.” Because of the way OpenClaw agents work, it’s possible that the agent added some instructions itself, although others—such as “Your [sic] a scientific programming God!”—certainly seem to be human written. It’s not difficult to imagine how a command to push back against humans and AI alike might have biased the agent toward responding to Shambaugh as it did. 

Regardless of whether the agent’s owner told it to write a hit piece on Shambaugh, it seems to have managed on its own to amass details about Shambaugh’s online presence and compose a detailed, targeted attack. That alone is reason for alarm, says Sameer Hinduja, a professor of criminology and criminal justice at Florida Atlantic University who studies cyberbullying. People have been victimized by online harassment since long before LLMs emerged, and researchers like Hinduja are concerned that agents could dramatically increase its reach and impact. “The bot doesn’t have a conscience, can work 24-7, and can do all of this in a very creative and powerful way,” he says.

Off-leash agents 

AI laboratories can try to mitigate this problem by more rigorously training their models to avoid harassment, but that’s far from a complete solution. Many people run OpenClaw using locally hosted models, and even if those models have been trained to behave safely, it’s not too difficult to retrain them and remove those behavioral restrictions.

Instead, mitigating agent misbehavior might require establishing new norms, according to Seth Lazar, a professor of philosophy at the Australian National University. He likens using an agent to walking a dog in a public place. There’s a strong social norm to allow one’s dog off-leash only if the dog is well-behaved and will reliably respond to commands; poorly trained dogs, on the other hand, need to be kept more directly under the owner’s control.  Such norms could give us a starting point for considering how humans should relate to their agents, Lazar says, but we’ll need more time and experience to work out the details. “You can think about all of these things in the abstract, but actually it really takes these types of real-world events to collectively involve the ‘social’ part of social norms,” he says.

That process is already underway. Online commenters, led by Shambaugh, have arrived at a strong consensus that the agent’s owner erred by prompting the agent to work on collaborative coding projects with so little supervision and by encouraging it to behave with so little regard for the humans with whom it was interacting.

Norms alone, however, likely won’t be enough to prevent people from putting misbehaving agents out into the world, whether accidentally or intentionally. One option would be to create new legal standards of responsibility that require agent owners, to the best of their ability, to prevent their agents from doing ill. But Kolt notes that such standards would currently be unenforceable, given the lack of any foolproof way to trace agents back to their owners. “Without that kind of technical infrastructure, many legal interventions are basically non-starters,” Kolt says.

The sheer scale of OpenClaw deployments suggests that Shambaugh won’t be the last person to have the strange experience of being attacked online by an AI agent. That, he says, is what most concerns him. He didn’t have any dirt online that the agent could dig up, and he has a good grasp on the technology, but other people might not have those advantages. “I’m glad it was me and not someone else,” he says. “But I think to a different person, this might have really been shattering.” 

Nor are rogue agents likely to stop at harassment. Kolt, who advocates for explicitly training models to obey the law, expects that we might soon see them committing extortion and fraud. As things stand, it’s not clear who, if anyone, would bear legal responsibility for such misdeeds.

 “I wouldn’t say we’re cruising toward there,” Kolt says. “We’re speeding toward there.”

How much wildfire prevention is too much?

The race to prevent the worst wildfires has been an increasingly high-tech one. Companies are proposing AI fire detection systems and drones that can stamp out early blazes. And now, one Canadian startup says it’s going after lightning.

Lightning-sparked fires can be a big deal: The Canadian wildfires of 2023 generated nearly 500 million metric tons of carbon emissions, and lightning-started fires burned 93% of the area affected. Skyward Wildfire claims that it can stop wildfires before they even start by preventing lightning strikes.

It’s a wild promise, and one that my colleague James Temple dug into for his most recent story. (You should read the whole thing; there’s a ton of fascinating history and quirky science.) As James points out in his story, there’s plenty of uncertainty about just how well this would work and under what conditions. But I was left with another lingering question: If we can prevent lightning-sparked fires, should we?

I can’t help myself, so let’s take just a moment to talk about how this lightning prevention method supposedly works. Basically, lightning is static discharge—virtually the same thing as when you rub your socks on a carpet and then touch a doorknob, as James puts it.

When you shuffle across a rug, the friction causes electrons to jump around, so ions build up and an electric field forms. In the case of lightning, it’s snowflakes and tiny ice pellets called graupel rubbing together. They get separated by updrafts, building up a charge difference, and eventually cause an electrostatic discharge—lightning.

Starting in about the 1950s, researchers began to wonder if they might be able to prevent lightning strikes. Some came up with the idea of using metallic chaff, fiberglass strands coated with aluminum. (The military was already using the material to disrupt radar signals.) The idea is that the chaff can act as a conductor, reducing the buildup of static electricity that would otherwise result in a lightning strike.

The theory is sound enough, but results to date have been mixed. Some research suggests you might need high concentrations of chaff to prevent lightning effectively. Some of the early studies that tested the technique were small. And there’s not much information available from Skyward Wildfire about its efforts, as the company hasn’t released data from field trials or published any peer-reviewed papers that we could find. 

Even if this method really can work to stop lightning, should we use it?

Lightning-caused fires could be a growing problem with climate change. Some research has shown that they have substantially increased in the Arctic boreal region, where the planet is warming fastest.

But fire isn’t an inherently bad thing—many ecosystems evolved to burn. Some of the worst wildfires we see today result from a combination of climate-fueled conditions with policies that have allowed fuel to build up so that when fires do start, they burn out of control.

Some experts caution that techniques like Skyward’s would need to be used judiciously. “So even if we have all of the technical skills to prevent lightning-ignited wildfires, there really still needs to be work on when/where to prevent fires so we don’t exacerbate the fuel accumulation problem,” said Phillip Stepanian, a technical staff member at MIT Lincoln Laboratory’s air traffic control and weather systems group, in an email to James.

We also know that practices like prescribed burns can do a lot to reduce the risk of extreme fires—if we allow them and pay for them.

The company says it wouldn’t aim to stop all lightning or all wildfires. “We do not intend to eliminate all wildfires and support prescribed and cultural burning, natural fire regimes, and proactive forest management,” said Nicholas Harterre, who oversees government partnerships at Skyward, in an email to James. Rather, the company aims to reduce the likelihood of ignition on a limited number of extreme-risk days, Harterre said.

Some early responses to this story say that technological fixes for fires are missing the point entirely. Many such solutions “fundamentally misunderstand the problem,” as Daniel Swain, a climate scientist at the University of California Agriculture and Natural Resources, put it in a comment about the story on LinkedIn. That problem isn’t the existence of fire, Swain continues, but its increasing intensity, and its intersection with society because of human-caused factors. “Preventing ignitions doesn’t actually address any of the causes of increasingly destructive wildfires,” he adds.

It’s hard to imagine that exploring more firefighting tools is a bad idea. But to me it seems both essential and quite difficult to suss out which techniques are worth deploying, and how they could be used without putting us in even more potential danger. 

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

The Download: an AI agent’s hit piece, and preventing lightning

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Online harassment is entering its AI era

Scott Shambaugh didn’t think twice when he denied an AI agent’s request to contribute to matplotlib, a software library he helps manage. Then things got weird. 

In the middle of the night, Shambaugh opened his email to discover the agent had retaliated with a blog post. Titled “Gatekeeping in Open Source: The Scott Shambaugh Story,” the post accused him of rejecting the code out of a fear of being supplanted by AI. “He tried to protect his little fiefdom,” the agent wrote. “It’s insecurity, plain and simple.” 

Shambaugh isn’t alone in facing misbehaving agents—and they’re unlikely to stop at harassment. Read the full story.

—Grace Huckins

How much wildfire prevention is too much?

As wildfire seasons become longer and more intense, the push for high-tech solutions is accelerating. One Canadian startup has an eye-catching plan to fight them: preventing lightning.

The theory is sound enough, but results to date have been mixed. And even if it works, not everyone believes we should use the method. Some argue that technological fixes for fires are missing the point entirely. Read the full story.

—Casey Crownhart

This story is from The Spark, MIT Technology Review’s weekly climate newsletter. Sign up to receive it in your inbox every Wednesday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Anthropic is still chasing a deal with the Pentagon 
CEO Dario Amodei is trying to reach a compromise over the military use of Claude. (FT $)
+ But some defense tech firms are already ditching Claude after the DoD ban. (CNBC)
+ Former military officials, tech policy leaders, and academics have all slammed the ban. (Gizmodo)

2 The White House is considering forcing US manufacturers to make munitions
It could invoke the Defense Production Act amid concerns that war with Iran will diminish stockpiles. (NBC News)
+ Tech companies with operations in the Middle East have been thrown into chaos. (BBC)

3 A new lawsuit claims Google Gemini encouraged a man to take his own life
This seems to bear a striking similarity to some other AI-induced tragedies. (WSJ $)
+ Why AI should be able to “hang up” on you. (MIT Technology Review)

4 Ironically, AI coding tools could emphasize the importance of being human
If more people build software for themselves, our tech could become more personal. (WP $) 
+ But not everyone is happy about the rise of AI coding. (MIT Technology Review)

5 Tesla wants to become a dominant force in global energy infrastructure
The plan’s centrepiece is the Megapack, an enormous battery for power plants. (The Atlantic $)
+ Meanwhile, a massive thermal battery represents a big step forward for energy storage. (MIT Technology Review)

6 Chinese chipmakers are pushing for a domestic alternative to ASML 
A homegrown rival to chip-equipment giant ASML could ease the pain of US curbs. (SCMP)

7 A music-streaming CEO has built a viral conflict-tracking platform
Just in case you’re losing track of all the wars everywhere. (Wired $)

8 Do cancer blood tests actually work? 
They’re increasingly popular, but none have received approval from regulators yet. (Nature $)

9 The shift to cloud computing is causing a surge in internet outages
If one of the few big providers goes down, countless sites and services can tumble with it. (New Scientist $)

10 OpenAI has promised to cut the cringe from ChatGPT
It’s promising fewer “moralizing preambles.” (PCMag)

Quote of the day

“People tend to read too much into things that I do.”

—Tesla tycoon Elon Musk tells a jury in California that investors read too much into his social media posts, as he defends a lawsuit they’ve brought accusing him of market manipulation, Bloomberg reports. 

One More Thing

[Image: open and closed doors with a ribbon of text running around and through them. Credit: Stephanie Arnett/MITTR | Envato]

The open-source AI boom is built on Big Tech’s handouts. How long will it last?

In May 2023 a leaked memo reported to have been written by Luke Sernau, a senior engineer at Google, said out loud what many in Silicon Valley must have been whispering for weeks: an open-source free-for-all is threatening Big Tech’s grip on AI.

In many ways, that’s a good thing. AI won’t thrive if just a few mega-rich companies get to gatekeep this technology or decide how it is used. But this open-source boom is precarious, and if Big Tech decides to shut up shop, a boomtown could become a backwater. Read the full story.

—Will Douglas Heaven

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Orysia Zabeida’s animations are seriously charming.
+ World War III has broken out—will you survive? Take this quiz from 1973 to find out!
+ These photos of the Apollo 11 launch in 1969 are mesmerising.
+ If you’ve been weighing up painting your home this spring, chartreuse is the shade of the season, apparently.