Health-care AI is here. We don’t know if it actually helps patients.

I don’t need to tell you that AI is everywhere.

Or that it is being used, increasingly, in hospitals. Doctors are using AI to help them with notetaking. AI-based tools are trawling through patient records, flagging people who may require certain support or treatments. They are also used to interpret medical exam results and X-rays.

A growing number of studies suggest that many of these tools can deliver accurate results. But there’s a bigger question here: Does using them actually translate into better health outcomes for patients?

We don’t yet have a good answer.

That’s what Jenna Wiens, a computer scientist at the University of Michigan, and Anna Goldenberg of the University of Toronto, argue in a paper published in the journal Nature Medicine this week.

Wiens tells me she has spent years investigating how AI might benefit health care. For the first decade of her career she tried to pitch the technology to clinicians. Over the last few years, she says, it’s as though “a switch flipped.” Health-care providers not only appear much more interested in the promise of these technologies, they have also begun rapidly deploying them.

The problem is that many providers aren’t rigorously assessing how well they actually work.

Take “ambient AI” tools, for example. Also known as AI scribes, they “listen” to conversations between doctors and patients, then transcribe and summarize them. Multiple tools are available, and they are already being widely adopted by health-care providers.

A few months ago, a staffer at a major New York medical center who develops AI tools for doctors told me that, anecdotally, medics are “overjoyed” by the technology—it allows them to focus all their attention on their patients during appointments, and it saves them from a lot of time-consuming paperwork. Early studies support these anecdotes and suggest that the tools can reduce clinician burnout.

That’s all well and good. But what about patient health outcomes? “[Researchers] have evaluated provider or clinician and patient satisfaction, but not really how these tools are affecting clinical decision-making,” says Wiens. “We just don’t know.”

The same holds true for other AI-based technologies used in health-care settings. Some are used to predict patients’ health trajectories, others to recommend treatments. They are designed to make health care more effective and efficient.

But even a tool that is “accurate” won’t necessarily improve health outcomes. AI might speed up the interpretation of a chest X-ray, for example. But how much will a doctor rely on its analysis? How will that tool affect the way a doctor interacts with patients or recommends treatment? And ultimately: What will this mean for those patients?

The answers to those questions might vary between hospitals or departments and could depend on clinical workflows, says Wiens. They might also differ between doctors at various stages of their careers.

Take the AI scribes, as another example. Some research on AI use in education suggests that such tools can impact the way people cognitively process information. Could they affect the way a doctor processes a patient’s information? Will the tools affect the way medical students think about patient data in a way that impacts care? These questions need to be explored, says Wiens. “We like things that save us time, but we have to think about the unintended consequences of this,” she says.

In a study published in January 2025, Paige Nong at the University of Minnesota and her colleagues found that around 65% of US hospitals used AI-assisted predictive tools. Only two-thirds of those hospitals evaluated their accuracy. Even fewer assessed them for bias.

The number of hospitals using these tools has probably increased since then, says Wiens. Those hospitals, or entities other than the companies developing the tools, need to evaluate how much they help in specific settings. There’s a possibility that they could leave patients worse off, although it’s more likely that AI tools just aren’t as beneficial as health-care providers might assume they are, says Wiens.

“I do believe in the potential of AI to really improve clinical care,” says Wiens, who stresses that she doesn’t want to stop the adoption of AI tools in health care. She just wants more information about how they are affecting people. “I have to believe that in the future it’s not all AI or no AI,” she says. “It’s somewhere in between.”

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.
 

The Download: supercharged scams and studying AI healthcare

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

We’re in a new era of AI-driven scams

When ChatGPT was released in late 2022, it showed how easily generative AI could create human-like text. This quickly caught the eye of cybercriminals, who began using LLMs to compose malicious emails. Since then, they’ve adopted AI for everything from turbocharged phishing and hyperrealistic deepfakes to automated vulnerability scans.

Many organizations are now struggling to cope with the sheer volume of cyberattacks. AI is making them faster, cheaper, and easier to carry out, a problem set to worsen as more cybercriminals adopt these tools—and their capabilities improve. Read the full story on how AI is reshaping cybercrime.

—Rhiannon Williams

“Supercharged scams” is one of the 10 Things That Matter in AI Right Now, our essential guide to what’s really worth your attention in the field.

Subscribers can watch an exclusive roundtable unveiling the technologies and trends on the list, with analysis from MIT Technology Review’s AI reporter Grace Huckins and executive editors Amy Nordrum and Niall Firth.

Healthcare AI is here. We don’t know if it actually helps patients.

Doctors are using AI to help them with notetaking. AI-based tools are trawling through patient records, flagging people who may require certain support or treatments. They are also used to interpret medical exam results and X-rays.

A growing number of studies suggest that many of these tools can deliver accurate results. But there’s a bigger question here: Does using them actually translate into better health outcomes for patients? We don’t yet have a good answer—here’s why.

—Jessica Hamzelou

The story is from The Checkup, our weekly newsletter that gives you the latest from the worlds of health and biotech. Sign up to receive it in your inbox every Thursday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 DeepSeek has unveiled its long-awaited new AI model
The Chinese company has just launched preview versions of DeepSeek-V4. (CNN)
+ It says V4 is the most powerful open-source platform. (Bloomberg $)
+ And rivals top closed-source models from OpenAI and DeepMind. (SCMP)
+ The model is adapted for Huawei chip technology. (Reuters $)

2 More countries are curbing children’s social media access
Norway is set to enforce the latest ban. (Reuters $)
+ The Philippines could follow soon. (Bloomberg $)
+ Americans are pushing to get AI out of schools. (The New Yorker)

3 The US has accused China of mass AI theft as tensions rise
A White House memo claims Chinese firms are exploiting American models. (BBC)
+ Beijing calls the accusations “slander.” (Ars Technica)

4 OpenAI set itself apart from Anthropic by widely releasing its new model
It’s releasing GPT-5.5 to all ChatGPT users, despite cybersecurity concerns. (NYT $)
+ OpenAI says the new model is better at coding and more efficient. (The Verge)

5 Meta is cutting 10% of jobs to offset AI spending
Roughly 8,000 layoffs are set to be announced on May 20. (QZ)
+ Anti-AI protests are growing. (MIT Technology Review)

6 Palantir is facing a backlash from employees
Thanks to its work with ICE and the Trump administration. (Wired $)
+ Surveillance tech is reshaping the fight for privacy. (MIT Technology Review)

7 The era of free access to advanced AI is coming to an end
AI labs are under mounting pressure to start turning profits. (The Verge)

8 Elon Musk’s feud with Sam Altman is heading to court 
The case has already revealed several unflattering secrets. (WP $)

9 A new movement is encouraging people to ditch their smartphones for a month
“Month Offline” is like a Dry January for smartphones. (The Atlantic)

10 Spotify has revealed its most-streamed music of the last 20 years
Featuring Taylor Swift, Bad Bunny, and The Weeknd. (Gizmodo)

Quote of the day

“We want a childhood where children get to be children. Play, friendships, and everyday life must not be taken over by algorithms and screens.” 

—Norwegian Prime Minister Jonas Gahr Støre announces age restrictions for social media.

One More Thing



The search for extraterrestrial life is targeting Jupiter’s icy moon Europa

As astronomers have discovered more about Europa over the past few decades, Jupiter’s fourth-largest moon has excited planetary scientists interested in the geophysics of alien worlds.

All that water and energy—and hints of elements essential for building organic molecules—point to an extraordinary possibility. In the depths of its ocean, or perhaps crowded in subsurface lakes or below icy surface vents, Jupiter's big, bright moon could host life.

To find out, NASA is now searching for signs of life on Europa. Read the full story on the mission.


—Stephen Ornes

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.)

+ Here’s a fun look at the secret collaborations of pop history.
+ Meet the mannequins showing how the “ideal” body has evolved.
+ A photographer has cataloged all 12,795 objects in her home into an archive of a life.
+ Slime molds are unexpectedly beautiful when viewed through these high-detail macro shots.

Three reasons why DeepSeek’s new model matters

On Friday, Chinese AI firm DeepSeek released a preview of V4, its long-awaited new flagship model. Notably, the model can process much longer prompts than its last generation, thanks to a new design that helps it handle large amounts of text more efficiently. Like DeepSeek’s previous models, V4 is open source, meaning it is available for anyone to download, use, and modify.

V4 marks DeepSeek’s most significant release since R1, the reasoning model it launched in January 2025. R1, which was trained on limited computing resources, stunned the global AI industry with its strong performance and efficiency, turning DeepSeek from a little-known research team into China’s best-known AI company almost overnight. It also helped set off a wave of open-weight model releases from other Chinese AI firms. 

DeepSeek has kept a relatively low profile since then—but earlier this month, it effectively teased V4’s release when it added “expert” and “flash” modes to the online version of its model, prompting speculation that the updates were tied to a bigger upcoming release.

While the company has become a powerful symbol of China's AI ambitions, its big return to cutting-edge frontier models comes after months of turbulence—including major personnel departures, delays to previous model launches, and growing scrutiny from both the US and Chinese governments.

So, will V4 shake the AI field the way R1 did? Almost certainly not, but here are three big reasons why this release matters.

1. It breaks new ground for an open-source model.

As with R1 before it, DeepSeek claims that V4’s performance rivals the best models available at a fraction of the price. This is great news for developers and for companies using the tech, because it means they can access frontier AI capabilities on their own terms, and without worrying about skyrocketing costs.

The new model comes in two versions, both of which are available on DeepSeek’s website and in its app, with API access also open to developers. V4-Pro is a larger model built for coding and complex agent tasks, and V4-Flash is a smaller version designed to be faster and cheaper to run. Both versions offer reasoning modes, in which the model can carefully parse a user’s prompt and show each step as it works through the problem.

For V4-Pro, DeepSeek charges $1.74 per million input tokens and $3.48 per million output tokens, a fraction of the cost of comparable models from OpenAI and Anthropic. V4-Flash is even cheaper, at about $0.14 per million input tokens and about $0.28 per million output tokens, making it one of the cheapest top-tier models available. This would make it a very appealing model to build applications on.
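To put those rates in concrete terms, here is a quick back-of-the-envelope calculation using the prices quoted above. The workload, a hypothetical agent run that reads 400,000 input tokens and produces 50,000 output tokens, is purely illustrative.

```python
# Back-of-the-envelope cost comparison using the per-token prices quoted above.
# The rate table and the example workload are illustrative only.

PRICES_PER_MILLION = {            # (input, output) in USD per million tokens
    "V4-Pro":   (1.74, 3.48),
    "V4-Flash": (0.14, 0.28),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the cost in USD of one request at the listed rates."""
    in_rate, out_rate = PRICES_PER_MILLION[model]
    return input_tokens / 1e6 * in_rate + output_tokens / 1e6 * out_rate

# Hypothetical agent run: 400k tokens read, 50k tokens written.
for model in PRICES_PER_MILLION:
    print(f"{model}: ${request_cost(model, 400_000, 50_000):.2f}")
# V4-Pro comes to about $0.87 and V4-Flash to about $0.07 at the quoted rates.
```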

In terms of performance, V4 is, perhaps unsurprisingly, a huge jump from R1—and it seems to be a strong alternative to just about all the latest big AI models. On the major benchmarks, according to results shared by the company, DeepSeek V4-Pro competes with leading closed-source models, matching the performance of Anthropic’s Claude-Opus-4.6, OpenAI’s GPT-5.4, and Google’s Gemini-3.1. And compared to other open-source models, such as Alibaba’s Qwen-3.5 or Z.ai’s GLM-5.1, DeepSeek V4 exceeds them all on coding, math, and STEM problems, making it one of the strongest open-source models ever released. 

DeepSeek also says that V4-Pro now ranks among the strongest open-source models on benchmarks for agentic coding tasks and performs well on other tests that measure ability to carry out multistep problems. Its writing ability and world knowledge also lead the field, according to benchmarking results shared by the company. 

In a technical report released alongside the model, DeepSeek shared results from an internal survey of 85 experienced developers: More than 90% included V4-Pro among their top model choices for coding tasks.

DeepSeek says it has specifically optimized V4 for popular agent frameworks such as Claude Code, OpenClaw, and CodeBuddy.

2. It delivers on a new approach to memory efficiency.

One of the key innovations of V4 is its long context window—the amount of text the model can process at once. Both versions can handle 1 million tokens, which is large enough to fit all three volumes of The Lord of the Rings and The Hobbit combined. The company says this context window size is now the default across all DeepSeek services and it matches what is offered by cutting-edge versions of models like Gemini and Claude. 

But it's important to know not just that DeepSeek has made this leap, but how it did so. V4 makes significant architectural changes to the company's previous models—especially in the attention mechanism, the feature of AI models that helps them understand each part of a prompt in relation to the rest. As the prompt text gets longer, these comparisons become much more costly, making attention one of the main bottlenecks for long-context models.

DeepSeek’s innovation was to make the model more selective about what it pays attention to. Instead of treating all earlier text as equally important, V4 compresses older information and focuses on the parts most likely to matter in the present moment, while still keeping nearby text in full so it does not miss important details. 

DeepSeek says this sharply reduces the cost of using long context. In a 1-million-token context, V4-Pro uses only 27% of the computing power required by its previous model, V3.2, while cutting memory use to 10%. The reduction in V4-Flash is even larger, using just 10% of the computing power and 7% of the memory. In practice, this could make it cheaper to build tools that need to work across huge amounts of material, such as an AI coding assistant that can read an entire codebase or a research agent that can analyze a long archive of documents without constantly forgetting what came before.
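The report describes the mechanism in DeepSeek's own terms, but the general idea of keeping recent text at full resolution while compressing older text into summaries can be sketched roughly as follows. The block size, the mean-pooling used for compression, and the function name are illustrative assumptions, not DeepSeek's implementation.

```python
import numpy as np

def selective_attention(query, keys, values, recent_window=128, block_size=64):
    """Rough sketch of long-context attention: recent tokens are kept in full,
    older tokens are compressed into per-block summary vectors (mean-pooled).
    Shapes: query (d,), keys and values (seq_len, d). Illustrative only."""
    seq_len, d = keys.shape
    split = max(seq_len - recent_window, 0)

    # Older context: collapse each block of `block_size` tokens into one summary.
    old_k, old_v = keys[:split], values[:split]
    summaries_k = [old_k[i:i + block_size].mean(axis=0) for i in range(0, split, block_size)]
    summaries_v = [old_v[i:i + block_size].mean(axis=0) for i in range(0, split, block_size)]

    # Recent context: keep every token at full resolution.
    k = np.array(summaries_k + list(keys[split:]))
    v = np.array(summaries_v + list(values[split:]))

    # Standard scaled dot-product attention over the much shorter key/value set.
    scores = k @ query / np.sqrt(d)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ v

# With these illustrative settings, a 1,000,000-token history shrinks to roughly
# 15,700 keys and values, which is where the compute and memory savings come from.
```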

DeepSeek’s interest in long context windows didn’t start with V4. Over the past year and a half, the company has quietly published a series of papers on how AI models “remember” information, experimenting with compression and mathematical techniques to extend what AI models could realistically handle.

3. It marks the first steps on the hard road away from Nvidia.

V4 is DeepSeek’s first model optimized for domestic Chinese chips, such as Huawei’s Ascend—a move that has turned the launch into something of a test of whether China’s homegrown AI industry can begin to loosen its dependence on US chip giant Nvidia. 

This was largely expected: The Information reported earlier this month that DeepSeek did not give American chipmakers like Nvidia and AMD early access to V4, even though such prerelease access is commonly granted so chipmakers can optimize support for a new model ahead of launch. Instead, the company reportedly gave early access only to Chinese chipmakers.

On Friday, Huawei said its Ascend supernode products, based on the Ascend 950 series, would support DeepSeek V4. This means that companies and individuals who want to run their own modified version of DeepSeek V4 will be able to run it easily on Huawei chips.

Reuters previously reported that Chinese government officials recommended that DeepSeek integrate Huawei chips in its training process. And this pressure fits a broader pattern in China’s industrial policy: Strategic sectors are often pushed, and sometimes effectively required, to align with national self-reliance goals. But there’s a particular urgency when it comes to AI. Since 2022, US export controls have cut Chinese firms off from Nvidia’s most powerful chips, and they later also restricted access to downgraded China-market versions. Beijing’s response has been to accelerate the push for a domestic AI stack, from chips to software frameworks to data centers.

Chinese authorities have reportedly been pushing data centers and public computing projects to use more domestic chips, including through reported bans on foreign-made chips, sourcing quotas, and requirements to pair Nvidia chips with Chinese alternatives from companies such as Huawei and Cambricon. 

Still, replacing Nvidia is not as simple as swapping one chip for another. Nvidia’s advantage lies not only in its chips, but in the software ecosystem developers have spent years building around them. Moving to Huawei’s Ascend chips means adapting model code, rebuilding tools, and proving that systems built around those chips are stable enough for serious use.

To be clear, DeepSeek does not appear to have fully moved beyond Nvidia. The company’s technical report reveals that it is using Chinese chips to run the model for inference, or when someone asks the model to complete a task. But Liu Zhiyuan, a computer science professor at Tsinghua University, told MIT Technology Review that DeepSeek appears to have adapted only part of V4’s training process for Chinese chips. The report does not say whether some key long-context features were adapted to domestic chips, so Liu says V4 may still have been trained mainly on Nvidia chips. Multiple sources who spoke on the condition of anonymity, due to political sensitivity around these issues, told MIT Technology Review that Chinese chips still don’t perform as well as Nvidia chips but are better suited for inference than training.

DeepSeek is also tying the future costs of V4 to this hardware shift. The company says V4-Pro prices could fall significantly after Huawei’s Ascend 950 supernodes begin shipping at scale in the second half of this year. 

If that works, V4 could be an early sign that China is successfully building a parallel AI infrastructure.

Surviving D2C’s Boom and Bust

Chris Wichert is an investment banker turned direct-to-consumer entrepreneur.

His luxury shoe brand, Koio, launched in 2015 and quickly scaled. Then the pandemic hit. By late 2022, he says, the D2C hype and funding had collapsed.

He slashed costs, stabilized cash flow, and successfully exited the company. In our recent conversation, he shared his story of boom, bust, and survival.

Our entire audio is embedded below. The transcript is edited for clarity and length.

Eric Bandholz: Give us the rundown.

Chris Wichert: I co-founded Koio, a luxury footwear brand, in 2015. We exited the brand six months ago, and I helped with the transition. I’m now advising other consumer brands on how to reach profitability and stay there.

I’m from Germany. I started my career in investment banking, then moved to the U.S. for my Wharton MBA, where I met my Koio co-founder.

I live in Brooklyn, New York.

Bandholz: Did Wharton help with the launch?

Wichert: Not directly, but the Wharton School connections were conversation starters for raising money. We moved to New York after graduating.

Our first financing round, about $1.5 million, came 12 months after our launch. The money got us started, but it also set us up for the wrong path. Building a luxury D2C brand with a high average order value requires patience. You have to keep investing to eventually see the compounding effect after five, six, seven years.

We ended up raising close to $20 million over a decade. It was a mix of venture capitalists, family offices like the Winklevosses, and other D2C entrepreneurs.

We used the money initially to fund inventory and build our team. Our first hire was for operations. Our second was for marketing.

We learned quickly that selling a $300 shoe requires a strong brand and credibility. It takes a lot of investment in media outreach, pop-up stores, and retail. Our sales increased when people saw our shoes in person, tried them on, and felt the leather. So we went into retail and digital early on as a dual strategy.

We experienced great growth for the first five years. Our biggest raise, $10 million, came in 2019. But the pandemic wiped out our retail business. We had five stores at the time. Plus, our use case was gone. Our shoes were dress sneakers for dates and nice occasions.

By late 2022, early 2023, the D2C hype and funding had collapsed. Valuations plummeted.

That forced us to make big changes. We were losing roughly $3 million per year with no growth. The company was way too complex and costly. Our SKUs had expanded from men’s dress sneakers into boots, loafers, and slip-ons, for men and women.

We interviewed around 100 customers. We learned that the product expansion was detrimental to the brand. Our messaging was unclear.

We went back to the core items. Then we cut 70% of our New York team, which was painful. We closed the office and transitioned to remote only. We also closed unprofitable dropship accounts and stores. Then we rehired certain remote roles internationally.

Over the ensuing 12-18 months, we reached break-even profitability.

By then, neither my co-founder nor I wanted to keep running the business. We had an obligation to our investors and remaining employees to end the company in the best possible way.

So, I reached out to many people in D2C, especially footwear and apparel brands, to explore an exit or merger. That process was cumbersome.

It took almost two years, but we got a competitive process underway and spoke with several interested parties. We found a trustworthy acquirer who owns several brands and closed the deal with him in August of last year.

The transition lasted just six months. My co-founder and I remain shareholders. We believe in the company and wanted to ensure operational and brand consistency.

We also wanted to integrate our employees into the workflow.

Bandholz: You’ve pivoted to an advisory role.

Wichert: I’ve built a great network of consumer-brand entrepreneurs over the years. I love the industry and want to share my knowledge and experience.

I’m now working with founders across different consumer categories, such as skincare, footwear, eyewear, watches, you name it.

Bandholz: How can people reach out?

Wichert: Learn more about our shoe company at Koio.co. I’m on LinkedIn and X.

Google’s Robots.txt Docs Expand, Deep Links Get Rules, EU Steps In – SEO Pulse via @sejournal, @MattGSouthern

Welcome to the week’s Pulse: updates affect how deep links appear in your snippets, how your robots.txt gets parsed, how agentic features work in Search, and how the EU’s data-sharing rules apply to AI chatbots.

Here’s what matters for you and your work.

Google Lists Best Practices For Read More Deep Links

Google updated its snippet documentation with a new section on “Read more” deep links in Search results. The documentation lists three best practices that can increase the likelihood of these links appearing.

Key facts: Content must be immediately visible to a human on page load, and content hidden behind expandable sections or tabbed interfaces can reduce the likelihood of these links appearing. Sections should use H2 or H3 headings. The snippet text needs to match the content that appears on the page, and pages with content loaded after scrolling or interaction may further reduce the likelihood.

Why This Matters

The three practices are the first specific guidance Google has published on this feature. Sites using expandable FAQ sections, tabbed product detail areas, or scroll-triggered content for core information may see fewer deep links in their snippets compared with sites that render the same content on page load.

The guidance matches a pattern Google has applied to other Search features. Content that renders without user interaction is more likely to appear in enhanced display.

Slobodan Manić, founder of No Hacks, made a related observation on LinkedIn:

“The documentation is framed around one snippet behavior (read more deep links in search results), but the language Google chose reads as a general preference. ‘Content immediately visible to a human’ is the structural instruction, not a read-more-specific tip.”

Manić’s point extends his April 16 IMHO interview with Managing Editor Shelley Walsh, where he argued that most websites are structurally broken for AI agents. He argues that search crawlers and AI agents now face the same structural problem, and the audit is the same for both.

For existing pages, the audit question is whether key information is contained within a click-to-expand element. If a page already has a “Read more” deep link for one section, that section’s structure serves as a guide to what works. For other sections on the same page, replicating that structure may also improve their chances.

Google describes the guidance as best practices that can “increase the likelihood” of deep links appearing. That hedging matters because this is not a list of requirements, and following all three may not guarantee the links appear.

Read our full coverage: Google Lists Best Practices For Read More Deep Links

Google May Expand Its Robots.txt Unsupported Rules List

Google may add rules to its robots.txt documentation based on analysis of real-world data collected through HTTP Archive. Gary Illyes and Martin Splitt described the project on the latest Search Off the Record podcast.

Key facts: Google’s team analyzed the most frequently unsupported rules in robots.txt files across millions of URLs indexed by the HTTP Archive. Illyes said the team plans to document the top 10 to 15 most-used unsupported rules beyond user-agent, allow, disallow, and sitemap. He also said the parser may expand the typos it accepts for disallow, though he did not commit to a timeline or name specific typos.

Why This Matters

If Google documents more unsupported directives, sites using custom or third-party rules will have clearer guidance on what Google ignores.

Anyone maintaining a robots.txt file with rules beyond user-agent, allow, disallow, and sitemap should audit for directives that have never worked for Google. The HTTP Archive data is publicly queryable on BigQuery, so the same distribution Google used is available to anyone who wants to examine it.

The typo tolerance is the more speculative part. Illyes' phrasing implies that the parser already accepts some misspellings of "disallow," and more may be honored over time. Audit any spelling variants now and correct them, rather than counting on the parser to tolerate them.
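For anyone who wants to run that audit now, a minimal sketch might look like the following. It flags any directive outside the four rules Google documents as supported, plus a few obvious misspellings of "disallow." The typo list and the sample file are illustrative assumptions, not a description of Google's parser.

```python
# Minimal robots.txt audit sketch: flag directives outside the four rules Google
# documents as supported, plus obvious misspellings of "disallow".
# The misspelling list is illustrative; it does not describe Google's parser.

SUPPORTED = {"user-agent", "allow", "disallow", "sitemap"}
LIKELY_TYPOS = {"disalow", "dissallow", "disallaw"}  # hypothetical examples

def audit_robots_txt(text: str):
    findings = []
    for lineno, raw in enumerate(text.splitlines(), start=1):
        line = raw.split("#", 1)[0].strip()        # drop comments and whitespace
        if not line or ":" not in line:
            continue
        directive = line.split(":", 1)[0].strip().lower()
        if directive in LIKELY_TYPOS:
            findings.append((lineno, raw, "likely misspelling of 'disallow'"))
        elif directive not in SUPPORTED:
            findings.append((lineno, raw, "not in Google's supported rule set"))
    return findings

sample = """User-agent: *
Disalow: /tmp/
Crawl-delay: 10
Sitemap: https://example.com/sitemap.xml"""

for lineno, line, reason in audit_robots_txt(sample):
    print(f"line {lineno}: {line!r} -> {reason}")
```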

Read our full coverage: Google May Expand Unsupported Robots.txt Rules List

EU Proposes Google Share Search Data With Rivals And AI Chatbots

The European Commission sent preliminary findings proposing that Google share search data with rival search engines across the EU and EEA, including AI chatbots that qualify as online search engines under the DMA. The measures are not yet binding, with a public consultation open until May 1 and a final decision due by July 27.

Key facts: The proposal covers four data categories shared on fair, reasonable, and non-discriminatory terms. The categories are ranking, query, click, and view data. Eligibility extends to AI chatbot providers that meet the DMA’s definition of online search engines. If the Commission maintains eligibility through the final decision, qualifying providers could gain access to anonymized Google Search data under the Commission’s proposed terms.

Why This Matters

This proposal explicitly extends search-engine data-sharing eligibility to AI chatbots under the DMA. If that eligibility survives the consultation, the regulatory category of "search engine" would include products that most search marketing work has treated as a separate category.

The consequences vary depending on where you operate. For sites optimizing for EU/EEA visibility, the change could broaden the scope of where anonymized search signals flow. AI products competing with Google in that market could use the data to improve their retrieval and ranking systems, which could, in turn, affect which content they cite.

Outside the EU, the direct regulatory effect is zero. The category definition is a different matter. How the Commission draws the line between “AI chatbot” and “AI chatbot that qualifies as a search engine” is likely to be cited in future proceedings.

The eligibility question is the story to watch through May 1. If the Commission narrows the AI chatbot criteria in response to consultation feedback, the implications stay regulatory. If it holds the line, that would set a material precedent for how AI search is classified.

Read our full coverage: Google May Have To Share Search Data With Rivals

Google Adds New Task-Based Search Features

Google introduced new Search features that continue its evolution toward task completion. Users can now track individual hotel price drops via a new toggle in Search, and Google is adding the ability to launch AI agents directly from AI Mode.

Key facts: Hotel price tracking is available globally through a toggle in the search bar. When prices drop for a tracked hotel, Google sends an email alert. The AI agent launched from AI Mode allows users to initiate tasks handled by AI within the search interface. Rose Yao, a Google Search product leader, posted about the features on X.

Why This Matters

Each task-based feature moves a process that previously started on another site into Google’s own surface. Hotel price tracking has existed at the city level for months. Expansion to individual hotels adds a new signal that users can set inside Google rather than on hotel or aggregator sites.

Direct-booking visibility depends on being inside Google’s ecosystem. Sites relying on price-drop alerts as a return-trigger for users may see some of that engagement reallocated to Google’s tracking UI. For hotel brands, this raises the stakes for ensuring individual hotel pages are fully populated in Google Business Profile and hotel feeds.

On LinkedIn, Daniel Foley Carter connected the feature to a broader pattern:

“Google’s AI overviews, AI mode and now in-frame functionality for SERP + SITE is just Google eating more and more into traffic opportunities. Everything Google told US not to do its doing itself. SPAM / LOW VALUE CONTENT – don’t resummarise other peoples content – Google does it.”

The AI agent launch is more speculative. Google has not published detailed documentation explaining what kinds of tasks users can delegate or how sources get cited. The feature confirms that agentic search, described by Sundar Pichai as “search as an agent manager,” is appearing incrementally in Search rather than as a single launch.

Read Roger Montti's full coverage: Google Adds New Task-Based Search Features

Theme Of The Week: The Rules Are Getting Written

Each story this week spells out something that was previously implicit or underway.

Google signaled plans to expand what its robots.txt documentation covers. The company listed specific practices that can increase the likelihood of “Read more” deep links appearing. The European Commission proposed measures that extend search-engine data-sharing eligibility to AI chatbots under the DMA. And task-based features that Sundar Pichai described in interviews are rolling out as toggles in the search bar.

For your day-to-day, the ground gets firmer. Fewer questions are judgment calls. What does and doesn’t qualify, what Google supports, and what counts as a search engine to a regulator are all getting written down. That works to your advantage when it means clearer audit criteria, and against you when “we weren’t sure” is no longer a defensible answer.


Why Microsoft’s AI Ad Strategy Deserves More Attention From PPC Managers via @sejournal, @brookeosmundson

Microsoft announced a wave of AI updates this week, and most of the coverage will likely focus on the individual launches. New targeting options, diagnostics, commerce tools, Copilot enhancements, and campaign features will naturally get the headlines.

What stood out to me was the broader vision behind them.

Microsoft is not just talking about better ads. They're talking about a different internet, where businesses need to be relevant both to people and to the AI systems that help shape their decisions.

According to this week's announcement, AI agents are becoming the fastest-growing audience. The company says automated traffic is growing 8x faster than human traffic, AI-driven sessions nearly tripled in 2025, and agentic browser traffic is up roughly 8,000% year over year. Those visitors don't browse the way people do. They evaluate, select, and act. If a brand's data is weak, incomplete, or untrusted, they move on.

That changes what modern performance marketing may require. Visibility inside AI answers, stronger product data, better measurement, faster diagnostics, audience precision, and clearer control over automation all start to matter more in that environment.

Google is pushing many of these same themes in its own way, especially around product feeds, automation, and AI-assisted search experiences. But Microsoft’s recent announcements offer a distinct perspective on where advertiser value may come from as discovery and buying behavior continue to shift.

Because underneath the product updates is a bigger question for PPC teams: how do you compete when the next valuable audience may not always be human?

Microsoft Is Selling A Different AI Future

Most platform announcements focus on what a new feature does. Microsoft spent more time explaining why advertiser behavior may need to change.

Their framework centered on three parallel realities:

  • People still searching on their own (the Human web)
  • People using AI to compare options (the LLM web)
  • AI systems taking action on behalf of users (the Agentic web)

The point behind these parallels is that customer journeys are less linear than ever, and they are finally being recognized as such.

For years, many PPC teams optimized around the click because the click was the clearest measurable moment. Someone searched, clicked, landed, and converted. That model still matters, but it no longer explains every influence that leads to a sale.

If an AI assistant narrows the shortlist before a search happens, the brand has already won or lost ground. If a shopping assistant compares shipping speed, loyalty perks, and product availability in seconds, the decision may be shaped before the landing page visit. If an agent eventually completes more transactions directly, structured data and transaction readiness become part of media performance.

That is why this announcement deserves more attention than a standard product roundup. Microsoft is describing a future where paid media performance depends on more than media settings.

Why This Matters For PPC Managers

Many advertisers are still operating with a channel mindset. Additionally, these channels likely sit within different teams in an organization (Search, SEO, CRM data, Analytics, etc.).

That separation becomes harder to sustain, and it creates friction, if buying journeys are influenced by connected systems rather than isolated clicks.

This is where the role of PPC teams can start to expand and/or evolve.

Strong practitioners still need campaign skills; that's never going to change. They also need to spot when the real constraint sits outside the account, bring the right teams together, and push improvements that create better inputs for the platform.

These skills become your advantage as a PPC marketer down the road, when campaign management and optimization become automated, but that's a subject for another day.

How Microsoft’s AI Vision Takes A Different Approach

Google remains the largest force in paid search. It also continues to launch strong AI updates across bidding, creative, search experiences, and campaign management. This is not about Google falling behind.

What stood out to me was where Microsoft placed its focus.

A lot of AI discussion still centers on better ads, faster automation, or the next big interface. Microsoft spent more time talking about how buying behavior is changing and what advertisers may need to do differently.

Their view suggests the audience is no longer only the customer.

It can also be the AI system helping compare products, narrow options, recommend brands, or complete tasks on someone’s behalf.

That is where I think Microsoft’s message becomes more interesting than a standard product launch. They are pushing marketers to think beyond clicks and impressions and pay closer attention to how decisions are being shaped before a traditional ad interaction ever happens.

If that shift continues, many teams will realize they were optimizing the final step of the journey while missing the earlier moments that influenced the outcome.

AI Visibility In Microsoft Clarity Is Their Competitive Advantage

If I had to choose the most useful announcement for marketers, I would put AI Visibility in Microsoft Clarity near the top of the list.

Why? Because it speaks to a blind spot many businesses may already have.

A lot of performance reporting has been built around clicks, visits, and conversions that happen in trackable sessions. As AI tools start summarizing answers, citing brands, and influencing decisions before someone reaches a site, that model becomes less complete.

Some brands may already be winning attention in those moments. Others may be losing ground. Many likely cannot see either clearly today.

That is what makes this update so interesting.

Microsoft is giving businesses a way to understand how AI systems discover, cite, and surface their content. You do not need to advertise on Microsoft for that to matter. SEO teams, content teams, e-commerce leaders, and paid media teams all have a reason to care about how their brand appears in AI-driven experiences.

My bigger view is that tools like this will eventually become normal. Right now, Microsoft is one of the first major platforms speaking clearly about the problem and trying to give marketers something actionable to measure.

Audience Generation Could Be More Useful Than It Sounds

Audience Generation may sound like another setup feature, but I think it deserves more attention than that.

Microsoft describes it as an AI-powered audience assistant where advertisers can describe an ideal customer in natural language and receive recommended targeting settings. That can include demographics, locations, in-market signals, and dynamically generated audiences.

What interests me most is how this could improve strategic thinking, not just save time during campaign creation.

Many advertisers already know their obvious audience. But strong audience strategy often depends on ideas a team does not think to test.

For example, an advertiser may know they want “young professionals interested in fitness.” They may not think about adjacent areas where those consumers spend time, neighborhoods with stronger purchase intent, seasonal behaviors tied to events, or combinations of signals that reveal higher-value segments.

That is where a tool like this can become valuable.

Used thoughtfully, it can help marketers find new angles to test, challenge stale audience assumptions, and build stronger targeting plans than they may have created manually.

How Microsoft Is Turning That AI Vision Into Practical Tools

A broader vision only matters if it shows up in tools advertisers can actually use.

That is where Microsoft’s recent updates become more interesting.

Explainability Is Part Of The Product

One of the more useful launches was performance shift root-cause analysis inside the Microsoft Advertising Platform.

When results move sharply, most marketers don't need another dashboard. They need to know what changed and a clear "why." Without the why, marketers can't identify how to improve campaigns or pivot strategy.

Getting to that answer faster can save hours of manual work. It can also help teams act with more confidence instead of making reactive changes.

Google is thinking in a similar direction. Its Ads Advisor experience is also designed to help advertisers ask questions, surface insights, and understand account performance faster.

The opportunity for marketers is not choosing one assistant over another. It is using these tools to reduce analysis time and spend more time on better decisions.

Guardrails Still Matter

Microsoft also emphasized brand exclusions, term exclusions, and messaging constraints tied to AI-powered products like AI Max.

It mirrors the direction Google has taken with AI Max and broader advertiser controls across automated products.

That matters because many advertisers are not operating in a world where they can simply turn everything on and hope for the best. Legal review, brand standards, regulated categories, stakeholder approvals, and internal risk tolerance all shape how new tools get adopted.

That is why control features deserve more attention than they usually get. They are often what make adoption possible in the first place.

Product Data Continues To Be Bigger Than Shopping Campaigns

One of the clearest signals from both Microsoft and Google right now is that product data is starting to matter far beyond traditional Shopping campaigns.

Clean titles, accurate availability, pricing consistency, strong attributes, shipping details, and trustworthy structured data can now influence how products are surfaced across search experiences, AI recommendations, comparison journeys, and agent-assisted buying flows.

That is exactly why I wrote last week that Google’s product feed strategy points to the future of retail discovery. Product data is no longer just supporting Shopping campaigns. It is becoming part of how platforms understand inventory, evaluate relevance, and decide what gets shown in newer discovery environments.

Microsoft’s recent announcements point to the same shift through a different lens. Google is emphasizing Merchant Center and commerce surfaces. Microsoft is emphasizing agentic commerce, Copilot experiences, and AI visibility.

Feed health is becoming a growth issue, not just an operations issue – something that both Google and Microsoft are telling the industry.
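To make that concrete, here is a rough sketch of a single well-populated feed record with the attributes named above filled in. The field names loosely follow common merchant-feed conventions, but the record, its values, and the brand are illustrative assumptions, not a spec from either platform.

```python
# Illustrative product feed record. Field names loosely follow common
# merchant-feed conventions; the values and brand are hypothetical.
product_record = {
    "id": "SKU-1042",
    "title": "Men's Leather Dress Sneaker, White, Size 10",  # clean, specific title
    "availability": "in_stock",                              # kept in sync with inventory
    "price": "300.00 USD",                                   # consistent with the on-site price
    "shipping": {"country": "US", "service": "Standard", "price": "0.00 USD"},
    "brand": "ExampleBrand",
    "gtin": "00012345678905",                                # trustworthy identifier for matching
    "condition": "new",
}
```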

What Advertisers Are Saying

Navah Hopkins, the Microsoft Ads Liaison, took to LinkedIn to share her thoughts on these updates. She highlighted diagnostics, clearer explanations, and the idea that marketers should decide what they own, what they share with AI, and what they delegate. That framing reflects how adoption actually happens inside businesses. Teams rarely hand over everything at once. They test where trust has been earned.

She also pointed to Microsoft Clarity as an increasingly valuable source of behavioral insight as AI-driven experiences grow, which I completely agree with.

Mark Creusen added his thoughts to her post:

The owning and sharing bit always pops for me. Way easier to chill about AI when you just mark out what’s “yours” and what you’re happy to throw to the bots instead of trying to wrangle it all. Otherwise teams just end up dragging each other to burnout mountain.

Frederick Vallaeys focused on another risk: invisibility. In his write-up after Microsoft’s partner event, he argued that many businesses may be unprepared for AI-driven discovery and cited Microsoft’s discussion around sites still blocking AI agents through robots.txt. He also highlighted strong early commerce statistics shared at the event, including higher purchase likelihood after Copilot interactions and conversion lifts tied to Brand Agents.

What This Means For Your Campaigns

The bigger lesson from Microsoft’s updates is that campaign performance may increasingly be shaped by factors that sit outside the traditional campaign build. That includes how your products are structured, how clean your measurement setup is, how well your audiences reflect real buying behavior, and whether your brand is visible in AI-assisted discovery moments before a search click ever happens.

Below are a few areas worth reviewing that can help shape a broader operating mindset:

  • Product data quality: If your feeds are incomplete, outdated, or inconsistent, the risk may extend beyond Shopping campaigns. Product titles, availability, pricing, shipping details, and attributes can influence how platforms understand and surface your inventory across emerging discovery experiences.
  • Measurement health: Now is a good time to audit conversion actions, tag coverage, offline imports, and attribution settings. As journeys become less direct, weak measurement creates larger blind spots and poorer optimization inputs.
  • Audience strategy: Many accounts still rely on narrow audience assumptions or static segments. Revisit whether your current targeting reflects how customers actually behave today. There may be untapped value in layered signals, geographic nuance, seasonal behaviors, or adjacent intent patterns.
  • Search term coverage: If AI tools help users refine decisions earlier, the searches that remain may become more specific, comparative, or action-oriented. Review whether your keyword strategy and ad copy are aligned to that shift in intent.
  • Platform diversification: Secondary channels can become valuable learning environments before they become major budget lines. Even modest investment in Microsoft Ads can help teams test new audience models, automation controls, and reporting approaches that may influence broader strategy later.

Looking Ahead

Microsoft’s biggest advantage may not be trying to out-Google Google.

It may be continuing to invest where it already has a credible edge: advertiser workflow tools, B2B audience intelligence through LinkedIn, clearer visibility into AI-driven discovery, and commerce experiences built for a world where assistants help shape decisions.

That is a different lane, and it could be a valuable one for marketers if Microsoft keeps executing.

The next year will likely tell us whether these announcements were a strong signal of where the platform is headed or simply another round of product updates.

Which of Microsoft’s new AI features, if any, would you seriously consider testing in your own campaigns?


Localized Distribution In The AI Era: The DIRHAM Framework via @sejournal, @gregjarboe

Last year, I taught a module on content marketing around the PESO model (Paid, Earned, Shared, and Owned media). Matt Bailey asked me to include more content about influencers in this year’s module; I joked that it might take me all morning to come up with a new acronym. He shot back, “Can you adapt it to a DIRHAM model instead of PESO?”

That’s when I had an epiphany: Buried beneath our banter was a strategic insight.

Publishing great content used to be enough. Write something valuable, post it, and trust that search engines, social feeds, and your audience will handle the rest. For most of the past decade, that assumption held. It no longer does.

Between your content and your audience now stand three powerful gatekeepers, and none of them are human. AI summarization systems like Google’s AI Overviews surface answers without delivering clicks. Social feed algorithms pre-select what users ever encounter, often before those users have articulated what they want. Private messaging networks carry enormous volumes of content sharing through channels that are invisible to any analytics tool. If your content isn’t built to pass through all three of these filters, quality becomes irrelevant. It simply won’t be found.

In response to this challenge, I created the DIRHAM framework.

Why The Old Frameworks No Longer Work

Content marketers generally have organized their thinking around PESO: Paid, Earned, Shared, and Owned media. The model served its purpose well as a categorization tool, helping teams allocate budgets and map campaigns across channels. The problem is that PESO was built to answer a distribution question that no longer captures the real strategic challenge. It told you where to place content. It said nothing about how to make content visible in a world where algorithms, not humans, decide what gets surfaced.

DIRHAM is a visibility system rather than a categorization scheme. It is behavior-driven and AI-aware, designed around how content is actually discovered today rather than how it traveled through digital channels a decade ago. The distinction matters because discovery itself has fragmented across three systems that operate on entirely different logic. Search has become an AI answer engine that returns summaries instead of links. Social platforms use recommendation algorithms that predict what users want before those users have searched for anything. And messaging apps carry significant content sharing through what marketers call dark social, private exchanges that leave no traceable footprint in your analytics dashboard.

Each of these systems decides relevance differently, which means a single distribution strategy cannot serve all three. That, in turn, exposes the deeper problem with channel-first thinking. Asking “where should we post?” is no longer the right starting point. The more productive question is how this particular audience actually discovers things, and what each system needs to see before it will serve your content to them.

The Six Pillars Of DIRHAM

D: Digital Advertising

The role of paid media has changed in ways that most campaign budgets haven't caught up with yet. The old model treated paid advertising as a direct delivery mechanism: You bought impressions, people clicked, some of them converted. In the AI era, that logic is incomplete. Paid media's primary strategic function now is to generate the early engagement signals that algorithms need before they will distribute your content organically. Paid doesn't deliver to the audience anymore. It earns the algorithmic attention that makes organic delivery possible.

This reframing has real implications for how budgets should be structured and how creative should be evaluated before spend. Rather than committing to a single campaign execution, the more effective approach is a three-stage cycle: Run small tests across multiple creative variations, use AI performance tools to identify which executions are generating genuine signal, then scale selectively into what’s actually working. Small bets, fast reads, concentrated fuel.

Targeting has matured in a parallel direction. Legacy demographic segmentation worked from surface assumptions about who a person was based on age, gender, and location. AI-powered clustering works from behavioral reality, tracking what people actually do, what they read past, what they share, what they ignore. Content that mirrors real behavioral patterns gets amplified. Content that shouts without matching those patterns gets filtered out, regardless of budget. And creative that looks like advertising at a glance will fail to generate the engagement signals that trigger wider distribution in the first place. Native creative, content that looks and feels like organic content in each platform’s environment, is not just aesthetically preferable. It is structurally necessary.

I: Influencer Partnerships

In an environment where AI-generated content floods every platform, human credibility has become the most effective filter against noise. Audiences, consciously or not, are calibrating their attention toward sources that have demonstrated genuine expertise or authentic experience, and away from the polished but anonymous brand voice that could have been written by anyone or anything. This is why influencer strategy in the DIRHAM model is not primarily about reach. It is about borrowed trust.

The distinction matters because it changes who you look for and what you ask them to do. A creator with 200,000 engaged followers who have followed them for three years because they trust their judgment is more valuable in this environment than a creator with 2 million followers and a transactional relationship with branded content. The former has built the authenticity, consistency, and credibility that together produce real trust. The latter has reach without the authority that makes recommendations land.

The operational implication is a move away from one-off campaign sponsorships toward integrated, ongoing relationships. When influencer programs feel bought rather than believed, they fail on two levels. They fail to generate the authentic engagement that algorithms reward, and they fail to produce the kind of trust transfer that makes the partnership valuable in the first place. The most effective influencer programs are built around shared narratives and long-term creative collaboration, which produces compounding community value that a single sponsored post cannot. This also means that creator selection has to account for context. In government and public sector campaigns, credibility and safety are the primary criteria, with success measured through sentiment and public awareness. In commercial campaigns, fit and demonstrated performance matter most, and success gets measured through conversion and sales velocity. Reach alone is never sufficient justification for a partnership.

R: Regional And Local Context

AI systems are not passive distributors. They actively parse content to determine who it is for, and generic content sends signals that are simply too ambiguous for the system to act on confidently. Without specific geographic or cultural markers, content can get deprioritized, not necessarily because it’s of poor quality, but because the algorithm cannot reliably categorize it or identify the right audience to serve it to. The counterintuitive result is that narrowing your focus tends to increase your reach. Anchoring content in regional or local specificity gives the system exactly the classification signal it needs to serve the content to people who will engage with it.

One of the most common mistakes brands make when addressing multilingual markets is treating bilingual content as a translation problem. It is not. Arabic and English audiences in the UAE, for example, engage with content on the same platforms through fundamentally different cultural frames. English-language content in that market tends to perform around adventure, exploration, and discovery. Arabic-language content, produced by creators with genuine cultural proximity, centers on heritage, family, and values that are better expressed in local dialect than in formal translated language. The difference is not vocabulary. It is intent and tone, and no translation process produces it reliably. What local creators bring to content distribution is something that should be understood as shared context: an intuitive grasp of reference, nuance, and community expectation that outside brands cannot replicate and cannot purchase directly. They can only access it by working genuinely with people who hold it.

H: Hybrid Content

Hybrid content is what happens when passive consumption and active involvement are designed into the same piece of content. The reason it matters so much in the current environment is that engagement is not merely a metric for how interesting your content was. It is the distribution mechanism itself. When users comment, complete a challenge, share to their own network, or otherwise participate in content, they are not just expressing interest. They are distributing the content on your behalf. Without that participation, reach is bounded by budget. With it, reach compounds through the network in ways that no paid campaign can replicate in isolation.

This changes the design question for content. Broad content, built for a generic audience and a generic platform, tends to produce passive consumption. People scroll past it, or watch it to completion, and move on. Specific content, anchored in a particular cultural reality or a particular community’s concerns, provokes a response. It invites people to add themselves to the story, to disagree or affirm, to share with someone they know, because it lands with enough specificity to feel personal. Gamification, photography challenges, and community incentives work in this context not as marketing gimmicks but as structural mechanisms for turning audience members into distributors. AI tools can accelerate the production of hybrid content significantly, handling drafting, formatting, and initial translation at volume. But the human editorial layer remains essential. Resonance, cultural accuracy, and the kind of tonal authenticity that makes people want to participate cannot be automated. The goal is not automated publishing; it is automated drafting with rigorous human curation.

A: AI Visibility

Becoming visible to AI answer engines requires a different optimization logic than traditional SEO. The governing rule is that AI systems reward reliability and structural clarity above creativity and cleverness. A headline that works brilliantly for a human reader because it is unexpected or witty may work against you in an LLM context, because the machine cannot confidently categorize content whose purpose is obscured by figurative language. Clear, consistent, authoritative content builds the kind of signal that answer engines recognize and cite over time.

Structure is the mechanism. AI models parse structural elements before they interpret meaning, which means clear headers function as navigation signals, declarative sentences enable clean fact extraction, and credibility markers such as named sources, cited research, and identified authorship communicate authority to the system in ways that stylistic sophistication simply does not. If the architecture of the content is unclear, the quality of what’s inside it goes unread.
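
To make that concrete, here is a deliberately toy sketch of the kind of extraction an answer-engine pipeline might perform. The sample article, function name, and parsing rules are all hypothetical, and no real system is this crude; the point is simply that clear headers and a declarative first sentence give even a naive parser something unambiguous to lift and attribute.

```python
# Toy illustration only: shows why clear headers and declarative first
# sentences are easy for a machine to extract, while witty or figurative
# framing leaves nothing clean to lift. Not how any real answer engine works.
import re

ARTICLE = """\
## What is the DIRHAM model?
The DIRHAM model is a six-pillar content distribution framework for the AI era.

## Why does structure matter?
Structured content gives AI systems clear signals about what each section claims.
"""

def extract_facts(markdown_text: str) -> dict[str, str]:
    """Map each header to the first declarative sentence beneath it."""
    facts = {}
    sections = re.split(r"^## ", markdown_text, flags=re.MULTILINE)
    for section in sections[1:]:          # skip any preamble before the first header
        header, _, body = section.partition("\n")
        first_sentence = body.strip().split(". ")[0].rstrip(".") + "."
        facts[header.strip()] = first_sentence
    return facts

for question, answer in extract_facts(ARTICLE).items():
    print(f"{question} -> {answer}")
```

A headline written as a riddle would defeat even a far more sophisticated version of this: there would be no reliable mapping between the question a user asks and the answer the content contains.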

There is also a significant measurement gap that most organizations have not addressed. AI and LLM conversations represent the fastest-growing discovery channel in most content categories, but they are almost entirely invisible to conventional SEO tools. Tools like Cairrot have emerged specifically to track brand citations inside AI models, showing where and how organizations appear when users ask ChatGPT, Perplexity, or Gemini a relevant question. The new SEO is not optimizing for a position on a search results page. It is optimizing to become the source an AI system trusts enough to cite.

M: Measuring Outcomes

The final pillar of DIRHAM is still where most organizations’ discipline breaks down, and where the gap between doing DIRHAM and doing it well tends to be widest. The standard that should govern every measurement decision is straightforward: If a metric doesn’t change what you do next, it doesn’t matter. Impressions, follower counts, and raw reach have always been easier to report than to act on, and in an era of infinite AI-generated content production, they have become almost entirely disconnected from influence or impact.

The hierarchy that actually serves strategic decisions looks different. Impressions and vanity metrics get ignored. Engagement signals get observed carefully, because they reveal which content is generating the algorithmic response and community participation that the other pillars depend on. Behavioral change and decisions are what the work is relentlessly optimized toward, because those are the outcomes the content exists to produce. Every campaign run this way becomes the prototype for the next one. The data from one cycle funds better decisions in the next.

For organizations with “trust” instead of “cash” as a strategic objective, particularly in government and public sector contexts, the Hon and Grunig Trust Scorecard provides a quantifiable measurement approach. It assesses trust through three dimensions: Integrity, measured through whether stakeholders believe the organization treats people fairly and considers them in decisions; Dependability, measured through whether stakeholders believe the organization keeps its commitments; and Competence, measured through whether stakeholders believe the organization can deliver what it promises. Stakeholders rate these dimensions on a Likert scale, producing a quantifiable trust score that can be tracked over time and correlated with content and campaign activity.
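
As a rough illustration of how that scorecard could be operationalized, here is a minimal sketch in Python. It assumes a 1-to-7 Likert scale, equal weighting across the three dimensions, and entirely invented survey responses; none of this comes from Hon and Grunig’s original instrument, which defines its own items and scales.

```python
# Minimal sketch, assuming a 1-7 Likert scale and equal weighting across the
# three trust dimensions named above. The survey responses are hypothetical.
from statistics import mean

def trust_score(responses: list[dict[str, int]]) -> dict[str, float]:
    """Average stakeholder ratings per dimension, plus a composite score."""
    dimensions = ("integrity", "dependability", "competence")
    per_dimension = {d: mean(r[d] for r in responses) for d in dimensions}
    per_dimension["composite"] = mean(per_dimension[d] for d in dimensions)
    return per_dimension

# One survey wave: each dict holds a single stakeholder's ratings.
wave = [
    {"integrity": 6, "dependability": 5, "competence": 6},
    {"integrity": 5, "dependability": 4, "competence": 6},
    {"integrity": 6, "dependability": 6, "competence": 7},
]

print(trust_score(wave))
```

Tracked wave over wave and set alongside campaign timelines, the composite score becomes exactly the kind of metric that changes what you do next.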

DIRHAM In Action: The World’s Coolest Winter Campaign

Abstract frameworks earn their place by explaining real results. The UAE’s World’s Coolest Winter campaign, which concluded on Feb. 2, 2026, is an unusually clean example of the DIRHAM model operating at full scale, because the framework wasn’t applied after the fact. Distribution was the blueprint from the beginning.

The campaign’s paid media strategy used TikTok and Snapchat as the primary channels, with short-form cinematic video built specifically for scrolling behavior rather than for broadcast viewing. Instant-experience formats connected directly to destination booking, collapsing the distance between discovery and action. Critically, paid spend was deployed to generate algorithmic ignition rather than to deliver impressions. The goal was to earn enough early engagement signal that organic sharing would carry the campaign forward, which is exactly what happened. Paid lit the fire. Organic kept it burning.

On the influencer side, the campaign avoided the trap of centralizing its voice. Instead of a single spokesperson, it deployed influencer missions structured around distinct audience segments. Lifestyle creators on TikTok highlighted adventure and entertainment experiences, reaching audiences looking for something unexpected to do. Professional voices on LinkedIn surfaced the UAE as a destination for remote work and family travel, reaching audiences whose priorities are entirely different. The strategic logic was that diversity of influence produces diversity of reach. Trust is built through credible local voices, not through a polished corporate message broadcast at scale.

The regional dimension of the campaign revealed something that straightforward localization would have missed. English-language content was built around adventure, hidden gems, and the kind of active discovery that appeals to visitors approaching the country as travelers. Arabic-language content was built around heritage, privacy, and family, using local dialect and family-centric themes that resonated with residents and regional visitors through a completely different cultural logic. The same destination, communicated through entirely different frames. That specificity did two things simultaneously: It made the content more resonant for human audiences, and it gave AI discovery systems the clear categorical signals they need to serve content to the right people. The regional strategy wasn’t just a localization effort. It was an authority signal.

The hybrid content mechanism at the center of the campaign was a gamified digital passport system that invited visitors to earn stamps by experiencing all seven Emirates, with photography challenges and completion incentives that rewarded actual behavior rather than passive attention. This bridged digital content discovery with physical travel behavior, and it recruited participants as content creators in the process. Every visitor who shared a photograph or completed a challenge was generating authentic user content that no brand team could have produced centrally. The campaign’s AI visibility strategy depended on exactly this kind of volume: thousands of UAE residents posting under shared hashtags simultaneously created what the campaign called a Signal Storm. That mass of authentic, organic, contextually rich content fed AI discovery systems with the consistent high-volume signal that establishes topical authority at scale. Social proof of this kind cannot be manufactured. It must be engineered through genuine participation.

The outcomes validated the model. The campaign generated AED 12.5 billion in hotel revenues, attracted 5 million guests (a 5% increase over the prior period), and achieved an 84% nationwide hotel occupancy rate. These are behavioral outcomes, not impression counts. They are the direct result of distribution strategies built around how people actually discover, evaluate, and act on content. When distribution aligns with behavior, visibility compounds.

The Integrated Workflow

Understanding each pillar individually is necessary but insufficient. What makes DIRHAM work as a system is the way the pillars interact, and where the interaction breaks down.

Digital advertising without content relevance generates clicks that produce no signal worth amplifying. Influencer reach without genuine trust is wasted on an audience that has already learned to filter branded content. Regional specificity without hybrid participation anchors the content in place without recruiting the network to carry it further. AI visibility without structural clarity leaves authoritative content invisible to the systems that would otherwise surface it. Measurement that reports on impressions rather than behavioral change tells you what happened last quarter without telling you what to do in this one. Each element depends on the others. Weakness in one area suppresses results across the whole system.

The workflow that holds this together operates as a continuous loop. It begins with paid signals to earn algorithmic attention, moves through influencer validation to establish human trust, anchors in local context to signal relevance to both algorithms and audiences, amplifies through participation by designing for users to become distributors, optimizes for machine readability so that AI systems can parse and cite the content, and closes with measurement of behavioral impact. That measurement then determines the budget, targeting, and creative decisions that ignite the next cycle. Measurement connects directly back to the D. The loop is continuous rather than linear, and the information flowing from the M back to the D is what makes the system improve over time.

Key Takeaways

After creating a rough draft of my updated online course on content marketing, I sent it to Bailey for his review. He quipped, “Great framework. Is it copyrighted?”

You can adopt the DIRHAM Framework with just as much confidence. Why? Because William Gibson, a speculative fiction writer, was strangely prescient when he observed, “The future has arrived – it’s just not evenly distributed yet.”

The World’s Coolest Winter campaign demonstrated four principles that hold across contexts far beyond UAE tourism.

  • Visibility is engineered. In the AI era, reach is not accidental. It is designed, and the design has to account for the three gatekeepers that now stand between content and audience. Distribution can no longer be treated as the final step in a content process. It must be the architecture around which the content is built.
  • Visibility beats volume. Strategic placement outperforms mass production. A smaller amount of content built for the specific behavioral context of each discovery system and each regional audience will consistently outperform a larger volume of generic content scattered across channels without strategic intent.
  • Trust over polish. Authentic local voices outperform corporate narration, and the gap is widening as AI content floods every platform. Human credibility is the scarcest resource in the current information environment, which means influencer strategy should be evaluated on the depth of trust the creator has built, not the size of the audience they have accumulated.
  • Measurement changes behavior. Metrics that don’t alter the decisions made in the next cycle are not measuring anything useful. The only numbers worth tracking are the ones that tell you what to do differently.

The DIRHAM model is systemic, scalable, and built to adapt as platforms and algorithms evolve, because it is grounded in human discovery behavior rather than in the specific mechanics of any particular platform. Content competes on distribution first. That has always been true to some degree, but it has never been as consequential as it is now.


Google Won’t Act On Spam Reports If They Contain Personal Information

Google updated their spam reporting documentation to make it clearer that spam reports are not wholly confidential and that personally identifiable information included in a report may be shared with the sites receiving a manual action.

Change In Response To Feedback

Google’s changelog noted that the documentation was updated in response to feedback about personal information contained in spam reports being shared with the spammy sites that receive a manual action (formerly known as a penalty).

The update contains a new notice that spam reports containing personal information will not be processed.

The changelog noted:

“Clarifying when and why we may take manual action based on spam reports
What: Further clarified when and why we may take manual action based on spam reports.
Why: To address feedback we received about the change on using spam reports to take manual action.”

Google removed the following from their documentation:

“If we issue a manual action, we send whatever you write in the submission report verbatim to the site owner to help them understand the context of the manual action. We don’t include any other identifying information when we notify the site owner; as long as you avoid including personal information in the open text field, the report remains anonymous.”

The above wording was replaced with the following:

“Don’t include any personally identifying information in your submission. To comply with regulations, we must send the submission text to the site owner to help them understand the context of a manual action, if one is issued.

Because of this, we won’t process your submission if we determine it contains personally identifying information to protect privacy. Not including such information fully ensures your information is safe and prevents your submission from being discarded.”

Action Moving Forward

It’s good that Google won’t proceed with a manual action if a report contains personal information, but it also means those reports get discarded. The practical takeaway: if you’re submitting spam reports to Google, don’t include your site, your business name, your personal name, or anything else you don’t want the affected spammer to know, because the text of your report is shared with the site owner.

Read the updated documentation here:

Report spam, phishing, or malware

Learn more about Google’s spam reporting tool: Google Just Made It Easy For SEOs To Kick Out Spammy Sites


Will fusion power get cheap? Don’t count on it.

Fusion power could provide a steady, zero-emissions source of electricity in the future—if companies can get plants built and running. But a new study suggests that even if that future arrives, it might not come cheap.

Technologies tend to get less expensive over time. Lithium-ion batteries are now about 90% cheaper than they were in 2013. But historically, different technologies tend to go through this curve at different rates. And the cost of fusion might not sink as quickly as the prices of batteries or solar.

It’s tricky to make any predictions about the cost of a technology that doesn’t exist yet. But when there are billions of dollars of public and private funding on the line, it’s worth considering what assumptions we’re making about our future energy mix and its cost.

One crucial measure is a metric called experience rate—the percentage by which an energy technology’s cost declines every time capacity doubles. A higher figure means a quicker price drop and better economic gains with scaling.

Historically, the experience rate is 12% for onshore wind power, 20% for lithium-ion batteries, and 23% for solar modules. Other energy technologies haven’t gotten cheap quite as quickly—fission is at just 2%.

In the new study, published in Nature Energy, researchers aimed to improve predictions of fusion’s future price by estimating the technology’s experience rate. The team looked at three key characteristics that can correlate with experience rate: unit size, design complexity, and the need for customization. The larger and more complex a technology is, and/or the more it needs to be customized for different use cases, the lower the experience rate.

The researchers interviewed fusion experts, including public-sector researchers and those working at companies in the private sector. They had the experts evaluate fusion power plants on those characteristics and used that info to predict the experience rate. (One note here: The study focused only on magnetic confinement and laser inertial confinement, two of the leading fusion approaches, which together receive the vast majority of funding today. Other approaches could come with different cost benefits.)

Fusion plants will likely be relatively large, similar to other types of facilities (like coal and fission power plants) that rely on generating heat. They will probably need less customization than fission plants—largely because regulations and safety considerations should be simpler—but more than technologies like solar panels. And as for complexity, “there was almost unanimous agreement that fusion is incredibly complex,” says Lingxi Tang, a PhD candidate in the energy and technology policy group at ETH Zurich in Switzerland and one of the authors of the study. (Some experts said it was literally off the scale the researchers gave them.)

The final figure the researchers suggest for fusion’s experience rate is between 2% and 8%, meaning its cost may fall somewhat faster than fission’s has, but nowhere near as quickly as the costs of solar, wind, or batteries being deployed today.

That means that it would take a lot of deployment—and likely quite a long time—for the price of building a fusion reactor to drop significantly, so electricity produced by fusion plants could be expensive for a while. And it’s a much slower rate than the 8% to 20% that many modeling studies assume today.
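
To see why those few percentage points matter, here is a back-of-the-envelope sketch. The compounding arithmetic follows directly from the definition of the experience rate above, but the ten-doubling horizon is an illustrative assumption, not a figure from the study.

```python
# Back-of-the-envelope sketch of how experience rates compound.
# The rates come from the figures cited above; the ten-doubling horizon
# is an illustrative assumption, not a projection from the study.
def cost_after_doublings(initial_cost: float, experience_rate: float, doublings: int) -> float:
    """Cost per unit after capacity has doubled `doublings` times."""
    return initial_cost * (1 - experience_rate) ** doublings

scenarios = [
    ("fission-like 2%", 0.02),
    ("fusion upper bound 8%", 0.08),
    ("common modeling assumption 20%", 0.20),
]

for label, rate in scenarios:
    remaining = cost_after_doublings(1.0, rate, doublings=10)
    print(f"{label}: {remaining:.0%} of today's cost after 10 capacity doublings")
```

At an 8% experience rate, ten doublings of capacity still leave costs above 40% of where they start; at the 20% some models assume, the same deployment cuts costs by nearly 90%.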

“On the whole, I think questions should be raised about current investment levels in fusion,” Tang says. (The US allocated over $1 billion to fusion in the 2024 fiscal year, and private-sector funding totaled $2.2 billion between July 2024 and July 2025.) “If you’re talking about decarbonization of the energy system, is this really the best use of public money?”

But some experts say that looking to the past to understand the future of energy prices might be misleading. “It’s a good exercise, but we have to be humble about how much we don’t know,” says Egemen Kolemen, a professor at the Princeton Plasma Physics Laboratory.

In 2000, many analysts predicted that solar power would remain expensive—but then production exploded and prices came crashing down, largely because China went all in, he says. “People weren’t exactly wrong then,” he adds. “They were just extrapolating what they saw into the future.”

How fast prices drop depends on regulations, geopolitical dynamics, and labor cost, he says: “We haven’t built the thing yet, so we don’t know.”

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

The Download: introducing the Nature issue

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Introducing: the Nature issue

When we talk about “nature,” we usually mean something untouched by humans. But little of that world exists today. 

From microplastics in rainforest wildlife to artificial light in the Arctic Ocean, human influence now reaches every corner of Earth. In this context, what even is nature? And should we employ technology to try to make the world more “natural”?  

In our new Nature issue, MIT Technology Review grapples with these questions. We investigate birds that can’t sing, wolves that aren’t wolves, and grass that isn’t grass. We look for the meaning of life under Arctic ice, within ourselves, and in the far future on a distant world, courtesy of new fiction by the renowned author Jeff VanderMeer. 

Together, these stories examine how technology has altered our planet—and how it might be used to repair it. Subscribe now to read the full print issue.

What’s next for large language models?

After ChatGPT launched in late 2022, the OpenAI chatbot became an everyday everything app for hundreds of millions of people. It led to LLMs being heralded as the new future. The entire tech industry was consumed by the inferno, with companies racing to spin up rival products.

But what’s the next big thing after LLMs? More LLMs—but better. Let’s call them LLMs+. Find out how they’re set to become cheaper, more efficient, and more powerful.

—Will Douglas Heaven

LLMs+ is on our list of the 10 Things That Matter in AI Right Now, MIT Technology Review’s guide to what’s really worth your attention in the busy, buzzy world of AI. We’ll be unpacking one item from the list each day here in The Download, so stay tuned.

Will fusion power get cheap? Don’t count on it.

Fusion power could provide a steady, zero-emissions source of electricity in the future—if companies can get plants built and running. But a new study published in Nature Energy suggests that even if that future arrives, it might not come cheap.

The research team aimed to improve predictions of fusion’s future price by estimating the technology’s experience rate—the percentage by which its cost declines every time capacity doubles. Their findings offer new clues on the technology’s path to deployment. Read the full story.

—Casey Crownhart

This story is from The Spark, our weekly climate newsletter. Sign up to receive it in your inbox every Wednesday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Trump signaled he’s open to reversing the Anthropic ban
What that really means in practice remains to be seen. (Reuters $)
+ Anthropic says there’s no “kill switch” for its AI. (Axios)
+ “Humans in the loop” in AI warfare is an illusion. (MIT Technology Review)

2 SpaceX plans to manufacture its own GPUs
To support the company’s growing AI ambitions. (Reuters $)
+ Musk is shifting SpaceX’s focus from Mars to AI ahead of its IPO. (NYT $)
+ SpaceX and Tesla may be on a collision course. (FT $)

3 Chinese tech giant Tencent has unveiled its first flagship AI model
A former OpenAI researcher is at the helm. (SCMP)
+ Chinese open models are spreading fast. (MIT Technology Review)

4 High earners are racing ahead on AI, deepening workplace divides
The division in adoption risks widening inequality. (FT $)
+ Startups are bragging they spend more on AI than staff. (404 Media)

5 Thousands of Samsung workers are demanding a new share of AI profits
Chip-division employees want 15% of the operating profit. (Bloomberg $)
+ Here’s why opinion on AI is so divided. (MIT Technology Review)

6 AI is helping mediocre Korean hackers steal millions
They’re vibe coding their malware. (Wired $)
+ AI is making online crimes easier. (MIT Technology Review)

7 Kalshi suspended three political candidates for betting on their own races
Including a Democrat and a Republican running for Congress. (CNN)
+ And an independent candidate who said he did it to make a point. (Gizmodo)
+ Lawmakers argue that prediction markets are a loophole for gambling. (NPR)

8 A ping-pong robot is beating elite human players for the first time
The Sony AI system was trained with reinforcement learning. (New Scientist)
+ Just days earlier, a humanoid smashed the human half-marathon record. (AP)

9 Crypto scammers are luring ships into the Strait of Hormuz
By falsely promising safe passage. (Ars Technica)

10 ‘Age tech’ could help us grow old comfortably at home
Apps, wearables, and remote monitoring could fill caregiving gaps. (NYT $)

 

Quote of the day

“It’s a hallucinogenic business plan.”

—Ross Gerber, the chief executive of Gerber Kawasaki, an investment firm that owns SpaceX shares, tells the New York Times that he’s unimpressed by Musk’s changing goals for the aerospace company. 

One More Thing

Photos of victims are displayed under white crosses at a memorial for the August 2023 wildfire victims. (AP Photo/Lindsey Wasson)

This grim but revolutionary DNA technology is changing how we respond to mass disasters

After hundreds went missing in Maui’s deadly fires, victims were identified with rapid DNA analysis—an increasingly vital tool for putting names to the dead in mass-casualty events.

The technology helped identify victims within just a few hours and bring families some closure more quickly than ever before. But it also previews a dark future marked by the rising frequency of catastrophic events.

Find out how this forensic breakthrough is preparing us for a more volatile world.


—Erika Hayasaki

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.)

+ This fascinating dive into botanical history reveals the origins of the first true plants.
+ Here’s how to use Google’s reference desk to find what ordinary search engines miss.
+ Watch duct tape get deconstructed to reveal the physics behind its legendary stickiness.
+ When Radiohead covers Joy Division, the result is a beautiful intersection of two legendary musical eras.