California is set to become the first US state to manage power outages with AI

California’s statewide power grid operator is poised to become the first in North America to deploy artificial intelligence to manage outages, MIT Technology Review has learned. 

“We wanted to modernize our grid operations. This fits in perfectly with that,” says Gopakumar Gopinathan, a senior advisor on power system technologies at the California Independent System Operator—known as the CAISO and pronounced KAI-so. “AI is already transforming different industries. But we haven’t seen many examples of it being used in our industry.” 

At the DTECH Midwest utility industry summit in Minneapolis on July 15, CAISO is set to announce a deal to run a pilot program using new AI software called Genie, from the energy-services giant OATI. The software uses generative AI to carry out real-time analyses for grid operators and comes with the potential to autonomously make decisions about key functions on the grid, a switch that might resemble going from uniformed traffic officers to sensor-equipped stoplights. 

But while CAISO may deliver electrons to cutting-edge Silicon Valley companies and laboratories, the actual task of managing the state’s electrical system is surprisingly analog. 

Today, CAISO engineers scan outage reports for keywords about maintenance that’s planned or in the works, read through the notes, and then load each item into the grid software system to run calculations on how a downed line or transformer might affect power supply.

“Even if it takes you less than a minute to scan one on average, when you amplify that over 200 or 300 outages, it adds up,” says Abhimanyu Thakur, OATI’s vice president of platforms, visualization, and analytics. “Then different departments are doing it for their own respective keywords. Now we consolidate all of that into a single dictionary of keywords and AI can do this scan and generate a report proactively.” 
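
To illustrate the kind of consolidation Thakur describes, here is a minimal Python sketch of a single-pass keyword scan over outage reports. The departments, keywords, and report text are hypothetical stand-ins, not CAISO's or OATI's actual data.

# Minimal sketch: consolidating per-department keyword scans into a single
# pass over outage reports. Departments, keywords, and reports are
# hypothetical illustrations, not CAISO's or OATI's actual data.
KEYWORD_DICTIONARY = {
    "transmission": ["downed line", "conductor", "tower"],
    "substation": ["transformer", "breaker", "bus outage"],
    "generation": ["unit trip", "derate", "forced outage"],
}

def scan_outage_reports(reports):
    """Return, per department, the reports whose notes match its keywords."""
    hits = {dept: [] for dept in KEYWORD_DICTIONARY}
    for report in reports:
        text = report.lower()
        for dept, keywords in KEYWORD_DICTIONARY.items():
            if any(kw in text for kw in keywords):
                hits[dept].append(report)
    return hits

reports = [
    "Planned maintenance: transformer T-12 out of service 0600-1800",
    "Downed line reported on feeder 4 after storm",
]
for dept, matched in scan_outage_reports(reports).items():
    print(dept, "->", len(matched), "report(s)")

In this setup, each report is read once against every department's keywords rather than once per department, which is the time savings Thakur describes.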

If CAISO finds that Genie produces reliable, more efficient data analyses for managing outages, Gopinathan says, the operator may consider automating more functions on the grid. “After a few rounds of testing, I think we’ll have an idea about what is the right time to call it successful or not,” he says. 

Regardless of the outcome, the experiment marks a significant shift. Most grid operators are using the same systems that utilities have used “for decades,” says Richard Doying, who spent more than 20 years as a top executive at the Midcontinent Independent System Operator, the grid operator for an area encompassing 15 states from the upper Midwest down to Louisiana. 

“These organizations are carved up for people working on very specific, specialized tasks and using their own proprietary tools that they’ve developed over time,” says Doying, now a vice president at the consultancy Grid Strategies. “To the extent that some of these new AI tools are able to draw from data across different areas of an organization and conduct more sophisticated analysis, that’s only helpful for grid operators.”

Last year, a Department of Energy report found that AI had potential to speed up studies on grid capacity and transmission, improve weather forecasting to help predict how much energy wind and solar plants would produce at a given time, and optimize planning for electric-vehicle charging networks. Another report by the energy department’s Loan Programs Office concluded that adding more “advanced” technology such as sensors to various pieces of equipment will generate data that can enable AI to do much more over time. 

In April, the PJM Interconnection—the nation’s largest grid system, spanning 13 states along the densely populated mid-Atlantic and Eastern Seaboard—took a big step toward embracing AI by inking a deal with Google to use its Tapestry software to improve regional planning and speed up grid connections for new power generators. 

ERCOT, the Texas grid system, is considering adopting technology similar to what CAISO is now set to use, according to a source with knowledge of the plans who requested anonymity because they were not authorized to speak publicly. ERCOT did not respond to a request for comment. 

Australia offers an example of what the future may look like. In New South Wales, where grid sensors and smart technology are more widely deployed, AI software rolled out in February is now predicting the production and flow of electricity from rooftop solar units across the state and automatically adjusting how much power from those panels can enter the grid. 

Until now, much of the discussion around AI and energy has focused on the electricity demands of AI data centers (check out MIT Technology Review’s Power Hungry series for more on this).

“We’ve been talking a lot about what the grid can do for AI and not nearly as much about what AI can do for the grid,” says Charles Hua, a coauthor of one of last year’s Energy Department reports who now serves as executive director of PowerLines, a nonprofit that advocates for improving the affordability and reliability of US grids. “In general, there’s a huge opportunity for grid operators, regulators, and other stakeholders in the utility regulatory system to use AI effectively and harness it for a more resilient, modernized, and strengthened grid.” 

For now, Gopinathan says, he’s remaining cautiously optimistic. 

“I don’t want to overhype it,” he says. 

Still, he adds, “it’s a first step for bigger automation.”

“Right now, this is more limited to our outage management system. Genie isn’t talking to our other parts yet,” he says. “But I see a world where AI agents are able to do a lot more.”

The Download: California’s AI power plans, and why it’s so hard to make welfare AI fair

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

California is set to become the first US state to manage power outages with AI

California’s statewide power grid operator is poised to become the first in North America to deploy artificial intelligence to manage outages, MIT Technology Review has learned. 

At an industry summit in Minneapolis tomorrow, the California Independent System Operator is set to announce a deal to run a pilot program using new AI software called Genie, from the energy-services giant OATI. 

The software uses generative AI to carry out real-time analyses for grid operators and comes with the potential to autonomously make decisions about key functions on the grid, a switch that might resemble going from uniformed traffic officers to sensor-equipped stoplights. Read the full story.

—Alexander C. Kaufman

Why it’s so hard to make welfare AI fair

There are plenty of stories about AI causing harm when deployed in sensitive situations, and in many of those cases the systems were developed without much concern for what it means to be fair or how to implement fairness.

The city of Amsterdam, by contrast, spent a lot of time and money trying to create ethical AI; in fact, it followed every recommendation in the responsible AI playbook. Yet when the city deployed its system in the real world, it still couldn’t remove biases. So why did Amsterdam fail? And more importantly: can this ever be done right?

Join our editor Amanda Silverman, investigative reporter Eileen Guo and Gabriel Geiger, an investigative reporter from Lighthouse Reports, for a subscriber-only Roundtables conversation at 1pm ET on Wednesday July 30 to explore if algorithms can ever be fair. Register here!

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Trump’s ‘big, beautiful bill’ is already hurting sick children
And hundreds of hospitals are likely to close, too. (New Yorker $)
+ His administration is going after easy targets, which includes sick children. (Salon $)

2 The US overseas worker purge is hitting Amazon hard
Its warehouse employees are losing their right to work in the US. (NYT $)
+ The US State Department has fired more than 1,350 workers so far. (Reuters)

3 Nvidia’s CEO claims China’s military probably won’t use its AI chips
But then he would say that, wouldn’t he. (Bloomberg $)
+ Even after the Trump administration has eased chip software tool export restrictions. (FT $)
+ Rival Huawei is planning a major AI chip overhaul. (The Information $)

4 Scientists are reportedly hiding LLM instructions in their papers 
Instructing models to give their work positive peer reviews. (The Guardian)

5 Amazon is dragging its heels launching its web version of Alexa
It appears the company underestimated just how much work it had to do. (WP $)

6 SpaceX’s revenue is on the up 
As Tesla continues to struggle. (WSJ $)
+ Musk is not in favor of merging Tesla with xAI. (Reuters)
+ Trump is still planning to slash NASA’s budget. (The Atlantic $)
+ Rivals are rising to challenge the dominance of SpaceX. (MIT Technology Review)

7 The Air India crash was caused by a cut in the plane’s fuel supply
Cockpit voice recordings reveal that one pilot asked another why he’d cut off the supply. (CNN)

8 The UK’s attempt to ape DOGE isn’t going well
Councils are already blocking Reform UK’s attempts to access sensitive data. (FT $)
+ DOGE’s tech takeover threatens the safety and stability of our critical data. (MIT Technology Review)

9 Even crypto executives can fall for crypto scams
Just ask the top brass from MoonPay, which lost $250,000 worth of Ethereum. (The Verge)
+ The people using humour to troll their spam texts. (MIT Technology Review)

10 Why landline phones refuse to die 📞
The business world still loves them. (WSJ $)

Quote of the day

“We don’t like to work like that. I’m a Buddhist, so I believe in karma. I don’t want to steal anyone’s money.”

—A man forced to work in an online scam center in Myanmar recounts his experience to Nikkei.

One more thing

China wants to restore the sea with high-tech marine ranches

A short ferry ride from the port city of Yantai, on the northeast coast of China, sits Genghai No. 1, a 12,000-metric-ton ring of oil-rig-style steel platforms, advertised as a hotel and entertainment complex.

Genghai is in fact an unusual tourist destination, one that breeds 200,000 “high-quality marine fish” each year. The vast majority are released into the ocean as part of a process known as marine ranching.

The Chinese government sees this work as an urgent and necessary response to the bleak reality that fisheries are collapsing both in China and worldwide. But just how much of a difference can it make? Read the full story.

—Matthew Ponsford

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ You can easily make ice cream at home with just two ingredients
+ Pink Floyd fans, this lecture is for you. 
+ Lose yourself for a few minutes in the story behind an ancient Indian painting. (NYT $)
+ Remember the days of idly surfing the web? Here’s how you can still recreate them.

Keywords Don’t Work for AI SEO

The search engine industry is changing quickly, upended by AI platforms that have altered queries and informational journeys.

Consumers increasingly type extended prompts into AI platforms rather than the single keywords or short phrases of traditional search engines. Prompts are much longer, often voice-activated, and frequently include follow-up questions.

Moreover, prompts are typically much more descriptive and thus harder to predict.

Nonetheless, knowing shoppers’ prompts is essential for understanding their needs. Unfortunately, third-party “question” tools are typically keyword-driven, unsuited for prompts.

Prompt analytics tools

New prompt-analysis tools are coming.

For example, Similarweb’s new “AI Traffic Intelligence” reveals the prompts driving traffic to any web page. The tool is an add-on to the company’s “Web Intelligence” platform, which starts at $1,500 per year. Similarweb has not published pricing for the AI addition. There is no free trial.

Aiso is another tool to see actual prompts based on topic, brand, or keywords. Users describe their business, products, and services. Aiso then shows actual, related prompts from ChatGPT and other AI platforms. Pricing starts at $75 per month, with a free trial.

New tools such as Similarweb’s “AI Traffic Intelligence” help understand shopper prompts.

Prompt optimization

Knowing the prompts of shoppers is useful only for understanding their needs and journeys. I know of no way to optimize a site for prompts because AI platforms don’t match content to queries, unlike search engines. Both rely on intent, but AI platforms must interpret entire sentences, paragraphs, and follow-ups.

For example, most brick-and-mortar businesses don’t use “local” in their description.

A prompt from a hair-salon owner might resemble this:

I own a hair salon and need more people in my area to be aware of my business. Which kind of marketing agency or expert do I need (with my small budget)?

An AI platform would understand that local search is a good option. Here’s the response from Google’s AI Mode to the above prompt:

Here’s a breakdown of the types of marketing expertise to consider for your hair salon, particularly with a limited budget…:

SEO experts or consultants

What they do: SEO specialists help your website rank higher in search engine results for relevant keywords, making it easier for potential clients in your area to find your salon when they search online for services like “hair salon in [Your City]”.

Why it’s good for you: A strong online presence is crucial for attracting new clients. A well-optimized website, along with a claimed and optimized Google Business Profile, will significantly boost your salon’s visibility in local searches.

Things to consider: Look for those specializing in local SEO or SEO for the beauty industry. [Here, AI Mode recommended actual local SEO agencies specializing in the beauty industry.] Ask for case studies or testimonials showcasing their success in ranking local businesses.

The biggest ecommerce change from AI platforms is the impact on shopping journeys. Consumers’ prompts go beyond keywords, such as “hair salons in my town,” to include, say, prices, driving distance, specializations, and more.

Hence, optimizing for AI focuses on problem-solving and positioning a business as a solution provider.

Aligning landing pages to keywords remains viable for traditional search engines, where keywords are still predictable, but AI optimization means answering all kinds of relevant (unpredictable) questions.

Thus merchants looking for AI visibility should create problem-solving content. Researching keywords and prompts can help understand those problems, but attempting to match content to specific prompts is fruitless.

Meta Follows YouTube In Crackdown On Unoriginal Content via @sejournal, @MattGSouthern

Meta announced that it will implement stronger measures against accounts sharing “unoriginal” content on Facebook.

This marks the second major platform policy update in days following YouTube’s similar announcement about mass-produced and repetitive content.

Meta revealed it has removed approximately 10 million profiles impersonating large content creators, and taken action against 500,000 accounts involved in “spammy behavior or fake engagement”.

A Platform-Wide Movement Against Content Farms

Meta’s announcement closely follows YouTube’s monetization update, which clarified its stance on “inauthentic” content.

Both platforms are addressing the growing problem of accounts profiting from reposting others’ work without permission or meaningful additions.

According to Meta, accounts that repeatedly reuse someone else’s videos, photos, or text posts will lose access to Facebook’s monetization programs and face reduced visibility across all content.

Facebook is also testing a system that adds links on duplicate videos to direct viewers to the original creator.

Here’s an example of what that will look like on a reposted video:

Screenshot from: creators.facebook.com/blog/combating-unoriginal-content, July 2025.

Meta stated in its official blog post:

“We believe that creators should be celebrated for their unique voices and perspectives, not drowned out by copycats and impersonators.”

What Counts As Unoriginal Content?

Both Meta and YouTube distinguish between unoriginal content and transformative content, like reaction videos or commentary.

Meta emphasizes that content becomes problematic when creators repost others’ material without permission or meaningful enhancements, such as editing or voiceover.

YouTube creator liaison Renee Richie offered a similar clarification ahead of its own update, stating:

“This is a minor update to YouTube’s long-standing YPP policies to help better identify when content is mass-produced or repetitive”.

How AI & Automation Factor In

Neither platform bans AI-generated content outright. However, their recent updates appear designed to address a wave of low-quality, automated material that offers little value to viewers.

YouTube affirms that creators may use AI tools as long as the final product includes original commentary or educational value, with proper disclosure for synthetic content.

Meta’s guidelines similarly caution against simply “stitching together clips” or relying on recycled content, and encourage “authentic storytelling.”

These concerns implicitly target AI-assisted compilations that lack originality.

Potential Impact

For content creators, the updates from Meta and YouTube reinforce the importance of originality and creative input.

Those who produce reaction videos, commentary, or curated media with meaningful additions are unlikely to be affected. They may even benefit as spammy accounts lose visibility.

On the other hand, accounts that rely on reposting others’ content with minimal editing or variation could see reduced reach and loss of monetization.

To support creators, Meta introduced new post-level insights in its Professional Dashboard and a tool to check if a page is at risk of distribution or monetization penalties. YouTube is similarly offering guidance through its Creator Liaison and support channels.

Best Practices For Staying Compliant

To maintain monetization eligibility, Meta recommends:

  • Posting primarily original content filmed or created by the user.
  • Making meaningful enhancements such as editing, narration, or commentary when using third-party content.
  • Prioritizing storytelling over short, low-effort posts.
  • Avoiding recycled content with watermarks or low production value.
  • Writing high-quality captions with minimal hashtags and capitalization.

Looking Ahead

Meta and YouTube’s updates indicate a wider industry move against unoriginal content, especially AI-generated “slop” and content farms.

While the enforcement rollout may not affect every creator equally, these moves indicate a shift in priorities. Originality and value-added content are becoming the new standard.

The era of effortless monetization through reposting is being phased out. Moving forward, success on platforms like Facebook and YouTube will depend on creative input, storytelling, and a commitment to original expression.


Featured Image: Novikov Aleksey/Shutterstock

Google’s New Graph Foundation Model Catches Spam Up To 40x Better via @sejournal, @martinibuster

Google published details of a new kind of graph-based AI called a Graph Foundation Model (GFM). It generalizes to previously unseen graphs, delivers a three to forty times boost in precision over previous methods, and has been successfully tested at scale in applications such as spam detection in ads.

The announcement frames the new technology as expanding the boundaries of what has been possible until now:

“Today, we explore the possibility of designing a single model that can excel on interconnected relational tables and at the same time generalize to any arbitrary set of tables, features, and tasks without additional training. We are excited to share our recent progress on developing such graph foundation models (GFM) that push the frontiers of graph learning and tabular ML well beyond standard baselines.”

Google's Graph Foundation Model shows 3-40 times performance improvement in precision

Graph Neural Networks Vs. Graph Foundation Models

Graphs are representations of data items that relate to each other. The objects are called nodes, and the connections between them are called edges. In SEO, the most familiar graph is arguably the link graph, a map of the entire web formed by the links that connect one web page to another.

Current technology uses Graph Neural Networks (GNNs) to represent data like web page content and can be used to identify the topic of a web page.

A Google Research blog post about GNNs explains their importance:

“Graph neural networks, or GNNs for short, have emerged as a powerful technique to leverage both the graph’s connectivity (as in the older algorithms DeepWalk and Node2Vec) and the input features on the various nodes and edges. GNNs can make predictions for graphs as a whole (Does this molecule react in a certain way?), for individual nodes (What’s the topic of this document, given its citations?)…

Apart from making predictions about graphs, GNNs are a powerful tool used to bridge the chasm to more typical neural network use cases. They encode a graph’s discrete, relational information in a continuous way so that it can be included naturally in another deep learning system.”

The downside to GNNs is that they are tethered to the graph on which they were trained and can’t be used on a different kind of graph. To use it on a different graph, Google has to train another model specifically for that other graph.

To make an analogy: an LLM can generalize to other languages without retraining, but a graph model cannot generalize to other graphs. It is as if you had to train a new generative AI model on French-language documents just to get it to work in French. This is the problem the invention solves: a model that generalizes to other graphs without having to be trained on them first.

The breakthrough that Google announced is that with the new Graph Foundation Models, Google can now train a model that can generalize across new graphs that it hasn’t been trained on and understand patterns and connections within those graphs. And it can do it three to forty times more precisely.

Announcement But No Research Paper

Google’s announcement does not link to a research paper. It has been variously reported that Google has decided to publish fewer research papers, and this appears to be a notable example of that policy change. Is it because this innovation is so big that Google wants to keep it as a competitive advantage?

How Graph Foundation Models Work

In a conventional graph, let’s say a graph of the Internet, web pages are the nodes. The links between the nodes (web pages) are called the edges. In that kind of graph, you can see similarities between pages because the pages about a specific topic tend to link to other pages about the same specific topic.

In very simple terms, a Graph Foundation Model turns every row in every table into a node and connects related nodes based on the relationships in the tables. The result is a single large graph that the model uses to learn from existing data and make predictions (like identifying spam) on new data.

Screenshot Of Five Tables

Image by Google

Transforming Tables Into A Single Graph

Google’s announcement says this about the following images, which illustrate the process:

“Data preparation consists of transforming tables into a single graph, where each row of a table becomes a node of the respective node type, and foreign key columns become edges between the nodes. Connections between five tables shown become edges in the resulting graph.”

Screenshot Of Tables Converted To Edges

Image by Google

What makes this new model exceptional is that the process of creating it is “straightforward” and it scales. The part about scaling is important because it means that the invention is able to work across Google’s massive infrastructure.

“We argue that leveraging the connectivity structure between tables is key for effective ML algorithms and better downstream performance, even when tabular feature data (e.g., price, size, category) is sparse or noisy. To this end, the only data preparation step consists of transforming a collection of tables into a single heterogeneous graph.

The process is rather straightforward and can be executed at scale: each table becomes a unique node type and each row in a table becomes a node. For each row in a table, its foreign key relations become typed edges to respective nodes from other tables while the rest of the columns are treated as node features (typically, with numerical or categorical values). Optionally, we can also keep temporal information as node or edge features.”
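
As a rough illustration of that data-preparation step, here is a minimal Python sketch using the networkx library. The tables, columns, and foreign keys are hypothetical, and Google's pipeline obviously runs at far larger scale.

# Sketch of the data preparation described above: each table becomes a node
# type, each row becomes a node, and foreign-key columns become typed edges.
# Tables, columns, and keys here are hypothetical illustrations.
import networkx as nx

tables = {
    "advertisers": [{"id": "a1", "name": "Acme"}],
    "ads": [{"id": "ad1", "advertiser_id": "a1", "price": 2.5}],
}
foreign_keys = {("ads", "advertiser_id"): "advertisers"}

G = nx.MultiDiGraph()
for table, rows in tables.items():
    for row in rows:
        # Non-key columns become node features; foreign keys become edges.
        features = {k: v for k, v in row.items()
                    if k != "id" and (table, k) not in foreign_keys}
        G.add_node((table, row["id"]), node_type=table, **features)

for (table, fk_col), target in foreign_keys.items():
    for row in tables[table]:
        G.add_edge((table, row["id"]), (target, row[fk_col]),
                   edge_type=f"{table}.{fk_col}")

print(G.number_of_nodes(), "nodes,", G.number_of_edges(), "edge(s)")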

Tests Are Successful

Google’s announcement says that they tested it in identifying spam in Google Ads, which was difficult because it’s a system that uses dozens of large graphs. Current systems are unable to make connections between unrelated graphs and miss important context.

Google’s new Graph Foundation Model was able to make the connections between all the graphs and improved performance.

The announcement described the achievement:

“We observe a significant performance boost compared to the best tuned single-table baselines. Depending on the downstream task, GFM brings 3x – 40x gains in average precision, which indicates that the graph structure in relational tables provides a crucial signal to be leveraged by ML models.”

Is Google Using This System?

It’s notable that Google successfully tested the system on Google Ads spam detection and reported upsides with no downsides. This means it can be used in a live environment for a variety of real-world tasks. And because it’s a flexible model, it can be applied to other tasks that involve multiple graphs, from identifying content topics to identifying link spam.

Normally, when something falls short, research papers and announcements say that it points the way for future work. That’s not how this invention is presented. It’s presented as a success, and the announcement ends with a statement that the results can be further improved, meaning it could get even better than these already spectacular results.

“These results can be further improved by additional scaling and diverse training data collection together with a deeper theoretical understanding of generalization.”

Read Google’s announcement:

Graph foundation models for relational data

Featured Image by Shutterstock/SidorArt

Nearly 8 In 10 Americans Use ChatGPT For Search, Adobe Finds via @sejournal, @MattGSouthern

A new report from Adobe states that 77% of Americans who use ChatGPT treat it as a search engine.

Among those surveyed, nearly one in four prefer ChatGPT over Google for discovery, indicating a potential shift in user behavior.

Adobe surveyed 800 consumers and 200 marketers or small business owners in the U.S. All participants self-reported using ChatGPT as a search engine.

ChatGPT Usage Spans All Age Groups

According to the findings, usage is strong across demographics:

  • Gen X: 80%
  • Gen Z: 77%
  • Millennials: 75%
  • Baby Boomers: 74%

Notably, 28% of Gen Z respondents say they start their search journey with ChatGPT. This suggests younger users may be leading the shift in default discovery behavior.

Trust In AI Search Is Rising

Adobe’s report indicates growing trust in conversational AI. Three in ten respondents say they trust ChatGPT more than traditional search engines.

That trust appears to influence behavior, with 36% reporting they’ve discovered a new product or brand through ChatGPT. Among Gen Z, that figure rises to 47%.

The top use cases cited include:

  • Everyday questions (55%)
  • Creative tasks and brainstorming (53%)
  • Financial advice (21%)
  • Online shopping (13%)

Why Users Choose AI Over Traditional Search

The most common reason people use ChatGPT for search is its ability to quickly summarize complex topics (54%). Additionally, 33% said it offers faster answers with fewer clicks than Google.

Respondents also report that AI results feel more personalized. A majority (81%) prefer ChatGPT for open-ended, creative questions, while 77% find its responses more tailored than traditional search results.

Marketers Shift Focus To AI Visibility

Adobe’s survey suggests businesses are already responding to the shift. Nearly half of marketers and business owners (47%) say they use ChatGPT for marketing, primarily to create product descriptions, social media copy, and blog content.

Looking ahead, two-thirds plan to increase their investment in “AI visibility,” with 76% saying it’s essential for their brand to appear in ChatGPT results in 2025.

What Works In AI-Driven Discovery

To improve visibility in conversational AI results, marketers report the best-performing content types are:

  • Data-driven articles (57%)
  • How-to guides (51%)

These formats may align well with AI’s tendency to surface factual, instructive, and referenceable information.

Why This Matters

Adobe’s findings highlight the need for marketers to adapt strategies as users turn to AI tools for product discovery.

Instead of replacing SEO, AI visibility can complement it. Brands tailoring content for conversational search may gain an edge in reaching audiences through personalized pathways.


Featured Image: Roman Samborskyi/Shutterstock

The AI Desktop/Mobile Divide: 90% Of AI Search Traffic Ignores Mobile Strategy

The AI search revolution has arrived with fanfare, transforming how users discover information across platforms like ChatGPT, Perplexity, and Google’s AI Overviews.

Yet, beneath the headlines lies a counterintuitive reality that’s reshaping how we approach the age-old debate and strategies on desktop vs. mobile: Over 90% of AI-powered search referrals originate from desktop devices.

While mobile accounts for more than half of global web traffic, AI search engines are making their biggest impact on desktop – a complete reversal of typical user behavior patterns that creates both challenges and more mobile opportunities for marketers.

Some of the findings I share below contradict conventional wisdom.

Recent analysis of referral traffic across leading AI search platforms in the U.S. and Europe shows a striking disconnect between where users consume content and where AI engines drive meaningful traffic.

This gap represents one of the most significant untapped opportunities in the current search landscape.

AI Desktop Vs. Mobile Referral Numbers Tell A Surprising Story

The data from BrightEdge Generative Parser (my employer for disclosure) paints a clear picture of desktop dominance across virtually every AI-powered search platform.

  • ChatGPT leads the desktop concentration, with 94% of referral traffic coming from desktop devices, leaving just 6% for mobile users. This massive skew occurs despite ChatGPT’s widespread mobile app adoption.
  • Perplexity pushes desktop dominance even further, with 96.5% of referrals originating from desktop and mobile barely registering at 3.4%. For a platform positioning itself as a research-focused AI engine, this pattern suggests that users prefer desktop environments for gathering in-depth information.
  • Microsoft’s Bing maintains similar patterns, with 95% of desktop referrals vs. 4% mobile, despite integration across Microsoft’s ecosystem and the introduction of Copilot features.
  • Google Gemini follows suit, with 91% of traffic coming from desktop and 5% from mobile, indicating that even Google’s AI offerings struggle to capture mobile referral momentum.

The lone exception? Google Search itself maintains the mobile majority at 53% mobile vs. 44% desktop, but this reflects its entrenched position as the default search engine across mobile browsers, particularly Safari on iPhones.

Source: BrightEdge The Open Frontier of Mobile AI Search, June 2025

Why Mobile AI Search Isn’t Converting To Referrals

The disparity isn’t about user engagement; AI search activity on mobile is likely booming.

Instead, it’s about architectural design choices that fundamentally alter user flows and referral patterns.

The In-App Preview Problem

Mobile AI platforms often intercept the first click on citations, showing content previews within their own interfaces.

This creates a multi-step process where users must click again to reach external websites, significantly reducing referral traffic compared to desktop experiences, where first clicks typically lead directly to source sites.

ChatGPT exemplifies this pattern. On desktop, citation clicks immediately redirect users to source websites. On mobile, the app frequently displays in-app content previews, requiring users to take additional action to generate actual referrals.

The Discovery Vs. Research Divide

Desktop and mobile AI searches serve fundamentally different user intents.

Mobile users often engage in discovery-oriented searches, seeking quick answers, product comparisons, and immediate problem-solving.

Desktop users tend to gravitate toward comprehensive research, detailed analysis, and tasks that require sustained attention.

This behavioral split suggests AI platforms are evolving into distinct experiences rather than responsive versions of the same product.

Google’s AI Overviews demonstrate this evolution clearly: Ecommerce queries are three times more likely to trigger mobile AI Overviews (13.5% vs. 4.5% on desktop), treating shopping searches as educational discovery rather than direct product promotion.

Meanwhile, desktop AI Overviews command 80% more screen real estate (1110 px vs. 617 px) and appear for 39% more keywords than mobile devices, but show more consistent day-to-day patterns.

This suggests Google is actively experimenting with mobile AI formats while maintaining predictable desktop experiences.

The Apple Factor: Mobile’s Hidden Gatekeeper

Apple’s role as mobile web gatekeeper cannot be understated.

With Safari as the default browser on nearly a billion devices, Apple controls mobile search behavior in ways that could reshape the entire landscape overnight.

Current data shows that 58% of Google’s mobile search traffic to brand websites originates from iPhones, making Apple’s browser defaults critically important for AI search adoption.

Unlike Google, which has integrated AI features across its mobile search experience, Apple has not yet embedded AI-powered search into its mobile web stack.

This creates a massive structural opportunity. A single change in Safari’s default search provider or the introduction of native AI search features could trigger a significant redistribution of AI-powered traffic across the mobile ecosystem.

Three Strategic Imperatives For Marketers

1. Develop Device-Specific AI Content Strategies

Traditional SEO focuses on keywords and rankings, but AI search requires understanding device context and user intent patterns.

Mobile AI users prioritize quick discovery and shopping-oriented queries, while desktop users seek comprehensive information and detailed analysis.

Desktop-Comprehensive Approach

  • Develop in-depth, research-oriented content that supports detailed analysis.
  • Create comprehensive guides and comparative resources that leverage the 80% larger screen space available in desktop AI Overviews.
  • Build authority through detailed explanations and expert insights.
  • Design content clusters that support extended research sessions and take advantage of desktop’s 39% higher keyword coverage.

Mobile-First AI Optimization

  • Create concise, discovery-focused content that answers immediate questions.
  • Optimize for product comparison and shopping-related queries.
  • Design content that works well in in-app preview formats.
  • Focus on local and immediate-need content themes.

2. Prepare For Mobile AI Search When The Market Heats Up

The current desktop dominance in AI referrals represents a temporary market condition rather than a permanent state.

As mobile AI platforms mature and address current referral limitations, early movers will capture significant advantages.

Build Mobile AI Foundations Today

Responsive design excellence becomes critical when AI engines start citing mobile content more frequently.

Ensure your site adapts seamlessly across various screen sizes, orientations, and device modes to maximize citation potential regardless of how AI platforms display your content.

Optimize for speed and accessibility with fast page load times and mobile-friendly content that includes appropriately sized text, images, and interactive elements.

We are seeing AI engines increasingly factor user experience signals into their citation decisions. Schema markup is recommended so AI engines can interpret the structured data on your mobile pages and present users with content that they need and want.
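
As a concrete illustration of that recommendation, the hedged Python sketch below emits schema.org Product markup as JSON-LD for embedding in a page head. The product fields are hypothetical; map them from your own catalog.

# Sketch: emitting schema.org Product markup as JSON-LD for a page.
# The product data is a hypothetical illustration.
import json

product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Trail Running Shoe X1",
    "description": "Lightweight trail-running shoe with a rock plate.",
    "offers": {
        "@type": "Offer",
        "priceCurrency": "USD",
        "price": "129.00",
        "availability": "https://schema.org/InStock",
    },
}

# Embed the snippet in the page head so crawlers and AI engines can parse it.
snippet = '<script type="application/ld+json">' + json.dumps(product) + '</script>'
print(snippet)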

Improve Core Web Vitals as these metrics become crucial for mobile AI performance. Core Web Vitals measure webpage quality beyond loading speed, correlating directly with user experience.

For mobile AI optimization, every millisecond matters – small improvements can have a significant impact on citation likelihood.

Track Desktop Vs. Mobile AI Performance

Monitor AI Overview differences using keyword reporting tools that switch between desktop and mobile AI Overviews.

This enables you to observe performance gaps and identify platform-specific opportunities.

The data reveals striking differences:

  • Desktop AI Overviews claim 80% more screen real estate (1110 px vs. 617 px), allowing for more detailed explanations and citation opportunities.
  • Desktop shows 39% more keyword coverage than mobile devices, but this gap represents a future mobile opportunity.
  • Ecommerce queries are three times more likely to trigger mobile AI Overviews, as platforms treat shopping searches as educational discovery on mobile.
Source: BrightEdge, May 2025

Have Different Content Strategies For Both Desktop And Mobile

Create mobile-first educational content and product guides rather than traditional product pages.

Mobile AI engines favor discovery-oriented content that helps users understand products and make informed decisions.

Ensure dual-platform accessibility by configuring your site’s crawling capabilities for both mobile and desktop views. Your content must be prepared for AI citation regardless of screen size or platform interface.

Watch Apple and Google industry moves: With Apple’s potential entry into AI search, content strategies should account for possible Safari integration changes that could dramatically shift mobile search behavior overnight.

3. Leveraging The Current Desktop Opportunity

While mobile AI search matures, desktop presents immediate opportunities for brands ready to optimize for AI-powered referrals.

  • Desktop AI citation optimization: Focus on creating quotable, authoritative content that AI engines can easily cite and reference. This includes structured data markup, clear section headers, and direct answers to common questions.
  • Comprehensive content development: Desktop AI users engage with longer-form, detailed content. Invest in comprehensive guides, thorough analysis, and expert commentary that support extended research sessions.
  • Multi-modal content integration: Desktop environments support richer media experiences. Combine text, video, infographics, and interactive elements to increase citation potential across different AI platforms.

More Mobile AI Disruption Is Coming

The current 90% desktop dominance in AI referrals represents a temporary market imbalance rather than a permanent shift away from mobile. Several factors suggest significant mobile AI search growth ahead.

Platform incentives align toward mobile expansion. AI search companies understand that capturing mobile market share is essential for long-term growth, and current referral limitations likely drive the active development of mobile-optimized solutions.

User behavior patterns favor mobile AI adoption. Once technical barriers to mobile AI referrals are addressed, user preferences for mobile-first interactions should drive rapid adoption.

Apple’s AI integration timeline creates a sense of urgency. With Apple controlling mobile browser defaults and reportedly developing AI search capabilities, the mobile AI landscape could transform rapidly.

Key Takeaways

The AI search revolution is creating two distinct experiences: desktop-focused referral traffic and mobile-focused engagement that don’t yet translate to website visits. This divide presents both immediate opportunities and strategic imperatives for marketers:

Immediate opportunities exist in desktop AI optimization. With 90% of AI referrals coming from desktops, brands can capture significant traffic by optimizing for desktop AI search patterns and citation preferences.

Mobile AI strategy requires different thinking. Mobile AI optimization isn’t about responsive design. It’s about understanding discovery-focused user intent and preparing for different referral mechanisms as more AI search engines hit the market.

Apple remains the wild card. Any changes to Safari’s default search behavior or introduction of native AI features could reshape mobile search overnight, making preparation essential.

The brands that recognize this desktop-mobile divide and develop device-specific AI strategies will gain significant competitive advantages as the AI search ecosystem matures.

The question isn’t whether mobile AI search will grow. It’s whether your plan will be ready when it does.

The future of AI search lies not in choosing between desktop and mobile but in mastering both experiences as distinct opportunities to serve different user needs and capture referral traffic across the entire search journey.

Unless otherwise indicated, any data mentioned above was taken from this BrightEdge study. The data was for May 2025 and is based on thousands of actual website referrals for medium to large brands across the world.

More Resources:


Featured Image: Collagery/Shutterstock

Malware Discovered In Gravity Forms WordPress Plugin via @sejournal, @martinibuster

WordPress security company Patchstack published an advisory about a serious vulnerability in Gravity Forms caused by a supply chain attack. Gravity Forms responded immediately and released an update to fix the issue.

Supply Chain Attack

Patchstack has been monitoring an attack on a WordPress plugin in which the attackers uploaded an infected version of the plugin directly to the publisher’s repository and fetched other files from a domain name similar to the official domain. This, in turn, led to a serious compromise of websites that used that plugin.

A similar attack was observed in Gravity Forms and was immediately addressed by the publisher. Malicious code had been injected into Gravity Forms (specifically in gravityforms/common.php) by the attackers. The code caused the plugin, when installed, to make HTTP POST requests to the rogue domain gravityapi.org, which was registered just days before the attack and controlled by the attacker.

The compromised plugin sent detailed site and server information to the attacker’s server and enabled remote code execution on the infected sites. In the context of a WordPress plugin, a remote code execution (RCE) vulnerability occurs when an attacker can run malicious code on a targeted website from a remote location.

Patchstack explained the extent of the vulnerability:

“…it can perform multiple processes:

  • Upload an arbitrary file to the server.
  • List all of the user accounts on the WordPress site (ID, username, email, display name).
  • Delete any user accounts on the WordPress site.
  • Perform arbitrary file and directory listings on the WordPress server.”

That last capability means the attacker can view any file, regardless of permissions, including the wp-config.php file, which contains database credentials.
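
As an illustrative precaution, and not official tooling from Patchstack or Gravity Forms, a site owner could scan the plugin's files for references to the rogue domain named in the advisory. A minimal Python sketch:

# Illustrative indicator-of-compromise check: search plugin files for the
# rogue domain reported in the advisory. A hedged sketch, not official
# tooling from Patchstack or Gravity Forms.
from pathlib import Path

ROGUE_DOMAIN = "gravityapi.org"
plugin_dir = Path("wp-content/plugins/gravityforms")  # adjust to your install

for php_file in plugin_dir.rglob("*.php"):
    if ROGUE_DOMAIN in php_file.read_text(errors="ignore"):
        print(f"Possible compromise: {php_file} references {ROGUE_DOMAIN}")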

Gravity Forms Responds

RocketGenius, the publisher of Gravity Forms, took immediate action and uploaded a fixed version of the plugin the same day. The domain name registrar, Namecheap, suspended the rogue typosquatted domain, which effectively blocked compromised websites from contacting the attackers.

Gravity Forms has released an update to the plugin, version 2.9.13. Users should consider updating to the latest version.

Read more at Patchstack:

Malware Found in Official Gravity Forms Plugin Indicating Supply Chain Breach

Featured Image by Shutterstock/Warm_Tail

Google Explains How To Approach Content For SEO via @sejournal, @martinibuster

Google’s John Mueller and Martin Splitt discussed how to approach content for achieving business goals and the wisdom of setting expectations, observing that it may not matter whether a site is optimized if the content is already achieving its intended results.

Getting The Content Right

Anyone can write, but it’s hard to communicate in a way that meets the audience’s needs. One thing SEOs often get wrong is content, which remains the most important ranking factor in modern search engines.

A common mistake is publishing entire sentences that waste the reader’s time. I think that happens when writers are trying to meet an arbitrary word count or to provide context for the high-volume keywords they want to rank for.

Martin Splitt started the discussion by asking how to go about writing content, sharing his own experience of getting it wrong because he was writing for himself rather than for what the audience needed to read.

Splitt shared:

“…how would I know how to go about content? Because now I know who I want to address and probably also roughly what I want to do. But, I mean, that’s a whole different skillset, right? That’s like copywriting and probably some researching and maybe some lettering and editing, and wow. That’s a lot. I love to write. I love to write.

…But I love having a technical writer on the team. Lizzi is a tremendous help with anything that is writing. I honestly thought I’m a good, reasonably good writer. And then Lizzi came and asked three questions on a piece of documentation that I thought was almost perfect.

I basically started questioning the foundations of the universe because I was like, “Okay, no, this document doesn’t even make sense. I haven’t answered the fundamental questions that I need to answer before I can even start writing. I’ve written like three pages.

Holy moly, that is a skill that is an amazingly tricky skill to acquire, I think. How do I start writing? Just write what I think I should be writing, I guess.”

Writing is easy to do, but difficult to do well. I’ve seen many sites that have the SEO fundamentals in place, but are undermined by the content. Splitt’s experience highlights the value in getting a second opinion on content.

Site Visitors Are Your Inspiration

Mueller and Splitt next move on to the topic of what publishers and SEOs should write about, and their answer is to focus on what users want, encouraging publishers to do something as simple as asking their readers or customers.

Mueller observed:

“I think, if you have absolutely no inspiration, one approach could be to ask your existing customers and just ask them like:

  • How did you find me?
  • What were you looking for?
  • Where were you looking?
  • Were you just looking on a map? What is it that brought you here?

This is something that you can ask anyone, especially if you have a physical business.

..It’s pretty easy to just ask this randomly without scaring people away. That’s kind of one aspect I would do and try to build up this collection of ‘these are different searches that people have done in different places, maybe on different systems, and I want to make sure I’m kind of visible there.’”

Set Reasonable Expectations

John Mueller and Martin Splitt next provide a reality check on the keyword phrases that publishers and SEOs choose to optimize for. It’s not always about the difficulty of the phrases; it’s also about how relevant they are to the website.

Mueller commented about what to do with the keyword phrases that are chosen for targeting:

“And then I would take those and just try them out and see what comes up, and think about how reasonable it would be for one of your pages, perhaps to show up there and how reasonable it can be, I think is something where you have to be brutally honest with yourself, because it’s sometimes tempting to say, “Well, I would like to appear first for the search bookstore on the internet.” Probably that’s not going to happen. I mean, who knows? But there’s a lot of competition for some of these terms.

But, if you’re talking about someone searching for bookstores or bookstores in Zurich or bookstores on Maps or something like that, then that’s a lot more well defined and a lot easier for you to look at and see, what are other people doing there? Maybe my pages are already there. And, based on that, you can try to build out, what is it that I need to at least mention on my pages.”

Mueller followed up by downplaying the question of whether a site is search optimized, saying that what matters is whether the site is performing as well as intended. Some may argue that a site could always be doing better, but that’s outside the context of Mueller’s comment, which concerned a business owner who was satisfied with the site’s performance.

Mueller observed:

“I mean, it all depends on how serious you take your goal, right? If you’re like a small local business you’re saying, ‘Well, I have a website and I hear I should make it SEO, but I don’t really care.’ Then it’s like do whatever you want kind of thing. If you have enough business and you’re happy. There’s no one to judge you to say, “Your website is not SEO optimized.”

Listen to Episode 95 of Search Off The Record at about the ten-minute mark.

Featured Image by Shutterstock/Krakenimages.com

Llms.txt Could Help AI Find Your Store

A proposed standard would guide large language models such as ChatGPT, Gemini, and Claude to relevant web content, helping online shops show up in AI-powered shopping.

The llms.txt file is akin to both the familiar robots.txt and an XML sitemap. For ecommerce merchants, the new, two-part standard could help share product information and other content directly with AI and the consumers who increasingly shop on those platforms.

Consumers increasingly shop on generative AI platforms. ChatGPT recommended these shoes and stores when prompted, “What are the best trail-running shoes?”

AI Visibility

Jeremy Howard, the co-founder of Fast.ai and Answer.AI, proposed the llms.txt standard in September 2024. The aim is to help LLMs process and understand content from large, complex websites.

Thus, if adopted, the (nonbinding) standard could be a way for websites, like ecommerce businesses, to influence AI visibility.

The llms.txt proposal is gaining traction thanks to some support from Perplexity and Anthropic (Claude), and growing interest from the web community.

What Is llms.txt?

Llms.txt is a machine-readable file for AI. It signals to AI systems which content (i) is available for their use and (ii) is relatively easy to summarize and cite.

The llms.txt concept is similar to a robots.txt file or a sitemap. It can “allow” or “disallow” various AI tools from using portions of a site’s content.

User-Agent: openai
Disallow: /

User-Agent: anthropic
Disallow: /private/

User-Agent: *
Allow: /

The LLMs scraping a site do not have to follow these directives, but the allow and disallow statements are a way of signaling which content could be useful.

For example, an ecommerce site might “disallow” content in a shopping cart or checkout flow, encouraging an LLM’s bot not to waste its time.

User-Agent: *
Disallow: /cart/
Disallow: /checkout/

Similarly, the file could push the AI toward content it can use, allowing paths to shipping guides, blog posts, and product descriptions.

User-Agent: openai
Allow: /shopping-guides/
Allow: /blog/
Allow: /products/

This allow-disallow feature is the most cited among observers, but the file’s primary purpose is to provide content.
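
The proposal does not formally specify matching semantics for these directives. Assuming robots.txt-style longest-prefix matching, a minimal Python sketch of checking a path against the rules above might look like this:

# Minimal sketch of checking a path against allow/disallow directives for
# one user-agent. llms.txt does not formally specify matching rules; this
# assumes robots.txt-style longest-prefix matching.
def is_allowed(path, rules):
    """rules: ordered (directive, prefix) pairs; longest matching prefix wins."""
    verdict, best = True, -1
    for directive, prefix in rules:
        if path.startswith(prefix) and len(prefix) > best:
            verdict, best = (directive == "Allow"), len(prefix)
    return verdict

rules = [("Disallow", "/cart/"), ("Disallow", "/checkout/"), ("Allow", "/")]
print(is_allowed("/cart/item-1", rules))    # False
print(is_allowed("/products/shoe", rules))  # True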

Structure

Many examples in the llms.txt directory follow this pattern in Markdown, a popular, lightweight text markup language:

  • H1 title
  • Blockquote summary
  • Sections (## Docs / ## Examples / ## Optional)
  • Bullet links with [title](URL): description for .md content

Here is an example of an online store that sells science fiction movie memorabilia.

# sci-fi-memorabilia-example.com

> Premium science fiction collectibles, props, replicas, and guides.

## Docs
- [Star Wars Lightsaber Replica](.../star-wars-lightsaber-replica.md): Features, display use, product link.
- [Top 10 Sci-Fi Collectibles](...).md
- [Blade Runner Blaster Guide](...).md

## Optional
- [Blog on cosplay tips](...).md

User-Agent: openai
Allow: /blog/
...

Contact: support@sci-fi-memorabilia-example.com
Policy: https://sci-fi-memorabilia-example.com/policy

This file tells the LLM something about the website and offers alternative content in the form of linked Markdown (.md) files.

This format is similar to an XML sitemap that leads the crawler to various content pages.

Simplified Content

An llms.txt file for an ecommerce site could feature many links to Markdown versions of product detail pages.

In the example above, note the link to a page for a “Star Wars Lightsaber Replica.”

## Docs
- [Star Wars Lightsaber Replica](.../star-wars-lightsaber-replica.md): Features, display use, product link.

The associated .md file contains no navigation or JavaScript content, but it does include the information an LLM would need to summarize, share, and cite the product. The machine-readable file is clean, concise, and factual.

Here is an example of what the AI would find in one of these product.md documents.

# Star Wars Lightsaber Replica – Luke Skywalker Edition

> A premium collectible replica of Luke Skywalker's green lightsaber from *Return of the Jedi*. Built for display, cosplay, or collection.

## Overview

This officially licensed replica features an all-metal hilt, removable LED blade, and motion-sensitive sound effects. It closely follows the design used on screen and comes with a stand and collector packaging. Ideal for fans, collectors, and cosplayers seeking accuracy and quality.

## Key Features

- Green LED blade with ignition and clash effects
- All-metal hilt modeled after Luke Skywalker's weapon in *Return of the Jedi*
- Removable blade for wall-mounted display
- Display stand and collector box included
- Powered by AA batteries (not included)
- Officially licensed by Lucasfilm

## Who It's For

- Star Wars collectors seeking screen-accurate replicas
- Cosplayers attending conventions or fan events
- Gift buyers shopping for Star Wars fans
- Sci-fi enthusiasts building a prop display

## Source

https://sci-fi-memorabilia-example.com/products/star-wars-lightsaber-replica

## Tags

star-wars, lightsaber, luke-skywalker, collectible, replica, prop

## Last Updated

2025-07-12

Extracting the document’s structure, we find:

  • Clear title,
  • Short overview paragraph (what it is, why it matters),
  • Structured sections (features, uses, tags),
  • A canonical source link (so the LLM can cite the store’s product page).
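
Because these .md files follow a predictable structure, they can be generated from catalog data. Here is a minimal Python sketch; the product record and field names are hypothetical.

# Sketch: rendering a catalog record into the Markdown structure above.
# The product record and field names are hypothetical illustrations.
product = {
    "title": "Star Wars Lightsaber Replica – Luke Skywalker Edition",
    "summary": "A premium collectible replica of Luke Skywalker's lightsaber.",
    "features": ["Green LED blade with ignition and clash effects",
                 "All-metal hilt with display stand"],
    "source": "https://sci-fi-memorabilia-example.com/products/star-wars-lightsaber-replica",
    "tags": ["star-wars", "lightsaber", "collectible"],
}

def render_product_md(p):
    lines = [f"# {p['title']}", "", f"> {p['summary']}", "", "## Key Features", ""]
    lines += [f"- {feat}" for feat in p["features"]]
    lines += ["", "## Source", "", p["source"],
              "", "## Tags", "", ", ".join(p["tags"])]
    return "\n".join(lines)

print(render_product_md(product))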

At the Time of Interface

A key feature of llms.txt is aiding LLMs at the “time of interface,” when a user is interacting with the chatbot.

A user asks a question about replica lightsabers. The AI visits the sci-fi memorabilia site, consumes the Markdown file, and quickly responds.

Think of the product Markdown file as CliffsNotes for the LLM!

AI Discoverability

Llms.txt resembles early optimization efforts in organic search. Similarly, the standard could help optimize for chatbots and answer engines — call it “LLM SEO.”

Llms.txt may or may not get traction, but it’s evidence that both LLM companies and website operators seek ways to aid AI in indexing and referencing pages.

It could shape the future of AI discoverability.