New Ecommerce Tools: January 7, 2026

This week’s installment of new products and services for merchants includes marketing and advertising platforms, livestream tools, pop-up and form builders, fulfillment networks, AI voice agents, agentic commerce, and reverse logistics.

Got an ecommerce product release? Email updates@practicalecommerce.com.

New Tools for Merchants

Orca launches LiveMax to book a shoppable livestream in minutes. Orca, a livestream and social commerce provider, has launched LiveMax, a self-serve tool that empowers brands and retailers to book and execute shoppable livestreams on TikTok Shop and Amazon Live. According to Orca, LiveMax enables any brand to schedule a produced livestream quickly. Orca’s production resources include professional hosts and producers.

Home page of Orca

PayPal Ads launches Transaction Graph Insights and Measurement. PayPal Ads has launched its Transaction Graph Insights and Measurement Program, providing merchants and advertisers with a view into shopper behavior, campaign effectiveness, and data-driven recommendations. The tools help merchants understand cross-merchant, cross-surface shopper journeys and deliver brand-specific recommendations and independent campaign validation with third-party partners.

Slingwave brings AI-powered unified measurement to ecommerce. Slingwave has unveiled its AI-native marketing platform for ecommerce and direct-to-consumer brands. The system combines marketing mix modeling, agile marketing attribution, and experimentation with an intelligence layer and customized models that run millions of scenarios to deliver a clear plan for optimizing spend. According to Slingwave, the platform learns with every campaign, ensuring recommendations continuously improve.

Getsitecontrol updates widget builder for pop-ups and forms. Getsitecontrol, an email marketing platform for ecommerce, has released a redesigned widget editor that offers enhanced visual control when designing website pop-ups, forms, and teasers. The editor introduces a visual element tree that displays the complete structure of each widget in a sidebar. Getsitecontrol now allows users to fine-tune every visual aspect of their widgets, including margins, padding, alignment, sizes, and colors. The result, says Getsitecontrol, is professional widgets that adapt to any screen size.

ReturnPro launches Shopify app. ReturnPro, a provider of returns management and reverse logistics, has launched its Returns Portal App on the Shopify App Store. The app combines returns initiation with a connected reverse supply chain and recommerce ecosystem. Shopify merchants gain access to ReturnPro’s infrastructure, including more than 1,000 partner drop-off locations. Merchants can resell refurbished inventory through their Shopify storefronts or distribute products across ReturnPro’s network of integrated marketplaces, creating secondary revenue streams and reducing write-offs.

Home page of ReturnPro

Stord acquires Shipwire to expand its fulfillment network. Stord, a logistics provider for pre-purchase, checkout, delivery, and returns, has acquired Shipwire, a subsidiary of Ceva Logistics. Stord says the acquisition continues its expansion of fulfillment networks by adding 12 locations, strengthening its presence in Europe, and maintaining access to Ceva’s global network of warehouses through Shipwire’s existing logistics agreements. Ceva manages 120 million square feet of warehouse space worldwide.

Amazon launches Alexa+ for users to chat with its assistant. Amazon has launched an Alexa+ website that lets select users chat with its assistant via their browser. Users can access Alexa.com to get quick answers, explore complex topics, create content, and more. Alexa.com combines information with real-world actions, offering integrations across devices for shopping, home control, cooking, and entertainment, per Amazon. Customers with early access to Alexa+ can visit Alexa.com while logged into their Amazon account and start chatting.

RackNap launches marketplace and subscription commerce platform. RackNap, an AI-powered marketplace and subscription automation platform for cloud and technology providers, has announced its U.S. launch. The platform enables managed service providers, telecommunications and connectivity providers, and technology distributors to launch and scale cloud and digital commerce faster and more cost-effectively. RackNap streamlines and lowers the cost of channel back-office operations through native integrations with hyperscalers and portals, including Microsoft, Amazon Web Services, Google, and Acronis.

PubMatic launches AgenticOS for agent-to-agent advertising. PubMatic, an ad tech company, has launched AgenticOS, an operating system to orchestrate autonomous, agent-to-agent advertising across digital environments. AgenticOS deploys a three-layer framework to plan, transact, and optimize programmatic advertising: (i) an Nvidia-powered infrastructure layer, (ii) an application layer with embedded agentic capabilities to interpret intent through protocols such as the Ad Context Protocol and Model Context Protocol, and (iii) a transaction layer that connects agentic decisioning to PubMatic’s Activate buying platform.

Home page of PubMatic

eBay introduces credit notes for U.S. seller fees and tax reversals. eBay is issuing separate credit notes for all seller fees, charges, and tax reversals in the U.S. A credit note reduces or cancels an invoice. Each credit note will show the reduced amounts and a reference to the original invoice. According to eBay, the update improves transparency and helps match charges with reversals.

Cloudhands launches cross-model AI platform. Cloudhands, a marketplace for AI tools, has announced a new unified platform that lets users move among leading models from OpenAI, Anthropic, and Google while keeping their conversation history, documents, tasks, and creative work connected. Interested users can join the waitlist for the platform, which will launch early this year, per Cloudhands.

xAI launches Grok Business and Grok Enterprise. xAI, maker of the Grok chatbot natively integrated into X, has launched Grok Business and Grok Enterprise, two new tiers providing access to Grok 3, Grok 4, and Grok 4 Heavy. Grok Business offers a self-serve process for small-to-medium teams. For larger organizations, Grok Enterprise includes Grok Business plus Custom Single Sign-On, Directory Sync, and audit and security controls.

VoAgents launches enterprise voice AI platform for customer conversations. VoAgents, a provider of enterprise voice tools, has launched voice AI agents capable of handling inbound and outbound calls. The platform’s self-learning capability means voice agents improve with every interaction. Core platform features include customizable voice personalities and workflows tailored to brand requirements, calendar and customer-management integrations, real-time call recordings and transcripts, outbound campaign management, and more. VoAgents offers access to all leading language models, including OpenAI and Anthropic.

Home page of VoAgents

Most Major News Publishers Block AI Training & Retrieval Bots

Most top news publishers block AI training bots via robots.txt, but they’re also blocking the retrieval bots that determine whether sites appear in AI-generated answers.

BuzzStream analyzed the robots.txt files of 100 top news sites across the US and UK and found 79% block at least one training bot. More notably, 71% also block at least one retrieval or live search bot.

Training bots gather content to build AI models, while retrieval bots fetch content in real time when users ask questions. Sites blocking retrieval bots may not appear when AI tools try to cite sources, even if the underlying model was trained on their content.
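For illustration, a publisher that wants to opt out of model training while staying eligible for AI citations could block the training crawlers and leave the retrieval crawlers alone. This is a minimal robots.txt sketch using user-agent tokens named in the study; check each vendor’s current crawler documentation before deploying, since tokens change:

```
# Block crawlers that gather training data
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: anthropic-ai
Disallow: /

User-agent: Google-Extended
Disallow: /

# Retrieval/live-search crawlers get no Disallow rule,
# so they can still fetch and cite pages (allow is the default)
User-agent: OAI-SearchBot
Allow: /

User-agent: Perplexity-User
Allow: /
```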

What The Data Shows

BuzzStream examined the top 50 news sites in each market based on SimilarWeb traffic share, then deduplicated the list. The study grouped bots into three categories: training, retrieval/live search, and indexing.

Training Bot Blocks

Among training bots, Common Crawl’s CCBot was the most frequently blocked at 75%, followed by anthropic-ai at 72%, ClaudeBot at 69%, and GPTBot at 62%.

Google-Extended, which trains Gemini, was the least blocked training bot at 46% overall. US publishers blocked it at 58%, nearly double the 29% rate among UK publishers.

Harry Clarkson-Bennett, SEO Director at The Telegraph, told BuzzStream:

“Publishers are blocking AI bots using the robots.txt because there’s almost no value exchange. LLMs are not designed to send referral traffic and publishers (still!) need traffic to survive.”

Retrieval Bot Blocks

The study found 71% of sites block at least one retrieval or live search bot.

Claude-Web was blocked by 66% of sites, while OpenAI’s OAI-SearchBot, which powers ChatGPT’s live search, was blocked by 49%. ChatGPT-User was blocked by 40%.

Perplexity-User, which handles user-initiated retrieval requests, was the least blocked at 17%.

Indexing Blocks

PerplexityBot, which Perplexity uses to index pages for its search corpus, was blocked by 67% of sites.

Only 14% of sites blocked all AI bots tracked in the study, while 18% blocked none.

The Enforcement Gap

The study acknowledges that robots.txt is a directive, not a barrier, and bots can ignore it.

We covered this enforcement gap when Google’s Gary Illyes confirmed robots.txt can’t prevent unauthorized access. It functions more like a “please keep out” sign than a locked door.

Clarkson-Bennett raised the same point in BuzzStream’s report:

“The robots.txt file is a directive. It’s like a sign that says please keep out, but doesn’t stop a disobedient or maliciously wired robot. Lots of them flagrantly ignore these directives.”

Cloudflare documented that Perplexity used stealth crawling behavior to bypass robots.txt restrictions. The company rotated IP addresses, changed ASNs, and spoofed its user agent to appear as a browser.

Cloudflare delisted Perplexity as a verified bot and now actively blocks it. Perplexity disputed Cloudflare’s claims and published a response.

For publishers serious about blocking AI crawlers, CDN-level blocking or bot fingerprinting may be necessary beyond robots.txt directives.

Why This Matters

The retrieval-blocking numbers warrant attention here. In addition to opting out of AI training, many publishers are opting out of the citation and discovery layer that AI search tools use to surface sources.

OpenAI separates its crawlers by function: GPTBot gathers training data, while OAI-SearchBot powers live search in ChatGPT. Blocking one doesn’t block the other. Perplexity makes a similar distinction between PerplexityBot for indexing and Perplexity-User for retrieval.

These blocking choices affect where AI tools can pull citations from. If a site blocks retrieval bots, it may not appear when users ask AI assistants for sourced answers, even if the model already contains that site’s content from training.

The Google-Extended pattern is worth watching. US publishers block it at nearly twice the UK rate, though whether that reflects different risk calculations around Gemini’s growth or different business relationships with Google isn’t clear from the data.

Looking Ahead

The robots.txt method has limits, and sites that want to block AI crawlers may find CDN-level restrictions more effective than robots.txt alone.

Cloudflare’s Year in Review found GPTBot, ClaudeBot, and CCBot had the highest number of full disallow directives across top domains. The report also noted that most publishers use partial blocks for Googlebot and Bingbot rather than full blocks, reflecting the dual role Google’s crawler plays in search indexing and AI training.

For those tracking AI visibility, the retrieval bot category is what to watch. Training blocks affect future models, while retrieval blocks affect whether your content shows up in AI answers right now.



Google’s Mueller Weighs In On SEO vs GEO Debate

Google Search Advocate John Mueller says businesses that rely on referral traffic should think about how AI tools fit into the picture.

Mueller responded to a Reddit thread asking whether SEO is still enough or whether practitioners need to start considering GEO, a term some in the industry use for optimizing visibility in AI-powered answer engines like ChatGPT, Gemini, and Perplexity.

“If you have an online business that makes money from referred traffic, it’s definitely a good idea to consider the full picture, and prioritize accordingly,” Mueller wrote.

What Mueller Said

Mueller didn’t endorse or reject the GEO terminology. He framed the question in terms of practical business decisions rather than new optimization techniques.

“What you call it doesn’t matter, but ‘AI’ is not going away, but thinking about how your site’s value works in a world where ‘AI’ is available is worth the time,” he wrote.

He also pushed back on treating AI visibility as a universal priority. Mueller suggested practitioners look at their own data first.

Mueller added:

“Also, be realistic and look at actual usage metrics and understand your audience (what % is using ‘AI’? what % is using Facebook? what does it mean for where you spend your time?).”

Why This Matters

I’ve been tracking Mueller’s public statements for years, and this one lands differently than the usual “it depends” responses he’s known for. He’s reframing the GEO question as a resource allocation problem rather than a terminology debate.

The GEO conversation has picked up steam over the past year as AI answer engines started sending measurable referral traffic. I’ve covered the citation studies, the traffic analyses, and the research comparing Google rankings to LLM citations. What’s been missing is a clear signal from Google: is this a distinct discipline, or just rebranded SEO?

Mueller’s answer is consistent with what Google said at Search Central Live, when Gary Illyes emphasized that AI features share infrastructure with traditional Search. The message from both is that you probably don’t need a separate framework, but you do need to understand how discovery is changing.

What I find more useful is his emphasis on checking your own numbers. Current data shows ChatGPT referrals at roughly 0.19% of traffic for the average site. AI assistants combined still drive less than 1% for most publishers. That’s growing, but it’s not yet a reason to reorganize your entire strategy.

The industry has a habit of chasing trends that apply to some sites but not others. Mueller’s pushing back on that pattern. Look at what percentage of your audience actually uses AI tools before reallocating resources toward them.

Looking Ahead

The GEO terminology will likely stick, regardless of Google’s stance. Mueller’s framing puts the decision back on individual businesses to measure their own audience behavior.

For practitioners, this means the homework is in your analytics. If AI referrals are showing up in your traffic sources, they’re worth understanding. If they’re not, you have other priorities.



Google’s Mueller Explains ‘Page Indexed Without Content’ Error

Google Search Advocate John Mueller responded to a question about the “Page Indexed without content” error in Search Console, explaining the issue typically stems from server or CDN blocking rather than JavaScript.

The exchange took place on Reddit after a user reported their homepage dropped from position 1 to position 15 following the error’s appearance.

What’s Happening?

Mueller clarified a common misconception about the cause of “Page Indexed without content” in Search Console.

Mueller wrote:

“Usually this means your server / CDN is blocking Google from receiving any content. This isn’t related to anything JavaScript. It’s usually a fairly low level block, sometimes based on Googlebot’s IP address, so it’ll probably be impossible to test from outside of the Search Console testing tools.”

The Reddit user had already attempted several diagnostic steps. They ran curl commands to fetch the page as Googlebot, checked for JavaScript blocking, and tested with Google’s Rich Results Test. Desktop inspection tools returned “Something went wrong” errors while mobile tools worked normally.
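For reference, a user-agent-spoofed check of the kind the user ran might look like the sketch below, with example.com standing in for the affected site. Because the request still leaves from your own IP address, a clean response here does not rule out a block keyed to Googlebot’s IP ranges:

```
# Request the page headers while presenting Googlebot's user-agent string
curl -sI -A "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" \
  https://www.example.com/
```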

Mueller noted that standard external testing methods won’t catch these blocks.

He added:

“Also, this would mean that pages from your site will start dropping out of the index (soon, or already), so it’s a good idea to treat this as something urgent.”

The affected site uses Webflow as its CMS and Cloudflare as its CDN. The user reported the homepage had been indexing normally with no recent changes to the site.

Why This Matters

I’ve covered this type of problem repeatedly over the years. CDN and server configurations can inadvertently block Googlebot without affecting regular users or standard testing tools. The blocks often target specific IP ranges, which means curl tests and third-party crawlers won’t reproduce the problem.

I covered when Google first added “indexed without content” to the Index Coverage report. Google’s help documentation at the time noted the status means “for some reason Google could not read the content” and specified “this is not a case of robots.txt blocking.” The underlying cause is almost always something lower in the stack.

The Cloudflare detail caught my attention. I reported on a similar pattern when Mueller advised a site owner whose crawling stopped across multiple domains simultaneously. All affected sites used Cloudflare, and Mueller pointed to “shared infrastructure” as the likely culprit. The pattern here looks familiar.

More recently, I covered a Cloudflare outage in November that triggered 5xx spikes affecting crawling. That was a widespread incident. This case appears to be something more targeted, likely a bot protection rule or firewall setting that treats Googlebot’s IP addresses differently from other traffic.

Search Console’s URL Inspection tool and Live URL test remain the primary ways to identify these blocks. When those tools return errors while external tests pass, server-level blocking becomes the likely cause. Mueller made a similar point in August when advising on crawl rate drops, suggesting site owners “double-check what actually happened” and verify “if it was a CDN that actually blocked Googlebot.”

Looking Ahead

If you’re seeing the “Page Indexed without content” error, check the CDN and server configurations for rules that affect Googlebot’s IP ranges. Google publishes its crawler IP addresses, which can help identify whether security rules are targeting them.
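As a rough starting point, a suspicious address pulled from server logs or firewall rules can be tested against Google’s published ranges with a short script. This sketch assumes the publicly documented googlebot.json format (a list of prefixes with ipv4Prefix or ipv6Prefix keys) and is meant for spot checks, not production verification:

```python
import ipaddress
import json
import urllib.request

# Google's published list of Googlebot IP ranges.
GOOGLEBOT_RANGES_URL = "https://developers.google.com/search/apis/ipranges/googlebot.json"

def is_googlebot_ip(ip: str) -> bool:
    """Return True if the address falls inside a published Googlebot range."""
    with urllib.request.urlopen(GOOGLEBOT_RANGES_URL) as resp:
        data = json.load(resp)
    addr = ipaddress.ip_address(ip)
    for prefix in data.get("prefixes", []):
        cidr = prefix.get("ipv4Prefix") or prefix.get("ipv6Prefix")
        if cidr and addr in ipaddress.ip_network(cidr):
            return True
    return False

if __name__ == "__main__":
    # Example: an address seen in access logs or a firewall rule.
    print(is_googlebot_ip("66.249.66.1"))
```

If an address inside those ranges is being challenged or blocked by a security rule, that rule is the likely culprit.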

The Search Console URL Inspection tool is the most reliable way to see what Google receives when crawling a page. External testing tools won’t catch IP-based blocks that only affect Google’s infrastructure.

For Cloudflare users specifically, check bot management settings, firewall rules, and any IP-based access controls. The configuration may have changed through automatic updates or new default settings rather than manual changes.

Why Global Search Misalignment Is An Engineering Feature And A Business Bug

Google’s AI Overviews (AIO) represent a fundamental architectural shift in search. Retrieval has moved from a localized ranking-and-serving model, designed to return the most appropriate regional URL, to a semantic synthesis model, designed to assemble the most complete and defensible explanation of a topic.

This shift has introduced a new and increasingly visible failure mode: geographic leakage, where AI Overviews cite international or out-of-market sources for queries with clear local or commercial relevance.

This behavior is not the result of broken geo-targeting, misconfigured hreflang, or poor international SEO hygiene. It is the predictable outcome of systems designed to resolve ambiguity through semantic expansion, not contextual narrowing. When a query is ambiguous, AI Overviews prioritize explanatory completeness across all plausible interpretations. Sources that resolve any sub-facet with greater clarity, specificity, or freshness gain disproportionate influence – regardless of whether they are commercially usable or geographically appropriate for the user.

From an engineering perspective, this is a technical success. The system reduces hallucination risk, maximizes factual coverage, and surfaces diverse perspectives. From a business and user perspective, however, it exposes a structural gap: AI Overviews have no native concept of commercial harm. The system does not evaluate whether a cited source can be acted upon, purchased from, or legally used in the user’s market.

This article reframes geographic leakage as a feature-bug duality inherent to generative search. It explains why established mechanisms such as hreflang struggle in AI-driven experiences, identifies ambiguity and semantic normalization as force multipliers in misalignment, and outlines a Generative Engine Optimization (GEO) framework to help organizations adapt in the generative era.

The Engineering Perspective: A Feature Of Robust Retrieval

From an AI engineering standpoint, selecting an international source for an AI Overview is not an error. It is the intended outcome of a system optimized for factual grounding, semantic recall, and hallucination prevention.

1. Query Fan-Out And Technical Precision

AI Overviews employ a query fan-out mechanism that decomposes a single user prompt into multiple parallel sub-queries. Each sub-query explores a different facet of the topic – definitions, mechanics, constraints, legality, role-specific usage, or comparative attributes.

The unit of competition in this system is no longer the page or the domain. It is the fact-chunk. If a particular source contains a paragraph or explanation that is more explicit, more extractable, or more clearly structured for a specific sub-query, it may be selected as a high-confidence informational anchor – even if it is not the best overall page for the user.

2. Cross-Language Information Retrieval (CLIR)

The appearance of English summaries sourced from foreign-language pages is a direct result of Cross-Language Information Retrieval.

Modern LLMs are natively multilingual. They do not “translate” pages as a discrete step. Instead, they normalize content from different languages into a shared semantic space and synthesize responses based on learned facts rather than visible snippets. As a result, language differences no longer serve as a natural boundary in retrieval decisions.

Semantic Retrieval Vs. Ranking Logic: A Structural Disconnect

The technical disconnect observed in AI Overviews, where an out-of-market page is cited despite the presence of a fully localized equivalent, stems from a fundamental conflict between search ranking logic and LLM retrieval logic.

Traditional Google Search is designed around serving. Signals such as IP location, language, and hreflang act as strong directives once relevance has been established, determining which regional URL should be shown to the user.

Generative systems are designed around retrieval and grounding. In Retrieval-Augmented Generation pipelines, these same signals are frequently treated as secondary hints, or ignored entirely, when they conflict with higher-confidence semantic matches discovered during fan-out retrieval.

Once a specific URL has been selected as the source of truth for a given fact, downstream geographic logic has limited ability to override that choice.

The Vector Identity Problem: When Markets Collapse Into Meaning

At the core of this behavior is a vector identity problem.

In modern LLM architectures, content is represented as numerical vectors encoding semantic meaning. When two pages contain substantively identical content, even if they serve different markets, they are often normalized into the same or near-identical semantic vector.

From the model’s perspective, these pages are interchangeable expressions of the same underlying entity or concept. Market-specific constraints such as shipping eligibility, currency, or checkout availability are not semantic properties of the text itself; they are metadata properties of the URL.

During the grounding phase, the AI selects sources from a pool of high-confidence semantic matches. If one regional version was crawled more recently, rendered more cleanly, or expressed the concept more explicitly, it can be selected without evaluating whether it is commercially usable for the searcher.
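A toy example makes the collapse concrete. Production systems use learned dense embeddings rather than word counts, but the effect is the same idea: two market variants that differ only in a currency token land at nearly identical vectors:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy stand-in for a learned embedding: a bag-of-words count vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b)

us_page = "Acme Pro Widget. Durable aluminium body, two year warranty. Ships in 2 days. Price $49."
uk_page = "Acme Pro Widget. Durable aluminium body, two year warranty. Ships in 2 days. Price £39."

# ~0.93: near-identical vectors, despite serving different markets
print(round(cosine(embed(us_page), embed(uk_page)), 2))
```

Shipping eligibility and checkout currency live in each URL’s metadata, not in these vectors, which is exactly why retrieval treats the two pages as interchangeable.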

Freshness As A Semantic Multiplier

Freshness amplifies this effect. Retrieval-Augmented Generation systems often treat recency as a proxy for accuracy. When semantic representations are already normalized across languages and markets, even a minor update to one regional page can unintentionally elevate it above otherwise equivalent localized versions.

Importantly, this does not require a substantive difference in content. A change in phrasing, the addition of a clarifying sentence, or a more explicit explanation can tip the balance. Freshness, therefore, acts as a multiplier on semantic dominance, not as a neutral ranking signal.

Ambiguity As A Force Multiplier In Generative Retrieval

One of the most significant, and least understood, drivers of geographic leakage is query ambiguity.

In traditional search, ambiguity was often resolved late in the process, at the ranking or serving layer, using contextual signals such as user location, language, device, and historical behavior. Users were trained to trust that Google would infer intent and localize results accordingly.

Generative retrieval systems respond to ambiguity very differently. Rather than forcing early intent resolution, ambiguity triggers semantic expansion. The system explores all plausible interpretations in parallel, with the explicit goal of maximizing explanatory completeness.

This is an intentional design choice. It reduces the risk of omission and improves answer defensibility. However, it introduces a new failure mode: as the system optimizes for completeness, it becomes increasingly willing to violate commercial and geographic constraints that were previously enforced downstream.

In ambiguous queries, the system is no longer asking, “Which result is most appropriate for this user?”

It is asking, “Which sources most completely resolve the space of possible meanings?”

Why Correct Hreflang Is Overridden

The presence of a correctly implemented hreflang cluster does not guarantee regional preference in AI Overviews because hreflang operates at a different layer of the system.

Hreflang was designed for a post-retrieval substitution model. Once a relevant page is identified, the appropriate regional variant is served. In AI Overviews, relevance is resolved upstream during fan-out and semantic retrieval.

When fan-out sub-queries focus on definitions, mechanics, legality, or role-specific usage, the system prioritizes informational density over transactional alignment. If an international or home-market page provides the “first best answer” for a specific sub-query, that page is retrieved immediately as a grounding source.

Unless a localized version provides a technically superior answer for the same semantic branch, it is simply not considered.

In short, hreflang can influence which URL is served. It cannot influence which URL is retrieved, and in AI Overviews, retrieval is where the decision is effectively made.

The Diversity Mandate: The Programmatic Driver Of Leakage

AI Overviews are explicitly designed to surface a broader and more diverse set of sources than traditional top 10 search results.

To satisfy this requirement, the system evaluates URLs, not business entities, as distinct sources. International subfolders or country-specific paths are therefore treated as independent candidates, even when they represent the same brand and product.

Once a primary brand URL has been selected, the diversity filter may actively seek an alternative URL to populate additional source cards. This creates a form of ghost diversity, where the system appears to surface multiple perspectives while effectively referencing the same entity through different market endpoints.

The Business Perspective: A Commercial Bug

The failures described below are not due to misconfigured geo-targeting or incomplete localization. They are the predictable downstream consequence of a system optimized to resolve ambiguity through semantic completeness rather than commercial utility.

1. The Commercial Blind Spot

From a business standpoint, the goal of search is to facilitate action. AI Overviews, however, do not evaluate whether a cited source can be acted upon. They have no native concept of commercial harm.

When users are directed to out-of-market destinations, conversion probability collapses. These dead-end outcomes are invisible to the system’s evaluation loop and therefore incur no corrective penalty.

2. Geographic Signal Invalidation

Signals that once governed regional relevance – IP location, language, currency, and hreflang – were designed for ranking and serving. In generative synthesis, they function as weak hints that are frequently overridden by higher-confidence semantic matches selected upstream.

3. Zero-Click Amplification

AI Overviews occupy the most prominent position on the SERP. As organic real estate shrinks and zero-click behavior increases, the few cited sources receive disproportionate attention. When those citations are geographically misaligned, opportunity loss is amplified.

The Generative Search Technical Audit Process

To adapt, organizations must move beyond traditional visibility optimization towards what we would now call Generative Engine Optimization (GEO).

  1. Semantic Parity: Ensure absolute parity at the fact-chunk level across markets. Minor asymmetries can create unintended retrieval advantages.
  2. Retrieval-Aware Structuring: Structure content into atomic, extractable blocks aligned to likely fan-out branches.
  3. Utility Signal Reinforcement: Provide explicit machine-readable indicators of market validity and availability to reinforce constraints the AI does not infer reliably on its own, as in the markup sketch below.
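One way to express market validity is schema.org Offer markup with an explicit eligibleRegion. This is an illustrative sketch with placeholder values; it states the constraint in machine-readable form, though there is no guarantee any given generative system consumes the property today:

```json
{
  "@context": "https://schema.org",
  "@type": "Offer",
  "url": "https://www.example.com/us/pro-widget",
  "price": "49.00",
  "priceCurrency": "USD",
  "availability": "https://schema.org/InStock",
  "eligibleRegion": {
    "@type": "Country",
    "name": "US"
  }
}
```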

Conclusion: Where The Feature Becomes The Bug

Geographic leakage is not a regression in search quality. It is the natural outcome of search transitioning from transactional routing to informational synthesis.

From an engineering perspective, AI Overviews are functioning exactly as designed. Ambiguity triggers expansion. Completeness is prioritized. Semantic confidence wins.

From a business and user perspective, the same behavior exposes a structural blind spot. The system cannot distinguish between information that is factually correct and information that a consumer can actually act on.

This is the defining tension of generative search: A feature designed to ensure completeness becomes a bug when completeness overrides utility.

Until generative systems incorporate stronger notions of market validity and actionability, organizations must adapt defensively. In the AI era, visibility is no longer won by ranking alone. It is earned by ensuring that the most complete version of the truth is also the most usable one.


How Search Engines Tailor Results To Individual Users & How Brands Should Manage It

How many times have you seen different SERP layouts and results across markets?

No two people see the same search results, according to Google’s own documentation. No two users receive identical outputs from AI platforms either, even when using the same prompt. In a time of information overload, this raises an important question for global marketers: How do we manage and leverage personalized search experiences across multiple markets?

Today, clarity and transparency matter more than ever. Users have countless choices and distractions, so they expect experiences that feel relevant, trustworthy, and aligned with their needs in the moment. Personalization is now central to how potential customers discover, evaluate, and engage with brands.

Search engines have been personalizing results for years based on language, search behavior, device type, and technical elements such as hreflang. With the quick evolution of generative artificial intelligence (AI), personalization has expanded into summarized answers on AI platforms and hyper-personalized experiences that depend on internal data flows and processes.

This shift forces marketers to rethink how they measure visibility and business impact. According to McKinsey, 76% of users feel frustrated when experiences are not personalized, which shows how closely relevance and user satisfaction are linked.

At the same time, long-tail discovery increasingly happens outside of search engines, particularly on platforms like TikTok. Statista reports that 78% of global internet users now research brands and products on social media.

All of this is happening while most users know little about how search engines or AI systems operate.

Regardless of where people search, the implications extend far beyond algorithms. Personalization affects how teams collaborate, how data moves across departments, and how global organizations define success.

This article explores what personalization means today and how global brands can turn it into a competitive advantage.

From SERPs To AI Summaries

Search engines no longer return only lists of blue links and People Also Ask (PAA) modules. They now provide summarized information in AI Overviews and AI Mode, currently for informational queries.

Google often surfaces AI summaries first and URLs second, while continuously testing different layouts for mobile and desktop, as shown below.

Screenshot from search for [what is a nepo baby], Google, December 2025

Google’s Search Labs experiments, including features such as Preferred Sources, show how layouts and summaries change based on context, trust signals, and behavioral patterns.

Large language models (LLMs) add another layer. They adjust responses based on user context, intent, and sometimes whether the user has a free or paid account. Because users rarely get exactly what they need on the first attempt, they re-prompt the AI, creating iterative conversations where each instruction or prompt influences the next.

It is still unclear what prompts users to click through to a source or research it on search engines: curiosity, uncertainty, boredom, a call to action, or the model stating it does not know. Understanding this behavior will soon be as important as traditional click-through rate (CTR) analysis.

For global brands, the challenge is not simply keeping up with technology. It’s maintaining a consistent brand voice and value exchange across channels and markets when every user sees a different interpretation of the brand. Trust is now as important as visibility.

This landscape increases the importance of market research, segmentation, cultural insights, and competitive analysis. It also raises concerns about echo chambers, search inequality, and the barriers brands face when entering new markets or reaching new audiences.

Meanwhile, the long tail continues to shift to platforms like TikTok, where discovery works very differently from traditional search. And as enthusiasm for AI cools, many professionals believe we have entered the “trough of disillusionment” stage of the Gartner Hype Cycle, described by Jackie Fenn.

What Personalization Means Today

In marketing, personalization refers to tailoring content, offers, and experiences based on available data.

In search, it describes how search engines customize results and SERP features for individual users using signals such as:

  • Data patterns.
  • Inferred interests.
  • Location.
  • Search behavior.
  • Device type.
  • Language.
  • AI-driven memory (which is discussed below).

The goal of search engines is to provide relevant results and keep users engaged, especially as people now search across multiple channels and AI platforms. As a result, two people searching the same query rarely see identical results. For example:

  • A cuisine enthusiast searching for [apples] may see food-related content.
  • A tech-oriented user may see Apple product news.

SERP features can also vary across markets and profiles. People Also Ask (PAA) questions and filters may differ by region, language, or click behavior, and may not appear at all. For example, the query “vote of no confidence” displays different filters and different top results in Spain and the UK, and PAA does not appear in the UK version.

AI platforms push this further with session-based memory. Platforms like AI Mode, Gemini, ChatGPT, and Copilot handle context in a way that makes users feel they are having real conversations, with each prompt influencing the next. In some cases, results from earlier responses may also be surfaced.

A human-in-the-loop (HITL) approach is essential to evaluate, monitor, and correct outputs before using them.

How Personalization Technically Works

Personalization operates across several layers. Understanding these helps marketers see where influence is possible.

1. SERP Features And Layout

Google and Bing adapt their layouts based on history, device type, user engagement, and market signals. Featured Snippets, PAA modules, videos, forums, or Top Stories may appear or disappear depending on behavior and intent.

2. AI Overviews, AI Mode, And Bing Copilot

AI platforms can:

  • Summarize content from multiple URLs.
  • Adapt tone and depth based on user behavior.
  • Personalize follow-up suggestions.
  • Integrate patterns learnt within the session or even previous sessions.

Visibility now includes being referenced in AI summaries. Current patterns show this depends on:

  • Clear site and URL structure.
  • Factual accuracy.
  • Strong entity signals.
  • Online credibility.
  • Fresh, easily interpreted content.

3. Structured Data And Entity Consistency

When algorithms understand a brand, they can personalize results more accurately. Schema markup helps avoid entity drift, where regional websites are mistaken for separate brands.

Bing uses Microsoft Graph to connect brand data with the Microsoft ecosystem, extending the influence of structured data.
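A common pattern for holding a multi-market brand together as one entity is Organization markup that shares a stable @id across every regional site, with sameAs links to authoritative profiles. A minimal sketch, with placeholder URLs:

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "@id": "https://www.example.com/#organization",
  "name": "Example Brand",
  "url": "https://www.example.com/",
  "sameAs": [
    "https://www.linkedin.com/company/example-brand",
    "https://en.wikipedia.org/wiki/Example_Brand"
  ]
}
```

Regional pages referencing the same @id signal one brand with many market endpoints rather than several look-alike brands.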

4. Context Windows And AI Memory

LLMs simulate “memory” using context windows, which is the amount of information they can consider at once. This is measured in tokens, which represent words or parts of words. It is what makes conversations feel continuous.
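To make tokens concrete, the sketch below counts them with tiktoken, OpenAI’s open-source tokenizer; other vendors segment text differently, so treat the number as illustrative:

```python
# pip install tiktoken
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
text = "Personalization is now central to how customers discover and evaluate brands."
tokens = enc.encode(text)

# Each token is a word or word fragment; this is what the sentence
# consumes from the model's context window.
print(len(tokens))
```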

This has some important implications:

  • Semantic consistency matters.
  • Tone should be unified across markets.
  • Messaging needs to be coherent across content formats.

Once an AI system associates a brand with a specific theme, that context can persist for a while, although it is unclear for how long. This is probably why LLMs favor fresh content as a way to reinforce authority.

5. Recommenders

In ecommerce and content-heavy sites, recommenders show personalized suggestions based on behavior. This reduces friction and increases time on site.

Benefits Of Personalization

When personalization works, users and brands can benefit from:

  • Reduced user friction.
  • Increased user satisfaction.
  • Improved conversion rates.
  • Stronger engagement.
  • Higher CTR.

This can positively influence customer lifetime value. However, these benefits rely on consistent and trustworthy experiences across channels.

Potential Drawbacks

Alongside the benefits, personalization brings some challenges that marketers need to be aware of. These are not reasons to avoid personalization, but important considerations when planning global strategies. Consider:

  • Filter bubbles reduce exposure to diverse viewpoints and competing brands.
  • Privacy concerns increase as platforms rely on more behavioral and demographic data.
  • Reduced result diversity makes it harder for new or smaller brands to appear.
  • Global templates lose effectiveness when markets expect local nuance.

This means that brands pushing the same template or unified content across all markets lose even more effectiveness, because users in each market expect cultural nuance, local context, and recognition of different motivations. Purchase journeys also vary across markets, which is why hyper-personalization can be so effective.

It is probably more important than ever that brands spend time researching and planning to gain or maintain visibility in global markets, as well as strengthening their brand perception.

Managing Personalization Across Teams And Channels

At the moment, LLMs tend to favor strong, clearly structured brands and websites. If a brand is not well understood online, it is less likely to be referenced in AI summaries.

Successful digital and SEO projects rely on strong internal processes. When teams work in isolation, inconsistencies appear in data, content, and technical implementation, which then surface as inconsistencies in personalized search.

Common issues include:

  • Weak global alignment.
  • Translations that miss local relevance.
  • Conflicting schema markup.
  • Local pages ranking for the wrong intent.
  • Important local keywords being ignored.

Below is a framework to help organizations manage personalization across markets and channels.

1. Shared Objectives And Understanding Across Teams

Many search or marketing challenges can be prevented by building a shared understanding across teams of:

  • Business and project goals.
  • Issues across markets.
  • Search developments across markets.
  • Audience segmentation.
  • Integrated insights across all channels.
  • Data flows that connect global and local teams.
  • AI developments.

2. Strengthen The Technical Elements Of Your Website

Reinforce the technical elements of your website so that search engines and LLMs can easily understand your brand across markets and avoid entity drift:

  • Website structure.
  • Schema markup on the appropriate sections.
  • Strong on-page structure.
  • Strong internal linking.
  • Appropriate hreflang, as sketched below.
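For reference, a minimal hreflang cluster for two English-language markets might look like this in each page’s head. URLs are placeholders, and every variant in the cluster should list the full set, including itself:

```html
<link rel="alternate" hreflang="en-us" href="https://www.example.com/us/" />
<link rel="alternate" hreflang="en-gb" href="https://www.example.com/uk/" />
<link rel="alternate" hreflang="x-default" href="https://www.example.com/" />
```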

3. Optimize For Content Clusters And User Intent, Not Keywords

Structure is everything. Organizing content into clusters helps users and search engines understand the website clearly, which supports personalization.

4. Use First-Party Data To Personalize On-Site Experiences

Internal search and logged-in user experiences are important for understanding your users and building user journeys based on behavior. This helps with content relevance and stronger intent signals.

First-party data can support:

  • Personalized product recommendations.
  • Dynamic filters.
  • Auto-suggestions based on browsing behavior.

5. Maintain Cross-Channel Consistency

Search is only one personalized environment; a coherent experience supports stronger personalization and prevents fragmented journeys. Tone, structure, messaging, and data should remain consistent across:

  • Social platforms.
  • Email.
  • Mobile apps.
  • Websites and on-site search.

Clear and consistent USPs should be visible everywhere.

6. Strengthen Your Brand Perception

With so much online competition, brands benefit when their work is referenced positively across the internet. It is the old PR playbook: Focus on your strengths and publish well-researched work, with stats that are useful to your target users.

Conclusion: Turning Personalization Into An Advantage

Conway’s Law matters more than ever. The idea that organizations design systems that mirror their own communication structures is highly visible in search today. If teams operate in silos, those silos often show up in fragmented content, inconsistent signals, and mixed user experiences. Personalization then amplifies these gaps even further: the brand may not be cited on AI platforms, or the wrong information may spread.

Understanding how personalization works and how it shapes visibility, trust, and user behavior helps brands deliver experiences that feel coherent rather than confusing.

Success is no longer just about optimizing for Google. It is about understanding how people search, how AI interprets and summarizes content, how brands are referenced across the web, and how teams collaborate across channels to present a unified message.

Where every search result is unique, the brands that succeed will be the ones that coordinate, connect, and communicate clearly, both internally and across global markets, to help strengthen the perception of their brand.


Google Ads Using New AI Model To Catch Fraudulent Advertisers

Google published a research paper about a new AI model for detecting fraud in the Google Ads system that’s a strong improvement over what they were previously using. What’s interesting is that the research paper, dated December 31, 2025, says that the new AI is deployed, resulting in an improvement in the detection rate of over 40 percentage points and achieving 99.8% precision on specific policies.

ALF: Advertiser Large Foundation Model

The new AI is called ALF (Advertiser Large Foundation Model), the details of which were published on December 31, 2025. ALF is a multimodal large foundation model that analyzes text, images, and video, together with factors like account age, billing details, and historical performance metrics.

The researchers explain that many of these factors in isolation won’t flag an account as potentially problematic, but that comparing all of these factors together provides a better understanding of advertiser behavior and intent.

They write:

“A core challenge in this ecosystem is to accurately and efficiently understand advertiser intent and behavior. This understanding is critical for several key applications, including matching users with ads and identifying fraud and policy violations.

Addressing this challenge requires a holistic approach, processing diverse data types including structured account information (e.g., account age, billing details), multi-modal ad creative assets (text, images, videos), and landing page content.

For example, an advertiser might have a recently created account, have text and image ads for a well known large brand, and have had a credit card payment declined once. Although each element could exist innocently in isolation, the combination strongly suggests a fraudulent operation.”

The researchers address three challenges that previous systems were unable to overcome:

1. Heterogeneous and High-Dimensional Data
Heterogeneous data refers to the fact that advertiser data comes in multiple formats, not just one type. This includes structured data like account age and billing type and unstructured data like creative assets such as images, text, and video. High-dimensional data refers to the hundreds or thousands of data points associated with each advertiser, causing the mathematical representation of each one to become high-dimensional, which presents challenges for conventional models.

2. Unbounded Sets of Creative Assets
Advertisers could have thousands of creative assets, such as images, and hide one or two malicious ones among thousands of innocent assets. This scenario overwhelmed the previous system.

3. Real-World Reliability and Trustworthiness
The system needs to be able to generate trustworthy confidence scores that a business has malicious intent because a false positive would otherwise affect an innocent advertiser. The system must be expected to work without having to constantly retune it to catch mistakes.

Privacy and Safety

Although ALF analyzes sensitive signals like billing history and account details, the researchers emphasize that the system is designed with strict privacy safeguards. Before the AI processes any data, all personally identifiable information (PII) is stripped away. This ensures that the model identifies risk based on behavioral patterns rather than sensitive personal data.

The Secret Sauce: How It Spots Outliers

The model also uses a technique called “Inter-Sample Attention” to improve its detection skills. Instead of analyzing a single advertiser in a vacuum, ALF looks at “large advertiser batches” to compare their interactions against one another. This allows the AI to learn what normal activity looks like across the entire ecosystem, making it more accurate at spotting suspicious outliers that don’t fit normal behavior.
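The paper’s architecture details aside, the general mechanism can be sketched in a few lines of numpy: attention runs across the rows of a batch (one row per advertiser) rather than across tokens within one input, so each advertiser’s representation is re-expressed relative to the rest of the batch. This is a hand-rolled illustration of the idea, not Google’s implementation:

```python
import numpy as np

def inter_sample_attention(batch: np.ndarray) -> np.ndarray:
    """Attend across advertisers in a batch, not across tokens in one input.

    batch: (num_advertisers, feature_dim) advertiser representations.
    Each output row is a context-aware mixture of the whole batch, so an
    advertiser that looks unlike its peers stands out downstream.
    """
    d = batch.shape[-1]
    scores = batch @ batch.T / np.sqrt(d)           # advertiser-to-advertiser similarity
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the batch dimension
    return weights @ batch

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(63, 8))   # typical advertisers
outlier = rng.normal(6.0, 1.0, size=(1, 8))   # one advertiser far from the rest
contextualized = inter_sample_attention(np.vstack([normal, outlier]))
print(contextualized.shape)  # (64, 8)
```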

ALF Outperforms Production Benchmarks

The researchers explain that their tests show that ALF outperforms a heavily tuned production baseline:

“Our experiments show ALF significantly outperforms a heavily tuned production baseline while also performing strongly on public benchmarks. In production, ALF delivers substantial and simultaneous gains in precision and recall, boosting recall by over 40 percentage points on one critical policy while increasing precision to 99.8% on another.”

This result demonstrates that ALF can deliver measurable gains across multiple evaluation criteria under actual real-world production conditions, rather than just in offline or benchmarked environments.

Elsewhere they mention tradeoffs in speed:

“The effectiveness of this approach was validated against an exceptionally strong production baseline, itself the result of an extensive search across various architectures and hyperparameters, including DNNs, ensembles, GBDTs, and logistic regression with feature cross exploration.

While ALF’s latency is higher due to its larger model size, it remains well within the acceptable range for our production environment and can be further optimized using hardware accelerators. Experiments show ALF significantly outperforms the baseline on key risk detection tasks, a performance lift driven by its unique ability to holistically model content embeddings, which simpler architectures struggled to leverage. This trade-off is justified by its successful deployment, where ALF serves millions of requests daily.”

Latency refers to the amount of time the system takes to produce a response after receiving a request, and the researcher data shows that although ALF increases this response time relative to the baseline, the latency remains acceptable for production use and is already operating at scale while delivering substantially better fraud detection performance.

Improved Fraud Detection

The researchers say that ALF is now deployed to the Google Ads Safety system for identifying advertisers that are violating Google Ads policies. There is no indication that the system is being used elsewhere such as in Search or Google Business Profiles. But they did say that future work could focus on time-based factors (“temporal dynamics”) for catching evolving patterns. They also indicated that it could be useful for audience modeling and creative optimization.

Read the original PDF version of the research paper:

ALF: Advertiser Large Foundation Model for Multi-Modal Advertiser Understanding


Site Kit by Google integration available for all Yoast customers 

This first release of 2026 brings Site Kit by Google insights into your Yoast SEO Dashboard. After introducing the integration in phases throughout 2025, we are pleased to share that the rollout is now complete and available to all Yoast customers using WordPress. 

What you can see in your Yoast SEO Dashboard 

You can now view key performance data from Google Search Console and Google Analytics via Site Kit in your Yoast SEO Dashboard, without changing tools or tabs. These insights include search impressions, clicks, average click-through rate, average position, and organic sessions, which are combined with your Yoast SEO and readability scores so you can better understand how content quality relates to real search performance. 

Find opportunities faster 

The integration also surfaces your top performing content and search queries, helping you quickly spot which pages and topics are driving results and where improvements may have the most impact. Connecting Site Kit by Google is straightforward. Once connected, insights become available immediately, giving you faster access to the data you need to guide your SEO work. 

If you are interested in the technical background of this integration and our collaboration with Google, we share the full story on our developer blog. 

Get started 

Update to Yoast SEO 26.7 to start using Site Kit by Google insights in your Dashboard and streamline your workflow with key performance data in one place. For step-by-step guidance on enabling the integration, see our help center guide. 

If you would like to share your experience, you can provide feedback through our survey to help guide future improvements.  

The State of AEO & GEO in 2026 [Webinar]

How AI Search Is Reshaping Visibility & Strategy

AI search is rapidly changing how brands are discovered and how visibility is earned. 

As AI Overviews, ChatGPT, Perplexity, and other answer engines take center stage, traditional SERP rankings are no longer the only measure of success. 

For enterprise SEO leaders, the focus has shifted to understanding where to invest, which strategies actually move the needle, and how to prepare for 2026.

Join Pat Reinhart, VP of Services and Thought Leadership at Conductor, and Lindsay Boyajian Hagan, VP of Marketing at Conductor, as they unpack key insights from The State of AEO and GEO in 2026 Report. This session provides a clear look at how enterprise teams are adapting to AI-driven discovery and where AEO and GEO strategies are headed next.

Why Attend?

This webinar offers data-backed clarity on what is working in AI search today and what to prioritize moving forward. You will gain actionable insights to refine your strategy, focus resources effectively, and stay competitive as AI continues to reshape search in 2026.

Register now to access the latest guidance on growing AI visibility in 2026.

🛑 Can’t make it live? Register anyway, and we’ll send you the recording.

Powering up (and saving) the planet

Water shortages in Southern California made an indelible impression on Evelyn Wang ’00 when she was growing up in Los Angeles. “I was quite young, perhaps in first grade,” she says. “But I remember we weren’t allowed to turn our sprinklers on. And everyone in the neighborhood was given disinfectant tablets for the toilet and encouraged to keep flushing to a minimum. I didn’t understand exactly what was happening. But I saw that everyone in the community was affected by the scarcity of this resource.”

Today, as extreme weather events increasingly affect communities around the world, Wang is leading MIT’s effort to tackle the interlinked challenges of a changing climate and a burgeoning global demand for energy. Last April, after wrapping up a two-year stint directing the US Department of Energy’s Advanced Research Projects Agency–Energy (ARPA-E), she returned to the campus where she’d been both an undergraduate and a faculty member to become the inaugural vice president for energy and climate. 

“The accelerating problem of climate change and its countless impacts represent the greatest scientific, technical, and policy challenge of this or any age,” MIT President Sally Kornbluth wrote in a January 2025 letter to the MIT community announcing the appointment. “We are tremendously fortunate that Evelyn Wang has agreed to lead this crucial work.”

A time to lead

MIT has studied and worked on problems of climate and energy for decades. In recent years, with temperatures rising, storms strengthening, and energy demands surging, that work has expanded and intensified, spawning myriad research projects, policy proposals, papers, and startups. The challenges are so urgent that MIT launched several Institute-wide initiatives, including President Rafael Reif’s Climate Grand Challenges (2020) and President Kornbluth’s Climate Project (2024).   

But Kornbluth has argued that MIT needs to do even more. Her creation of the new VP-level post that Wang now holds underscores that commitment. 

Wang is well suited for the role. The Ford Professor of Engineering at MIT and former head of the Department of Mechanical Engineering, she joined the faculty in 2007, shortly after she completed her PhD at Stanford University. Her research centers on thermal management and energy conversion and storage, but she also works on nano-engineered surfaces and materials, as well as water harvesting and purification. Wang and her colleagues have produced a device based on nanophotonic crystals that could double the efficiency of solar cells—one of MIT Technology Review’s 10 breakthrough technologies of 2017. And the device she invented with Nobel laureate Omar Yaghi for extracting water from very dry air was named one of 2017’s top 10 emerging technologies by Scientific American and the World Economic Forum, and in 2018 earned her the Prince Sultan Bin Abdulaziz International Prize for Water. (See story on this water harvesting research in the January/February issue of MIT Technology Review.)

Wang has a deep knowledge of the Institute—and even deeper roots here. (See “Family Ties,” MIT Alumni News, March/April 2015.) Her parents met at MIT as PhD students from Taiwan in the 1960s; they were married in the MIT chapel. When Wang arrived at MIT in 1996 as a first-year student, her brother Alex ’96, MEng ’97, had just graduated and was working on a master’s degree in electrical engineering. Her other brother, Ben, would earn his PhD at MIT in 2007. She even met her husband, Russell Sammon ’98, at MIT. Apart from her time at ARPA-E, a very brief stint at Bell Labs, and sabbatical work at Google, she has spent her entire professional career at the Institute. So she has a rare perspective on the resources MIT can draw on to respond to the climate and energy challenge.

“The beating heart of MIT is innovation,” she says. “We are innovators. And innovation is something that will help us leapfrog some of the potential hurdles as we work toward climate and energy solutions. Our ability to innovate can enable us to move closer toward energy security, toward sustainable resource development and use.”

The prevailing innovative mindset at MIT is backed by a deep desire to tackle the problem. Many people on campus are passionate about climate and energy, says Wang. “That is why President Kornbluth made this her flagship initiative. We are fortunate to have so many talented students and faculty, and to be able to rely on our infrastructure. I know they will all step up to meet the challenges.” But she is quick to point out that the problems are too large for any one entity—including MIT—to address on its own. So she’s aiming to encourage more collaboration both among MIT researchers and with other institutions.

“If we want to solve this problem of climate change, if we want to change the trajectory in the next decade, we cannot continue to do business as usual,” she says. “That is what is most exciting about this problem and, frankly, is why I came back to campus to take this job.” 

Hand in hand

The coupling of climate and energy in Wang’s portfolio is strategic. “Energy and climate are two sides of the same coin,” she explains. “A major reason we are seeing climate change is that we haven’t deployed solutions at a scale necessary to mitigate the CO2 emissions from the energy sector. The ways we generate energy and manage emissions are fundamental to any strategy addressing climate change. At the same time, the world demands more and more energy—a demand we can’t possibly meet through a single means.”

What’s more, she contends that switching from fossil fuels to cleaner energy, while fundamental, is only part of the solution. “Zero-emissions and low-carbon approaches will not be enough to supply the necessary energy or to reverse our impact on the climate,” she says. “We need to consider the environmental impacts of these new fuels we develop and deploy. We need to use data analysis to move goods and energy more efficiently and intelligently. We need to consider raising more of our food in water and using food by-products and waste to help sequester carbon. In short, we need to do something truly transformational. That is the heart of the challenge.”

That challenge seems destined to grow more daunting in the coming years. There are still, Wang observes, areas of “energy poverty”—places where people cannot access sufficient energy to sustain their well-being. But solving that problem will only drive up energy production and consumption worldwide. The explosive growth of AI will likely do the same, since the huge data centers that power the technology require enormous quantities of energy for both computation and cooling. 

Wang believes that while AI will continue to drive electricity demand, it can also contribute to creating a more sustainable future. “We can use AI to develop climate and energy solutions,” she says. “AI can play a primary role in solution sets, can give us new and improved ways to manage intermittent loads in the energy grid. It can help us develop new catalysts and chemicals or help us stabilize the plasma we’ll use in nuclear fusion. It could augment climate and geospatial modeling that would allow us to predict the impact of potential climate solutions before we implement them. We could even use AI to reduce computational needs and thereby ease cooling demand.”

Change the narrative, change the culture  

MIT was humming with climate and energy research long before Wang returned to campus in 2025 after wrapping up her work at ARPA-E. Almost 400 researchers across 90% of MIT’s departments responded to President Reif’s 2020 Climate Grand Challenges initiative. The Institute awarded $2.7 million to 27 finalist teams and identified five flagship projects, including one to create an early warning system to help mitigate the impact of climate disasters, another to predict and prepare for extreme weather events, and an ambitious project to slash nearly half of all industrial carbon emissions. 

About 250 MIT faculty and senior researchers are now involved in the Climate Project at MIT, a campus-wide initiative launched in 2024 that works to generate and implement climate solutions, tools, and policy proposals. Conceived to bolster MIT’s already significant efforts as a leading source of technological, behavioral, and policy solutions to global climate issues, the Climate Project has identified six “missions”: decarbonizing energy and industry; preserving the atmosphere, land, and oceans; empowering frontline community action; designing resilient and prosperous cities; enabling new policy approaches; and wild cards, a catch-all category that supports development of unconventional solutions outside the scope of the other missions. Faculty members direct each mission. 

With so much climate research already underway, a large part of Wang’s new role is to support and deepen existing projects. But to fully tap into MIT’s unique capabilities, she says, she’s aiming to foster some cultural shifts. And that begins with identifying ways to facilitate cooperation—both across the Institute and with external partners—on a scale that can make “something truly transformational” happen. “At this stage, with the challenges we face in energy and climate, we need to do something ambitious.”

This solar thermophotovoltaic device Wang’s lab developed with Marin Soljačić converts solar heat that’s usually wasted into usable light, potentially doubling the efficiency of typical solar cells.
COURTESY OF THE RESEARCHERS
Wang’s group worked with Gang Chen’s lab to develop this highly transparent insulating silica aerogel. It transmits 95% of light, letting sunlight pass through easily as it retains solar heat.
COURTESY OF THE RESEARCHERS
This prototype of a two-stage water harvesting system developed by the Wang lab and collaborators can draw water from the air at humidity levels as low as 20%, using only sunlight or another source of low-grade heat.
ALINA LAPOTIN

In Wang’s view, getting big results depends on taking a big-picture, holistic approach that will require unprecedented levels of collaboration. “MIT faculty have always treasured their independence and autonomy,” she says. “Traditionally, we’ve tried to let 1,000 flowers bloom. And traditionally we’ve done that well, often with outstanding results. But climate and energy are systems problems, which means we need to create a systems solution. How do we bring these diverse faculty together? How do we align their efforts, not just in technology, but also in policy, science, finance, and social sciences?”  

To encourage MIT faculty to collaborate across departments, schools, and disciplines, Wang recently announced that the MIT Climate Project would award grants of $50,000 to $250,000 to collaborative faculty teams that work on six- to 24-month climate research projects. Student teams are invited to apply for research grants of up to $15,000. “We can’t afford to work in silos,” she says. “People from wildly diverse fields are working on the same problems and speaking different professional languages. We need to bring people together in an integrative way so we can attack the problem holistically.”

Wang also wants colleagues to reach beyond campus. MIT, she says, needs to form real, defined partnerships with other universities, as well as with industries, investors, and philanthropists. “This isn’t just an MIT problem,” she says. “Individual efforts and 1,000 flowers alone will not be enough to meet these challenges.”

Thinking holistically—and in terms of systems—will help focus efforts on the areas that will have the greatest impact. At a Climate Project presentation in October, Wang outlined an approach that would focus on building well-being within communities. This will begin with efforts to empower communities by developing coastal resilience, to decarbonize ports and shipping, and to design and build data centers that integrate smoothly and sustainably with nearby communities. She encouraged her colleagues to think in terms of big-picture solutions for the future and then to work on the components needed to build that future. 

“As researchers, we sometimes jump to a solution before we have fully defined the problem,” she explains. “Let’s take the problem of decarbonization in transportation. The solution we’ve come up with is to electrify our vehicles. When we run up against the problem of the range of these vehicles, our first thought is to create higher-density batteries. But the real problem we’re facing isn’t about batteries. It’s about increasing the range of these vehicles. And the solution to that problem isn’t necessarily a more powerful battery.”

Wang is confident that her MIT colleagues have both the capacity and the desire to embrace the holistic approach she envisions. “When I was accepted to MIT as an undergraduate and visited the campus, the thing that made me certain I wanted to enroll here was the people,” she recalls. “They weren’t just talented. They had so many different interests. They were passionate about solving big problems. And they were eager to learn from one another. That spirit hasn’t changed. And that’s the spirit I and my team can tap into.”

Wang believes MIT and other institutions working on climate and energy solutions also need to change how we talk about the challenge. “Too often the narrative around climate is steeped in doom and gloom,” she says. “The underlying issue here is our well-being. That’s what we care about, not the climate. The goal of any climate project is to build and protect well-being. How can we help communities thrive, empower people to live as they wish, even as the climate is changing? How can we create conditions of resilience, sustainability, and prosperity? That is the framework I would like us to build on.” For example, in areas where extreme weather threatens homes or rising temperatures are harming human health, we should be developing affordable technologies that make dwellings more resilient and keep people cooler.

Wang’s colleagues at MIT concur with her assessment of the mission ahead. They also have a deep respect for her scholarship and leadership. “I couldn’t think of a better person to represent MIT’s diverse and powerful ability to attack climate,” says Susan Solomon, the Lee and Geraldine Martin Professor of Environmental Studies at MIT and recipient of the US National Medal of Science for her work on the Antarctic ozone hole. “Communicating MIT’s capabilities to provide what the nation needs—not only in engineering but also in … economics, the physical, chemical, and biological sciences, and much more—is an immense challenge.” But she’s confident that Wang will do it justice. “Evelyn is a consummate storyteller,” she says.   

“She’s tremendously quick at learning new fields and driving toward what the real nuggets are that need to be addressed in solving hard problems,” says Elsa Olivetti, PhD ’07, the Jerry McAfee Professor in Engineering and director of the MIT Climate Project’s Decarbonizing Energy and Industry mission. “Her direct, meticulous thinking and leadership style mean that she can focus teams within her office to do the work that will be most impactful at scale.”

Wang’s experience at ARPA-E is expected to be especially useful. “The current geopolitical situation and the limited amount of research funding available relative to the scale of the climate problem pose formidable challenges to bringing MIT’s strengths to bear on the problem,” says Rohit Karnik, director of the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS) and a collaborator with Wang on numerous projects and initiatives since both joined the mechanical engineering faculty in 2007. “Evelyn’s leadership experience at MIT and in government, her ability to take a complex situation and define a clear vision, and her passion to make a difference will serve her well in her new role.” 

Wang’s new MIT appointment is seen as a good thing beyond the Institute as well. “A role like this requires a skill set that is difficult to find in one individual,” says Krista Walton, vice chancellor for research and innovation at North Carolina State University. Walton and Wang collaborated on multiple projects, including the DARPA work that produced the device for extracting water from very dry air based on the original prototype Wang co-developed. “You need scientific depth, an understanding of the federal and global landscape, a collaborative instinct and the ability to be a convener, and a strategic vision,” Walton says—and she can’t imagine a better pick for the job. 

“Evelyn has an extraordinary ability to bridge fundamental science with real-world application,” she says. “She approaches collaboration as a true partnership and not as a transaction.”

A challenging funding climate   

Climate scientists explore broad swaths of time, tracking trends in temperature, greenhouse gases, volcanic activity, vegetation, and more across hundreds, thousands, and even millions of years. Even average temperatures and precipitation levels are calculated over periods of three decades. 

Wang and grad students Jan Luka Čas, SM ’25, and Briana Cuero examine a small test device for a hydrogel-based thermal battery they are developing with fellow PhD student Liliosa Cole.
KEN RICHARDSON

But in the realm of politics, change happens much faster, prompting sudden and sometimes startling shifts in culture and policy. The current US administration has proposed widespread budget cuts in climate and energy research. These have included slicing more than $1.5 billion from the National Oceanic and Atmospheric Administration (NOAA), canceling multiple climate-related missions at NASA, and shuttering the US Global Change Research Program (USGCRP), the interagency program responsible for publishing the National Climate Assessment. The Trump administration submitted a budget request that would cut the National Science Foundation budget from more than $9 billion to just over $4 billion for 2026. The New York Times reported that NSF grant funding for STEM education from January through mid-May 2025 was down 80% from its 10-year average, while NSF grant awards for math, physics, chemistry, astronomy, and materials science research were down 67%. In September, the US Department of Energy announced it had terminated 223 projects that “did not adequately advance the nation’s energy needs, were not economically viable, and would not provide a positive return on investment of taxpayer dollars.” Among the agencies affected is ARPA-E, which Wang directed before returning to MIT. MIT research programs that rely on government funding will also feel the impact of the cuts.

Acknowledging the difficulties she and MIT may face in the present moment, Wang still prefers to look forward. “Of course this is a challenging time,” she says. “There are near-term challenges and long-term challenges. We need to focus on those long-term challenges. As President Kornbluth has said, we need to continue to advocate for research and education. We need to pursue long-term solutions, to follow our convictions in addressing problems in energy and climate. And we need to be ready to seize the opportunities that reside in these long-term challenges.”

Wang also sees openings for short-term collaboration—areas where MIT and the current administration can find common ground and goals. “There is still a huge area of opportunity for us to align our interests with those of this administration,” she says. “We can move the needle forward together on energy, on national security, on minerals, on economic competitiveness. All these are interests we share, and there are pathways we can follow to meet these challenges to our nation together. MIT is a major force in the nuclear space, in both fission and fusion. These, along with geothermal, could provide the power base we need to meet our energy demands. There are significant opportunities for partnerships with this or any administration to unlock some of these innovations and implement them.”

A moonshot factory

While Wang views herself as a researcher and an academic, her government experience should prove especially useful in her VP role at MIT. In her two years as director of ARPA-E, she coordinated a broad array of the US Department of Energy’s early-stage research and development in energy generation, storage, and use. “I think I had the best job in government,” she says. Designed to operate at arm’s length from the Department of Energy, ARPA-E searches for high-risk, high-reward energy innovation projects. “More than one observer has called ARPA-E a moonshot factory,” she says.

Seeking out and facilitating “moonshot”-worthy projects at a national and sometimes global scale gave Wang a broader lens through which to view issues of energy and climate. It also taught her that big ideas do not translate into big solutions automatically. “I learned what it takes to make an impact on energy technology at ARPA-E, and I will be forever grateful,” she says. “I saw how game-changing ideas can take a decade to go from concept to deployment. I learned to appreciate the diversity of talent and innovation in the national ecosystem composed of laboratories, startups, and institutions. I saw how that ecosystem could zero in to identify real problems, envision diverse pathways, and create prototypes. And I also saw just how hard that journey is.”


Climate and energy research at MIT

MIT researchers are tackling climate and energy issues from multiple angles, working on everything from decarbonizing energy and industry to designing resilient and prosperous cities. Find out more at climateproject.mit.edu.


While MIT is already an important element in that ecosystem, Wang and her colleagues want the Institute to play an even more prominent role. “We can be a convener and collaborator, first across all of MIT’s departments, and then with industry, the financial world, and governments,” she says. “We need to do aggressive outreach and find like-minded partners.” 

“Although the problems of climate and climate change are global, the most effective way MIT can address them is locally,” said Wang at the October presentation of the MIT Climate Project. “Working across schools and disciplines, collaborating with external partners, we will develop targeted solutions for individual places and communities—solutions that can then serve as templates for other places and communities.” But she also cautions against one-size-fits-all solutions. “Solar panels, for example, work wonderfully, but only in areas that have sufficient space and sunlight,” she explains. “Institutions like MIT can showcase a diversity of approaches and determine the best approach for each individual context.”

Most of all, Wang wants her colleagues to be proactive. “Because MIT is a factory of ideas, perhaps even a moonshot factory, we need to think boldly and continue to think boldly so we can make an impact as soon as possible,” she says. She also wants her colleagues to stay hopeful and not feel daunted by a challenge that can at times feel overwhelming. “We will build pilots, one at a time, and demonstrate that these projects are not only possible but practical,” she says. “And that is how we will build a future everyone wants to live in.”