From climate-warming pollutant to useful material

Although it is less abundant than carbon dioxide, methane gas contributes disproportionately to global warming. Its molecular structure, a single carbon atom bound to four hydrogen atoms, makes it a potentially useful building block for products that could keep this carbon out of the atmosphere, but it’s difficult to get methane to react with other molecules under ordinary conditions.

Now a catalyst designed by MIT chemical engineer Michael Strano and colleagues could help solve that problem.

The catalyst has two components. The first, a mineral called a zeolite, converts methane to methanol. The second, a natural enzyme called alcohol oxidase, converts the methanol to formaldehyde. With the addition of urea, a nitrogen-containing molecule found in urine, the formaldehyde can be turned into a polymer used in particleboard, textiles, and other products.

The researchers say this catalyst could act to seal cracks in pipes transporting natural gas, a common source of methane leakage. It could also be used to coat surfaces that are exposed to methane gas, producing polymers that could be collected for use in manufacturing.

“Other systems operate at high temperature and high pressure,” says MIT postdoc Jimin Kim, lead author with Daniel Lundberg, PhD ’24, of a paper on the work. That takes money and energy. But, she says, “I think our system could be very cost-effective and scalable.” 

This is your brain on movies

The cerebral cortex contains regions devoted to processing different types of sensory information, including visual and auditory input. Now researchers led by Robert Desimone, director of MIT’s McGovern Institute for Brain Research, have developed the most comprehensive picture yet of what all these regions do. They achieved this by analyzing data collected as people performed a surprisingly complex task: watching a movie.

Over the past few decades, scientists have identified many networks that are involved in this kind of processing, often using functional magnetic resonance imaging (fMRI) to measure brain activity as subjects perform a single task (such as looking at faces) or do nothing. The problem is that while people are resting, many parts of the cortex may not be active at all.

“By using a rich stimulus like a movie, we can drive many regions of the cortex very efficiently. For example, sensory regions will be active to process different features of the movie, and high-level areas will be active to extract semantic and contextual information,” says Reza Rajimehr, a research scientist in the McGovern Institute and the lead author of a paper on the work. “By activating the brain in this way, now we can distinguish different areas or different networks based on their activation patterns.”

Using high-resolution fMRI data collected by an NIH-funded consortium, the researchers analyzed brain activity from 176 people as they watched a variety of movie clips. They then used a machine-learning algorithm to analyze the activity patterns of each brain region, identifying 24 networks with distinct activity patterns and functions. Some are located in sensory areas such as the visual or auditory cortex, while others respond to features such as actions, language, or social interactions. The researchers also identified networks that hadn’t been seen before, including one in the prefrontal cortex that appears highly responsive to visual scenes; it was most active in response to pictures of scenes within the movie frames.

Three of the networks they found are involved in “executive control” and were most active during transitions between clips. The researchers also observed that when networks specific to a particular feature were very active, the executive control networks were mostly quiet, and vice versa.

“Whenever the activations in domain-specific areas are high, it looks like there is no need for the engagement of these high-level networks,” Rajimehr says. “But in situations where perhaps there is some ambiguity and complexity in the stimulus, and there is a need for the involvement of the executive control networks, then we see that these networks become highly active.”

The researchers hope that their new map will serve as a starting point for more precise study of what each of these networks is doing. For example, within the social processing network, they have found regions that are specific to processing social information about faces and bodies.

“This is a new approach that reveals something different from conventional approaches in neuroimaging,” says Desimone. “It’s not going to give us all the answers, but it generates a lot of interesting ideas.” 

Laser imaging peers deeper into living tissue

Metabolic imaging is a valuable noninvasive method for studying living cells with laser light, but it’s been constrained by the way light scatters when it shines into tissue, limiting the resolution and depth of penetration. MIT researchers have developed a new technique that more than doubles the usual depth limit while boosting imaging speeds, yielding richer and more detailed images.

This technique does not require samples to be sliced and stained with contrast dyes. Instead, when a specialized laser shines light deep into tissues, certain molecules within them emit light of different colors, revealing molecular contents and cellular structures. By using a recently developed fiber shaper—a device that tunes the color and pulses of light by bending an optical fiber—the researchers can minimize scattering and maximize the signal. This allows them to see much further into tissue and capture clearer images. In tests, the light was able to penetrate more than 700 micrometers into a sample, whereas the best previous techniques reached about 200 micrometers.

This method is particularly well suited for applications like cancer research, tissue engineering, drug discovery, and the study of immune responses. “It opens new avenues for studying and exploring metabolic dynamics deep in living biosystems,” says Sixian You, an assistant professor of EECS and senior author of a paper on the technique.

Recent books from the MIT community

Differential Privacy
By Simson L. Garfinkel ’87, PhD ’05 
MIT PRESS, 2025, $18.95

Small, Medium, Large: How Government Made the US into a Manufacturing Powerhouse
By Colleen A. Dunlavy, PhD ’88  
POLITY BOOKS, 2024, $29.95

The Miraculous from the Material: Understanding the Wonders of Nature 
By Alan Lightman, professor of the practice of the humanities 
PANTHEON, 2024, $36

The Path to Singularity: How Technology Will Challenge the Future of Humanity
By J. Craig Wheeler ’65, with a foreword by Neil deGrasse Tyson 
PROMETHEUS BOOKS, 2024, $32.95

Assembly by Design: The United Nations and Its Global Interior
By Olga Touloumi, SM ’06
UNIV. OF MINNESOTA PRESS, 2024, $35

The Finite Element Method: Its Basis and Fundamentals 
By O.C. Zienkiewicz, R.L. Taylor, and Sanjay Govindjee ’86 
BUTTERWORTH-HEINEMANN, 2024, $286.99

Where Biology Ends and Bias Begins: Lessons on Belonging from Our DNA 
By Shoumita Dasgupta ’97 
UNIV. OF CALIF. PRESS, 2025, $29.95

A Moving Meditation: Life on a Cape Cod Kettle Pond 
By Stephen G. Waller ’73 
BRIGHT LEAF, 2023, $24.95


Send book news to MITAlumniNews@technologyreview.com or 196 Broadway, 3rd Floor, Cambridge, MA 02139

Charts: U.S. Retail Ecommerce Sales Q4 2024

New data from the U.S. Department of Commerce (PDF) shows that retail ecommerce growth continues to outpace brick-and-mortar. In the fourth quarter of 2024, total U.S. domestic retail sales reached $1.88 trillion, a modest 1.8% increase from Q3. Online shopping showed stronger growth, with sales climbing to $308.9 billion, a 2.7% increase over the prior quarter.

According to the DoC, ecommerce sales are for “goods and services where the buyer places an order (or the price and terms of the sale are negotiated) over an Internet, mobile device, extranet, electronic data interchange network, electronic mail, or other comparable online system. Payment may or may not be made online.”

Ecommerce accounted for 16.4% of total U.S. retail sales in Q4 2024, up slightly from 16.3% in the prior quarter.
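
As a quick back-of-the-envelope check, that share follows directly from the two sales figures above, with the $1.88 trillion total written as $1,880 billion (a sketch of the arithmetic; rounding explains the one-decimal result):

    \[ \frac{\$308.9\text{ billion}}{\$1{,}880\text{ billion}} \approx 0.164 = 16.4\% \]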

The DoC reports U.S. retail ecommerce sales in Q4 2024 grew by 9.4% compared to Q4 2023, while total quarterly retail sales experienced a 3.8% annual rise over the same period.

Mullenweg Rebuffs Plea To Restore Automattic’s WordPress Core Contributions

A WordPress developer pleaded with Matt Mullenweg at WordCamp Asia 2025, asking him to restore Automattic’s contributions to the WordPress core. Mullenweg apologized and said it’s not up to him; it’s up to WP Engine to drop its lawsuit, and he encouraged the community to put pressure on WP Engine.

Automattic’s Scaled-Back WordPress Contributions

Automattic announced in January 2025 that it was scaling back contributions to the WordPress core to those related to security and critical updates. Contributions that would otherwise have gone to core would be diverted to for-profit initiatives related to Automattic and WordPress.com.

Automattic attributed its January 2025 decision to WP Engine’s lawsuits:

“We’ve made the decision to reallocate resources due to the lawsuits from WP Engine. This legal action diverts significant time and energy that could otherwise be directed toward supporting WordPress’s growth and health. We remain hopeful that WP Engine will reconsider this legal attack, allowing us to refocus our efforts on contributions that benefit the broader WordPress ecosystem.”

WP Engine’s lawsuits, however, were a response to Matt Mullenweg’s WordCamp US 2024 statements and other actions against WP Engine (like the WP Engine Tracker website). A federal judge has since sided with WP Engine and granted its request for a preliminary injunction against Automattic and Mullenweg.

WordCamp Attendee Urges Mullenweg To Reinstate Core Contributions

A WordCamp Asia 2025 attendee stepped up during the Q&A portion of the conference and shared his concerns, as a business owner and a plugin developer, about the stagnation of WordPress core development.

He said:

“Hi Matt. So this is not about a question, but I am a bit concerned about like if I see that the last five years or even ten years Automattic is the biggest core contributor in the code base and everything. So it’s not actually biggest, maybe 60%, 70% of the commit… as a company, Automattic do that.

So you recently published in a blog post that you are pulling out all the contribution and everything. So as a developer, as a business owner, …my whole business depends on WordPress. We build WordPress plugins, I think if there is no Automattic in the core contribution, the whole development will be super slow.

I want to request you to reconsider that, and at least in the core development maybe you can make some changes, give more resources in the core. Because it’s complicated, …someone needs to work and I think Automattic has lots of resources, experienced people in there, so I want to request you to reconsider your position and give more developers to the core.”

Matt Mullenweg States Condition For Restoring Core Contributions

Mullenweg responded that Automattic is spending millions of dollars to defend itself against WP Engine. He insisted that the decision to restore Automattic’s core contributions hinges on WP Engine dropping its lawsuits, and he encouraged the developer to take the request to WP Engine.

Mullenweg answered:

“Yeah, thank you. Well, it’s definitely not a situation I want to be in. As we said, we’re pausing things. But very, very excited to return to having all those hundred-ish folks back doing some of the work we were doing before.

But right now we’re facing not just a maker and taker program problem… but maker-attacker. So well Automattic’s having to spend millions of dollars, per month sometimes, to defend against these attacks from WP Engine and with the court injunction, it’s just hard to be both be motivated and to just spare the resources to contribute so much.

Now, they could end it tomorrow. And I would love to welcome WP Engine back into the fold, back at WordCamp and everything. But we can’t end it, we can only defend it, you know, to all the legal attacks and they are increasing actually. And they’re coming after me personally too. As soon as they stop that, we’ll get back to it.

So please, I can’t stop it. Ask them.”

Mullenweg Asks Audience To Pressure WP Engine To Drop Lawsuit

The person asking the question said he understood Mullenweg’s position but insisted that, as an end user, he wants the software to continue to thrive. For that reason, he pleaded for Automattic to find a way to restore core contributions.

Mullenweg answered the developer’s second plea and asked the audience to pressure WP Engine to drop the lawsuit:

“I can’t until the lawsuit is over. So if there’s anything y’all can do to put pressure for the lawsuit to end, that would be the fastest thing to get our contributions back.”

He ended his response with a smile, saying:

“So… sorry about that.”

Concern Over Cuts To Core Contribution

The WordPress developer expressed deep concern and anxiety about the pace of WordPress core development. He emphasized that Automattic has historically provided a significant portion of core contributions and feared that without its support, WordPress development would slow significantly, impacting his business and those of others who rely on the platform.

Matt Mullenweg’s response did not directly address the WordPress developer’s plea to reconsider Automattic’s core contribution cuts. His answer framed the decision to restore core contributions as out of his control because it is dependent on WP Engine dropping its lawsuit. He stated that the lawsuit costs Automattic millions of dollars.

Mullenweg’s main points in his response to restoring Automattic’s core contributions were:

  • Automattic’s reduced contributions result from the financial and legal burden of defending against WP Engine’s lawsuit.
  • WP Engine’s legal actions make it difficult for Automattic to contribute at previous levels.
  • He urged the audience to pressure WP Engine to drop the lawsuit.

Watch the question-and-answer segment at the 6:21:32 mark.

73% Of Marketers Use Generative AI, Consumer Acceptance Up

Recent studies by Gartner and Adobe show that generative AI is becoming a key tool in marketing.

Almost three-quarters of marketing teams now use GenAI, and most consumers are comfortable with AI in advertising.

AI Adoption In Marketing

A survey by Gartner of 418 marketing leaders found that 73% of marketing teams use generative AI.

However, 27% of CMOs say their organizations have limited or no use of GenAI in their marketing campaigns.

Correlation With Top Performers

Marketing teams that consistently exceed targets and meet customer acquisition goals are adopting AI faster than competitors.

Greg Carlucci, Senior Director Analyst in the Gartner Marketing Practice, states:

“The most successful marketing organizations are leading the way when it comes to GenAI adoption.”

Most marketers are using GenAI for:

  • Creative development (77%)
  • Strategy work (48%)
  • Campaign evaluation (47% reporting benefits)

Challenges With Generative AI

Despite spending almost half their budgets on campaigns, 87% of CMOs faced performance problems last year, and nearly half had to end underperforming campaigns early.

The Gartner study found:

“On average, 87% of CMOs report they experienced campaign performance issues in the last 12 months, with 45% reporting that they sometimes, often, or always had occasion to terminate campaigns early in the last year due to poor performance.”

CMOs identified several departments as barriers to their success:

  • Finance (31%)
  • Executive leadership (26%)
  • Sales (26%)

Opportunities With Generative AI

Adobe’s research highlights personalization as the primary AI opportunity for marketers.

Heather Freeland, Chief Brand Officer at Adobe, notes:

“Across all industries, there is an insatiable demand for content as customers expect every encounter with a brand to be personalized.”

She adds:

“Just when this challenge seemed insurmountable, the emergence of generative AI is presenting creative and marketing teams with a new way to keep pace with customer demands while also breaking through with their brands.”

The study finds that 97% of marketers believe mass personalization is achievable with AI, but most find it challenging without appropriate tools.

AI Acceptance Among Consumers

Consumers say that knowing content was created by AI either makes them more engaged or does not change their engagement at all.

Adobe’s study found:

Three in four consumers surveyed agree that knowing content was AI-produced would either improve or not impact their likelihood of engaging with it.

Consumers are even willing to share their data for a better AI-driven experience.

Adobe’s study finds the top data points consumers are willing to share include:

“… past purchases (56%), products they’ve viewed (52%), their gender (47%), age (41%), and language (35%).”

Generational Differences

Different age groups prefer personalization in different channels.

According to Adobe’s research:

“Gen Z respondents show a higher affinity for personalized content from the consumer electronics industry, particularly music (45%) and video games (43%)…

This contrasts with Baby Boomers, who prefer personalization in retail industry content, specifically from grocery stores (46%).”

The study also found:

“Millennials prefer personalized email campaigns (45%) and website content (40%), while Gen Z values social media personalization (51%).”

Measurable Results

Adobe reports that the implementation of GenAI tools delivered performance improvements.

Its report states:

“… in one of our first generative AI-powered email tests, we used the tool to quickly build and test five versions of an Adobe Photoshop email. It delivered a more than 10% increase in click-through rates, and a subsequent test reported a 57% increase in click rates for an Adobe Illustrator email.”

Additionally:

“Testing scale and speed transformed our approach to content optimization, significantly enhancing our marketing performance and efficiency.”

What This Means

Generative AI is shifting from a novel technology to a standard practice within marketing.

Marketing departments are facing tighter budgets while consumer demand for personalized content grows. Generative AI offers a potential solution to create personalized content at scale.

Further, using AI to personalize marketing messages is unlikely to hurt consumer perception of your brand. Some marketers believe it may even improve retention.

Adobe’s research suggests:

“Over one in four (26%) marketer respondents agree that AI-powered personalization will increase consumer brand loyalty.”

If you want to incorporate AI into your advertising strategy but are unsure where to start, data suggests that the best approach is to enhance personalization.


Featured Image: Frame Stock Footage/Shutterstock

The State Of AI Chatbots And SEO

Last week, I published a meta-analysis of AI Overviews and their impact on SEO.

Today, I publish an analysis of the research on AI chatbots and their potential impact on customer acquisition and purchase decisions.


I’ve analyzed 14 studies and research papers to answer five key questions:

    1. How valuable is AI chatbot visibility?
    2. How can you grow your AI chatbot visibility?
    3. How are people searching on AI chatbots?
    4. What challenges are associated with AI chatbots?
    5. Where are AI chatbots headed?

This analysis is perfect for you if you:

  • Are unsure about whether to invest in AI chatbot visibility.
  • Want an overview of the state of AI chatbots.
  • Look for ways to optimize for AI chatbots.

I don’t include AI Overviews in this analysis since I’ve covered them in depth in last week’s Memo.

Sources: the 14 studies and research papers are listed in the linked spreadsheet (source-table image credit: Kevin Indig).

How Valuable Is AI Chatbot Visibility?

While AI chatbot traffic currently represents a tiny percentage of overall traffic, the data shows early evidence for the value of citations and mentions.

AI chatbot adoption is skyrocketing, referral traffic to websites is growing, and traffic quality is high.

Adoption

ChatGPT has over 400 million weekly users as of January 2025.1

Semrush, 12/24: Most ChatGPT users are from the U.S. (25%) or India (12%), followed by Brazil, the UK, and Germany. 70% are male, and over 50% are between 18 and 34 years old.

Higher Visibility, 02/25: 71.5% of consumers use ChatGPT for searching but complementary to Google, not as a replacement.

Ahrefs, 02/25: 63% of websites receive at least some traffic from AI sources. Only 0.17% of total visits came from AI Chatbots, with top sites achieving up to 6%.

  • 98% of AI traffic comes from three AI chatbots: ChatGPT (> 50%), Perplexity (30.7%), and Gemini (17.6%).
  • Smaller sites get proportionally more visits from AI.

Semrush, 02/25: The generative AI market was valued at $67 billion in 2024 and is expected to grow annually by 24.4% through 2030.

Referral Traffic

Semrush, 12/24: ChatGPT referrals to websites grew by 60% between June and October.

Semrush, 02/25: ChatGPT’s reach has expanded dramatically, sending traffic to over 30,000 unique domains daily in November 2024, up from less than 10,000 in July.

  • Online services, education, and mass media are getting the most referral traffic from ChatGPT after filtering out authentication URLs. Retail, finance, and healthcare show lower volumes.

Growth Memo, 02/25: The quality of AI chatbot traffic is superior in several key metrics:

  • The average session duration is 10.4 minutes for AI chatbot referrals versus 8.1 minutes for Google traffic.
  • Users view more pages: 12.4 pages on average for AI chatbot referrals compared to 11.8 for Google traffic.

Impact On Purchase Decisions:

Adobe, 10/24: 25% of Britons use AI while shopping online.

  • AI usage rose 10x between July and September, to 10 billion visits to UK retail websites spanning ~100 million products.
  • Most shoppers are looking for deals:

In an Adobe survey of 5,000 U.S. consumers, 7 in 10 respondents who have used generative AI for shopping believe it enhances their experience. Additionally, 20% of respondents turn to generative AI to find the best deals, followed by quickly finding specific items online (19%) and getting brand recommendations (15%).

Semrush, 02/25: 46% of ChatGPT queries use the Search feature.

The research paper “A comparative study on the effect of ChatGPT recommendation and AI recommender systems on the formation of a consideration set” by Chang et al. looked at 471 consumers to understand:

  • Whether ChatGPT impacts consumer choices.
  • The process that impacts choices.
  • The impact on products with low-brand awareness vs. high-brand awareness.

Results:

  • ChatGPT does influence the consumer purchase journey and products recommended by ChatGPT are more likely to be adopted.
  • Products with low brand awareness see higher trust after a recommendation from ChatGPT.

My Take

  • ChatGPT had 560 million unique worldwide visitors in December 2024, compared to Google’s 6.5 billion. For comparison, that’s still small but about the size of X/Twitter today.
  • ChatGPT sending more referral traffic to a diverse list of domains is probably a strategic move to win the web over and establish itself more as an alternative to Google. I don’t think OpenAI has to do that. I think they strategically chose to.
  • So far, it seems young men in the U.S., BRIC, and Europe are the major users of ChatGPT. If that’s your target audience, optimizing for AI chatbot visibility should be a higher priority.
  • To be crystal clear, I don’t think anybody has to optimize for AI chatbot visibility. I’m confident that most industries will be fine doing classic SEO for years to come. Some will even be fine in a decade. However, you can’t unsee the rapid adoption, which leads us to a situation where two things are true: classic SEO still works, and there is a first-mover advantage on AI chatbots.

How Can You Grow Your AI Chatbot Visibility?

Improving AI chatbot visibility is a mix of known and new levers.

Crawlability

Being visible on AI chatbots starts with being visible to their crawlers. Crystal Carter, head of SEO communications at Wix, calls this “retrievability.”

Groomed XML sitemaps, strong internal linking, fast server response, and clean HTML are a good start.

LLM crawlers are less forgiving than Google when it comes to JavaScript and client-side rendering for critical SEO components. Avoid them at all costs.
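
As a minimal sketch, a robots.txt that explicitly welcomes the major LLM crawlers could look like the following. The user-agent tokens GPTBot (OpenAI) and PerplexityBot (Perplexity) are publicly documented crawler names, but verify them against each vendor’s current documentation before relying on this:

    # OpenAI's crawler
    User-agent: GPTBot
    Allow: /

    # Perplexity's crawler
    User-agent: PerplexityBot
    Allow: /

    # Default rules for everything else
    User-agent: *
    Allow: /

    Sitemap: https://www.example.com/sitemap.xml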

Brand Strength

Ziff Davis, 11/24: A Ziff Davis study compares Domain Authority in curated (OpenWebText, OpenWebText2) with uncurated public web indices (Common Crawl, C4) to investigate how major AI companies like OpenAI, Google, and Meta trained their large language models. The unsurprising conclusion is that AI developers prefer curated text to train their models, naturally giving commercial publishers more visibility.

Semrush, 12/24: Google tends to show larger domains, ChatGPT smaller ones. The opposite is true for transactional searches: SearchGPT prefers larger domains, Google smaller ones.

Seer, 01/25: Backlinks showed no correlation with AI chatbot visibility.

Organic Ranks

Seer, 01/25: Brands ranking on page 1 of Google showed a strong correlation (~0.65) with LLM mentions. Bing rankings also mattered, but a little less (~0.5–0.6).

Semrush, 02/25: The overlap between Google, Perplexity, and ChatGPT search is low (25-35% on average). However, the overlap between ChatGPT search and Bing is much higher (average = 7 domains) than with Google (4 domains).

Go Off-Google

Semrush, 02/25: YouTube is the third largest domain by referral traffic from ChatGPT. Facebook, LinkedIn, and GitHub are in the top 10.

Growth Memo, 02/25: Amazon, eBay, and Walmart dominate in Google Search just as much as in AI chatbots.

My Take

  • There is a big question of how important backlinks are for AI chatbot visibility, and it’s a trap to assume they have a direct impact. The way I read the data, backlinks help with Google/Bing visibility, which passively translates to AI chatbot visibility. They might also help with LLM crawler discoverability. So they’re still important, but not as much as the content itself.
  • The biggest lever seems to be citable content on and off of Google: Industry reports with exclusive research and data, original surveys and case studies, and thought leadership content from recognized experts.
  • Even as a small business with little to no visibility on classic search engines, I wouldn’t hold back from optimizing for AI chatbot visibility.
  • Ecommerce is an outlier because the journey is so much more transactional than for B2B or media. On one hand, the strong visibility of big ecommerce platforms like Amazon provides a direct path for AI chatbot visibility for merchants. On the other hand, integrating with programs like Perplexity’s Buy With Pro seems worth trying out.

How Are People Searching On AI Chatbots?

Consumers use AI chatbots differently than Google unless they turn on search features.

Semrush, 02/25: 70% of ChatGPT queries represent entirely new types of intent that don’t fit traditional search categories (navigational, informational, commercial, transactional).

  • Users are asking longer, more complex questions, with non-search-enabled ChatGPT prompts averaging 23 words compared to 4.2 words when search is enabled.

Higher Visibility, 02/25: People use different AI chatbots for different user intents, e.g., Google for initial product research, ChatGPT for product comparison, and Instagram for discovering new products. However, almost 80% stick to traditional search engines for informational searches.

Growth Memo, 02/25: AI chatbots send significantly more traffic to homepages (22% on average) compared to Google (10%) yet still maintain higher engagement metrics. This trend suggests that AI chatbots are effectively preparing users for brand interactions.

My Take

  • It’s fascinating to see that when people turn on Search in ChatGPT, they use shorter queries and emulate their behavior on Google. I wonder if this behavior sticks over the long term or not. If so, we can assume a stronger carryover from players who dominate classic search engines today to AI chatbots. If not, it might open the field to new players.
  • I’ve long been dissatisfied with our broad classification of user intents (informational, navigational, etc.). We had it wrong for a long time: it’s too coarse. 70% of use cases are likely task-related and don’t fit our model for classic search engines. AI chatbots are more than search engines but solve the same problems, just with different means. That’s also where I see Google lagging behind: Consumers already associate AI chatbots with tasks rather than with finding information.

What Challenges Are Associated With AI Chatbots?

AI chatbots make for a compelling marketing channel but put marketers in front of tracking and bias problems.

Tracking

We can track the referral source for almost all AI chatbots, but some traffic can still fall into the direct traffic bucket.

Citations in ChatGPT typically include a “utm_source=chatgpt.com” parameter, but links in search results don’t have the parameter.2

Ahrefs, 02/25: AI traffic is likely underreported because AI chatbots like Copilot get clustered into direct while they’re actually referrals.
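
To make the tracking gap concrete, here is a minimal sketch in Python of how you might bucket visits into AI, search, and direct traffic. The hostname lists are my own illustrative assumptions based on the chatbots named above, not an analytics vendor’s official mapping:

    from urllib.parse import urlparse, parse_qs

    # Illustrative hostname lists; extend them as new referrers
    # appear in your logs.
    AI_HOSTS = {"chatgpt.com", "chat.openai.com", "perplexity.ai", "gemini.google.com"}
    SEARCH_HOSTS = {"www.google.com", "www.bing.com", "duckduckgo.com"}

    def classify_visit(referrer: str, landing_url: str) -> str:
        """Bucket a visit as 'ai', 'search', or 'direct'."""
        # ChatGPT citations carry utm_source=chatgpt.com even when
        # no referrer header survives.
        params = parse_qs(urlparse(landing_url).query)
        if "chatgpt.com" in params.get("utm_source", []):
            return "ai"
        host = urlparse(referrer).hostname or ""
        if host in AI_HOSTS:
            return "ai"
        if host in SEARCH_HOSTS:
            return "search"
        # Copilot and similar sources often arrive with no referrer,
        # so some AI traffic is undercounted as direct (the Ahrefs point).
        return "direct"

    print(classify_visit("", "https://example.com/?utm_source=chatgpt.com"))  # ai
    print(classify_visit("https://www.google.com/", "https://example.com/"))  # search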

Brand Bias

Semrush, 12/24: Consumers are skeptical about AI output; 50% say they trust it more when it’s been reviewed by a human.

In the paper “Global is Good, Local is Bad?” Kamruzzaman et al. conducted experiments with fill-in-the-blank questions across four product categories and 15 countries (English only). The researchers studied the effect of:

  • Brand attribute bias: global vs. local brands.
  • Socio-economic bias: luxury vs non-luxury brands.
  • Geo bias: local brands when the domestic country is specified.

Results:

  • LLMs across multiple models (GPT-4o, Llama-3, Gemma-7B, Mistral-7B) consistently associate global brands with positive and local brands with negative attributes.
  • LLMs tend to recommend luxury brands to people from high-income countries. In contrast, non-luxury brands are more commonly suggested for people from low-income countries, even when models were given the flexibility to suggest the same brands for both groups.

The underlying reasons are that local brand names are underrepresented in LLM training data, and large companies can afford larger marketing campaigns and, therefore, create more bias.

In the paper “Generative AI Search Engines as Arbiters of Public Knowledge: An Audit of Bias and Authority” by Li et al., researchers tested how ChatGPT, Bing Chat, and Perplexity answer questions about four major topics: climate change, vaccination, alternative energy, and trust in media. They wanted to see if the AI showed bias in its answers and how it tried to appear trustworthy.

The results:

  • The AI tends to match the emotion of the question. If you ask a negative question, you get a negative answer.
  • Different topics got different emotional treatment, e.g., vaccination and alternative energy got more positive responses than climate change and media trust.
  • Bing Chat and Perplexity heavily cite news media and businesses.
  • Heavy reliance on U.S. sources (65% of sources), even when used in other countries.
  • Too many commercial/business sources, especially for topics like alternative energy.
  • Some models mix unreliable sources with good ones.
  • Answers often include uncertain language and hedging to avoid taking strong positions.

My Take

  • We’re used to significant tracking gaps from Google and Bing, so unless AI chatbots try to persuade site owners with more data, we’ll have to continue to operate with aggregate data, as I mentioned in Death of the Keyword.
  • AI chatbot bias is serious. User trust is key to winning, so I assume AI developers are aware and try to solve the problem. Until then, we have to factor bias in with our optimization strategies and do our best to clearly indicate the target audience for our product in our content.

Conclusion: Where It’s All Going

The data we have today shows that AI chatbots are developing into a significant customer acquisition channel with many familiar mechanics.

However, their task-based nature, bias, and demographics suggest we should be cautious when using the same approach as classic search engines.

Don’t forget – Search is just a means to an end. Ultimately, people search to solve problems, i.e., do tasks.

The fact that AI chatbots can skip the search part and do tasks on the spot means they’re superior to classic search engines. For this reason, I expect Google to add more agentic capabilities to AI Overviews or launch a new Gemini-based product in Search.

The underlying technology allows AI chatbots to fork off search engine ranks and develop their own signals. And it evolves rapidly.

The evolution so far went from machine learning in the pre-2022 era to early LLMs and now inference models (think: reasoning).

Better reasoning allows LLMs to recognize user intent even better than classic search engines, making it easier to train models on better sources to mention or cite.

This brings me to the question of whether Google/Bing incumbents will also dominate AI chatbots down the road. Right now, the answer is yes. But for how long?

Generational preferences could be the biggest driver of new platforms. The easiest way for Google to become irrelevant is to lose young people.

  • Semrush, 02/25: Searchers over 35 use Google more often than ChatGPT. People between 18 and 24 choose ChatGPT 46.7% of the time, compared with Google at 24.7%.
  • Higher Visibility, 02/25: 82% of Gen Z occasionally use AI chatbots, compared to 42% of Baby Boomers.

There is a chance that multimodality will quickly play a more prominent role in AI chatbot adoption. So far, text interfaces dominate.

But Google already reports 10 billion searches with Google Lens, and Meta’s Ray-Ban smart glasses are very successful. Unlike Google Search, the LLM answer format is easy to transport to other devices and modalities, which could transform AI.3


1 ChatGPT now has 400 million weekly users — and a lot of competition

2 Deep Dive: Tracking How ChatGPT + Search & Others Send Users To Your Site

3 Google Lens Reaches 10 Billion Monthly Searches


Featured Image: Paulo Bobita/Search Engine Journal

Ask An SEO: Why Is Google Not Indexing My Pages?

This week’s Ask An SEO question comes from Harjeet:

“Hi! I have a website that provides information of warehouses all over United states. The problem is that only eight to nine pages are indexed by Google. But it has many dynamic pages.

For example, if you do a search for any location on homepage search bar, it will open the location page with the listings.

But Google is not indexing the location pages and warehouse listings as well. Can you help me solve this issue?”

That’s a great question, and I think I can help.

After looking at your website, it appears you have a crawling issue and a lack of actual pages, not an indexing problem.

I can verify that you only have nine pages in Google’s index. This is because you only have nine actual pages on your website.

I want to start with an overview of some of the site issues. Then, after the overview, I’ll share how to resolve a large chunk of this so you can begin getting pages crawled and indexed.

Identifying The Gaps In Your Website Structure

It’s very hard for both consumers and search engines to navigate your website because you are missing links and easy-to-find navigation.

There are also two sitemaps, and one of them is incorrect. The good news is your correct one is listed in robots.txt, but it only has your site’s navigation in it and not the pages that are being created dynamically.

To start, list the most important subpages in your sitemap so search engines can find them more easily.
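
For illustration, a sitemap entry for one dedicated location page might look like this (the domain and folder names are hypothetical placeholders, not taken from the reader’s site):

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>https://www.example.com/warehouses/pennsylvania/philadelphia/</loc>
        <lastmod>2025-02-01</lastmod>
      </url>
    </urlset>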

The robots.txt file also includes a disallow directive for all user agents, which is problematic because it blocks crawling rather than guiding crawlers to the proper folders and pages.

Specify “allow” for the important folders and pathways you want crawled. This guides spiders to the pages and folders you feel are most important.
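
A minimal sketch of that approach, again with hypothetical folder names: allow the location directory you want crawled, disallow the internal search results that generate competing dynamic URLs, and point crawlers at the sitemap:

    User-agent: *
    Allow: /warehouses/
    Disallow: /search/

    Sitemap: https://www.example.com/sitemap.xml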

On top of the sitemap and robots.txt issues, there is no internal linking. Internal links are important to allow crawlers to move around your site and find new pages that need indexing.

Next is to look at the quality of your content and how it is displayed.

The content on your site is thin, and none of it provides the kind of information that a local storage expert would know.

Add unique content and location information to each page individually (more on this below in the how to fix this issue section).

The site is also missing robots meta tags and canonical links. Meta robots tags give directions on whether a page should be indexed and whether its links should be followed.

Canonical links say which version is the official version of a page and can help deduplicate similar or competing pages. Adding these will help search engines know what to do as the pages and links are discovered.
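
For example, an indexable location page would carry tags like these in its <head> (the URL is a hypothetical placeholder):

    <meta name="robots" content="index, follow">
    <link rel="canonical" href="https://www.example.com/warehouses/pennsylvania/philadelphia/">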

Last, you have no city or state-based pages as dedicated resources, but you do have them in a drop-down in the search box. The issue here is these pages don’t exist unless searched for.

If they don’t exist on an official URL, search engines cannot find them and index them easily.

There are other tech and content issues here, but I’m going to jump into fixing the main one, which is getting more pages indexed for you. And this is an incredibly easy one, so this is good news.

Filling The Missing SEO Elements

There are a few steps I would take if this were my site or project:

  1. Build a cities and/or regions and states folder structure.
  2. Get unique content for each.
  3. Add in breadcrumbs and internal links.
  4. Modify the robots.txt and sitemaps.
  5. Do local PR work, not spammy links and directories.

Build A Folder Structure

When you allow a site search to generate URLs for cities, states, and locations, you create a ton of competing URLs.

People can spell a city wrong or use an abbreviation: Philly, Philadelphia (and is it Missouri or PA), or neighborhoods like Fishtown or Center City if it is the Pennsylvania version of Philadelphia.

And if you live in the DMV (DC, MD, VA) like I do, you may type in Washington or Columbia Heights vs. the state or city name. Washington could also be either DC or the state.

Building a folder structure lets you guide people to the correct location and makes it easier for search engines to know where you offer services.

For example, EU companies can structure by country; individual countries like Mexico could have states like Jalisco and Oaxaca and the major cities in each.
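
For a U.S. warehouse site like the reader’s, a hypothetical version of that structure could look like:

    example.com/warehouses/                                     (hub page)
    example.com/warehouses/pennsylvania/                        (state page)
    example.com/warehouses/pennsylvania/philadelphia/           (city page)
    example.com/warehouses/pennsylvania/philadelphia/fishtown/  (neighborhood page)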

Create Unique Content

Original location-based content is easy to create for your niche. Each region has a different climate, and cities do, too. Bring these into consideration when writing the copy.

If the region has more humidity, talk about how the building protects against mold, mildew, corrosion, and other humidity issues that impact storage and warehousing.

For areas that lose power because of hurricanes, snow, or even heat, talk about backup systems and refrigeration for temperature-critical items being stored.

You could include city-based content and localize with original talking points, including directions to the location. Don’t forget to include physical addresses, hours of operation, and phone numbers.

I wrote this guide to localized title tags and descriptions a while back; it applies here, too.

Add Breadcrumbs And Internal Links

Now that you have unique content for the states and major cities in an easy-to-navigate directory structure, help users and search engines find them.

Add breadcrumbs with breadcrumb schema to the site – ideally, at the top of the page so users can click and use them.
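
A minimal BreadcrumbList snippet for a city page might look like this (names and URLs are hypothetical placeholders matching the folder structure sketched earlier):

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "BreadcrumbList",
      "itemListElement": [
        {"@type": "ListItem", "position": 1, "name": "Warehouses",
         "item": "https://www.example.com/warehouses/"},
        {"@type": "ListItem", "position": 2, "name": "Pennsylvania",
         "item": "https://www.example.com/warehouses/pennsylvania/"},
        {"@type": "ListItem", "position": 3, "name": "Philadelphia",
         "item": "https://www.example.com/warehouses/pennsylvania/philadelphia/"}
      ]
    }
    </script>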

In the unique copy for the state pages, link to the city pages. If it will benefit the user, link back to the main state pages from the city pages.

An example would be if people renting in one city also rent in other cities in that state, or if one location fills up and nearby locations are easily accessible alternatives.

The copy may mention looking for other warehousing options in that state or other city/region. Use the name of the state or city with the call to action and make them an internal link.

You’re helping the user find a solution, and search engines understand what each page is about.

This is how you build an effective and meaningful internal linking structure for SEO. The main goal is to help a customer; the benefit is you make it easier for search engines to understand your site structure.

Modify The Robots.txt And Sitemaps

At this point, you have a good site and user experience. Now is the time to make sure the states and cities are in the sitemap, and then modify your robots.txt to allow the folders.

You can request a crawl from Google Search Console and from Bing Webmaster Tools.

As the spiders access robots.txt, they will find your listed pages, then through internal links find other pages to crawl to find all your pages.

With proper meta robots and canonicals, they will see clean pathways and be able to crawl your site better.

Drive Demand Through Local PR

Anyone can list in directories and pay to play. Yes, some may help you, but the real benefit is local PR.

Getting your business featured in the media drives local demand. People in those regions begin searching for your company by name, with your services as modifiers on those keywords.

This may or may not help with SEO, but it does one thing: It builds consumer trust and gets you localized, high-value backlinks. These can send trust signals and customers to you.

Local media includes local blogs that do not allow guest posting or sponsored posts; TV and print media like local newspapers; radio stations and podcasts whose episode descriptions include links; and other platforms that people in the city or state use.

You may be able to add a PR bar with “As Seen In” featuring local media logos to build trust with that region.

Summary

The good news is that you don’t have an indexing problem. The pages that exist are indexed.

You have discovery problems because only nine pages actually exist on your website.

The solution here is to build the state and city-based pages, fill them with content and site structure, and then do the work to build trust with PR and trust-building activities.

I hope this helps. Thank you for asking your question – it’s a good one!



Featured Image: Sammby/Shutterstock

WordCamp Asia: No Plans For WordPress In 5 Years

An awkward Q&A at WordCamp Asia 2025 saw Matt Mullenweg struggle to answer where WordPress will be in five years. Apparently caught off guard, he turned to the Lead Architect of Gutenberg for ideas, but the architect couldn’t answer either.

Project Gutenberg

Gutenberg is a reimagining of how WordPress users can build websites without knowing any code, using a visual interface of blocks for different parts of a web page that is supposed to make site building easy. Conceived as a four-phase project, it has been in development since 2017 and is currently in phase three.

The four phases are:

  • Phase 1: Easier Editing
  • Phase 2: Customization
  • Phase 3: Collaborative Editing
  • Phase 4: Multilingual Support

There’s a perception that Project Gutenberg has not been enthusiastically received by the WordPress developer community or by regular users, even though there are currently 85.9 million installations of the Gutenberg WordPress editor.

However, at the end of the conference Q&A session, one developer at WordCamp Asia told Matt Mullenweg that she was hearing hesitation about WordPress from the people she speaks with, and she expressed frustration about how difficult it was to use.

She said:

“Some of those hesitations were it’s easy to get overwhelmed. You know, when you look up how to learn WordPress, and I had to be really motivated… for myself to actually study it and kind of learn the basics of blocks… So do you have any advice on how I could convince my friends to start a WordPress site or how to address these challenges myself? You know like, getting overwhelmed and feeling like there’s just so much. I’m not a coder and things like that… any advice you can offer small business owners?”

The whole purpose of the Gutenberg block editor was to make it easier for non-coders to use WordPress. So a WordPress user asking for ideas on how to convince people to use WordPress presented an unflattering view of the success of the WordPress Gutenberg Project.

Where Will WordPress Be In Five Years?

Another awkward moment was when someone else asked Matt Mullenweg where he saw WordPress being in five years. The question seemingly caught him off guard as he was unable to articulate what the plan is for the world’s most popular content management system.

Mullenweg had been talking about the importance of AI and of some integrations being tested in the commercial version at WordPress.com. So the person asking the question asked if he had any other ideas beyond AI.

The person asked:

“If you have other ideas beyond AI or even how we consume WordPress five years from now that might be different from today.”

Matt Mullenweg answered:

“Yeah, it’s hard to think about anything except AI right now. And as I said a few years ago, before ChatGPT came out, learn AI deeply. Everyone in the room should be playing with it. Try out different models. Check out Grok, check out DeepSeek, two of the coolest ones that just launched.

And for WordPress, at that point will be past all the phases of Gutenberg. I think… I don’t know…”

It was at this point that Mullenweg called on Matías Ventura, Lead Architect of Gutenberg, to ask whether he had any ideas about where WordPress is headed in five years.

He continued:

“Matías, what do you think? What’s post-Gutenberg? We’ve been working for so long, it’s…”

Ventura came up to a microphone to help Mullenweg answer the question he was struggling with.

Matías answered:

“I mean, hopefully we’ll be done by then so…”

Mullenweg commented:

“Sometimes that last 10% takes, you know, 90% of the time.”

Matías quipped that it can take a hundred years, then continued with an answer that essentially conceded there is no five-year plan without saying so outright.

He continued his answer:

“I don’t know, I think, well in the talk I gave I… also reflected a bit that part of the thing is just discovering as we go, like figuring out how like, right now it’s AI that’s shaping reality but who knows, in a few decades what it would be. And to me, the only conviction is that yeah, we’ll need to adapt, we’ll need to change. And that’s part of the fun of it, I think. So I’m looking forward to whatever comes.”

Mullenweg jumped in at this point with his thoughts:

“That’s a good point of the, you know, how many releases we have of WordPress right now, 60 or whatever… 70 probably…. Outside of Gutenberg, we haven’t had a roadmap that goes six months or a year, or a couple versions, because the world changes in ways you can’t predict.

But being responsive is, I think, really is how organisms survive.

You know, Darwin, said it’s not the fittest of the species that survives. It’s the one that’s most adaptable to change. I think that’s true for software as well.”

Mullenweg Challenged To Adapt To Change

His statement about being adaptable to change set up another awkward moment at the 6:55:47 mark, where Taco Verdonschot, co-owner of Progress Planner, stood up to the microphone and asked Mullenweg whether he really was committed to being adaptable.

Taco Verdonschot is formerly of Yoast SEO and currently sponsored to work on WordPress by Emilia Capital (owned by Joost de Valk and Marieke van de Rakt).

Taco asked:

“I’m Taco, co-owner of Progress Planner. I was wondering, you were talking about adaptability before and survival of the fittest. That means being open to change. What we’ve seen in the last couple of months is that people who were talking about change got banned from the project. How open are you to discussing change in the project?”

Mullenweg responded:

“Sure. I don’t want to go too far into this but I will say that talking about change will not get you banned. There’s other behaviors… but just talking about change is something that we do pretty much every day. And we’ve changed a lot over the years. We’ve changed a lot in the past year. So yeah. But I don’t want to speak to anyone personally, you know. So keep it positive.”

Biggest Challenges WordPress Will Face In The Next Five Years

Watch the question and answer at the 6:19:24 mark