Under 10% of an earthquake’s energy makes the ground shake

Earthquakes are driven by energy stored up in rocks over millennia—energy that, once released, we perceive mainly in the form of the ground’s shaking. But a quake also generates a flash of heat and fractures and damages underground rocks. And exactly how much energy goes into each of these three processes is exceedingly difficult to measure in the field.

Now, with the help of carefully controlled miniature “lab quakes,” MIT geophysicist Matěj Peč and colleagues have quantified this so-called energy budget. Only about 1% to 10% of a lab quake’s energy causes physical shaking, they found, while 1% to 30% goes into breaking up rock and creating new surfaces. The vast majority heats up the area around a quake’s epicenter, producing a temperature spike that can actually melt surrounding material.

The team also found that the fractions of quake energy producing heat, shaking, and rock fracturing can shift depending on the tectonic activity the region has experienced in the past. “The deformation history—essentially what the rock remembers—really influences how destructive an earthquake could be,” says postdoc Daniel Ortega-Arroyo, PhD ’25, lead author of a paper on the work. “That history affects a lot of the material properties in the rock, and it dictates to some degree how it is going to slip.”

The lab quakes—which involve subjecting specially prepared samples of powdered granite and magnetic particles to steadily increasing pressure in a custom-built apparatus—are a simplified analogue of what occurs during a natural earthquake. Down the road, if scientists have an idea of how much shaking a quake generated in the past, they might be able to estimate the degree to which the quake’s energy also affected rocks deep underground by melting or breaking them apart. This in turn could reveal how much more or less vulnerable that region is to future quakes.

Building materials are getting closer to doubling as batteries

Concrete already builds our world, and an MIT-invented variant known as electron-­conducting carbon concrete (ec3, pronounced “e c cubed”) holds out the possibility of helping power it, too. Now that vision is one step closer. 

Made by combining cement, water, ultra-fine carbon black, and electrolytes, ec3 creates a conductive “nanonetwork” that could enable walls, sidewalks, and bridges to store and release electrical energy like giant batteries. To date, the technology has been limited by low voltage and scalability challenges. But the latest work by the MIT team that invented ec3 has increased the energy storage capacity by an order of magnitude. With the improved technology, about five cubic meters of concrete—the volume of a typical basement wall—could store enough energy to meet the daily needs of the average home.

A weight-bearing arch made of electron-conducting carbon concrete (ec3) integrates supercapacitor electrodes to power a light.
MIT EC³ HUB

The researchers achieved this progress by using high-resolution 3D imaging to learn more about how the conductive carbon network—essentially, the electrode—functions and interacts with electrolytes. Equipped with their new understanding, the team experimented with different electrolytes and their concentrations. “We found that there is a wide range of electrolytes that could be viable candidates for ec3,” says Damian Stefaniuk, a research scientist at the MIT Electron-Conducting Carbon-Cement-Based Materials Hub, led by associate professor Admir Masic. “This even includes seawater, which could make this a good material for use in coastal and marine applications, perhaps as support structures for offshore wind farms.”

At the same time, the team streamlined the way electrolytes were added to the mix, making it possible to cast thicker electrodes that stored more energy.

While ec3 doesn’t rival conventional batteries in energy density, it can in principle be incorporated directly into architectural elements and last as long as the structure itself. To show how structural form and energy storage can work together, the team built a miniature arch that supported its own weight and an additional load while powering an LED light. 

5 Content Marketing Ideas for February 2026

Between Valentine’s Day, Presidents’ Day, and the start of the spring buying cycle, ecommerce marketers have plenty of opportunities to publish relevant and timely content in February 2026.

Content marketing is the process of creating, publishing, and promoting articles, videos, and podcasts to attract, engage, and retain customers. For ecommerce businesses, content does more than inform. It differentiates, builds trust, and supports discovery when shoppers are researching rather than buying.

Content is also foundational for lifecycle and social-media marketing, and search-engine and genAI optimization.

What follows are five content marketing ideas your ecommerce business can use in February 2026.

Valentine How-to

Photo of a male and female preparing a meal in a kitchen

Valentine-themed content could include something as simple as planning a dinner at home.

Valentine’s Day remains one of the most reliable seasonal content opportunities, especially when merchants focus on guidance as well as promotions.

Content that provides useful and actionable information can attract shoppers unfamiliar with a brand or its products.

The content should align closely with the products sold. A wine shop can explain pairings. A jewelry retailer can address how to choose materials or styles. A home goods boutique might describe how to set a table or create a Valentine’s Day dinner.

Consider these sample titles:

  • “How to Choose the Perfect Wine for Valentine’s Dinner”
  • “Build a Thoughtful Valentine’s Gift Box”
  • “How to Create a Romantic Valentine’s at Home”
  • “Helpful Tips to Match Valentine’s Jewelry to Her Style”


Presidents’ Day

Two males in a factory setting making apparel

Presidents’ Day content can focus on U.S. patriotism and domestic manufacturing.

Celebrated on February 16 in 2026, Presidents’ Day is a storytelling opportunity more than a sales event. Ecommerce marketers can publish articles or videos that explain the holiday’s origins, its meaning, or the historical figures it honors, e.g., George Washington.

Patriotic holidays can celebrate domestic companies, such as brands with made-in-America products.

Here’s an example. Origin is an apparel brand in Farmington, Maine. President Dwight Eisenhower visited the city in June 1955, passing close to what is now Origin’s manufacturing facility. The company could recount Eisenhower’s visit and retell its own story in the process.

In 2026, Presidents’ Day has extra relevance. The United States is entering its semiquincentennial year, marking 250 years since independence. Celebrations will peak in July, yet February is not too early to publish 250th-themed content.

A Complete Guide

Female shop owner visiting with a male customer

A “complete” guide is akin to a store owner explaining her wares to an in-person shopper.

Content marketers are familiar with “complete guides” or “ultimate guides.” These are typically long, “pillar” articles that demonstrate topical authority.

The goal is usefulness, not brevity. A merchant that sells loose-leaf tea could publish a comprehensive guide to tea types, brewing methods, and storage. A cycling retailer could create a guide to bike maintenance or gear selection.

Over time, these guides become evergreen assets that support internal linking, featured snippets, and AI-generated summaries. They can be gold for optimizing for search engines, generative AI platforms, and answer engines, especially when updated annually.

Examples of guides include “Complete Guide to Loose-Leaf Tea” and “Ultimate Guide to Choosing Cookware.”

The idea is clear enough: Pick a product or category and be the authority.

Curated Newsletters

This idea aims to help businesses that struggle to produce content. Instead of composing or generating (and then editing) loads of articles, a company can mix product info with content from other publishers.

Put another way, curated newsletters allow ecommerce businesses to publish consistently without creating content from scratch. The idea is to select quality articles, videos, or social posts from trusted sources and add brief editorial context.

Home page for Better Kitchen Gear

The newsletter for Better Kitchen Gear, an affiliate marketing site, links to external recipes.

Consider an example from Better Kitchen Gear, an affiliate marketing site. Its email newsletter blends curated recipes with links to affiliate content. A recent issue on sourdough bread included summaries and links to recipes from the King Arthur Baking Company and cookbook author Alexandra Stafford.

Another link was to an original article titled “The Tools Behind Great Sourdough,” which included six products on Amazon.

Merchants could do much the same. For example, a golf accessories seller could publish a weekly newsletter featuring curated golf news and links to products.

American Heart Month

Photo of a female on a treadmill

American Heart Month is an opportunity for stores selling health or fitness products.

President Lyndon Johnson established American Heart Month in 1964 with a proclamation encouraging Americans to focus on cardiovascular health.

It occurs in February because of Valentine’s Day, reinforcing the symbolic connection between the heart and daily life. Since then, the month-long observance has promoted education about heart health, prevention, and sustainable lifestyle habits.

Ecommerce marketers promoting products in fitness, food, wellness, apparel, and home categories can focus content on everyday behaviors, routines, and product use that support an active, balanced lifestyle.

Imagine a content marketer for a fitness gear retailer. She wants to honor American Heart Month while promoting the company’s products. She decides on an article titled “5 Ways to Turn a Spare Room into a Cardio Studio.”

The Guardian: Google AI Overviews Gave Misleading Health Advice

The Guardian published an investigation claiming health experts found inaccurate or misleading guidance in some AI Overview responses for medical queries. Google disputes the reporting and says many examples were based on incomplete screenshots.

The Guardian said it tested health-related searches and shared AI Overview responses with charities, medical experts, and patient information groups. Google told The Guardian the “vast majority” of AI Overviews are factual and helpful.

What The Guardian Reported Finding

The Guardian said it tested a range of health queries and asked health organizations to review the AI-generated summaries. Several reviewers said the summaries included misleading or incorrect guidance.

One example involved pancreatic cancer. Anna Jewell, director of support, research and influencing at Pancreatic Cancer UK, said advising patients to avoid high-fat foods was “completely incorrect.” She added that following that guidance “could be really dangerous and jeopardise a person’s chances of being well enough to have treatment.”

The reporting also highlighted mental health queries. Stephen Buckley, head of information at Mind, said some AI summaries for conditions such as psychosis and eating disorders offered “very dangerous advice” and were “incorrect, harmful or could lead people to avoid seeking help.”

The Guardian cited a cancer screening example too. Athena Lamnisos, chief executive of the Eve Appeal cancer charity, said a pap test being listed as a test for vaginal cancer was “completely wrong information.”

Sophie Randall, director of the Patient Information Forum, said the examples showed “Google’s AI Overviews can put inaccurate health information at the top of online searches, presenting a risk to people’s health.”

The Guardian also reported that repeating the same search could produce different AI summaries at different times, pulling from different sources.

Google’s Response

Google disputed both the examples and the conclusions.

A spokesperson told The Guardian that many of the health examples shared were “incomplete screenshots,” but from what the company could assess they linked “to well-known, reputable sources and recommend seeking out expert advice.”

Google told The Guardian the “vast majority” of AI Overviews are “factual and helpful,” and that it “continuously” makes quality improvements. The company also argued that AI Overviews’ accuracy is “on a par” with other Search features, including featured snippets.

Google added that when AI Overviews misinterpret web content or miss context, it will take action under its policies.

The Broader Accuracy Context

This investigation lands in the middle of a debate that’s been running since AI Overviews expanded in 2024.

During the initial rollout, AI Overviews drew attention for bizarre results, including suggestions involving glue on pizza and eating rocks. Google later said it would reduce the scope of queries that trigger AI-written summaries and refine how the feature works.

I covered that launch, and the early accuracy problems quickly became part of the public narrative around AI summaries. The question then was whether the issues were edge cases or something more structural.

More recently, data from Ahrefs suggests medical YMYL queries are more likely than average to trigger AI Overviews. In its analysis of 146 million SERPs, Ahrefs reported that 44.1% of medical YMYL queries triggered an AI Overview. That’s more than double the overall baseline rate in the dataset.

Separate research on medical Q&A in LLMs has pointed to citation-support gaps in AI-generated answers. One evaluation framework, SourceCheckup, found that many responses were not fully supported by the sources they cited, even when systems provided links.

Why This Matters

AI Overviews appear above ranked results. When the topic is health, errors carry more weight.

Publishers have spent years investing in documented medical expertise to meet Google’s quality standards for health content. This investigation puts the same spotlight on Google’s own summaries when they appear at the top of results.

The Guardian’s reporting also highlights a practical problem. The same query can produce different summaries at different times, making it harder to verify what you saw by running the search again.

Looking Ahead

Google has previously adjusted AI Overviews after viral criticism. Its response to The Guardian indicates it expects AI Overviews to be judged like other Search features, not held to a separate standard.

State Of AI Search Optimization 2026


Every year, after the winter holidays, I spend a few days ramping up by gathering the context from last year and reminding myself of where my clients are at. I want to use the opportunity to share my understanding of where we are with AI Search, so you can quickly get back into the swing of things.

As a reminder, the vibe around ChatGPT turned a bit sour at the end of 2025:

  • Google released the superior Gemini 3, causing Sam Altman to announce a Code Red (ironically, three years after Google did the same at the launch of ChatGPT 3.5).
  • OpenAI made a series of circular investments that raised eyebrows and questions about how to finance them.
  • ChatGPT, which sends the majority of all LLM referral traffic, still delivers at most 4% of the current organic (mostly Google) referral traffic.

Most of all, we still don’t know the value of a mention in an AI response. However, the topic of AI and LLMs couldn’t be more important because the Google user experience is turning from a list of results to a definitive answer.

A big “thank you” to Dan Petrovic and Andrea Volpini for reviewing my draft and adding meaningful concepts.

AI Search Optimization
Image Credit: Kevin Indig

Retrieved → Cited → Trusted

Optimizing for AI search visibility follows a pipeline similar to the classic “crawl, index, rank” for search engines:

  1. Retrieval systems decide which pages enter the candidate set.
  2. The model selects which sources to cite.
  3. Users decide which citation to trust and act on.

Caveats:

  1. A lot of the recommendations overlap strongly with common SEO best practices. Same tactics, new game.
  2. I don’t pretend to have an exhaustive list of everything that works.
  3. Controversial factors like schema or llms.txt are not included.

Consideration: Getting Into The Candidate Pool

Before any content enters the model’s consideration (grounding) set, it must be crawled, indexed, and fetchable within milliseconds during real-time search.

The factors that drive consideration are:

  • Selection Rate and Primary Bias.
  • Server response time.
  • Metadata relevance.
  • Product feeds (in ecommerce).

1. Selection Rate And Primary Bias

  • Definition: Primary bias measures the brand-attribute associations a model holds before grounding in live search results. Selection Rate measures how frequently the model chooses your content from the retrieval candidate pool.
  • Why it matters: LLMs are biased by training data. Models develop confidence scores for brand-attribute relationships (e.g., “cheap,” “durable,” “fast”) independent of real-time retrieval. These pre-existing associations influence citation likelihood even when your content enters the candidate pool.
  • Goal: Understand which attributes the model associates with your brand and how confident it is in your brand as an entity. Systematically strengthen those associations through targeted on-page and off-page campaigns.

2. Server Response Time

  • Definition: The time between a crawler request and the server’s first byte of response data (TTFB = Time To First Byte).
  • Why it matters: When models need web results to ground answers (RAG), they must retrieve the content like a search engine crawler. Even though retrieval is mostly index-based, faster servers help with rendering, agentic workflows, freshness, and compound query fan-out. LLM retrieval operates under tight latency budgets during real-time search. Slow responses prevent pages from entering the candidate pool because they miss the retrieval window, and consistently slow response times trigger crawl rate limiting.
  • Goal: Maintain server response times under 200 ms. Sites with load times under 1 second receive 3x more Googlebot requests than sites above 3 seconds. For LLM crawlers (GPTBot, Google-Extended), retrieval windows are even tighter than in traditional search.
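As a rough illustration of the latency budget above, the sketch below measures time to first byte for a single request and checks it against a 200 ms target. The probe function, the example values, and the budget constant are illustrative assumptions, not any crawler’s actual retrieval logic.

```python
import http.client
import time

LATENCY_BUDGET_MS = 200  # the rough budget suggested above (an assumption)

def measure_ttfb_ms(host, path="/", timeout=5.0):
    """Return time-to-first-byte in milliseconds for one HTTPS request."""
    conn = http.client.HTTPSConnection(host, timeout=timeout)
    start = time.perf_counter()
    conn.request("GET", path, headers={"User-Agent": "ttfb-probe"})
    resp = conn.getresponse()
    resp.read(1)  # clock stops once the first byte arrives
    ttfb = (time.perf_counter() - start) * 1000
    conn.close()
    return ttfb

def within_retrieval_window(ttfb_ms, budget_ms=LATENCY_BUDGET_MS):
    """Check a measured TTFB against the latency budget."""
    return ttfb_ms <= budget_ms

# measure_ttfb_ms is not called here, to keep the example offline;
# instead, classify two hypothetical measurements against the budget.
print(within_retrieval_window(150))  # True: fits the 200 ms budget
print(within_retrieval_window(450))  # False: misses the retrieval window
```

In practice you would sample TTFB repeatedly from several regions and alert on the distribution, not a single probe.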

3. Metadata Relevance

  • Definition: Title tags, meta descriptions, and URL structure that LLMs parse when evaluating page relevance during live retrieval.
  • Why it matters: Before picking content to form AI answers, LLMs parse titles for topical relevance, descriptions as document summaries, and URLs as context clues for page relevance and trustworthiness.
  • Goal: Include target concepts in titles and descriptions (!) to match user prompt language. Create keyword-descriptive URLs, potentially even including the current year to signal freshness.

4. Product Feed Availability (Ecommerce)

  • Definition: Structured product catalogs submitted directly to LLM platforms with real-time inventory, pricing, and attribute data.
  • Why it matters: Direct feeds bypass traditional retrieval constraints and enable LLMs to answer transactional shopping queries (“where can I buy,” “best price for”) with accurate, current information.
  • Goal: Submit merchant-controlled product feeds to ChatGPT’s merchant program (chatgpt.com/merchants) in JSON, CSV, TSV, or XML format with complete attributes (title, price, images, reviews, availability, specs). Implement ACP (Agentic Commerce Protocol) for agentic shopping.
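A minimal sketch of what a structured feed entry with the attributes listed above might look like. The field names and product data are hypothetical assumptions for illustration, not ChatGPT’s official feed schema; consult the merchant program documentation for the real spec.

```python
import json

def build_feed_item(title, price, currency, image_urls, availability,
                    review_count=0, avg_rating=None, specs=None):
    """Assemble one hypothetical product entry for a JSON feed."""
    return {
        "title": title,
        "price": {"amount": price, "currency": currency},
        "images": image_urls,
        "availability": availability,  # e.g. "in_stock" / "out_of_stock"
        "reviews": {"count": review_count, "average_rating": avg_rating},
        "specs": specs or {},
    }

feed = [build_feed_item("Insulated Travel Mug", 24.99, "USD",
                        ["https://example.com/mug.jpg"], "in_stock",
                        review_count=132, avg_rating=4.6,
                        specs={"capacity": "16 oz"})]
print(json.dumps(feed, indent=2))
```

The same records could be serialized to CSV, TSV, or XML; the point is that each entry carries complete, current attributes rather than a bare title and link.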

Relevance: Being Selected For Citation

“The Attribution Crisis in LLM Search Results” (Strauss et al., 2025) reports low citation rates even when models access relevant sources.

  • 24% of ChatGPT (4o) responses are generated without explicitly fetching any online content.
  • Gemini provides no clickable citation in 92% of answers.
  • Perplexity visits about 10 relevant pages per query but cites only three to four.

Models can only cite sources that enter the context window. Pre-training mentions often go unattributed. Live retrieval adds a URL, which enables attribution.

5. Content Structure

  • Definition: The semantic HTML hierarchy, formatting elements (tables, lists, FAQs), and fact density that make pages machine-readable.
  • Why it matters: LLMs extract and cite specific passages. Clear structure makes pages easier to parse and excerpt. Since prompts average 5x the length of keywords, structured content answering multi-part questions outperforms single-keyword pages.
  • Goal: Use semantic HTML with clear H-tag hierarchies, tables for comparisons, and lists for enumeration. Increase fact and concept density to maximize snippet contribution probability.
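The heading-hierarchy advice above can be checked mechanically. Here is a minimal standard-library sketch that flags skipped heading levels (e.g., an h2 followed directly by an h4); the sample page is hypothetical.

```python
from html.parser import HTMLParser

class HeadingAudit(HTMLParser):
    """Collect heading levels in document order to spot skipped levels."""
    def __init__(self):
        super().__init__()
        self.levels = []

    def handle_starttag(self, tag, attrs):
        # h1..h6 tags arrive lowercased, e.g. "h2"
        if len(tag) == 2 and tag[0] == "h" and tag[1].isdigit():
            self.levels.append(int(tag[1]))

def skipped_levels(html):
    """Return (from, to) pairs where the hierarchy jumps more than one level."""
    audit = HeadingAudit()
    audit.feed(html)
    return [(a, b) for a, b in zip(audit.levels, audit.levels[1:]) if b - a > 1]

page = "<h1>Guide</h1><h2>Types</h2><h4>Green</h4><h2>Brewing</h2>"
print(skipped_levels(page))  # [(2, 4)] -- the h2 -> h4 jump
```

A clean hierarchy returns an empty list; each reported pair points at a spot where a parser (human or machine) loses the outline.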

6. FAQ Coverage

  • Definition: Question-and-answer sections that mirror the conversational phrasing users employ in LLM prompts.
  • Why it matters: FAQ formats align with how users query LLMs (“How do I…,” “What’s the difference between…”). This structural and linguistic match increases citation and mention likelihood compared to keyword-optimized content.
  • Goal: Build FAQ libraries from real customer questions (support tickets, sales calls, community forums) that capture emerging prompt patterns. Monitor FAQ freshness through lastReviewed or DateModified schema.
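One way to express FAQ content and its review date is schema.org’s FAQPage markup with the lastReviewed property. Below is a minimal sketch that assembles the JSON-LD from question-and-answer pairs; the sample question and date are hypothetical.

```python
import json

def faq_jsonld(pairs, last_reviewed):
    """Build a schema.org FAQPage object with a lastReviewed date."""
    return {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "lastReviewed": last_reviewed,  # ISO 8601 date string
        "mainEntity": [
            {
                "@type": "Question",
                "name": q,
                "acceptedAnswer": {"@type": "Answer", "text": a},
            }
            for q, a in pairs
        ],
    }

doc = faq_jsonld(
    [("How do I store loose-leaf tea?",
      "Keep it in an airtight, opaque container away from heat and light.")],
    last_reviewed="2026-02-01",
)
print(json.dumps(doc, indent=2))
```

Updating lastReviewed whenever answers are re-verified keeps the freshness signal honest rather than cosmetic.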

7. Content Freshness

  • Definition: Recency of content updates as measured by “last updated” timestamps and actual content changes.
  • Why it matters: LLMs parse last-updated metadata to assess source recency and prioritize recent information as more accurate and relevant.
  • Goal: Update content within the past three months for maximum performance. Over 70% of pages cited by ChatGPT were updated within 12 months, but content updated in the last three months performs best across all intents.
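The three-month window above is straightforward to operationalize in a content audit. A minimal sketch, with hypothetical dates:

```python
from datetime import date

def is_fresh(last_updated, today, window_days=90):
    """True if the page was updated inside the ~3-month window noted above."""
    return (today - last_updated).days <= window_days

print(is_fresh(date(2025, 12, 15), date(2026, 2, 1)))  # True (48 days old)
print(is_fresh(date(2025, 6, 1), date(2026, 2, 1)))    # False (245 days old)
```

Run this over a sitemap’s lastmod dates and the pages that fall outside the window become the update queue.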

8. Third-Party Mentions (“Webutation”)

  • Definition: Brand mentions, reviews, and citations on external domains (publishers, review sites, news outlets) rather than owned properties.
  • Why it matters: LLMs weigh external validation more heavily than self-promotion the closer user intent comes to a purchase decision. Third-party content provides independent verification of claims and establishes category relevance through co-mentions with recognized authorities, increasing a brand’s entityhood inside large context graphs.
  • Goal: Earn contextual backlinks from authoritative domains and maintain complete profiles on category review platforms. In AI search, 85% of brand mentions for high-purchase-intent prompts come from third-party sources.

9. Organic Search Position

  • Definition: Page ranking in traditional search engine results pages (SERPs) for relevant queries.
  • Why it matters: Many LLMs use search engines as retrieval sources. Higher organic rankings increase the probability of entering the LLM’s candidate pool and receiving citations.
  • Goal: Rank in Google’s top 10 for fan-out query variations around your core topics, not just head terms. Since LLM prompts are conversational and varied, pages ranking for many long-tail and question-based variations have higher citation probability. Pages in the top 10 show a strong correlation (~0.65) with LLM mentions, and 76% of AI Overview citations pull from these positions. Caveat: Correlation varies by LLM. For example, overlap is high for AI Overviews but low for ChatGPT.

User Selection: Earning Trust And Action

Trust is critical because we’re dealing with a single answer in AI search, not a list of search results. Optimizing for trust is similar to optimizing for click-through rates in classic search, just that it takes longer and is harder to measure.

10. Demonstrated Expertise

  • Definition: Visible credentials, certifications, bylines, and verifiable proof points that establish author and brand authority.
  • Why it matters: AI search delivers single answers rather than ranked lists. Users who click through require stronger trust signals before taking action because they’re validating a definitive claim.
  • Goal: Display author credentials, industry certifications, and verifiable proof (customer logos, case study metrics, third-party test results, awards) prominently. Support marketing claims with evidence.

11. User-Generated Content Presence

  • Definition: Brand representation in community-driven platforms (Reddit, YouTube, forums) where users share experiences and opinions.
  • Why it matters: Users validate synthetic AI answers against human experience. When AI Overviews appear, clicks on Reddit and YouTube grow from 18% to 30% because users seek social proof.
  • Goal: Build positive presence in category-relevant subreddits, YouTube, and forums. YouTube and Reddit are consistently in the top 3 most cited domains across LLMs.

From Choice To Conviction

Search is moving from abundance to synthesis. For two decades, Google’s ranked list gave users a choice. AI search delivers a single answer that compresses multiple sources into one definitive response.

The mechanics differ from early 2000s SEO:

  • Retrieval windows replace crawl budgets.
  • Selection rate replaces PageRank.
  • Third-party validation replaces anchor text.

The strategic imperative is identical: earn visibility in the interface where users search. Traditional SEO remains foundational, but AI visibility demands different content strategies:

  • Conversational query coverage matters more than head-term rankings.
  • External validation matters more than owned content.
  • Structure matters more than keyword density.

Brands that build systematic optimization programs now will compound advantages as LLM traffic scales. The shift from ranked lists to definitive answers is irreversible.


Featured Image: Paulo Bobita/Search Engine Journal

AI-Generated Content Isn’t The Problem, Your Strategy Is

“If AI can write, why are we still paying writers?” For any CMO or senior manager on a budget, you’ve probably already had a version of this conversation. It’s a seductive idea. After all, humans are expensive and can take hours or even days to write a single article. So, why not replace them with clever machines and watch the costs go down while productivity goes up?

It’s understandable. Buffeted by years of high inflation, high interest rates, and disrupted supply chains, organizations around the world are cutting costs wherever they can. These days, instead of “cost cutting,” CFOs and executive teams prefer the term “cost transformation,” a new jargon for the same old problem.

Whatever you call it, marketing is one department that is definitely feeling the impact. According to Gartner, in 2020, the average marketing budget was 11% of overall company revenue. By 2023, this had fallen to 9.1%. Today, the average budget is 7.7%.

Of course, some organizations will have made these cuts under the assumption that AI makes larger teams and larger budgets unnecessary. I’ve already seen some companies slash their content teams to the bone; no doubt believing that all you need is a few people capable of crafting a decent prompt. Yet a different Gartner study found that 59% of CMOs say they lack the budget to execute their 2025 strategy. I guess they didn’t get the memo.

Meanwhile, some other organizations refuse to let AI near their content at all, for a variety of reasons. They might have concerns over quality control, data privacy, complexity, and so on. Or perhaps they’re hanging onto the belief that this AI thing is a fad or a bubble, and they don’t want to implement something that might come crashing down at any moment.

Both camps likely believe they’ve adopted the correct, rational, financially prudent approach to AI. Both are dangerously wrong. AI might not be the solution, but it’s also not the problem.

Beeching’s Axe

Spanish philosopher George Santayana once wrote: “Those who cannot remember the past are condemned to repeat it.” With that in mind, let me share a cautionary tale.

In the 1960s, British Railways (later British Rail) made one of the most short-sighted decisions in transport history. With the railway network hemorrhaging money, the Conservative Government appointed Dr. Richard Beeching, a physicist from ICI with no transport experience, as the new chairman of the British Transport Commission, tasked with cutting costs and making the railways profitable.

Beeching’s solution was simple; do away with all unprofitable routes, identified by assessing the passenger numbers and operational costs of each route in isolation. Between 1963 and 1970, Beeching’s cost-cutting axe led to the closure of 2,363 stations and over 5,000 miles of track (~30% of the rail network), with the loss of 67,700 jobs.

Decades later, the country is spending billions rebuilding some of those same routes. As it turned out, many of those “unprofitable” routes were vital not only to the health of the wider rail network, but also to the communities in those regions in ways that Beeching’s team of bean counters simply didn’t have the imagination to value.

I’m telling you this because, right now, a lot of businesses are carrying out their own version of the Beeching cuts.

The Data-Led Trap

There’s a crucial distinction between being data-led and data-informed. Understanding this could be the difference between implementing a sound content production strategy and repeating Beeching’s catastrophe.

Data-led thinking treats the available data as the complete picture. It looks for a pattern and adopts it as an undeniable truth that points towards a clear course of action. “AI generates content for a fraction of our current costs. Therefore, we should replace the writers.”

Data-informed thinking sets out to understand what might be behind the pattern, extrapolate what’s missing from the picture, and stress-test the conclusions. The data becomes a starting point for inquiry, not an endpoint for decisions. “What value isn’t captured in this data? What would replacing our writers with AI actually mean for the effectiveness of our content when our competitors can do the exact same thing with the exact same tools?”

That last question is the real challenge facing companies considering AI-generated content, but the answer won’t be found in a spreadsheet. If you can use AI to generate your content with minimal human input, so can everyone else. Very soon, everyone is generating similar content on similar topics to target the same audiences, with recycled information and reheated “insights” drawn from the same online sources.

Why would ChatGPT somehow generate a better blog post for you than for anyone else asking for 1,200 words on the same topic? It wouldn’t. You need to add your own secret sauce.

There is no competitive advantage to be gained by relying on AI-generated content alone. None.

AI-generated content is not a silver bullet. It’s the minimum benchmark your content needs to significantly exceed if your brand and your content is to have any chance of standing out in today’s noisy online marketplace.

Unfortunately, while organizations know they need to have content, far too many senior decision-makers don’t fully understand why, never mind all the things an effective content strategy needs to accomplish.

Content Isn’t A Cost, It’s An Infrastructure

Marketing content is often looked down upon as somehow easier or less worthy than other forms of writing. Yet it arguably has the hardest job of all. Every article, ebook, LinkedIn post, brochure, and landing page has to tick off a veritable to-do list of strategic requirements.

Of course, your content needs to have something to say. It must work on an informational level, backed by solid research and journalism. However, each asset or article also has a strategic role to play: attracting audiences, nurturing prospects, or converting customers, while aligning with the brand’s carefully mapped out messaging at every stage.

Your content must build authority, earn trust, and demonstrate expertise. It must be memorable enough to aid brand awareness and recall, and distinctive enough to differentiate the brand from its competitors. It must be structured for search engines with the right entities, topics, and relationships, without losing the attention of busy humans who can click away at any second. Ideally, it should also include a couple of quote-worthy lines or interesting stats capable of attracting attention when the content is distributed on social media.

ChatGPT or Claude can certainly string a bunch of convincing sentences together. But if you think they can spin all those other plates for you at the same time, and to the same standard as a skilled content creator, you’re going to be disappointed. No matter how detailed and nuanced your prompt, something will always be missing. You’re still asking AI to synthesize something brilliant by recycling what’s already out there.

Which brings me to the most ironic part of this discussion. With the rapid adoption of AI-mediated search, your content now needs to become a source that large language models will confidently cite in responses to relevant queries.

Expecting AI to create content likely to be cited by AI is like watching a dog chasing its tail: futile and frustrating. If AI provided the information and insights contained in your content, it already has better, more authoritative sources. Why would AI cite content that contains little if any fresh information or insight?

If your goal is to increase your brand’s visibility in AI responses, then your content needs to offer what can’t easily be found elsewhere.

The Limitations Of Online Knowledge

Despite appearances, AI cannot think. It cannot understand, in the sense we usually mean it. As it currently stands, it cannot reason. It certainly cannot imagine. Words like these have emerged as common euphemisms for how AI generates responses, but they also set the wrong expectations.

AI also cannot use information that isn’t already available and crawlable online. While we like to think that somehow the internet is a massive store of the entirety of human knowledge, the reality is that it’s not even close.

So much of the world we live in simply cannot be captured as structured, digitized information. While AI can tell you when and where the next local collectables market is on, it can’t tell you which dealer has that hard-to-find comic you’ve been chasing for years. That’s the kind of information you can only find out by digging through lots of comic boxes on the day.

And then there are cultural histories and localized experiences that exist more in verbal traditions than in history books. AI can tell me plenty of stuff about the First World War. But if I ask it about the Iranian famine during WW1, it’s going to struggle because it’s not that well documented outside of Iranian history books. Most of my knowledge of the famine comes almost entirely from stories my great grandma told my mother, who then passed them on to me, like how she had to survive on just one almond per day. But you won’t find her stories in any book.

How can AI draw upon the wealth of personal experience and memories we all have? The greatest source of knowledge is human. It’s us. It’s always us.

But while AI can’t do your thinking for you, it can still help in many other ways.

→ Read More: Can You Use AI To Write For YMYL Sites? (Read The Evidence Before You Do)

You Still Need A Brain Behind The Bot

Let me be clear: I use AI every day. My team uses AI every day. You should, too. The problem isn’t the tool. The problem is treating the tool as a strategy, and an efficiency or cost reduction strategy at that. Of course, it isn’t only marketing teams hoping to reduce costs and boost productivity with generative AI. Another industry has already discovered that AI doesn’t actually replace anything.

A recent survey conducted by the Australian Financial Review (AFR) found that most law firms reported using AI tools. However, far from reducing headcount, 70% of surveyed firms increased their hiring of lawyers to vet, review, and sign off on AI-generated outputs.

This isn’t a failure in their AI strategy, because the strategy was never about reducing headcount. They’re using AI tools as digital assistants (research, drafting, document handling, etc.) to free up more time and headspace for the kinds of strategic and insightful thinking that generates real business value.

Similarly, AI isn’t a like-for-like replacement for your writers, designers, and other content creators. It’s a force multiplier for them, helping your team reduce the drudgery that can so often get in the way of the real work.

  • Summarizing complex information.
  • Transcribing interviews.
  • Creating outlines.
  • Drafting related content like social media posts.
  • Checking your content against the brand style guide to catch inconsistencies.

Some writers might even use AI to generate a very rough first draft of an article to get past that blank page. The key is to treat that copy as a starting point, not the finished article.

All these tasks are massive time-savers for content creators, freeing up more of their mental bandwidth for the high-value work AI simply can’t do as well.

AI can only synthesize content from existing information. It cannot create new knowledge or come up with fresh ideas. It cannot interview subject matter experts within your business to draw out hidden wisdom and insights. It cannot draw upon personal experiences or perspectives to make your content truly yours.

AI is also riddled with algorithmic biases, potentially skewing your content and your messaging without you even realizing. For example, the majority of AI training data is in the English language, creating a huge linguistic and cultural bias. It might require an experienced and knowledgeable eye to spot the subtle hallucinations or distortions.

While AI can certainly accelerate execution, you still need skilled, experienced creatives to do the real thinking and crafting.

You Don’t Know What You Have, Until It’s Gone

Until Beeching closed the line in 1969, the route between Edinburgh and Carlisle was a vital transport artery for the Scottish Borders. On paper, the line was unprofitable, at least according to Beeching’s simplistic methodology. However, the closure had massive knock-on effects, reducing access to jobs, education, and social services, as well as impacting tourism. Meanwhile, forcing people onto buses or into cars placed greater strain on other transport infrastructures.

While Beeching might have solved one narrowly defined problem, he had undermined the broader purpose of British Railways: the mobility of people in all parts of Great Britain. In effect, Beeching had shifted the consequences and cost pressures elsewhere.

The route was partially reopened in 2015 as The Borders Railway, costing an estimated £300 million to reinstate just 30 miles of line with seven stations.

Beeching’s cuts illustrate the folly of evaluating infrastructure (or content strategy) purely on narrow, short-term financial metrics.

Organizations that cut their teams in favor of AI are likely to find it isn’t so easy to reverse course and undo the damage a few years from now. Replacing your writers with AI risks eroding the connective tissue that characterizes your content ecosystem and anchors long-term performance: authority, context, nuance, trust, and brand identity.

Experienced content creators aren’t going to wait around for organizations to realize their true value. If enough of them leave the industry, and with fewer opportunities available for the next generation of creators to gain the necessary skills and experience, the talent pool is likely to shrink massively.

As with the Beeching cuts, rebuilding your content team is likely to cost you far more in the long term than you saved in the short term, particularly when you factor in the months or years of low-performing content in the meantime.

Know What You’re Cutting Before You Wield The Axe

According to your spreadsheet, AI-generated content may well be cheaper to produce. But the effectiveness of your content strategy doesn’t hinge on whether you can publish more for less. This isn’t a case of any old content will do.

So, beware of falling into the Beeching trap. Your content workflows might only seem “loss-making” on paper because the metrics you’re looking at don’t adequately capture all the ways your content delivers strategic value to your business.

Content is not a cost center. It never was. Content is the infrastructure of your brand’s discoverability, which makes it more important than ever in the AI era.

This isn’t a debate about “human vs. AI content.” It’s about equipping skilled people with the tools to help them create work worthy of being found, cited, and trusted.

So, before you start swinging the axe, ask yourself: Are you cutting waste, or are you dismantling the very system that makes your brand visible and credible in the first place?

Featured Image: IM Imagery/Shutterstock

Google’s Recommender System Breakthrough Detects Semantic Intent via @sejournal, @martinibuster

Google published a research paper about helping recommender systems understand what users mean when they interact with them. The goal of this new approach is to overcome the limitations inherent in current state-of-the-art recommender systems and reach a finer-grained understanding of what each individual user wants to read, listen to, or watch.

Personalized Semantics

Recommender systems predict what a user would like to read or watch next. YouTube, Google Discover, and Google News are examples of recommender systems for content; shopping recommendation engines are another kind.

Recommender systems generally work by collecting data about the kinds of things a user clicks on, rates, buys, and watches and then using that data to suggest more content that aligns with a user’s preferences.
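Under the hood, many such systems use collaborative filtering. The sketch below is a toy illustration of the idea (the ratings matrix, embedding size, and learning rate are all invented for this example, not taken from the paper): it factorizes a partially observed user-item ratings matrix into user and item embeddings, then recommends the highest-scoring unrated item.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy ratings matrix: 4 users x 5 items, 0 = not yet rated.
R = np.array([
    [5, 4, 0, 1, 0],
    [4, 0, 5, 1, 2],
    [1, 1, 0, 5, 4],
    [0, 2, 1, 4, 5],
], dtype=float)
observed = R > 0

# Factorize R ~ U @ V.T by gradient descent on the observed entries.
# The rows of U and V are the learned user and item embeddings.
k = 2
U = 0.1 * rng.normal(size=(4, k))
V = 0.1 * rng.normal(size=(5, k))
for _ in range(2000):
    err = observed * (R - U @ V.T)
    U += 0.01 * err @ V
    V += 0.01 * err.T @ U

pred = U @ V.T
rmse = np.sqrt(((observed * (R - pred)) ** 2).sum() / observed.sum())

# Recommend user 0's best unrated item.
unseen = np.where(~observed[0])[0]
best_item = unseen[np.argmax(pred[0, unseen])]
```

Production systems solve the same objective at much larger scale, for example with alternating least squares instead of plain gradient descent.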

The researchers referred to those kinds of signals as primitive user feedback because they are poor at capturing an individual’s subjective judgments about what’s funny, cute, or boring.

The intuition behind the research is that the rise of LLMs presents an opportunity to leverage natural language interactions to better understand what a user wants through identifying semantic intent.

The researchers explain:

“Interactive recommender systems have emerged as a promising paradigm to overcome the limitations of the primitive user feedback used by traditional recommender systems (e.g., clicks, item consumption, ratings). They allow users to express intent, preferences, constraints, and contexts in a richer fashion, often using natural language (including faceted search and dialogue).

Yet more research is needed to find the most effective ways to use this feedback. One challenge is inferring a user’s semantic intent from the open-ended terms or attributes often used to describe a desired item. This is critical for recommender systems that wish to support users in their everyday, intuitive use of natural language to refine recommendation results.”

The Soft Attributes Challenge

The researchers explained that hard attributes are something recommender systems can understand because they are objective ground truths like “genre, artist, director.” Where systems struggle is with “soft attributes,” which are subjective and have no definitive mapping to movies, content, or product items.

The research paper states the following characteristics of soft attributes:

  • “There is no definitive “ground truth” source associating such soft attributes with items
  • The attributes themselves may have imprecise interpretations
  • And they may be subjective in nature (i.e., different users may interpret them differently)”

The problem of soft attributes is the problem that the researchers set out to solve and why the research paper is called Discovering Personalized Semantics for Soft Attributes in Recommender Systems using Concept Activation Vectors.

Novel Use Of Concept Activation Vectors (CAVs)

Concept Activation Vectors (CAVs) are a way to probe AI models to understand the mathematical representations (vectors) the models use internally. They provide a way for humans to connect those internal vectors to concepts.

CAVs are normally used to interpret the model. The researchers reversed that direction: the goal is now to interpret the users, translating subjective soft attributes into mathematical representations the recommender system can work with. They found that adapting CAVs in this way produced vector representations that help models detect subtle intent and subjective human judgments that are personalized to an individual.

As they write:

“We demonstrate … that our CAV representation not only accurately interprets users’ subjective semantics, but can also be used to improve recommendations through interactive item critiquing.”

For example, the model can learn that users mean different things by “funny” and be better able to leverage those personalized semantics when making recommendations.

The problem the researchers are solving is figuring out how to bridge the semantic gap between how humans speak and how recommender systems “think.”

Humans think in concepts, using vague or subjective descriptions (called soft attributes).

Recommender systems “think” in math: They operate on vectors (lists of numbers) in a high-dimensional “embedding space”.

The problem then becomes disambiguating subjective human language without having to retrain or modify the recommender system to capture every nuance. The CAVs do that heavy lifting.
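As a rough sketch of the mechanics, with synthetic data standing in for a real collaborative-filtering model (the “funny” attribute, the dimensions, and the tagging rule are all invented for illustration), a simple CAV can be built as the direction in embedding space that separates tagged from untagged items; each item’s projection onto that direction then gives a soft-attribute score:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend these are item embeddings learned by a collaborative-filtering
# model in a 16-dimensional latent space.
n_items, dim = 200, 16
items = rng.normal(size=(n_items, dim))

# Simulate a hidden "funny" direction in that space, with a user tagging
# the items that lean furthest along it.
funny_axis = rng.normal(size=dim)
funny_axis /= np.linalg.norm(funny_axis)
tagged = items @ funny_axis > np.quantile(items @ funny_axis, 0.8)

# A simple CAV: the normalized difference between the mean embedding of
# tagged items and the mean embedding of untagged items. (The paper uses
# classifier-based CAVs; a mean-difference direction is a common variant.)
cav = items[tagged].mean(axis=0) - items[~tagged].mean(axis=0)
cav /= np.linalg.norm(cav)

# Project every item onto the CAV: a per-item "degree of funniness" score
# usable for critiquing or refining recommendations.
funniness = items @ cav
```

Because the direction is learned from a given user’s own tags, two users who mean different things by “funny” end up with different CAVs, which is the personalized semantics the paper describes.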

The researchers explain:

“…we infer the semantics of soft attributes using the representation learned by the recommender system model itself.”

They list four advantages of their approach:

“(1) The recommender system’s model capacity is directed to predicting user-item preferences without further trying to predict additional side information (e.g., tags), which often does not improve recommender system performance.

(2) The recommender system model can easily accommodate new attributes without retraining should new sources of tags, keywords or phrases emerge from which to derive new soft attributes.

(3) Our approach offers a means to test whether specific soft attributes are relevant to predicting user preferences. Thus, we are able focus attention on attributes most relevant to capturing a user’s intent (e.g., when explaining recommendations, eliciting preferences, or suggesting critiques).

(4) One can learn soft attribute/tag semantics with relatively small amounts of labelled data, in the spirit of pre-training and few-shot learning.”

They then provide a high-level explanation of how the system works:

“At a high-level, our approach works as follows. we assume we are given:

(i) a collaborative filtering-style model (e.g., probabilistic matrix factorization or dual encoder) which embeds items and users in a latent space based on user-item ratings; and

(ii) a (small) set of tags (i.e., soft attribute labels) provided by a subset of users for a subset of items.

We develop methods that associate with each item the degree to which it exhibits a soft attribute, thus determining that attribute’s semantics. We do this by applying concept activation vectors (CAVs) —a recent method developed for interpretability of machine-learned models—to the collaborative filtering model to detect whether it learned a representation of the attribute.

The projection of this CAV in embedding space provides a (local) directional semantics for the attribute that can then be applied to items (and users). Moreover, the technique can be used to identify the subjective nature of an attribute, specifically, whether different users have different meanings (or tag senses) in mind when using that tag. Such a personalized semantics for subjective attributes can be vital to the sound interpretation of a user’s true intent when trying to assess her preferences.”

Does This System Work?

One of the interesting findings came from a test with an artificial tag (“odd year”) that has no bearing on user preferences: the system’s accuracy for it was barely above random selection, which corroborated their hypothesis that “CAVs are useful for identifying preference related attributes/tags.”

They also found that using CAVs in recommender systems was useful for understanding “critiquing-based” user behavior and improved those kinds of recommender systems.

The researchers listed four benefits:

“(i) using a collaborative filtering representation to identify attributes of greatest relevance to the recommendation task;

(ii) distinguishing objective and subjective tag usage;

(iii) identifying personalized, user-specific semantics for subjective attributes; and

(iv) relating attribute semantics to preference representations, thus allowing interactions using soft attributes/tags in example critiquing and other forms of preference elicitation.”

They found that their approach improved recommendations in situations where discovery of soft attributes is important. Using this approach in situations where hard attributes are more the norm, such as product shopping, is a future area of study to see if soft attributes would aid in making product recommendations.

Takeaways

The research paper was published in 2024 and I had to dig around to actually find it, which may explain why it generally went unnoticed in the search marketing community.

Google tested some of this approach with an algorithm called WALS (Weighted Alternating Least Squares), actual production code that is a product in Google Cloud for developers.

Two notes in a footnote and in the appendix explain:

“CAVs on MovieLens20M data with linear attributes use embeddings that were learned (via WALS) using internal production code, which is not releasable.”

…The linear embeddings were learned (via WALS, Appendix A.3.1) using internal production code, which is not releasable.”

“Production code” refers to software that is currently running in Google’s user-facing products, in this case Google Cloud. It’s likely not the underlying engine for Google Discover; however, it’s worth noting because it shows how easily this approach can be integrated into an existing recommender system.

They tested this system using the MovieLens20M dataset, which is a public dataset of 20 million ratings, with some of the tests done with Google’s proprietary recommendation engine (WALS). This lends credibility to the inference that this code can be used on a live system without having to retrain or modify it.

The takeaway that I see in this research paper is that this makes it possible for recommender systems to leverage semantic data about soft attributes. Google Discover is regarded by Google as a subset of search, and search patterns are some of the data that the system uses to surface content. Google doesn’t say whether they are using this kind of method, but given the positive results, it is possible that this approach could be used in Google’s recommender systems. If that’s the case, then that means Google’s recommendations may be more responsive to users’ subjective semantics.

The research paper credits Google Research (60% of the credits), and also Amazon, Midjourney, and Meta AI.

The PDF is available here:

Discovering Personalized Semantics for Soft Attributes in Recommender Systems using Concept Activation Vectors

Featured Image by Shutterstock/Here

Reddit Introduces Max Campaigns, Its New Automated Campaign Type via @sejournal, @brookeosmundson

Reddit is rolling out Max campaigns, a new automated campaign type now available in beta for traffic and conversion objectives.

The launch comes as Reddit continues to see strong advertiser momentum, supported by rising daily active users and rapid growth in conversion activity.

While automation is now standard across most paid media platforms, Reddit is positioning Max campaigns as a way to simplify campaign management without asking advertisers to operate with limited visibility into performance or audience behavior.

How Reddit Max Campaigns Work

Max campaigns are designed to reduce setup complexity and ongoing management by automating several decisions advertisers typically make manually.

This includes the following, all within guardrails defined by the advertiser:

  • Audience targeting
  • Creative selection and rotation
  • Placements
  • Budget allocation

The system is powered by Reddit Community Intelligence™, which draws from more than 23 billion posts and comments to help predict the value of each ad impression in real time. These signals allow campaigns to adjust delivery dynamically as performance data changes, rather than relying on static rules or frequent manual intervention.

Max campaigns also introduce optional creative automation tools. Advertisers can generate headline suggestions based on trending Reddit language, automatically adapt images into Reddit-friendly thumbnails, and soon will be able to use AI-based video cropping to more easily reuse video assets from other platforms.

In the announcement, Reddit reports that more than 600 advertisers participated in alpha testing. Across 17 split tests conducted between June and August 2025, advertisers saw an average 17% lower cost per acquisition and 27% more conversions compared to business-as-usual campaigns.

In one example, Brooks Running reported a 37% decrease in cost per click and 27% more clicks over a 21-day campaign without making manual changes.

Why This Matters For Advertisers

Platforms like Google and Meta have spent the last several years pushing advertisers toward AI-driven campaign types that consolidate targeting, creative, and bidding into a single system. Performance Max, Advantage+, and similar offerings have become the default recommendation for scaling efficiency.

Reddit’s Max campaigns follow that same directional shift, but with a notable difference in emphasis. Where Google and Meta largely optimize toward outcomes while abstracting audience detail, Reddit is attempting to pair automation with clearer audience context.

On Google and Meta, advertisers often evaluate AI campaigns based on aggregate performance metrics alone, with limited insight into who is driving results beyond high-level breakdowns. Reddit is positioning Max campaigns as a way to automate delivery while still helping advertisers understand which types of users are engaging, what they care about, and how conversations influence response.

Top Audience Personas reflect this approach. Instead of relying solely on predefined segments or modeled interests, Reddit uses community and conversation signals to surface patterns in how real users engage with ads. These insights are not meant to replace targeting decisions, but to inform creative strategy, messaging, and where Reddit fits within a broader media mix.

For advertisers who have grown cautious of automation that prioritizes efficiency at the expense of understanding, this added layer of insight may be the differentiator.

What Advertisers Should Do Next

Max campaigns are now available in beta for traffic and conversion objectives to select advertisers, with wider access expected over the coming months. Top Audience Persona reporting is scheduled to roll out shortly after.

For advertisers already running Reddit campaigns, this is best treated as a controlled test. Running Max campaigns alongside existing setups can help clarify where automation improves efficiency and where hands-on input, especially around creative and community fit, still matters.

Advertisers coming from Performance Max or Advantage+ should expect familiar mechanics, but different signals. Reddit’s value is tied to conversation and context, so creative testing and message alignment will likely play a larger role than pure audience tuning.

As with any beta, things will change. The near-term opportunity is not just performance lift, but learning how Reddit’s version of automation behaves and where it fits alongside other AI-led campaigns in a broader media mix.

What’s next for AI in 2026

MIT Technology Review’s What’s Next series looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of them here.

In an industry in constant flux, sticking your neck out to predict what’s coming next may seem reckless. (AI bubble? What AI bubble?) But for the last few years we’ve done just that—and we’re doing it again. 

How did we do last time? We picked five hot AI trends to look out for in 2025, including what we called generative virtual playgrounds, a.k.a world models (check: From Google DeepMind’s Genie 3 to World Labs’s Marble, tech that can generate realistic virtual environments on the fly keeps getting better and better); so-called reasoning models (check: Need we say more? Reasoning models have fast become the new paradigm for best-in-class problem solving); a boom in AI for science (check: OpenAI is now following Google DeepMind by setting up a dedicated team to focus on just that); AI companies that are cozier with national security (check: OpenAI reversed position on the use of its technology for warfare to sign a deal with the defense-tech startup Anduril to help it take down battlefield drones); and legitimate competition for Nvidia (check, kind of: China is going all in on developing advanced AI chips, but Nvidia’s dominance still looks unassailable—for now at least). 

So what’s coming in 2026? Here are our big bets for the next 12 months. 

More Silicon Valley products will be built on Chinese LLMs

The last year shaped up as a big one for Chinese open-source models. In January, DeepSeek released R1, its open-source reasoning model, and shocked the world with what a relatively small firm in China could do with limited resources. By the end of the year, “DeepSeek moment” had become a phrase frequently tossed around by AI entrepreneurs, observers, and builders—an aspirational benchmark of sorts. 

It was the first time many people realized they could get a taste of top-tier AI performance without going through OpenAI, Anthropic, or Google.

Open-weight models like R1 allow anyone to download a model and run it on their own hardware. They are also more customizable, letting teams tweak models through techniques like distillation and pruning. This stands in stark contrast to the “closed” models released by major American firms, where core capabilities remain proprietary and access is often expensive.

As a result, Chinese models have become an easy choice. Reports by CNBC and Bloomberg suggest that startups in the US have increasingly recognized and embraced what they can offer.

One popular group of models is Qwen, created by Alibaba, the company behind China’s largest e-commerce platform, Taobao. Qwen2.5-1.5B-Instruct alone has 8.85 million downloads, making it one of the most widely used pretrained LLMs. The Qwen family spans a wide range of model sizes alongside specialized versions tuned for math, coding, vision, and instruction-following, a breadth that has helped it become an open-source powerhouse.

Other Chinese AI firms that were previously unsure about committing to open source are following DeepSeek’s playbook. Standouts include Zhipu’s GLM and Moonshot’s Kimi. The competition has also pushed American firms to open up, at least in part. In August, OpenAI released its first open-source model. In November, the Allen Institute for AI, a Seattle-based nonprofit, released its latest open-source model, Olmo 3. 

Even amid growing US-China antagonism, Chinese AI firms’ near-unanimous embrace of open source has earned them goodwill in the global AI community and a long-term trust advantage. In 2026, expect more Silicon Valley apps to quietly ship on top of Chinese open models, and look for the lag between Chinese releases and the Western frontier to keep shrinking—from months to weeks, and sometimes less.

Caiwei Chen

The US will face another year of regulatory tug-of-war

T​​he battle over regulating artificial intelligence is heading for a showdown. On December 11, President Donald Trump signed an executive order aiming to neuter state AI laws, a move meant to handcuff states from keeping the growing industry in check. In 2026, expect more political warfare. The White House and states will spar over who gets to govern the booming technology, while AI companies wage a fierce lobbying campaign to crush regulations, armed with the narrative that a patchwork of state laws will smother innovation and hobble the US in the AI arms race against China.

Under Trump’s executive order, states may fear being sued or starved of federal funding if they clash with his vision for light-touch regulation. Big Democratic states like California—which just enacted the nation’s first frontier AI law requiring companies to publish safety testing for their AI models—will take the fight to court, arguing that only Congress can override state laws. But states that can’t afford to lose federal funding, or fear getting in Trump’s crosshairs, might fold. Still, expect to see more state lawmaking on hot-button issues, especially where Trump’s order gives states a green light to legislate. With chatbots accused of triggering teen suicides and data centers sucking up more and more energy, states will face mounting public pressure to push for guardrails.

In place of state laws, Trump promises to work with Congress to establish a federal AI law. Don’t count on it. Congress failed to pass a moratorium on state legislation twice in 2025, and we aren’t holding out hope that it will deliver its own bill this year. 

AI companies like OpenAI and Meta will continue to deploy powerful super-PACs to support political candidates who back their agenda and target those who stand in their way. On the other side, super-PACs supporting AI regulation will build their own war chests to counter. Watch them duke it out at next year’s midterm elections.

The further AI advances, the more people will fight to steer its course, and 2026 will be another year of regulatory tug-of-war—with no end in sight.

Michelle Kim

Chatbots will change the way we shop

Imagine a world in which you have a personal shopper at your disposal 24-7—an expert who can instantly recommend a gift for even the trickiest-to-buy-for friend or relative, or trawl the web to draw up a list of the best bookcases available within your tight budget. Better yet, they can analyze a kitchen appliance’s strengths and weaknesses, compare it with its seemingly identical competition, and find you the best deal. Then once you’re happy with their suggestion, they’ll take care of the purchasing and delivery details too.

But this ultra-knowledgeable shopper isn’t a clued-up human at all—it’s a chatbot. This is no distant prediction, either. Salesforce recently said it anticipates that AI will drive $263 billion in online purchases this holiday season. That’s some 21% of all orders. And experts are betting on AI-enhanced shopping becoming even bigger business within the next few years. By 2030, agentic commerce will account for between $3 trillion and $5 trillion in annual sales, according to research from the consulting firm McKinsey.

Unsurprisingly, AI companies are already heavily invested in making purchasing through their platforms as frictionless as possible. Google’s Gemini app can now tap into the company’s powerful Shopping Graph data set of products and sellers, and can even use its agentic technology to call stores on your behalf. Meanwhile, back in November, OpenAI announced a ChatGPT shopping feature capable of rapidly compiling buyer’s guides, and the company has struck deals with Walmart, Target, and Etsy to allow shoppers to buy products directly within chatbot interactions. 

Expect plenty more of these kinds of deals to be struck within the next year as consumer time spent chatting with AI keeps on rising, and web traffic from search engines and social media continues to plummet. 

Rhiannon Williams

An LLM will make an important new discovery

I’m going to hedge here, right out of the gate. It’s no secret that large language models spit out a lot of nonsense, and barring monkeys-and-typewriters luck, they won’t discover anything by themselves. But LLMs do still have the potential to extend the bounds of human knowledge.

We got a glimpse of how this could work in May, when Google DeepMind revealed AlphaEvolve, a system that used the firm’s Gemini LLM to come up with new algorithms for solving unsolved problems. The breakthrough was to combine Gemini with an evolutionary algorithm that checked its suggestions, picked the best ones, and fed them back into the LLM to make them even better.
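That loop—have the LLM propose candidates, check them automatically, keep the best, and feed them back in—can be sketched in miniature. This is a hedged toy, not AlphaEvolve’s actual implementation: the proposer here is a random tweak standing in for Gemini, and the scoring function is an invented objective chosen only so the loop runs end to end.

```python
import random

# Toy stand-in for the LLM proposer: suggests a small variation on a
# candidate. In AlphaEvolve the proposer is Gemini; here it is a
# hypothetical random mutation so the sketch is runnable.
def propose(candidate):
    new = list(candidate)
    i = random.randrange(len(new))
    new[i] += random.choice([-1, 1])
    return new

# Automatic checker: scores a candidate (higher is better). AlphaEvolve
# verifies real algorithmic performance; this toy just rewards being
# close to a made-up target vector.
TARGET = [3, 1, 4]
def score(candidate):
    return -sum(abs(a - b) for a, b in zip(candidate, TARGET))

def evolve(generations=200, population_size=8):
    population = [[0, 0, 0] for _ in range(population_size)]
    for _ in range(generations):
        # Ask the "LLM" for variations of the current candidates...
        offspring = [propose(c) for c in population]
        # ...score everything, keep the best, and feed them back in.
        pool = population + offspring
        pool.sort(key=score, reverse=True)
        population = pool[:population_size]
    return population[0]

random.seed(0)
best = evolve()
```

The key design point is the middle step: the evolutionary selection acts as a filter, so the LLM’s many bad suggestions are discarded cheaply and only verified improvements survive to seed the next round.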

Google DeepMind used AlphaEvolve to come up with more efficient ways to manage power consumption by data centers and Google’s TPU chips. Those discoveries are significant but not game-changing. Yet. Researchers at Google DeepMind are now pushing their approach to see how far it will go.

And others have been quick to follow their lead. A week after AlphaEvolve came out, Asankhaya Sharma, an AI engineer in Singapore, shared OpenEvolve, an open-source version of Google DeepMind’s tool. In September, the Japanese firm Sakana AI released a version of the software called ShinkaEvolve. And in November, a team of US and Chinese researchers revealed AlphaResearch, which they claim improves on one of AlphaEvolve’s already better-than-human math solutions.

There are alternative approaches too. For example, researchers at the University of Colorado Denver are trying to make LLMs more inventive by tweaking the way so-called reasoning models work. They have drawn on what cognitive scientists know about creative thinking in humans to push reasoning models toward solutions that are more outside the box than their typical safe-bet suggestions.

Hundreds of companies are spending billions of dollars looking for ways to get AI to crack unsolved math problems, speed up computers, and come up with new drugs and materials. Now that AlphaEvolve has shown what’s possible with LLMs, expect activity on this front to ramp up fast.    

Will Douglas Heaven

Legal fights heat up

For a while, lawsuits against AI companies were pretty predictable: Rights holders like authors or musicians would sue companies that trained AI models on their work, and the courts generally found in favor of the tech giants. AI’s upcoming legal battles will be far messier.

The fights center on thorny, unresolved questions: Can AI companies be held liable for what their chatbots encourage people to do, as when they help teens plan suicides? If a chatbot spreads patently false information about you, can its creator be sued for defamation? If companies lose these cases, will insurers shun AI companies as clients?

In 2026, we’ll start to see the answers to these questions, in part because some notable cases will go to trial (the family of a teen who died by suicide will bring OpenAI to court in November).

At the same time, the legal landscape will be further complicated by President Trump’s executive order from December—see Michelle’s item above for more details on the brewing regulatory storm.

No matter what, we’ll see a dizzying array of lawsuits in all directions (not to mention some judges even turning to AI amid the deluge).

James O’Donnell

The Download: Kenya’s Great Carbon Valley, and the AI terms that were everywhere in 2025

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Welcome to Kenya’s Great Carbon Valley: a bold new gamble to fight climate change

In June last year, startup Octavia Carbon began running a high-stakes test in the small town of Gilgil in south-central Kenya, harnessing some of the excess energy generated by vast clouds of steam under the Earth’s surface to power prototypes of a machine that promises to remove carbon dioxide from the air in a manner that the company says is efficient, affordable, and—crucially—scalable.

The company’s long-term vision is undoubtedly ambitious—it wants to prove that direct air capture (DAC), as the process is known, can be a powerful tool to help the world keep temperatures from rising to ever more dangerous levels. 

But DAC is also a controversial technology, unproven at scale and wildly expensive to operate. On top of that, Kenya’s Maasai people have plenty of reasons to distrust energy companies. Read the full story.

Diana Kruzman

This article is also part of the Big Story series: MIT Technology Review’s most important, ambitious reporting. The stories in the series take a deep look at the technologies that are coming next and what they will mean for us and the world we live in. Check out the rest of them here.

AI Wrapped: The 14 AI terms you couldn’t avoid in 2025

If the past 12 months have taught us anything, it’s that the AI hype train is showing no signs of slowing. It’s hard to believe that at the beginning of the year, DeepSeek had yet to turn the entire industry on its head, Meta was better known for trying (and failing) to make the metaverse cool than for its relentless quest to dominate superintelligence, and vibe coding wasn’t a thing.

If that’s left you feeling a little confused, fear not. Our writers have taken a look back over the AI terms that dominated the year, for better or worse. Read the full list.

MIT Technology Review’s most popular stories of 2025

2025 was a busy and productive year here at MIT Technology Review. We published magazine issues on power, creativity, innovation, bodies, relationships, and security. We hosted 14 exclusive virtual conversations with our editors and outside experts in our subscriber-only series, Roundtables, and held two events on MIT’s campus. And we published hundreds of articles online, following new developments in computing, climate tech, robotics, and more.

As the new year begins, we wanted to give you a chance to revisit some of this work with us. Whether we were covering the red-hot rise of artificial intelligence or the future of biotech, these are some of the stories that resonated the most with our readers.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Washington’s battle to break up Big Tech is in peril
A string of judges have opted not to force tech giants to spin off key assets. (FT $)
+ Here’s some of the major tech litigation we can expect in the next 12 months. (Reuters)

2 Disinformation about the US invasion of Venezuela is rife on social media
And the biggest platforms don’t appear to be doing much about it. (Wired $)
+ Trump shared a picture of captured president Maduro on Truth Social. (NYT $)

3 Here’s what we know about Big Tech’s ties to the Israeli military
AI is central to its military operations, and giant US firms have stepped up to help. (The Guardian)

4 Alibaba’s AI tool is detecting cancer cases in China
PANDA is adept at spotting pancreatic cancer, which is typically tough to identify. (NYT $)
+ How hospitals became an AI testbed. (WSJ $)
+ A medical portal in New Zealand was hacked into last week. (Reuters)

5 This Discord community supports people recovering from AI-fueled delusions
They say reconnecting with fellow humans is an important step forward. (WP $)
+ The looming crackdown on AI companionship. (MIT Technology Review)

6 Californians can now demand data brokers delete their personal information 
Thanks to a new tool—but there’s a catch. (TechCrunch)
+ This California lawmaker wants to ban AI from kids’ toys. (Fast Company $)

7 Chinese peptides are flooding into Silicon Valley
The unproven drugs promise to heal injuries, improve focus and reduce appetite—and American tech workers are hooked. (NYT $)

8 Alaska’s court system built an AI assistant to navigate probate
But the project has been plagued by delays and setbacks. (NBC News)
+ Inside Amsterdam’s high-stakes experiment to create fair welfare AI. (MIT Technology Review)

9 These ghostly particles could upend how we think about the universe
The standard model of particle physics may have a crack in it. (New Scientist $)
+ Why is the universe so complex and beautiful? (MIT Technology Review)

10 Sick of the same old social media apps?
Give these alternative platforms a go. (Insider $)

Quote of the day

“Just an unbelievable amount of pollution.”

—Sharon Wilson, a former oil and gas worker who tracks methane releases, tells the Guardian what a thermal imaging camera pointed at xAI’s Colossus data center has revealed.

One more thing

How aging clocks can help us understand why we age—and if we can reverse it

Wrinkles and gray hairs aside, it can be difficult to know how well—or poorly—someone’s body is truly aging. A person who develops age-related diseases earlier in life, or has other biological changes associated with aging, might be considered “biologically older” than a similar-age person who doesn’t have those changes. Some 80-year-olds will be weak and frail, while others are fit and active.

Over the past decade, scientists have been uncovering new methods of looking at the hidden ways our bodies are aging. And what they’ve found is changing our understanding of aging itself. Read the full story.

—Jessica Hamzelou

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ You heard it here first: 2026 is the year of cabbage (yes, cabbage).
+ Darts is bigger than ever. So why are we still waiting for the first great darts video game? 🎯
+ This year’s CES is already off with a bang, courtesy of an essential, cutting-edge vibrating knife.
+ At least one good thing came out of that Stranger Things finale—streams of Prince’s excellent back catalog have soared.