The Download: the worst technology of 2025, and Sam Altman’s AI hype

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

The 8 worst technology flops of 2025

Welcome to our annual list of the worst, least successful, and simply dumbest technologies of the year.

We like to think there’s a lesson in every technological misadventure. But when technology becomes dependent on power, sometimes the takeaway is simpler: it would have been better to stay away.

Regrets—2025 had a few. Here are some of the more notable ones.

—Antonio Regalado

A brief history of Sam Altman’s hype

Almost every time you’ve heard a borderline outlandish idea of what AI will be capable of, it turns out that Sam Altman was, if not the first to articulate it, at least the most persuasive and influential voice behind it.

For more than a decade he has been known in Silicon Valley as a world-class fundraiser and persuader. Throughout, Altman’s words have set the agenda. What he says about AI is rarely provable when he says it, but it persuades us of one thing: This road we’re on with AI can go somewhere either great or terrifying, and OpenAI will need epic sums to steer it toward the right destination. In this sense, he is the ultimate hype man.

To understand how his voice has shaped our understanding of what AI can do, we read almost everything he’s ever said about the technology. His own words trace how we arrived here. Read the full story.

—James O’Donnell

This story is part of our new Hype Correction package, a collection of stories designed to help you reset your expectations about what AI makes possible—and what it doesn’t. Check out the rest of the package here.

Can AI really help us discover new materials?

One of my favorite stories in the Hype Correction package comes from my colleague David Rotman, who took a hard look at AI for materials research. AI could transform the process of discovering new materials—innovation that could be especially useful in the world of climate tech, which needs new batteries, semiconductors, magnets, and more.

But the field still needs to prove it can make materials that are actually novel and useful. Can AI really supercharge materials research? And what would that look like? Read the full story.

—Casey Crownhart

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 China built a chip-making machine to rival the West’s supremacy 
Suggesting China is far closer to achieving semiconductor independence than we previously believed. (Reuters)
+ China’s chip boom is creating a new class of AI-era billionaires. (Insider $)

2 NASA finally has a new boss
It’s billionaire astronaut Jared Isaacman, a close ally of Elon Musk. (Insider $)
+ But will Isaacman lead the US back to the Moon before China? (BBC)
+ Trump previously pulled his nomination, before reselecting Isaacman last month. (The Verge)

3 The parents of a teenage sextortion victim are suing Meta
Murray Dowey took his own life after being tricked into sending intimate pictures to an overseas criminal gang. (The Guardian)
+ It’s believed that the gang is based in West Africa. (BBC)

4 US and Chinese satellites are jostling in orbit
In fact, these clashes are so common that officials have given them a name: "dogfighting." (WP $)
+ How to fight a war in space (and get away with it) (MIT Technology Review)

5 It’s not just AI that’s trapped in a bubble right now
Labubus, anyone? (Bloomberg $)
+ What even is the AI bubble? (MIT Technology Review)

6 Elon Musk’s Texan school isn’t operating as a school
Instead, it’s a “licensed child care program” with just a handful of enrolled kids. (NYT $)

7 US Border Patrol is building a network of small drones
In a bid to expand its covert surveillance powers. (Wired $)
+ This giant microwave may change the future of war. (MIT Technology Review)

8 This spoon makes low-salt foods taste better
By driving the food’s sodium ions straight to the diner’s tongue. (IEEE Spectrum)

9 AI cannot be trusted to run an office vending machine
Though the lucky Wall Street Journal staffer who walked away with a free PlayStation may beg to differ. (WSJ $)

10 Physicists have 3D-printed a Christmas tree from ice 🎄
No refrigeration kit required. (Ars Technica)

Quote of the day

“It will be mentioned less and less in the same way that Microsoft Office isn’t mentioned in job postings anymore.”

—Marc Cenedella, founder and CEO of careers platform Ladders, tells Insider why employers will increasingly expect new hires to be fully au fait with AI.

One more thing

Is this the electric grid of the future?

Lincoln Electric System, a publicly owned utility in Nebraska, is used to weathering severe blizzards. But what will happen soon—not only at Lincoln Electric but for all electric utilities—is a challenge of a different order.

Utilities must keep the lights on in the face of more extreme and more frequent storms and fires, growing risks of cyberattacks and physical disruptions, and a wildly uncertain policy and regulatory landscape. They must keep prices low amid inflationary costs. And they must adapt to an epochal change in how the grid works, as the industry attempts to transition from power generated with fossil fuels to power generated from renewable sources like solar and wind.

The electric grid is bracing for a near future characterized by disruption. And, in many ways, Lincoln Electric is an ideal lens through which to examine what’s coming. Read the full story.

—Andrew Blum

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ A fragrance company is trying to recapture the scent of extinct flowers, wow. 
+ Seattle’s Sauna Festival sounds right up my street.
+ Switzerland has built what’s essentially a theme park dedicated to Saint Bernards.
+ I fear I’ll never get over this tale of director supremo James Cameron giving a drowning rat CPR to save its life 🐀

Take our quiz on the year in health and biotechnology

In just a couple of weeks, we’ll be bidding farewell to 2025. And what a year it has been! Artificial intelligence is being incorporated into more aspects of our lives, weight-loss drugs have expanded in scope, and there have been some real “omg” biotech stories from the fields of gene therapy, IVF, neurotech, and more.   

As always, the team at MIT Technology Review has been putting together our 2026 list of breakthrough technologies. That will be published in the new year (watch this space). In the meantime, my colleague Antonio Regalado has compiled his traditional list of the year’s worst technologies.

I’m inviting you to put your own memory to the test. Just how closely have you been paying attention to the Checkup emails that have been landing in your inbox this year?!

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

China figured out how to sell EVs. Now it has to bury their batteries.

In August 2025, Wang Lei decided it was finally time to say goodbye to his electric vehicle.

Wang, who is 39, had bought the car in 2016, when EVs still felt experimental in Beijing. It was a compact Chinese brand. The subsidies were good, and the salesman talked about “supporting domestic innovation.” At the time, only a few people around him were driving on batteries. He liked being early.

But now, the car’s range had started to shrink as the battery’s health declined. He could have replaced the battery, but the warranty had expired; the cost and trouble no longer felt worth it. He also wanted an upgrade, so selling became the obvious choice.

His vague plans turned into action after he started seeing ads on Douyin from local battery recyclers. He asked around at a few recycling places, and the highest offer came from a smaller shop on the outskirts of town. He added the contact on WeChat, and the next day someone drove over to pick up his car. He got paid 8,000 yuan. With the additional automobile scrappage subsidy offered by the Chinese government, Wang ultimately pocketed about 28,000 yuan.

Wang is part of a much larger trend. In the past decade, China has seen an EV boom, thanks in part to government support. Buying an electric car has gone from a novel decision to a routine one; by late 2025, nearly 60% of new cars sold were electric or plug-in hybrids.

But as the batteries in China’s first wave of EVs reach the end of their useful life, early owners are starting to retire their cars, and the country is now under pressure to figure out what to do with those aging components.

The issue is putting strain on China’s still-developing battery recycling industry and has given rise to a gray market that often cuts corners on safety and environmental standards. National regulators and commercial players are also stepping in, building out formal recycling networks and take-back programs, but so far these efforts have struggled to keep pace with the flood of batteries coming off the road.

Like the batteries in our phones and laptops, those in EVs today are mostly lithium-ion packs. Their capacity drops a little every year, making the car slower to charge, shorter in range, and more prone to safety issues. Three professionals who work in EV retail and battery recycling told MIT Technology Review that a battery is often considered to be ready to retire from a car after its capacity has degraded to under 80%. The research institution EVtank estimates that the year’s total volume of retired EV batteries in China will come in at 820,000 tons, with annual totals climbing toward 1 million tons by 2030. 

In China, this growing pile of aging batteries is starting to test a recycling ecosystem that is still far from fully built out but is rapidly growing. By the end of November 2025, China had close to 180,000 enterprises involved in battery recycling, and more than 30,000 of them had been registered since January 2025. Over 60% of the firms were founded within the past three years. This does not even include the unregulated gray market of small workshops.

Typically, one of two things happens when an EV’s battery is retired. One is called cascade utilization, in which usable battery packs are tested and repurposed for slower applications like energy storage or low-speed vehicles. The other is full recycling: Cells are dismantled and processed to recover metals such as lithium, nickel, cobalt, and manganese, which are then reused to manufacture new batteries. Both these processes, if done properly, take significant upfront investment that is often not available to small players. 

But smaller, illicit battery recycling centers can offer higher prices to consumers because they ignore costs that formal recyclers can’t avoid, like environmental protection, fire safety, wastewater treatment, compliance, and taxes, according to the three battery recycling professionals MIT Technology Review spoke to.

“They [workers] crack them open, rearrange the cells into new packs, and repackage them to sell,” says Gary Lin, a battery recycling worker who worked in several unlicensed shops from 2022 to 2024. Sometimes, the refurbished batteries are even sold as “new” to buyers, he says. When the batteries are too old or damaged, workers simply crush them and sell them by weight to rare-metal extractors. “It’s all done in a very brute-force way. The wastewater used to soak the batteries is often just dumped straight into the sewer,” he says. 

This poorly managed battery waste can release toxic substances, contaminate water and soil, and create risks of fire and explosion. That is why the Chinese government has been trying to steer batteries into certified facilities. Since 2018, China’s Ministry of Industry and Information Technology has issued five “white lists” of approved power-battery recyclers, now totaling 156 companies. Despite this, formal recycling rates remain low compared with the rapidly growing volume of waste batteries.

China is not only the world’s largest EV market; it has also become the main global manufacturing hub for EVs and the batteries that power them. In 2024, the country accounted for more than 70% of global electric-car production and more than half of global EV sales, and firms like CATL and BYD together control close to half of global EV battery output, according to a report by the International Energy Agency. These companies are stepping in to offer solutions to customers wishing to offload their old batteries. Through their dealers and 4S stores, many carmakers now offer take-back schemes or opportunities to trade in old batteries for a discount when owners scrap a vehicle or buy a new one. 

BYD runs its own recycling operations that process thousands of end-of-life packs a year and has launched dedicated programs with specialist recyclers to recover materials from its batteries. Geely has built a “circular manufacturing” system that combines disassembly of scrapped vehicles, cascade use of power batteries, and high recovery rates for metals and other materials.

CATL, China’s biggest EV battery maker, has created one of the industry’s most developed recycling systems through its subsidiary Brunp, with more than 240 collection depots, an annual disposal capacity of about 270,000 tons of waste batteries, and metal recovery rates above 99% for nickel, cobalt, and manganese. 

“No one is better equipped to handle these batteries than the companies that make them,” says Alex Li, a battery engineer based in Shanghai. That’s because they already understand the chemistry, the supply chain, and the uses the recovered materials can be put to next. Carmakers and battery makers “need to create a closed loop eventually,” he says.

But not every consumer can receive that support from the maker of their EV, because many of those manufacturers have ceased to exist. In the past five years, over 400 smaller EV brands and startups have gone bankrupt as the price war made it hard to stay afloat, leaving only 100 active brands today. 

Analysts expect many more used batteries to hit the market in the coming years, as the first big wave of EVs bought under generous subsidies reaches retirement age. Li says, “China is going to need to move much faster toward a comprehensive end-of-life system for EV batteries—one that can trace, reuse and recycle them at scale, instead of leaving so many to disappear into the gray market.”

Ecommerce Marketing amid AI ‘Slop’

“Slop” is the word of the year for 2025, according to the human editors of the Merriam-Webster dictionary.

“We define slop as ‘digital content of low quality that is produced usually in quantity by means of artificial intelligence.’ All that stuff dumped on our screens, captured in just four letters: the English language came through again,” the editors wrote.

The folks at Merriam-Webster may have been reacting to a technology that threatens their own livelihood. And while mediocre content is a problem for ecommerce marketing, it is not new.

Generative AI did not introduce slop so much as streamline its speed and quantity. Recognizing this distinction is key to creating content that delivers a return on investment.


Before AI

Long before genAI, content mills mastered the art of large-scale, low-cost production.

These word-factories relied on vast pools of underpaid writers, rigid templates, and keyword-driven briefs to publish thousands of articles quickly. Editorial oversight was minimal. Speed mattered more than accuracy, originality, or even usefulness.

For ecommerce brands, this often meant competing for search engine rankings against threadbare “best of” lists, affiliate bait, and generic product pages written by folks who had never seen the products they described. These pages existed to capture search traffic, not to help shoppers.

So long as search engines rewarded keywords and recency, low-cost content arbitrage was profitable, despite frequent algorithm updates aimed at combating it.

AI Amplification

Generative AI dramatically lowers the cost of content. What once required thousands of writers now takes little more than prompts, scripts, and publishing pipelines.

AI agents replaced underpaid writers.

The output even looks better. AI-generated content is readable, structured, and confident. It rarely reads like the keyword-stuffed search-engine bait of the early 2010s. That polish makes it hard for shoppers to distinguish genuine expertise from synthetic fluency.

Unfortunately, AI content can be wrong. Large language models hallucinate and make errors in logic. They are biased.

Compared to human-made versions, AI content is often clearer yet more fallible. The difference is not so much the quality of the prose as its relative trustworthiness and the thinking behind it.

Using AI

Nonetheless, marketers still aim to attract, engage, and retain readers. The need is not to avoid AI-generated content, but rather to use it well.

AI should not replace human thinking; instead, it should research, clarify, and facilitate it. For example:

  • Research and first drafts. AI can research and generate a starting point, not a final asset. Humans — merchandisers, marketers, experts — shape the final output through experience, nuance, and learning.
  • Clarity and purpose. Is the goal education, engagement, or conversion? AI performs best when guided by intent rather than vague prompts.
  • Facilitate human context and insights. This includes common customer questions, product comparisons, usage notes, and merchandising expertise. No model can scrape direct human knowledge.

For example, an ecommerce team might use AI to draft a buying guide for cordless drills. A product manager could then refine it based on real-world catalog constraints, such as in-stock models, warranty differences, and customer feedback. The AI provides structure and speed. The human provides judgment.

The same approach applies to product descriptions, FAQs, and category pages. AI accelerates first drafts and variations, but humans ensure claims are accurate, benefits are correct, and language aligns with brand voice. This hybrid workflow produces content that scales without sacrificing trust.
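The human-review step in that hybrid workflow can be partly automated before an editor ever sees a draft. The sketch below is illustrative only: the banned-claim list, required-topic list, and `review_draft` function are hypothetical examples of the kinds of brand-voice and accuracy checks a team might define, not rules from any real style guide.

```python
# Hypothetical guardrail checks a human editor might codify for AI drafts.
# The specific claims and terms below are invented for illustration.
BANNED_CLAIMS = ["best in the world", "guaranteed results"]
REQUIRED_TERMS = ["warranty"]  # e.g. every drill guide must cover the warranty

def review_draft(draft: str) -> list[str]:
    """Return issues a human editor should resolve before publishing."""
    issues = []
    lowered = draft.lower()
    for claim in BANNED_CLAIMS:
        if claim in lowered:
            issues.append(f"unverifiable claim: '{claim}'")
    for term in REQUIRED_TERMS:
        if term not in lowered:
            issues.append(f"missing required topic: '{term}'")
    return issues

print(review_draft("This drill offers guaranteed results for every job."))
```

Checks like these don't replace the editor's judgment; they just surface the obvious problems first, so human attention goes to nuance rather than policing boilerplate.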

It’s not slop. It’s AI output guided by humans. And it might be the best way to create marketing content.

Not All Slop

The web survived keyword stuffing, article spinning, and content farms. It will survive AI slop, too.

The lesson is clear for ecommerce marketers: AI changes the tools, not the fundamentals. Genuine content that helps shoppers decide will outperform the mass-produced alternative.

The winners will be the most useful marketers, not the loudest.

Who Benefits When The Line Between SEO And GEO Is Blurred via @sejournal, @DuaneForrester

The search industry is entering a transition that many people still treat as a footnote. The systems consumers rely on are changing, and the way information is gathered, summarized, and delivered is changing with them. Yet the public messaging around what businesses should do sounds as familiar as ever. The narrative says the fundamentals are the same. The advice sounds the same. The expectations sound the same. The message is that SEO still covers everything that matters.

But the behavior of the consumer says otherwise. The way modern systems retrieve and present information says otherwise. And the incentives of the companies that shape those systems explain why the narrative has not kept up with reality.

This is not a story about conflict. It is not about calling out any company or naming any platform. It is about understanding why continuity messaging persists and why businesses cannot afford to take it at face value. The shift from a click-driven model to an answer-driven model is measurable, visible, and documented. The only question is who benefits when the line between SEO and GEO stays blurry, and who loses when it does.


The Shift Is Already Visible In The Data

Let’s start with some data. Certainly not all the data, but some, at least. Bain and Company published research showing that about 80% of consumers who use search now rely on AI-written summaries for at least 40% of their queries. They also found that organic traffic across many categories has fallen by 15-25% because of this shift.

Pew Research analyzed how people behave when AI summaries appear on the results page. Their findings show that people click traditional links in about 8% of visits when an AI summary is present. When the summary is absent, that number rises to roughly 15%.

Ahrefs published a study showing that when AI summaries appear, the click-through rate of the top organic result drops by about 34%.

Seer Interactive measured outcomes across thousands of queries and found a 61% decline in organic click-through on informational queries that surfaced an AI summary. Paid click-through dropped by 68% for the same class of queries.

BrightEdge expanded the picture. They compared outputs across multiple AI answer engines and found that different systems disagree with each other about brand mentions roughly 62% of the time.

These sources do not frame the shift as speculation. They show structural change. Consumers click less when AI summaries appear. They rely more on answer layers. They perform fewer traditional searches. And the systems producing those answers do not behave the same way.
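As a quick sanity check, the relative decline implied by the Pew figures above can be computed directly (assuming the cited ~8% and ~15% click rates), and it lands in the same range as the Ahrefs and Seer findings:

```python
# Back-of-the-envelope arithmetic on the Pew figures cited above:
# ~8% of visits produce a traditional click with an AI summary present,
# ~15% without one.
with_summary = 0.08
without_summary = 0.15

relative_drop = (without_summary - with_summary) / without_summary
print(f"Relative decline in clicks when a summary appears: {relative_drop:.0%}")
```

A roughly 47% relative decline, which sits between the 34% drop Ahrefs measured for the top organic result and the 61% decline Seer found on informational queries.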

Given this, why is the message still that nothing significant has changed and that existing SEO practices still cover the full scope of visibility work?

Continuity Is Not Accidental. It Is Incentivized

The answer lies in incentives. Established platforms rely on a steady stream of aligned content that fits their current systems and supports the development of the answer structures they use today. They need predictability in that supply. If businesses abruptly redirected their focus toward optimizing for environments outside the classic ranking model, the flow of content into traditional indexing systems would change. Telling the world that the best path forward is to keep improving content in the same ways they always have offers stability. It reduces confusion. It keeps expectations manageable. And it slows the need for new measurement frameworks that reveal how much the system has shifted away from click-based visibility.

Agencies and consultants also benefit when the line stays blurry. If GEO is described as nothing more than SEO with a different label, they can market the same playbooks with fewer operational changes. They do not need to retrain teams in retrieval-based behavior. They do not need to produce new deliverables or learn new data models. They can continue selling the same work, packaged for a new era, without changing the underlying skill set. For many firms, the incentives favor consistency rather than reinvention.

Tool vendors tied to traditional SEO signals benefit from the same continuity. If GEO is framed as the same as SEO, the pressure to rebuild their systems around vector retrieval, chunk inspection, citation tracking, and cross-engine output analysis decreases. Re-architecting tools to support answer era optimization is expensive. Downplaying the distinction buys time.

None of these incentives are wrong. They are normal. Every industry reacts this way when a shift threatens the established workflows, revenue models, and expectations. But these incentives explain why the message of continuity persists even when the data shows otherwise.

This Is Where SEO And GEO Genuinely Overlap

So, where does SEO end and GEO begin? The overlap is real. If your content is thin, outdated, or buried behind inaccessible structures, you will struggle everywhere. Technical fundamentals still matter. Clear writing still matters. Structured data still matters. Authority still matters. These are non-negotiable for both SEO and GEO.

But the differences are too large to ignore. SEO focuses on pages and rankings. GEO focuses on fragments and retrieval. SEO aims to earn the click. GEO aims to earn presence inside the answer the consumer sees. SEO tracks impressions and click-through. GEO tracks citations, mentions, and answer share. SEO studies snippets. GEO studies how different systems pull, blend, and frame information. SEO treats the page as the unit of value. GEO treats the block as the unit of value.

This Is Where The Work Begins To Diverge

Modern answer engines retrieve specific content blocks, synthesize them, and present the result in compressed form. They may cite a source. They may not. They may mention a brand directly. They may not. They may surface a recommendation from a third party that never appears in traditional analytics. They may pull from locations you do not control.

In this environment, the mechanics of visibility change. You now need to design content in discrete, self-contained blocks that can be safely lifted and reused. You need to make entity relationships, attributes, and actions machine-readable. You need to track how AI systems present your information across different platforms. You need to understand that retrieval behavior varies across systems and that answers diverge even when content remains the same. You also need metrics that describe visibility on surfaces where no click occurs.
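The "self-contained block" idea above can be sketched concretely. The snippet below is a toy illustration under stated assumptions: it splits a page at headings and flags blocks that would not stand alone if an answer engine lifted them. The liftability heuristic (a block needs its own heading and a non-trivial body) is my own simplification, not a published GEO standard, and the example page content is invented.

```python
# Toy sketch: chunk a page into heading-led blocks and flag any block
# that would not stand alone if lifted into an AI answer.
def split_into_blocks(page: str) -> list[dict]:
    """Split markdown-style text into blocks at '## ' headings."""
    blocks, current = [], None
    for line in page.splitlines():
        if line.startswith("## "):
            if current:
                blocks.append(current)
            current = {"heading": line[3:].strip(), "body": []}
        elif current:
            current["body"].append(line)
    if current:
        blocks.append(current)
    return blocks

def is_liftable(block: dict) -> bool:
    """Assumed heuristic: a block is safe to lift if it has a heading
    and enough body text to make sense out of context."""
    return bool(block["heading"]) and len(" ".join(block["body"]).split()) >= 5

page = """## Cordless drill runtime
A 4Ah battery typically runs 30-60 minutes under load.

## Warranty
Varies by brand."""

for b in split_into_blocks(page):
    print(b["heading"], "->", "liftable" if is_liftable(b) else "too thin")
```

Real GEO auditing would go further (entity markup, citation tracking across engines), but even a crude pass like this shifts the unit of analysis from the page to the block, which is the core of the divergence described here.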

Consumer Behavior Explains The Rest

Consumer behavior reinforces this need. Deloitte found that adoption of generative AI more than doubled year over year, and that 38% of consumers now use it for real tasks rather than experimentation.

Recent 2025 consumer data shows that many people already rely on generative AI tools to find and understand information, not just to generate content or complete tasks. A nationally representative survey of more than 5,000 U.S. adults, conducted in April 2025 and published in June 2025, found that consumers are using AI tools for everyday information needs, including answering questions, explaining topics, and summarizing complex material.

When people ask questions directly and trust the answer they receive, the role of the page shifts. The business still needs pages, but the consumer may never see them. The information is what matters. The structure is what matters. The clarity is what matters. The authority signal is what matters. The ability of the system to retrieve and use your content is what matters.

Traffic Is No Longer A Reliable Proxy For Influence

And humbly, I think we need to move past conversations like “this platform only sends one percent of my traffic, so it’s hard to justify the investment.” That framing assumes traffic is still the primary signal of influence. In an answer-driven environment, that assumption no longer holds. Consumers increasingly get what they need without ever visiting a site, even when that site’s information directly shaped the answer they trusted. A system may never deliver more than single-digit referral traffic, not because it lacks impact, but because consumer behavior has changed. The most meaningful new signals to watch are adoption, frequency of use, and the types of tasks people rely on each system for. Those metrics tell you where influence is forming, even when clicks never happen.

This is why businesses cannot treat SEO and GEO as interchangeable. The fundamentals overlap, but the goals do not. SEO helps you win in ranking environments. GEO helps you stay visible in answer environments. SEO prepares your site for discovery. GEO prepares your information for use. SEO earns the visit. GEO earns the recommendation.

When the line between SEO and GEO stays blurry, the incumbents benefit from stability. Agencies benefit from simplicity. Vendors benefit from delayed change. But the businesses relying on visibility lose clarity. They chase rankings that look strong while losing share in the answer layers their customers have a rapidly growing reliance on. They measure success by clicks even as those clicks decline. They optimize pages while the systems shaping decisions optimize information blocks.

The shift does not replace SEO. It adds to it. It builds on it. It requires everything SEO already demands, plus new work that reflects how information is retrieved and used in modern systems. Leaders need clear definitions so they can plan effectively. The teams doing the work need clear expectations so they can build the right skills. And executives need accurate metrics so they can make informed decisions. New metrics beyond the scope of established SEO-centric data points we operate with today.

Clarity, Not Comfort, Is The Real Advantage

Clarity is the unlock. Not alarm. Not hype. Not denial. Just clarity. The industry is moving toward answer-driven discovery. The companies that understand this will position themselves to win across environments, not just inside a ranking model that served the last decade well. Visibility now lives in multiple layers. The business that adapts to those layers will own its share of attention. The ones that rely on continuity messaging will fall behind without realizing it until the results flatten.

The sands are shifting. The work is changing. And the businesses willing to see the difference between SEO and GEO will be the ones ready for the environments consumers have growing trust in. At some point in our near future, I expect platforms to start sharing AI-related data with businesses. We already see this beginning with third-party tool providers, many of which are leaning into the change. Now we need the platforms themselves to share their first-party data with us. But until crucial questions around revenue generation, traffic delivery, and decision-making metrics are answered, we’ll be in flux.


This post was originally published on Duane Forrester Decodes.



The User Journey Isn’t Linear Anymore: It’s Always On via @sejournal, @TaylorDanRW

For many years, organizations have relied on a familiar view of the customer journey. The idea that a user moves from awareness to consideration to decision in a neat and predictable line has shaped how brands communicate, measure, and invest.

The rise of AI has shown that this model no longer reflects how people actually behave, and the real journey is fluid, continuous, and influenced by many sources at once.

The shift is not a small adjustment; it is a fundamental change in how demand is created and how decisions are made.

Why The Linear Model No Longer Fits

The linear path breaks because AI now compresses what used to be separate stages into a single moment.

A person can ask for suggestions, comparisons, recommendations, suitability checks, and next steps within one interaction, and the technology responds by folding several layers of intent into one answer.

Users no longer need to progress step by step, as discovery, evaluation, and shortlisting can now happen together. The impact on how brands attract and retain attention is significant, as every touchpoint becomes a potential point of influence.

People enter the journey from many different places and at many different times.

Large language models (LLMs), marketplaces, social platforms, email, traditional search, and emerging assistants all serve as both starting points and accelerators, making the first touchpoint no longer predictable.

The real pattern resembles a series of loops rather than a line, as users move back and forth while refining their wants, and AI tools guide them through this exploration by shaping and clarifying their thinking.

A simple example shows how quickly the linear model breaks.

A person thinking about buying new running shoes might previously have searched for brands, compared prices, read reviews, visited a store, and only then made a decision.

Today, the same person can ask an AI assistant for recommendations based on their running style, previous injuries, preferred terrain, and budget, and receive tailored options in seconds.

The assistant can provide comparisons, highlight differences, explain fit considerations, and even suggest alternatives that the user had not considered.

The process jumps across multiple stages at once, without the user moving through a sequence of pages or channels. The journey becomes a loop of questions and refinements rather than a straight line.

The process is not a funnel; it is a dynamic system driven by intent that evolves with every interaction.

The Rise Of The Always-On Journey

The idea of an always-on journey captures this reality. Decisions are shaped gradually, through signals, prompts, and small moments of influence spread across multiple environments. There is no fixed beginning or end, only windows where your brand becomes relevant based on the user’s needs, context, and constraints.

AI widens these windows by introducing products and services during tasks that might seem unrelated, so discovery is not something brands can schedule or stage. It happens whenever the technology sees an opportunity to help the user progress.

This shift has also been accelerated by the way major technology companies are positioning AI as a core feature of their products. Smartphones, laptops, and operating systems now include assistants that are marketed as everyday companions capable of producing content, planning tasks, answering questions, and guiding decisions.

The advertising behind these features plays a significant role in shaping user behavior, as it encourages people to rely on AI in more situations and across a broader range of personal and professional tasks.

The journey towards adoption follows a simple path: from seeing to understanding, trusting, trying, using, and eventually scaling.

Image from author, December 2025

People see the feature highlighted in a product launch or advert, understand the benefit through demonstrations, begin to trust the technology when they see it used in credible scenarios, try it themselves, start using it regularly, and eventually scale it across more tasks. Each step is reinforced by the device ecosystem, which keeps the technology present and available throughout the day.

This pattern means the user is never far from an AI-driven touchpoint, which in turn keeps the journey active at all times. The more familiar users become with these tools, the more naturally they integrate them into decision-making. The result is a journey that does not pause between stages but remains in motion, shaped by continuous access to assistance, advice, and recommendations.

What Organizations Need To Do

Organizations need to adapt by treating every asset as a potential entry point. Product pages, support articles, category pages, guides, tools, videos, and reviews can all be surfaced first, so each one must stand on its own and communicate value without relying on the rest of the site or campaign.

This requires clarity, structure, and consistency, because users (and AI systems) will not follow the path you expect.

Brands also need to think in terms of anticipation rather than reaction. When a user is exploring options, they benefit from seeing comparisons, trade-offs, alternatives, and clear explanations of who a product is for and who it is not for.

These elements help people imagine how the product or service might fit into their situation, which strengthens trust and improves the likelihood of being included in the user’s shortlist, even if the journey restarts several times.

AI tools rely on machine-readable signals to understand a brand, and structured information now carries more weight than ever. Organizations that invest in clear product data, logical information architecture, and consistent descriptions make it easier for AI systems to explain their offer. This is not a technical exercise; it is a strategic requirement for visibility in an environment where users expect fast, accurate guidance.
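One common way to express that machine-readable product data is schema.org markup in JSON-LD. The sketch below, in Python, builds a minimal Product object; every product detail here is a hypothetical placeholder, not a recommendation for any specific brand.

```python
import json

# Minimal sketch of schema.org Product markup in JSON-LD.
# All names, prices, and descriptions are hypothetical placeholders.
product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Trail Runner X",  # hypothetical product name
    "description": "Lightweight trail-running shoe for uneven terrain.",
    "brand": {"@type": "Brand", "name": "ExampleBrand"},
    "offers": {
        "@type": "Offer",
        "price": "129.99",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
}

# Serialized for embedding in a <script type="application/ld+json"> tag.
jsonld = json.dumps(product, indent=2)
print(jsonld)
```

The point is less the specific properties than the consistency: the same structured facts that power rich results in classic search give AI systems an unambiguous description of what you sell.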

Conclusion: Adapting To An Always-On Reality

Success will come from supporting the journey rather than trying to control it. People will continue to loop, reassess, and re-enter from new angles, yet they will respond well to brands that stay present, offer clarity, and help them navigate choices with confidence.

The customer journey never truly followed a line, and AI has simply revealed how dynamic it has always been.

Brands that recognize this shift and adapt their approach will build stronger connections and remain relevant in a market that no longer moves from stage to stage but operates continuously, always on.

Featured Image: hmorena/Shutterstock

Coursera Acquiring Udemy via @sejournal, @martinibuster

Coursera has agreed to acquire Udemy in a stock-for-stock transaction that will combine two large online learning platforms with consumer and enterprise businesses.

Under the terms of the deal, each Udemy share will be exchanged for 0.800 Coursera shares. Following the transaction, Coursera shareholders will own approximately 59% of the combined company, while Udemy shareholders will own about 41%. The merged company will continue operating as Coursera, Inc., headquartered in Mountain View, California. Greg Hart will remain CEO, and Coursera’s Andrew Ng will serve as chairman. The companies expect the transaction to close in the second half of 2026, subject to shareholder approval and regulatory clearances.

Coursera’s platform is built around partnerships with universities, institutions, and industry organizations, with a focus on credentialed learning programs. Udemy operates an open marketplace of instructors and provides training programs used by enterprise customers. The combined company is expected to offer academic courses, professional skills training, and enterprise learning programs through a single platform.

The companies report a combined total of more than 270 million registered learners and nearly 19,000 enterprise customers. Coursera contributes institutional partnerships and credential-focused offerings, while Udemy contributes a large instructor marketplace and a broad enterprise customer base. Udemy generates a majority of its revenue outside North America, while Coursera generates a larger share of revenue in the United States.

If completed, the transaction will bring together institutional learning programs and an open instructor marketplace within a single company.

General reaction online was surprise, with one Udemy instructor unsure where he stood, writing on X:

“I can’t tell what this acquisition by Coursera means for my future as a Udemy instructor. Time will tell.
I will definitely keep on teaching – on one platform or another.

But learning that a brand that was THE main part of my professional life for the last 10 years will go away is really very, very sad.”

Read more at the Coursera website:

Coursera to combine with Udemy

Featured Image by Shutterstock/ShutterStockies

The Future Of Content In An AI World: Provenance & Trust In Information

When Emily Epstein shared her perspective on LinkedIn about how “people didn’t stop reading books when encyclopedias came out,” it sparked a conversation about the future of primary sources in an AI-driven world.

In this episode, Katie Morton, Editor-in-Chief of Search Engine Journal, and Emily Anne Epstein, Director of Content at Sigma, dig into her post and unpack what AI really means for publishers, content creators, and marketers now that AI tools present shortcuts to knowledge.

Their discussion highlights the importance of provenance, the layers involved in online knowledge acquisition, and the need for more transparent editorial standards.

If you’re a content creator, this episode can help you gain insight into how to provide value as the competition for attention becomes a competition for trust.

Watch the video or read the full transcript below:

Katie Morton: Hello, everybody. I’m Katie Morton, Editor-in-Chief of Search Engine Journal, and today I’m sitting down with Emily Anne Epstein, Director of Content at Sigma. Welcome, Emily.

Emily Anne Epstein: Thanks so much. I’m so excited to be here.

Katie: Me too. Thanks for chatting with me. So Emily wrote a really excellent post on LinkedIn that caught my attention. Emily, for our audience, would you mind summarizing that post for us?

Emily: So this should feel both shocking and non-shocking to everybody. But the idea is, people didn’t stop reading books when encyclopedias came out. And this is a response to the hysteria that’s going on with the way AI tools are functioning as summarizing devices for complicated and complex situations. And so the idea is, just because there’s a shortcut now to acquiring knowledge, it doesn’t mean we’re getting rid of the need for primary sources and original sources.

These two different types of knowledge acquisition exist together, and they layer on top of one another. You may start your book report with an encyclopedia or ChatGPT search, but what you find there doesn’t matter if you can’t back it up. You can’t just say in a book report, “I heard it in Encarta.” Where did the information come from? I think about the way this is going to transform search: There’s simply going to be layers now.

Maybe start your search with an AI tool, but you’ll need to finish somewhere else that organizes primary sources, provides deeper analysis, and even shows contradictions that go into creating knowledge.

Because a lot of what these synthesized summaries do is present a calm, “impartial” view of reality. But we all know that’s not true. All knowledge is biased in some way because it cannot be “all-containing.”

The Importance Of Provenance

Katie: I want to talk about something you mentioned in your LinkedIn post: provenance. What needs to happen, whether culturally, editorially, or socially, for “show me the source material” to become standard in AI-assisted search?

With Wikipedia or encyclopedias, ideally, people should still cite the original source, go deeper into the analysis, and be able to say, “Here’s where this information came from.” How do we get there so people aren’t just skimming surface-level summaries and taking them as gospel?

Emily: First, people need to use these tools, and there needs to be a reckoning with how reliable they are. Thinking about provenance means thinking about knowledge acquisition as triangulation. So, when I was a journalist, you have to balance hearsay, direct quotes, press releases, and social media.

You create your story from a variety of sources, so that way, you get something that’s in the middle and can explain multiple truths and realities. That comes from understanding that truth has never been linear, and reality is fracturing.

What AI does, even more advanced than that, is deliver personalized responses. People are prompting their models differently, so we’re all working from different sets of information and getting different answers. Once reality is fractured to that degree, knowing where something comes from – the provenance – becomes essential for context.

And triangulation won’t just be important for journalists; it’s going to be important for everyone because people make decisions based on the information that they receive.

If you get bad inputs, you’ll get bad outputs, make bad decisions, and that affects everything from your work to your housing. People will need to triangulate a better version of reality that is more accurate than what they’re getting from the first person or the first tool they asked.

Creators: Competing For Attention To Competing For Trust

Katie: So if AI becomes the top layer in how people access information – designed to hold attention within its own ecosystem – what does that mean for content creators and publishers? It feels like they’re creating a commodity that AI then repackages as its own.

How do you see that playing out for creators in terms of revenue and visibility?

Emily: Instead of competing for attention, creators and publishers will compete for trust. That means making editorial standards more transparent. They’re going to have to show the work that they’re doing. Because with most AI tools, you don’t see how they work, it’s a bit of a black box.

But if creators can serve as a “blockchain” (a verifiable ledger of information sources), showing their sources and methods, that will be their value.

Think about photography. When it first came out, it was considered a science. People thought photos were pure fact. Then, darkroom techniques like dodging and burning or combining multiple exposures showed that photos could lie.

And when photography became an art form, people realized that the photographer’s role was to provide a filter. That’s where we are with AI. There are filters on every piece of information that we receive.

And those organizations that make their filter transparent are going to be more successful, and people will return to them because again, they’re getting better information. They know where it’s coming from, so they can make better decisions and live better lives.

AI Hallucinations & Deepfakes

Emily: It was a shocking moment in the history of photography that people could lie with photographs. And that’s sort of where we are right now. Everybody is using AI, and we know there are hallucinations, but we have to understand that we cannot trust this tool, generally speaking, unless it shows its work.

Katie: And the risks are real. We’re already seeing AI voiceovers and video deepfakes mimicking creators, often without their consent.

Inspiring People To Go Deeper

Katie: In your post, you ended with “people still doing the work of deciding what’s enough.” In an attention economy of speed and convenience, how do we help people go deeper?

Emily: The idea that people don’t want to go deeper flies in the face of Wikipedia holes. People start with summarized information, but then click a citation, keep going further, watch another show, keep digging.

People want more of what they want. If you give them a breadcrumb of fascinating information, they’ll want more of that. Knowledge acquisition has an emotional side. It gives you dopamine hits: “I found that, that’s for me.”

And as content marketers, we have to provide that value for people where they say, ‘Wow, I am smarter because of this information. I like this brand because this brand has invested in my intelligence and my betterment.’

And for content creators, that needs to be the gold star.

Wrapping Up

Katie: Right on. For those who want to follow your work, where can they find you?

Emily: I’m dialoging and writing my thoughts on AI out loud and in public on LinkedIn. Come join me, and let’s think out loud together.

Katie: Sounds great. And I’m always at searchenginejournal.com. Thank you so much, Emily, for taking the time today.

Emily: Thank you!

Featured Image: Paulo Bobita/Search Engine Journal

Google’s Robby Stein Names 5 SEO Factors For AI Mode via @sejournal, @martinibuster

Robby Stein, Vice President of Product for Google Search, recently sat down for an interview where he answered questions about how Google’s AI Mode handles quality, how Google evaluates helpfulness, and how it leverages its experience with search to identify which content is helpful, including metrics like clicks. He also outlined five quality SEO-related factors used for AI Mode.

How Google Controls Hallucinations

Stein answered a question about hallucinations, instances where an AI fabricates information in its answers. He said that the quality systems within AI Mode are based on everything Google has learned about quality from 25 years of experience with classic search. The systems that determine what links to show and whether content is good are encoded within the model and are based on Google’s experience with classic search.

The interviewer asked:

“These models are non-deterministic and they hallucinate occasionally… how do you protect against that? How do you make sure the core experience of searching on Google remains consistent and high quality?”

Robby Stein answered:

“Yeah, I mean, the good news is this is not new. While AI and generative AI in this way is frontier, thinking about quality systems for information is something that’s been happening for 20, 25 years.

And so all of these AI systems are built on top of those. There’s an incredibly rigorous approach to understanding, for a given question, is this good information? Are these the right links? Are these the right things that a user would value?

What’s all the signals and information that are available to know what the best things are to show someone. That’s all encoded in the model and how the model’s reasoning and using Google search as a tool to find you information.

So it’s building on that history. It’s not starting from scratch because it’s able to say, oh, okay, Robbie wants to go on this trip and is looking up cool restaurants in some neighborhood.

What are the things that people who are doing that have been relying on on Google for all these years? We kind of know what those resources are we can show you right there. And so I think that helps a lot.

And then obviously the models, now that you release the constraint on layout, obviously the models over time have also become just better at instruction following as well. And so you can actually just define, hey, here are my primitives, here are my design guidelines. Don’t do this, do this.

And of course it makes mistakes at times, but I think just the quality of the model has gotten so strong that those are much less likely to happen now.”

Stein’s explanation makes clear that AI Mode is encoded with everything learned from Google’s classic search systems rather than being a rebuild from scratch or a break from them. The risk of hallucinations is managed by grounding AI answers in the same relevance, trust, and usefulness signals that have underpinned classic search for decades. Those signals continue to determine which sources are considered reliable and which information users have historically found valuable. Accuracy in AI search follows from that continuity, with model reasoning guided by longstanding search quality signals rather than operating independently of them.

How Google Evaluates Helpfulness In AI Mode

The next question is about the quality signals that Google uses within AI Mode. Robby Stein’s answer explains that the way AI Mode determines quality is very much the same as with classic search.

The interviewer asked:

“And Robbie, as search is evolving, as the results are changing and really, again, becoming dynamic, what signals are you looking at to know that the user is not only getting what they want, but that is the best experience possible for their search?”

Stein answered:

“Yeah, there’s a whole battery of things. I mean, we look at, like we really study helpfulness and if people find information helpful.

And you do that through evaluating the content kind of offline with real people. You do that online by looking at the actual responses themselves.

And are people giving us thumbs up and thumbs downs?

Are they appreciating the information that’s coming?

And then you kind of like, you know, are they using it more? Are they coming back? Are they voting with their feet because it’s valuable to you.

And so I think you kind of triangulate, any one of those things can lead you astray.

There’s lots of ways that, interestingly, in many products, if the product’s not working, you may also cause you to use it more.

In search, it’s an interesting thing.

We have a very specific metric that manages people trying to use it again and again for the same thing.

We know that’s a bad thing because it means that they can’t find it.

You got to be really careful.

I think that’s how we’re building on what we’ve learned in search, that we really feel good that the things that we’re shipping are being found useful by people.”

Stein’s answer shows that AI Mode evaluates success using the same core signals used for search quality, even as the interface becomes more dynamic. Usefulness is not inferred from a single engagement signal but from a combination of human evaluation, explicit feedback, and behavioral patterns over time.

Importantly, Stein notes that heavy usage, presumably within a single session, is not treated as success on its own, since repeated attempts to answer the same query indicate failure rather than satisfaction. The takeaway is that AI Mode’s success is judged by whether users are satisfied, using quality signals designed to detect friction and confusion as much as positive engagement. This carries continuity over from classic search rather than redefining what usefulness means.

Five Quality Signals For AI Search

Lastly, Stein answered a question about the ranking of AI-generated content and whether SEO best practices still help with ranking in AI search. His answer includes five factors used to determine whether a website meets Google’s quality and helpfulness standards.

Stein answered:

“The core mechanic is the model takes your question and reasons about it, tries to understand what you’re trying to get out of this.

It then generates a fan-out of potentially dozens of queries that are being Googled under the hood. That’s approximating what information people have found helpful for those questions.

There’s a very strong association to the quality work we’ve done over 25 years.

Is this piece of content about this topic?

Has someone found it helpful for the given question?

That allows us to surface a broader diversity of content than traditional Search, because it’s doing research for you under the hood.

The short of it is the same things apply.

  1. Is your content directly answering the user’s question?
  2. Is it high quality?
  3. Does it load quickly?
  4. Is it original?
  5. Does it cite sources?

If people click on it, value it, and come back to it, that content will rank for a given question and it will rank in the AI world as well.”
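The “fan-out” Stein describes can be illustrated with a toy sketch: expand one question into several sub-queries, retrieve results for each, and merge them. To be clear, this is not Google’s implementation; the template expansions, the tiny corpus, and the keyword-overlap matching rule are all invented for demonstration.

```python
# Toy illustration of "query fan-out": one question becomes several
# sub-queries, each is searched, and the results are merged.
# NOT Google's actual system; everything here is invented for illustration.

def fan_out(question: str) -> list[str]:
    # A real system would use a language model to generate sub-queries;
    # here we use fixed templates.
    return [
        question,
        f"best {question}",
        f"{question} reviews",
        f"{question} comparison",
    ]

def retrieve(query: str, corpus: dict[str, set[str]]) -> set[str]:
    # Return documents whose keyword sets overlap the query's words.
    words = set(query.lower().split())
    return {doc for doc, kws in corpus.items() if words & kws}

def answer(question: str, corpus: dict[str, set[str]]) -> set[str]:
    # The union across fanned-out queries gives broader coverage
    # than the single original query would.
    docs: set[str] = set()
    for q in fan_out(question):
        docs |= retrieve(q, corpus)
    return docs

# Hypothetical mini-corpus: document name -> keywords it is "about".
corpus = {
    "shoe-guide": {"running", "shoes", "guide"},
    "review-page": {"running", "shoes", "reviews"},
    "compare-page": {"running", "shoes", "comparison"},
}
print(sorted(answer("running shoes", corpus)))
```

The design point mirrors Stein’s framing: each sub-query taps into existing knowledge about which content has been helpful for that narrower question, and the merged set surfaces a broader diversity of pages than a single query could.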

Watch the interview starting at about the one-hour-twenty-three-minute mark:

Let’s Be Honest About The Ranking Power Of Links via @sejournal, @martinibuster

What link building should be trying to accomplish, in my opinion, is proving that a site is trustworthy and making sure the machine understands what topic your web pages fit into. The way to communicate trustworthiness is to be careful about what sites you obtain links from and to be super careful about what sites your site links out to.

Context Of Links Matter

Maybe it doesn’t have to be said but I’ll say it: It’s important now more than ever that the page your link is on has relevant content on it and that the context for your link is an exact match for the page that’s being linked to.

Outgoing Links Can Signal A Site Is Poisoned

Also make sure that the outgoing links are to legitimate sites, not to sites that are low quality or in problematic neighborhoods. If those kinds of links are anywhere on the site it’s best to consider the entire site poisoned and ignore it.

The reason I say to consider the site poisoned is the link distance ranking algorithm concept where inbound links tell a story about how trustworthy a site is. Low quality outbound links are a signal that something’s wrong with the site. It’s possible that a site like that will have its ability to pass PageRank removed.

Reduced Link Graph

This is how the Reduced Link Graph works, where the spammy sites are kicked out of the link graph and only the legit sites are kept for ranking purposes and link propagation. The link graph can be thought of as a map of the internet with websites connected to each other by links. When you kick out the spammy sites that’s called the reduced link graph.
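The reduced link graph idea can be sketched in a few lines: drop known-spam nodes and every edge touching them, keeping only links among trusted sites. The site names and the spam list below are invented for illustration; real systems infer spamminess from many signals rather than a fixed list.

```python
# Minimal sketch of a "reduced link graph": remove spam nodes and any
# edge that touches them, leaving only links between trusted sites.
# Site names and the spam set are hypothetical.

def reduce_link_graph(links, spam):
    """links: list of (source, target) pairs; spam: set of spam site names."""
    return [(s, t) for s, t in links if s not in spam and t not in spam]

links = [
    ("blog-a", "shop-b"),
    ("spam-x", "shop-b"),   # inbound link from a spam site: dropped
    ("shop-b", "spam-y"),   # outbound link to a spam site: dropped
    ("guide-c", "blog-a"),
]
spam = {"spam-x", "spam-y"}

print(reduce_link_graph(links, spam))
# Only edges between trusted sites remain.
```

Notice that "shop-b" keeps its trusted inbound link but loses the edge it pointed at a spam site, which is the intuition behind outbound links acting as a risk signal.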

Search engines are at a point where they can rank websites based on the content alone. Links still matter, but the content itself is now the highest-level ranking factor. I suspect that in general the link signal isn’t very healthy right now. Fewer people are blogging across all topics. Some topics have a healthy blogging ecosystem, but in general there aren’t professors blogging about technology in the classroom, and there aren’t HR executives sharing workplace insights, and so on, the way there were ten or fifteen years ago.

Links for Inclusion

I’m of the opinion that links increasingly are useful for determining if a site is legit, high quality, and trustworthy, deeming it worthy for consideration in the search results. In order to stay in the SERPs it’s important to think about the outbound links on your site and the sites you obtain links from. Think in terms of reduced link graphs, with spammy sites stuck on the outside within their own spammy cliques and the non-spam on the inside within the trusted Reduced Link Graph.

In my opinion, you must be in the trusted Reduced Link Graph in order to stay in play.

Is Link Building Over?

Link building is definitely not over. Links are still important. What needs to change is how they are acquired. The age of blasting out emails at scale is over. There aren’t enough legitimate websites to make that worthwhile. It’s better to be selective and targeted about which sites you get a (free) link from.

Something else that’s becoming increasingly important is citations, other sites talking about your site. An interesting thing right now is that sponsored articles, sometimes known as native advertising, will get cited in AI search engines, including Google AI Overviews and AI Mode. This is a great way to get a citation in a way that will not hurt your rankings as long as the sponsored article is clearly labeled as sponsored and the outbound links are nofollowed.

Takeaways

  • Links As Trust And Context Signals, Not Drivers Of Ranking
    Links increasingly function to confirm that a site is legitimate and topically aligned, rather than to directly push rankings through volume or anchor text manipulation as in the old days.
  • The Reduced Link Graph Matters
    Search engines filter out spammy or low-quality sites, leaving a smaller trusted network where links and associations still count. Being outside this trusted graph puts sites at risk of exclusion.
  • Content Matters, Links Qualify
    Search engines can rank many pages based on content alone, but links can still act as a gatekeeper for credibility and inclusion, especially for competitive topics.
  • Outbound Links Are A Risk Signal
    Linking out to low-quality or problematic sites can damage a site’s perceived trustworthiness and its ability to pass value.
  • Traditional Link Building Is Obsolete
    Scaled outreach, anchor text strategies, and chasing volume are ineffective in an AI-driven search environment.
  • Citations Are Rising In Importance
    Mentions and discussions of a website can help a site rank better in AI search engines.
  • Sponsored Articles
    Sponsored articles that are properly labeled as sponsored content and contain nofollowed links are increasingly surfaced in AI search features and contribute to visibility.

Link building is still relevant, but not in the way it used to be. Its function now is likely more about establishing whether a site is legitimate and clearly associated with a real topic area, not to push rankings through volume, anchors, or scale. Focusing on clean outbound links, selective relationships with trusted sites, and credible citations keeps a site inside the trusted reduced link graph, which is the condition that allows strong content to compete and appear in both traditional search results and AI-driven search surfaces.

Featured Image by Shutterstock/AYO Production