Inventor recalls eye imaging breakthrough

If you’ve been to an eye doctor and had an image taken of the inside of your eye, chances are good it was done with optical coherence tomography (OCT)—a technology invented by clinician-scientist David Huang ’85, SM ’89, PhD ’93, and now used in 40 million procedures per year. 

OCT is a noninvasive technique used to produce detailed images of complicated biological tissues such as the retina and the plaques that can build up in coronary arteries. It maps the time-of-flight of light waves reflected from tissue and paints a high-resolution picture of internal structures. 

“It uses infrared light that’s barely visible compared to the bright flash of fundus photography [another common method of eye imaging] and provides a lot more information—three-dimensional rather than two-dimensional information—at higher resolution,” Huang says. The discovery earned him and his co-inventors slots in the National Inventors Hall of Fame in 2025 as well as the Lasker Award and the National Medals of Technology and Innovation in 2023.

Huang didn’t expect to change the paradigm of eye imaging when he began studying electrical engineering as an undergraduate at MIT, but he was interested in using an engineering mindset to contribute to medical advancements. That, he thought, could be his way to follow in the footsteps of his father, who was a family practitioner. 

OCT emerged from his work as an MD-PhD student in the Harvard-MIT Program in Health Sciences and Technology. While studying ultrafast lasers at MIT under James Fujimoto ’79, SM ’81, PhD ’84, the Elihu Thomson Professor of Electrical Engineering, Huang was tasked with using the lasers to improve various ophthalmological tasks, including measuring the thickness of the cornea and retina. 

Huang thought an approach known as interferometry, which could measure the time of flight down to one quadrillionth of a second, could improve thickness measurements to micrometer resolution. Huang’s experiments revealed that the technique was able to detect very faint signals arising from fine internal structures within the retina. Fujimoto and Huang realized the potential for inventing a new type of imaging and enlisted the help of Eric Swanson, SM ’84, who was using interferometry for intersatellite communications at Lincoln Laboratory, to develop an OCT machine for biological applications. Huang tested the new machine on several types of tissues accessed through Harvard Medical School and found it particularly successful in imaging retinal and coronary artery samples. He and his colleagues published their initial findings in Science in 1991, establishing OCT as a new imaging modality.

“Because of our ability to form collaborations with medical doctors and the more advanced technologies that were easily accessible at Lincoln Lab and MIT, we were able to make this new imaging technology take off when other people who were exploring around the same area were not able to demonstrate imaging results,” he says. 

After the groundbreaking invention, Huang finished his academic and medical training as an ophthalmologist while Fujimoto and Swanson formed a startup company to ensure that the device got into medical offices. 

Over the decades since, Huang has continued to refine OCT for various applications. Today, as the director of research at Oregon Health and Science University’s Casey Eye Institute, he leads research groups exploring new ways to use OCT in techniques such as OCT angiography (imaging blood flow down to the capillary level) and OCT optoretinography (mapping the light response in retinal photoreceptor cells). 

In addition to conducting research, Huang sees patients and is a cofounder of GoCheck Kids, a digital platform for pediatric eye screening. 

Huang credits his knack for innovation to his position at the nexus of diverse fields. “It’s hard for a pure medical doctor or a pure laser engineer to realize that there is an opportunity to invent a new device that solves a real problem in the clinic,” he says. “But it’s really easy when you have knowledge on both sides.” 

Roundtables: Unveiling The 10 Things That Matter in AI Right Now


Watch a special edition of Roundtables simulcast live from EmTech AI, MIT Technology Review’s signature conference for AI leadership. Subscribers got an exclusive first look at a new list capturing 10 key technologies, emerging trends, bold ideas, and powerful movements in AI that you need to know about in 2026.

Speakers: Grace Huckins, AI reporter, hosted this session as Amy Nordrum and Niall Firth, executive editors, unveiled the list onstage.

Recorded on April 21, 2026


B2B Ecommerce Powers Africa Retail

Consumer-focused ecommerce in Africa faces the challenge of high customer acquisition costs and complex residential delivery.

Yet in Sub-Saharan Africa, approximately 90% of consumer spending remains anchored in physical retail: mom-and-pop shops, neighborhood kiosks, and market stalls.

Consequently, ecommerce is shifting toward B2B distributors that serve these retailers directly. These platforms are moving beyond delivery apps into core supply chain infrastructure, taking on inventory sourcing and trade credit.

Retail Aggregation


Nigeria-based TradeDepot is a prominent B2B distributor. Image: TradeDepot.

In many African cities — Lagos, Nairobi, Cairo — consumers make frequent, small-value, in-person purchases at small shops and market stalls. Supplying those sellers in bulk lowers overall restock costs.

In Lagos, for instance, where gridlock can reduce a B2C courier’s daily capacity, a B2B truck delivering to a concentrated retail node can move five times the volume in a single trip.

For example, Nigeria’s TradeDepot uses a sophisticated pre-selling model in which its fleet moves only across a specific cluster of shops. This ensures that every truck leaving the warehouse has a guaranteed high-density route.

Working Capital

Banks struggle to lend to small physical shops owing to little visibility into daily cash flow and inventory turnover. B2B distributors, by contrast, capture this data with every SKU delivered.

With this visibility, distributors can offer revolving inventory credit themselves.

For example, B2B distributors MaxAB and Wasoko (merged in 2024) collectively serve over 450,000 African merchants. In Egypt, the company’s finance arm generates over $180 million in annual turnover, outpacing its core ecommerce division. With repayment rates reportedly above 99%, the distributor becomes the acquisition channel, and working capital becomes the product.

Visibility

B2B distributors are changing how demand is understood and controlled.

Historically, fast-moving consumer goods brands sold into wholesale networks and lost granular visibility once products left the warehouse.

Distributors such as MaxAB-Wasoko provide SKU-level visibility at the point of retail. A brand manager at, say, Unilever or Nestlé can now see what is selling and where.

This real-time data enables brands to adjust pricing and allocate inventory precisely, bypassing the friction typically absorbed by intermediaries.

For Brands

  • Prioritize infrastructure. Focus on distributors that own the last mile — the distance between the warehouse and the retail shelf.
  • Look beyond the goods. Physical goods are primarily a vehicle for data acquisition in a 2-5% margin environment. Long-term profitability may lie in embedded finance.
  • Solve for continuity. A small-shop retailer’s primary threat is the stock-out. Brands that guarantee inventory availability will win over those competing on price alone.
ChatGPT Ads Now Offer CPC Bidding Between $3 And $5: Report via @sejournal, @MattGSouthern

Digiday reports that an early version of ChatGPT’s ads manager, available to a subset of pilot advertisers, now shows cost-per-click bids ranging from $3 to $5, based on screenshots reviewed and verified by the publication.

Until now, advertisers in the pilot have paid on a CPM basis, meaning a flat rate per 1,000 impressions served. CPC pricing lets buyers pay only when a user clicks. Digiday reported the option is available to marketers already testing advertising in the pilot, not as a broad rollout. OpenAI didn’t respond to Digiday’s request for comment.
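
One way to compare the two models is to convert a CPM into the cost per click it implies at a given click-through rate. Here is a minimal sketch of that arithmetic; the 0.5% CTR is a hypothetical assumption, not a reported ChatGPT figure, while the $25 CPM appears in the reporting below.

```python
# Translating a CPM rate into the effective cost per click it implies.
# The CTR here is a hypothetical assumption for illustration only.

def effective_cpc(cpm: float, ctr: float) -> float:
    """Cost per click implied by a CPM rate at a given click-through rate."""
    return cpm / (1000 * ctr)

print(effective_cpc(cpm=25.0, ctr=0.005))  # -> 5.0, i.e., $5.00 per click
```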

Pricing Has Been Falling Since Launch

The CPC addition follows a drop in ChatGPT ad pricing since the pilot launched on February 9, 2026.

CPMs have fallen from $60 at launch to as low as $25 in some cases, per Digiday’s earlier reporting. Digiday also reported the minimum spend commitment has fallen from $250,000 at launch to $50,000, alongside the quiet release of a self-serve ads manager that gives a subset of pilot advertisers the ability to monitor impressions and clicks in real time.

What CPC Pricing Means For Buyers

CPM and CPC pricing serve different advertiser bases. Brand advertisers tend to plan around CPM. Performance marketers, who account for the majority of online ad spend, prefer to pay for clicks rather than impressions.

Adding CPC bidding opens the channel to a buyer category that has largely sat out the pilot. Nicole Greene, VP analyst at Gartner, told Digiday that the pricing change lets advertisers directly compare their results on OpenAI with those on other major platforms.

What ChatGPT clicks are worth depends on where they land relative to existing channels. According to ad agency Adthena (cited by Digiday), Meta CPCs run at a third to a fifth the cost of Google Search clicks, not because Meta’s inventory is worse, but because the intent behind those clicks is different. Social platform users tend to browse without a specific goal, while search users typically have one in mind.

The pricing drops ChatGPT into the same intent-and-value debate advertisers already face when comparing social clicks with search clicks.

Why This Matters

CPC bidding moves ChatGPT advertising into a territory where performance marketers can plan campaigns and compare costs directly against Google and Meta. Combined with the lower minimum spend, the channel is accessible to a wider buyer base than the enterprise tier that defined its launch.

SEJ’s Brooke Osmundson covered the implications for paid media teams in her analysis of whether ChatGPT Ads warrant real budget yet.

A CPM-only enterprise pilot has, in roughly 10 weeks, become a self-serve channel with a $50,000 minimum, lower CPMs, and now CPC pricing visible to a subset of advertisers. Each step down has opened the channel to a different category of buyer.

Looking Ahead

Paid media teams running search and social campaigns should compare ChatGPT’s clicks for intent quality and conversions. Measurement tools are limited and inconsistent, so teams must plan proxy measurement until OpenAI’s reporting improves.

OpenAI is hiring its first advertising marketing science leader, per Digiday. Until that role is filled, advertisers will be evaluating ChatGPT clicks largely on faith.

Google Ads Makes Call Recording Default For AI Lead Calls via @sejournal, @MattGSouthern

Google Ads has enabled call recording by default for eligible call flows associated with AI-qualified call leads, with exceptions for prior opt-outs and certain sensitive verticals.

A new Google support page describes the feature, which uses AI to evaluate phone conversations instead of relying on call duration alone to count conversions.

What Changed

Google Ads previously classified a phone call as a conversion primarily based on its duration. Google’s documentation says the new system analyzes call recordings to identify signals of intent, such as a caller asking about specific services, scheduling a consultation, or indicating readiness to purchase.

Google describes the classification as tiered (a code sketch of the fallback logic follows the list):

  • Primary signal: call recording. If recording is on, AI evaluates the conversation and only qualified calls count as conversions.
  • Secondary signal: call duration. If a call can’t be recorded, duration determines whether it counts.
  • Tertiary signal: ad interaction. If no Google forwarding number is available, ad interaction data is used.
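
The tiered order lends itself to a simple fallback. Below is an illustrative sketch of that logic as the documentation describes it; the function, parameter names, and duration threshold are hypothetical assumptions, not part of any Google Ads API.

```python
# Illustrative sketch of the tiered fallback Google describes. All names
# and the threshold are hypothetical, not Google Ads API calls.

def classify_call(recording_available: bool,
                  ai_says_qualified: bool,
                  duration_seconds: int,
                  has_forwarding_number: bool,
                  ad_interaction_signal: bool,
                  duration_threshold: int = 60) -> bool:
    """Return True if the call counts as a conversion."""
    if recording_available:
        # Primary signal: AI evaluates the recorded conversation.
        return ai_says_qualified
    if has_forwarding_number:
        # Secondary signal: fall back to call duration.
        return duration_seconds >= duration_threshold
    # Tertiary signal: no forwarding number, use ad interaction data.
    return ad_interaction_signal
```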

Call Details reports now include an AI-generated summary of each call and hashtags such as “#HighIntent” or “#ConsultationScheduled.”

Call Recording Defaults And Exceptions

Google’s settings page says call recording will remain off for advertisers who have already turned it off and for accounts Google has identified as operating in healthcare or financial services.

Advertisers in those categories can manually enable recording at any time, according to Google.

To turn recording off, advertisers can go to Admin > Account settings > Call ads > Call recording and select Off.

Where It Works

Call recording and AI-qualified conversions are currently limited to calls in which both the calling and receiving phone numbers are in the United States or Canada. Calls must route through a Google Forwarding Number, which requires call reporting to be enabled at the account level.

Only calls to call ads, call assets, and calls from website visits are eligible. Calls from location assets are not supported at this time.

Privacy And Compliance

Google’s settings page says callers will hear an automated message at the start of the call notifying them the conversation is being recorded for quality purposes. Advertisers agree to the Call Ads Supplemental Terms when using the feature and acknowledge they have given notice to employees or other parties who may participate in calls.

Google also says that recordings are used to evaluate lead quality, monitor spam and fraud, and improve the accuracy of conversion reporting.

Advertisers using call recording should review whether Google’s automated notification complies with their own legal obligations regarding recorded calls.

Why This Matters

Advertisers that don’t plan to use AI-qualified call leads are still producing recordings Google analyzes for lead quality, spam, and fraud, unless they turn recording off.

Smart Bidding now optimizes against AI-classified qualified calls when recording is on, and falls back to call duration when it isn’t.

Looking Ahead

Advertisers who prefer call duration as the primary signal can turn recording off in account settings. The duration threshold itself can be adjusted under Goals > Summary > Phone call leads > AI-qualified call leads.


Featured Image: El editorial/Shutterstock

The Ghost Citation Problem via @sejournal, @Kevin_Indig


When an AI answers a question using your content, it usually cites you with a source link. What it doesn’t do, 62% of the time, is say your name. The link is there. The brand mention is not. This is what I like to call a ghost citation: the AI using your content doesn’t mention you in the answer.

This week, I’m sharing:

  • Why being cited and being mentioned are two different outcomes that require different strategies.
  • Which LLMs name brands vs. which treat them as anonymous source material.
  • The query format and content type that produce 30x more brand mentions.

A note from Kevin: I’m a big fan of HubSpot’s Marketing Against the Grain. I had Kieran, one of the co-hosts, on my Tech Bound podcast back in 2023. Now, they launched a newsletter with smart experiments, fresh perspectives, and practical lessons on what’s working right now. So, I thought I would give a friendly shoutout: Check it out.

This analysis draws on 3,981 domains across 115 prompts, 14 countries, and four AI search engines (ChatGPT, Google AI Overviews, Gemini, AI Mode), using data from the Semrush AI Toolkit. Every appearance is tagged as “cited” (source link present) and/or “mentioned” (brand name appears in the answer text). The gap between those two states is the ghost citation problem.

1. 62% Of Your Brand’s LLM Citations Are Functionally Invisible

Most brands assume being cited means being seen. The data says otherwise.

Image Credit: Kevin Indig

74.9% of domains were cited, and 38.3% mentioned. 61.7% of citations are ghost citations: the domain gets a source link but zero name recognition in the answer text.

Only 13.2% of appearances convert into both a citation and a mention. And not a single appearance came with neither: 74.9% cited plus 38.3% mentioned, minus the 13.2% overlap, accounts for exactly 100%.

2. Every LLM Shows A Different Behavior

The four AI engines treat citations and mentions in fundamentally different ways:

  • Gemini names brands in 83.7% of appearances, but only generates a citation link 21.4% of the time. It operates more like a conversationalist drawing on brand knowledge.
  • ChatGPT is the opposite: It cites 87.0% of the time but mentions brands in only 20.7% of answers, functioning more like an academic paper with footnotes.
  • Google AI Overviews (AIOs) sit in the middle but lean toward citation.
  • Google’s AI Mode offers about 17% more brand mentions than ChatGPT in its outputs, but also functions closer to an academic paper than its Gemini sibling.

For brands, this means Gemini visibility and ChatGPT visibility are not the same thing. (This data set showed clear evidence that there wasn’t much overlap between ChatGPT citations/mentions and Gemini citations/mentions for the same prompts.) Optimizing for one does not help with the other. There is no single “AI visibility metric.” There are at least four different behavioral systems running in parallel.

Image Credit: Kevin Indig

3. Strong Brands Get Named In The Text

A clear pattern emerges among domains appearing three or more times: Content aggregators and academic sources are cited repeatedly but almost never mentioned.

  • Medium.com was cited 16 times for the same prompts across three different engines and named zero times.
  • Wikipedia.org was cited 27 times and mentioned in only two answers, both times for the same conversational query (“What is the most dangerous creature in the world?”).
  • Wired.com, sciencedirect.com, harvard.edu: same pattern.

Consumer brands with a strong public identity get mentioned in the output at a rate near 100%. The AI doesn’t feel the need to cite; instead, it mentions consumer brands outright. It knows the data about the brands came from somewhere, but doesn’t feel the need to explicitly say so to users. For publishers whose value proposition is information authority, this is a structural problem.

*A mention rate above 100% means the brand is named in the answer text even when not cited as a source link – the engine references the brand by name without linking to it. As a baseline, being cited 10x and mentioned 10x equals 100%; a brand mentioned 12x but cited only 10x sits at 120%.
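
For readers who want to reproduce the tagging, here is a minimal sketch of how cited/mentioned counts and the over-100% mention rate could be computed. The record fields are illustrative placeholders, not the actual Semrush AI Toolkit schema.

```python
from collections import defaultdict

# Each appearance of a domain in an AI answer, tagged as cited and/or
# mentioned. Field names are illustrative, not the Semrush schema.
appearances = [
    {"domain": "example.com", "cited": True,  "mentioned": False},
    {"domain": "example.com", "cited": True,  "mentioned": True},
    {"domain": "brand.com",   "cited": False, "mentioned": True},
]

stats = defaultdict(lambda: {"cited": 0, "mentioned": 0})
for row in appearances:
    stats[row["domain"]]["cited"] += row["cited"]        # True counts as 1
    stats[row["domain"]]["mentioned"] += row["mentioned"]

for domain, s in stats.items():
    if s["cited"]:
        # Can exceed 100% when a brand is named more often than linked.
        rate = s["mentioned"] / s["cited"] * 100
        print(f"{domain}: cited {s['cited']}x, mentioned {s['mentioned']}x, rate {rate:.0f}%")
    else:
        print(f"{domain}: never cited, mentioned {s['mentioned']}x")
```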

Image Credit: Kevin Indig

4. LLMs Disagree On The Same Brand 22% Of The Time

454 prompt+domain combinations were tested across multiple engines. In 22% of those outputs (100 total), LLMs disagreed on whether to mention the brand:

  • Instagram.com was mentioned by ChatGPT and Gemini but only cited (not named) by Google.
  • Facebook.com was mentioned by Gemini in 3 out of 3 appearances.
  • Google AI cited Facebook 9 out of 9 times, but named it in only 1.

Image Credit: Kevin Indig

The same brand, the same query, but different engines and different outcomes. This matters for measurement: A brand can appear “visible” in one engine’s data while being completely anonymous in another. Aggregate AI visibility metrics mask this divergence.

5. In-Text Brand Mention Rates Vary By Geography

Controlling for the LLM, country-level differences in mention rates are meaningful:

  • India and Sweden show the highest mention rates (50%), suggesting more conversational or brand-forward query patterns in those markets.
  • Italy, Brazil, and the Netherlands show the lowest mention rates (18-22%), with very high citation rates (82-94%).
  • The UK and Canada are mid-range but above the global average.

*Note: the dataset uses localized prompts confirmed by Semrush, so language is not a confound.

Image Credit: Kevin Indig

Being Cited And Being Named Are Not The Same, And Require A Different Approach

From this analysis, four takeaways stood out to me the most for brands and their content strategies:

1. Being cited means an AI is drawing on your content. Being mentioned means it is naming you. We don’t yet know enough about the implications of mentions and citations, but we can say for sure that there’s a system that decides when you’re cited vs. mentioned.

2. Your strategy must be LLM-specific. A Gemini-first strategy is different from a ChatGPT-first strategy. Any AI visibility report that aggregates across LLMs is misleading.

3. Comparative content gets brands named. Informational content feeds the machine anonymously. If the goal is brand mentions, not just citations, focus your content strategy toward evaluation, comparison, and recommendation.

4. Prompt format matters. Brands should map not just which topics they want to appear in, but specifically which phrasing patterns produce mentions vs. ghost citations. Short conversational queries and long structured queries behave like different products.

Methodology

Data source: Semrush AI Toolkit, covering 3,981 domain appearances across 115 prompts, 14 countries, and four AI search engines (ChatGPT, Google AI Overviews, Gemini, AI Mode).

Every row in the dataset represents a domain that appeared in an AI answer. Each appearance is tagged as “cited” (the domain appears as a source link) and/or “mentioned” (the brand name appears in the answer text). The gap between those two states is what this analysis calls a ghost citation: the AI used your content but did not say your name.


Featured Image: Roman Samborskyi/Shutterstock; Paulo Bobita/Search Engine Journal

What’s The Biggest Technical SEO Blind Spot From Over-Relying On Tools? – Ask An SEO via @sejournal, @HelenPollitt1

We are fortunate to have a wide range of SEO tools available, designed to help us understand how our websites might be crawled, indexed, used, and ranked. They often share a similar interface of bold charts, color-coded alerts, and a score that sums up the “health” of your website, which is perfect for those of us high-achievers who love to be graded.

But these tools can be a curse as well as a blessing, so today’s question is a really important one:

“What’s the biggest technical SEO blind spot caused by SEOs over-relying on tools instead of raw data?”

It’s the false sense of completeness. The belief that the tool is showing you the full picture, when in reality, you’re only seeing a representative model of it.

Everything else flows from that single issue: mis-prioritization, conflicting insights, and misguided fixes.

Why Technical SEO Tools “Feel Complete” But Aren’t

Technical SEO tools are a critical part of an SEO’s toolkit. They provide insight into how a website is functioning as well as how it may be perceived by users and search bots.

A Snapshot In Time Of The State Of Your Website

With a lot of the tools currently on the market, you are presented with a snapshot of the website at the point you set the crawler or report to run. This is helpful for spot-checking issues and fixes. It can be highly beneficial in spotting technical issues that could cause problems in the future, before they have made an impact.

However, they don’t necessarily show how issues have developed over time, or what might be the root cause.

Prioritized List Of Issues

The tools often help to cut through the noise of data by providing prioritized lists of issues. They may even give you a checklist of items to address. This can be very helpful for marketers who haven’t got much experience in SEO and need a hand knowing where to start.

All of these give the illusion that the tool is showing a complete picture of how a search engine perceives your site. But it’s far from accurate.

What’s Missing From Technical SEO Tools

Every tool is constrained in some way. They apply their own crawl limits, assumptions about site structure, prioritization algorithms, and data sampling or aggregation.

Even when tools integrate with each other, they are still stitching together partial views.

By contrast, raw data shows what actually happened, not what could happen or what a tool infers.

In technical SEO, raw data can include:

  • Server log files, which show how search bots actually crawl your site
  • Google Search Console performance and indexing reports
  • Analytics data on real user behavior
  • CrUX field data reflecting real-world page experience

Without these, you are often diagnosing a simulation of your site and not the real thing.

Joined-Up Data

These tools will often only report on data from their own crawl findings. Sometimes it is possible to link tools together, so your crawler can ingest information from Google Search Console, or your keyword tracking tool uses information from Google Analytics. However, they are largely independent of each other.

This means you may well be missing critical information about your website by only looking at one or two of the tools. For a holistic understanding of a website’s potential or actual performance, multiple data sets may be needed.

For example, looking at a crawling tool will not necessarily give you clarity over how the website is currently being crawled by the search engines, just how it potentially could be crawled. For more accurate crawl data, you would need to look at the server log files.

Non-Comparable Metrics

The reverse of this issue is that using too many of these tools in parallel can lead to confusing perspectives on what is going well or not with the website. What do you do if the tools provide conflicting priorities? Or the number of issues doesn’t match up?

Looking at the data through the lens of the tool means there can be an extra layer added to the data that makes it not comparable. For example, sampling could be occurring, or a different prioritization algorithm used. This might result in two tools giving conflicting results or recommendations.

Some Tools Give Simulations Rather Than Actual Data

The other potential pitfall is that, sometimes, the data provided through these reports is simulated rather than actual data. Simulated “lab” data is not the same as actual bot or user data. This can lead to false assumptions and incorrect conclusions being drawn.

In this context, “simulated” doesn’t mean the data is fabricated. It means the tool is recreating conditions to estimate how a page might behave, rather than measuring what actually did happen.

A common example of lab vs. real data is found in speed tests. Tools like Lighthouse simulate page load performance under controlled conditions.

For example, a Lighthouse mobile test runs under throttled network conditions simulating a slow 4G connection. That lab result might show an LCP of 4.5s. But CrUX field data, reflecting real users across all their devices and connections, might show a 75th percentile LCP of 2.8s, because many of your actual visitors are on faster connections.

The lab result is helpful for debugging, but it doesn’t reflect the distribution of real user experiences in real-world scenarios.
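
To ground decisions in field data rather than lab simulations, you can query real-user metrics directly. Here is a minimal sketch using the Chrome UX Report API, assuming you have a CrUX API key and that the response follows the API’s standard shape; error handling is omitted.

```python
import requests

# Query the Chrome UX Report (CrUX) API for real-user LCP at an origin.
# Replace the key and origin with your own values.
API_KEY = "YOUR_CRUX_API_KEY"  # placeholder
ENDPOINT = f"https://chromeuxreport.googleapis.com/v1/records:queryRecord?key={API_KEY}"

resp = requests.post(ENDPOINT, json={
    "origin": "https://www.example.com",
    "metrics": ["largest_contentful_paint"],
})
resp.raise_for_status()

# p75 is reported in milliseconds in the standard response shape.
p75_ms = (resp.json()["record"]["metrics"]
          ["largest_contentful_paint"]["percentiles"]["p75"])
print(f"Field LCP, 75th percentile: {int(p75_ms) / 1000:.1f}s")
```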

Why This Is Important

Understanding the difference between the false sense of completeness shown through tools, and the actual experience of users and bots through raw data can be critical.

As an example, a crawler could flag 200 pages with missing meta descriptions. It suggests you address these missing meta descriptions as a matter of urgency.

Looking at server logs reveals something different. Googlebot only crawls 50 of those pages. The remaining 150 are effectively undiscovered due to poor internal linking. GSC data shows impressions are concentrated on a small subset of the URLs.

If you follow the tool, you spend time writing 200 meta descriptions.

If you follow the raw data, you fix internal linking, thereby unlocking crawlability for 150 pages that currently don’t have visibility in the search engines at all.
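
As a rough illustration of that raw-data check, here is a hypothetical sketch that cross-references a crawler’s flagged URL list with server access logs. The file names and combined log format are assumptions, and a user-agent match alone is not proof of a genuine Googlebot hit.

```python
import re

# URLs the crawling tool flagged (one per line) - a hypothetical export.
with open("missing_meta_descriptions.txt") as f:
    flagged = {line.strip() for line in f if line.strip()}

# Collect request paths from log lines whose user agent claims Googlebot.
# In practice, verify hits via reverse DNS; UA strings can be spoofed.
request_path = re.compile(r'"(?:GET|HEAD) (\S+) HTTP')
googlebot_hits = set()
with open("access.log") as logs:
    for line in logs:
        if "Googlebot" in line:
            m = request_path.search(line)
            if m:
                googlebot_hits.add(m.group(1))

crawled = flagged & googlebot_hits
print(f"{len(crawled)} of {len(flagged)} flagged URLs were requested by Googlebot")
print(f"{len(flagged - googlebot_hits)} flagged URLs show no Googlebot requests")
```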

The Risk Of This Completeness Blind Spot

The “completeness” blind spot caused by over-reliance on technical tools has a lot of knock-on effects. Through the false sense of completeness, key aspects are overlooked, and time and effort are misdirected.

Losing Your Industry Context

Tools often make recommendations without the context of your industry or organization. When SEOs rely too much on the tools and not the data, they may fail to apply the additional contextual overlay that is important for a high-performing technical SEO strategy.

Optimizing For The Tool, Not Users

When following the recommendations of a tool rather than looking at the raw data itself, there can be a tendency to optimize for the “green tick” of the tool, and not what’s best for users. For example, any tool that provides a scoring system for technical health can lead SEOs to make changes to the site purely so the score goes up, even if it is actually detrimental to users or their search visibility.

Ignoring The Best Way Forward By Following The Tool

For complex situations that demand a nuanced approach, there is a risk that relying on tools rather than the raw data leads SEOs to ignore the complexity of a situation in favor of following the tools’ recommendations. Think of times when you have needed to ignore a tool’s alerts or recommendations because following them would lead to pages on your site being indexed that shouldn’t be, or pages being crawlable that you would rather were not. Without the overall context of your strategy for the site, tools cannot possibly know when a “noindex” is good or bad. Therefore, they tend to report in a very black-and-white manner, which can go against what is best for your site.

Final Thought

Overall, there is a very real risk that by accessing all of your technical SEO data only through tools, you may be nudged toward actions that, at best, are not beneficial for your overall SEO goals and, at worst, do active harm to your site.



Featured Image: Paulo Bobita/Search Engine Journal

The Yoast Perspective 2026: 7 things we learned from the SEO industry 

SEO in 2026 is expanding, not changing. Traditional search still matters, but SEO now also includes AI-driven discovery, social platforms, and chatbots. The principles are the same (clarity, structure, authority, and relevance), but the platforms are multiplying. We surveyed 59 SEOs to see how they’re handling these changes.


Some have less than a year of experience. Others have been in the field for over a decade. Their answers show an industry figuring things out. A few are ahead of the curve, but most are still catching up.

The best SEOs aren’t just reacting to AI. They’re using it to strengthen what already works: technical foundations, high-quality content, and real authority. Others are stuck debating whether SEO should even keep its name. 

Here’s what stood out, and where Yoast fits into the conversation of what SEO means in 2026.  

You can find the full results, with more questions and deeper insights from Yoast’s principal SEOs, Carolyn Shelby and Alex Moss, in a downloadable PDF report.

1. SEO isn’t dying, but evolving 

51% of respondents consider SEO to be “evolving”. 33% say it’s “thriving”. Only 10% think it’s “declining”. 

This is an interesting divide, but it’s not random. In the results, those with 10+ years of experience say SEO is thriving, while newcomers say it is not. It might be that experts know the landscape better and see change as a constant. 

Alex Moss’s take: “SEO has always adapted to changes in the SERP, and now it’s adapting again. The traditional SERP is gone, but SEO isn’t.” 

Carolyn Shelby’s take: “SEO is evolving, but not because its fundamentals are breaking. The interfaces between users and information are changing. Search is no longer confined to ten blue links, but the need for structured, relevant, trustworthy content hasn’t diminished.” 

The Yoast Perspective: We think SEO isn’t going anywhere, but there are changes happening. Traditional search from Google and Bing still drives traffic, but AI-driven discovery from LLM-powered assistants shapes perception and discovery. Therefore, the best SEOs don’t choose sides in this fight; they are mastering both directions. 

2. Keep the name Search Engine Optimization 

39% say SEO should be relabeled “Search Everywhere Optimization”. Only 32% want to keep “Search Engine Optimization”. 

Big support for relabeling SEO, and even among veterans, 41% prefer Search Everywhere Optimization. Of course, this doesn’t mean that we should do this. 

Alex Moss’s take: “The term ‘SEO’ will stay. The role will widen to include AI and other disciplines, but the name doesn’t need to change.” 

Carolyn Shelby’s take: “The term ‘SEO’ still holds shared meaning, credibility, and market recognition. There’s no strong evidence that rebranding the discipline itself is necessary or beneficial. Responses favoring ‘Search Everywhere Optimization’ reflect where SEO outcomes now surface, not a fundamentally different practice.” 

The Yoast Perspective: We at Yoast don’t think the term SEO is broken. Yes, there is a lot of change happening, especially in search, with AI overviews, chatbots, and social media platforms, but what about the core SEO work? You still have to focus on technical foundations, content quality, brand building, and authority.  

‘Search Everywhere Optimization’ might describe where SEO happens, but it doesn’t change what SEO is. The name ‘SEO’ still works, but we just need to explain how it applies to AI and social platforms. 

Image: SEO label graph showing 28.6% saying “Search Everywhere Optimization.”

3. Good SEO is LLM optimization 

64% agree LLM optimization is essentially the same as traditional SEO. 59% aren’t even actively optimizing for LLMs. 

You might call this laziness, but you could also call it efficiency. It oftentimes comes down to the same thing. 

There’s also the 9% who strongly disagree with this statement. These respondents say LLMs prioritize synthesis over rankings, so focusing on structured data and brand mentions makes more sense for them. Of course, they are not wrong, but they don’t contradict what others have said. LLMs don’t require new tactics; they just reward the same SEO principles more strictly.

Alex Moss’s take: “If you’re undertaking good SEO, you’re already optimizing well for LLMs. The tactics don’t change—just the audience.” 

Carolyn Shelby’s take: “The same practices that make content discoverable and trustworthy for search engines also make it usable for LLMs. The confusion arises when people treat LLMs as a completely separate system. In reality, LLM visibility rewards clarity, relevance, and authority—all long-standing SEO principles.” 

The Yoast Perspective: LLM optimization isn’t a separate discipline; it’s SEO for AI. The same principles apply: clarity, structure, and authority. The difference? AI systems are less forgiving of mediocre content, so the bar for quality is higher. 

Image: “LLM optimization is the same as traditional SEO” graph showing 51.8% agreeing.

4. Rankings still matter, but not like they used to 

52% say rankings are “equally important” as before. 30% say they’re “less important”. 

This is a sensible shift. Google’s AI overviews and other zero-click results mean visibility does not equal traffic. For AI systems, rankings are still an authority signal.  

Alex Moss’s take: “Traditional rankings are still important because agents still search the web to ingest information. If you aren’t visible there, it’s less likely an agent will identify and select you into their responses.” 

Carolyn Shelby’s take: “Rankings still matter, but they are no longer the end goal. They are a proxy for visibility, not a guarantee of impact.” 

The Yoast Perspective: We need to stop obsessing over ranking number one and start tracking visibility and presence. Check whether you are cited in AI-driven answers, and try to be mentioned in industry discussions. AI visibility and citations are the new rankings.  

Image: “How important are rankings as a KPI in 2026?” graph showing 51.9% saying equally important.

5. Organic traffic is still king, but for how long? 

55% say “organic traffic” is their top metric. Yet 49% cite “reducing organic clicks” as their biggest challenge. 

We see this as the great paradox of 2026. Traffic is down, but the value of that traffic could be up. You might get less traffic, but the clicks that do happen carry stronger intent.  

Carolyn Shelby’s take: “As AI reduces the need for some visits, success looks like being represented correctly rather than merely visited. Visibility in AI overviews doesn’t always drive clicks, but it builds legitimacy. Being included signals that you’re a credible source, even when users don’t click.” 

Our advice:

  • Work on AI visibility, as this is the new SEO metric. Just as rankings show your visibility in traditional search, citations in AI overviews show your authority in AI-driven discovery. Track it alongside rankings and traffic 
  • Keep an eye on branded search volume to learn whether people are looking for you by name 
  • Monitor citations to see if others are referencing your content online 

Image: “What are the most important SEO metrics in 2026?” graph showing 54.5% choosing organic traffic.

6. Content saturation is a big threat 

39% say “competing with AI-generated content” is their top challenge. Only 4% cite a “talent gap.” 

We know AI can write bad content. The bigger challenge is AI writing good-enough content at scale, flooding the web with noise that is hard to cut through. 

Alex Moss’s take: “AI-generated content is artificial. Humans connect with stories, not regurgitated lists.” 

Carolyn Shelby’s take: “AI doesn’t change what good content is, but just raises the bar. Mediocrity doesn’t just rank lower; it disappears.” 

Our advice: 

  • Focus on building your EEAT, because AI can’t fake real-world expertise and authority 
  • Prioritize quality over quantity, as a single great piece of content can beat ten average ones 
  • Use AI, but be careful and always use it as a tool, not as a replacement 

Image: “Biggest challenges in SEO in 2026” graph showing 49% choosing reducing organic clicks.

7. Most SEOs are ignoring a fast-growing search channel 

Traditional search (Google/Bing) is still #1. But TikTok search ranks #5, lower than Amazon. 

This might be something of a blind spot for many. Younger generations use TikTok and other video platforms for entertainment, recommendations, tutorials, and even B2B advice.  

Alex Moss’s take: “Social platforms influence how LLMs perceive freshness and authority. Ignoring them means missing out on signals that AI systems value.”

Carolyn Shelby’s take: “You don’t need to rank on TikTok, but you do need to be discoverable there. LLMs scrape social platforms for real-world signals.”

The Yoast Perspective: SEO now includes social platforms like TikTok. You don’t need to rank there, but you do need to be discoverable, because LLMs scrape these platforms for fresh, authoritative content. A great video channel can boost your authority in AI responses.  

Our advice: 

  • Repurpose content for video platforms like TikTok and YouTube  
  • Check brand mentions in these platforms 
  • Improve your video SEO in general 

Image: “Which search channels are you prioritizing most in 2026?” graph showing traditional search engines at number one.

What Yoast’s experts really think 

The data shows trends, but the real wisdom comes from Yoast’s SEO leaders, Carolyn Shelby and Alex Moss. Here is a small peek at the insights they share about the various debates:

On “Search Everywhere Optimization”:  

Alex: “The term ‘SEO’ will stay. The role will widen, but the name doesn’t need to change.”

Carolyn: “Rebranding risks fragmenting understanding. ‘SEO’ is already well-established outside the industry.” 

On the future of SEO metrics: 

Alex: “As we move from being seen to being selected, visits don’t hold the same value they used to. The business goal should be the most important metric.”

Carolyn: “Visibility in AI overviews doesn’t always drive clicks, but it builds legitimacy. Being included signals that you’re a credible source.” 

On rankings vs. influence:  

Alex: “Rankings still matter because agents search the web to ingest information.”

Carolyn: “Rankings are a proxy for visibility, not a guarantee of impact. Focus on presence.” 

On the role of SEOs in 2026: 

Alex: “100% all three: marketers, brand builders, and SEO specialists. Brand and marketing have become intertwined with SEO as our role expands.”

Carolyn: “A blended mindset is essential. SEO can’t operate in isolation from brand, product, or communications.” 

Do you want to read the full story? 

These insights are just a small taster for you. In the full Yoast SEO report, you’ll find much more:  

  • The full answers to all 25 questions 
  • In-depth commentary from Yoast’s SEO experts, Carolyn Shelby and Alex Moss 
  • Which metrics really matter in 2026  
  • Why backlinks are losing ground to citations 

Sign up and download it right away!

Google Adds New Task-Based Search Features via @sejournal, @martinibuster

Google introduced new features for Search that continue its evolution into a more task-oriented tool, enabling users to launch AI agents directly from AI Mode and complete more tasks. This is a trend that all SEOs and online businesses need to be aware of.

Rose Yao, a product leader in Search, posted about the new features on X. The first tool is a toggle that enables users to track hotel prices directly from the search bar.

Yao explained:

“To help you save $$, today we launched hotel price tracking on Search! Use the new tracking toggle to get an email if prices drop for your dream hotel. Available now, globally”

An accompanying official blog post further explained the new tool:

“You can already track hotel prices at the city level, and launching today, you can now track prices for individual hotels, too. To get started on desktop, head to Search and look up a specific hotel by name, then tap the new price tracking toggle. On mobile, you’ll find the price tracking option under the Prices tab after you search. Either way, you’ll get an email alert if rates change significantly during your chosen dates, so you can jump on those price drops and snag a great deal.”

Agentic Search From AI Mode

Google’s CEO, Sundar Pichai, recently shared that the future of search is task-based, relying on AI agents that can complete tasks for users. This announcement brings Google Search closer to that paradigm by introducing agentic search directly from AI Mode. The new feature launches an AI agent from AI Mode that will call local stores.

Yao explained:

“Agentic calling in AI Mode for finding last-minute travel gear.

When you just need that *one thing* before you leave but don’t know who’s got it in stock, you can ask AI Mode to save you the stress. Just search for what you need “near me” and Google AI will call local stores directly to get the details you need.”

This feature has been available on Google Search since November 2025, but it’s now rolling out to AI Mode.

Canvas Tool

AI Mode in Search has a Canvas tool that can accomplish planning tasks for users. The official blog post describes it:

“AI Mode in Search can transform your scattered research into a cohesive travel plan. Just head to AI Mode, select the Canvas tool from the plus (+) menu and describe your ideal trip. AI Mode will craft a custom itinerary in the Canvas side panel, including options for flights and hotels, as well as local attractions laid out on a map.”

The results can be further refined by the user. Travel planning with the Canvas tool is currently only available in the United States.

Three Featured Travel Tools

Those are the three travel-related features that Yao announced on X. The official blog post lists seven features related to travel, not all of which are new. For example, saving a boarding pass to Google Wallet is not a new feature.

Google’s Seven Travel Related Search Features

  1. Build a custom trip plan with AI Mode in Search
  2. Save money with hotel price tracking on Search
  3. Let Google take the hassle out of booking restaurants
  4. Ask Google to call nearby stores for last-minute shopping
  5. Translate and communicate with confidence
  6. Ask Maps for the best stops on your summer trips
  7. Make airport travel easier with Google Wallet

Transformation Of Search Continues

The main takeaways are:

  • Search is on a path toward becoming task oriented
  • Features like hotel tracking, AI calling, and Canvas show Google handling real-world actions, not just queries
  • Sundar Pichai’s “task-based” vision is already live in product features, not theoretical
  • AI Mode acts as an execution layer, turning search into a tool that does things on behalf of users
  • Local intent is becoming more actionable, with AI directly interacting with businesses
  • The traditional “ten blue links” model is being replaced by an interface that organizes and completes workflows
  • Visibility in search is increasingly tied to whether your business can be used by these systems, not just found

Google Search is becoming less about answering queries and more about helping users with their everyday tasks. In that mode, it changes the role of a website from a destination into a data source and service endpoint.

For marketers, that creates an opportunity to help businesses understand these changes and prepare for them.

If AI agents are calling stores, tracking prices, and assembling plans, then the winners are not just the best-ranked pages but the ones that use accurately structured HTML elements as well as Schema.org structured markup. The winners are the businesses whose data is structured, accessible, and actionable enough for those agents to use.
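
As a concrete illustration, here is a minimal sketch of exposing the kind of operational data agents need as Schema.org JSON-LD, generated from a Python dict. The business details are hypothetical placeholders, not a prescribed format.

```python
import json

# Minimal LocalBusiness markup with the operational details agents need:
# name, phone, hours, location. All values are hypothetical placeholders.
local_business = {
    "@context": "https://schema.org",
    "@type": "LocalBusiness",
    "name": "Example Outdoor Gear",
    "telephone": "+1-555-0100",
    "openingHours": "Mo-Sa 09:00-18:00",
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "123 Main St",
        "addressLocality": "Portland",
        "addressRegion": "OR",
        "postalCode": "97201",
    },
}

# Serve this inside a <script type="application/ld+json"> tag.
print(json.dumps(local_business, indent=2))
```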

What this means:

  • Treat product availability, pricing, hours, and inventory as critical inputs, not just content
  • Ensure local listings, structured data, and third-party integrations are accurate and consistent

Google Search is transforming into a task-based user interface. Task-based agentic search is not hype; it’s real, and these new features are part of that transformation. The old ten blue links paradigm is steadily fading away, and what’s replacing it is the concept of search as an interface for navigating the modern world.

Read more about Google’s task-based agentic search. On a related note, research based on 68 million AI crawler visits shows what successful websites do to drive better AI search performance for local business sites.

Featured Image by Shutterstock/Sergio Reis

Chinese tech workers are starting to train their AI doubles–and pushing back

Tech workers in China are being instructed by their bosses to train AI agents to replace them—and it’s prompting a wave of soul-searching among otherwise enthusiastic early adopters. 

Earlier this month, a GitHub project called Colleague Skill, which claimed workers could use it to “distill” their colleagues’ skills and personality traits and replicate them with an AI agent, went viral on Chinese social media. Though the project was created as a spoof, it struck a nerve among tech workers, a number of whom told MIT Technology Review that their bosses are encouraging them to document their workflows in order to automate specific tasks and processes using AI agent tools like OpenClaw or Claude Code. 

To set up Colleague Skill, a user names the coworker whose tasks they want to replicate and adds basic profile details. The tool then automatically imports chat history and files from Lark and DingTalk, both popular workplace apps in China, and generates reusable manuals describing that coworker’s duties—and even their unique quirks—for an AI agent to replicate. 

Colleague Skill was created by Tianyi Zhou, who works as an engineer at the Shanghai Artificial Intelligence Laboratory. Earlier this week he told Chinese outlet Southern Metropolis Daily that the project was started as a stunt, prompted by AI-related layoffs and by the growing tendency of companies to ask employees to automate themselves. He didn’t respond to requests for further comment.

Internet users have found humor in the idea behind the tool, joking about automating their coworkers before themselves. However, Colleague Skill’s virality has sparked a lot of debate about workers’ dignity and individuality in the age of AI.

After seeing Colleague Skill on social media, Amber Li, 27, a tech worker in Shanghai, used it to recreate a former coworker as a personal experiment. Within minutes, the tool created a file detailing how that person did their job. “It is surprisingly good,” Li says. “It even captures the person’s little quirks, like how they react and their punctuation habits.” With this skill, Li can use an AI agent as a new “coworker” that helps debug her code and replies instantly. It felt uncanny and uncomfortable, Li says. 

Even so, replacing coworkers with agents could become the norm. Since OpenClaw became a national craze, bosses in China have been pushing tech workers to experiment with agents. 

Although AI agents can take control of your computer, read and summarize news, reply to emails, and book restaurant reservations for you, tech workers on the ground say their utility has so far proven to be limited in business contexts. Asking employees to make manuals describing the minutiae of their day-to-day jobs the way Colleague Skill does is one way to help bridge that gap. 

Hancheng Cao, an assistant professor at Emory University who studies AI and work, believes that companies have good reasons to push employees to create work blueprints like these, beyond simply following a trend. “Firms gain not only internal experience with the tools, but also richer data on employee know-how, workflows, and decision patterns. That helps companies see which parts of work can be standardized or codified into systems, and which still depend on human judgment,” he says.

To employees, though, making agents or even blueprints for them can feel strange and alienating. One software engineer, who spoke with MIT Technology Review anonymously because of concerns about their job security, trained an AI (not Colleague Skill) on their workflow and found that the process felt reductive—as if their work had been flattened into modules in a way that made them easier to replace. On social media, workers have turned to bleak humor to express similar feelings. In one comment on Rednote, a user wrote that “a cold farewell can be turned into warm tokens,” quipping that if they use Colleague Skill to distill their coworkers into tasks first, they themselves might survive a little longer.

The push for creating agents has also spurred clever countermeasures. Irritated by the idea of reducing a person to a skill, Koki Xu, 26, an AI product manager in Beijing, published an “anti-distillation” skill on GitHub on April 4. The tool, which took Xu about an hour to build, is designed to sabotage the process of creating workflows for agents. Users can choose between light, medium, and heavy sabotage modes depending on how closely their boss is observing the process, and the agent rewrites the material into generic, non-actionable language that would produce a less useful AI stand-in. A video Xu posted about the project went viral, drawing more than 5 million likes across platforms.

Xu told MIT Technology Review that she has been following the Colleague Skill trend from the start and that it has made her think about alienation, disempowerment, and broader implications for labor. “I originally wanted to write an op-ed, but decided it would be more useful to make something that pushes back against it,” she says.

Xu, who has undergraduate and master’s degrees in law, said the trend also raises legal questions. While a company may be able to argue that work chat histories and materials created on a work laptop are corporate property, a skill like this can also capture elements of personality, tone, and judgment, making ownership much less clear. She said she hopes Colleague Skill prompts more discussion about how to protect workers’ dignity and identity in the age of AI. “I believe it’s important to keep up with these trends so we (employees) can participate in shaping how they are used,” she says. Xu herself is an avid AI adopter, with seven OpenClaw agents set up across her personal and work devices.

Li, the tech worker in Shanghai, says her company has not yet found a way to replace actual workers with AI tools, largely because they remain unreliable and require constant supervision. “I don’t feel like my job is immediately at risk,” she says. “But I do feel that my value is being cheapened, and I don’t know what to do about it.”