What’s next for AlphaFold: A conversation with a Google DeepMind Nobel laureate

In 2017, fresh off a PhD in theoretical chemistry, John Jumper heard rumors that Google DeepMind had moved on from building AI that played games with superhuman skill and was starting up a secret project to predict the structures of proteins. He applied for a job.

Just three years later, Jumper celebrated a stunning win that few had seen coming. With CEO Demis Hassabis, he had co-led the development of an AI system called AlphaFold 2 that was able to predict the structures of proteins to within the width of an atom, matching the accuracy of painstaking techniques used in the lab, and doing it many times faster—returning results in hours instead of months.

AlphaFold 2 had cracked a 50-year-old grand challenge in biology. “This is the reason I started DeepMind,” Hassabis told me a few years ago. “In fact, it’s why I’ve worked my whole career in AI.” In 2024, Jumper and Hassabis shared a Nobel Prize in chemistry.

It was five years ago this week that AlphaFold 2’s debut took scientists by surprise. Now that the hype has died down, what impact has AlphaFold really had? How are scientists using it? And what’s next? I talked to Jumper (as well as a few other scientists) to find out.

“It’s been an extraordinary five years,” Jumper says, laughing: “It’s hard to remember a time before I knew tremendous numbers of journalists.”

AlphaFold 2 was followed by AlphaFold Multimer, which could predict structures that contained more than one protein, and then AlphaFold 3, the fastest version yet. Google DeepMind also let AlphaFold loose on UniProt, a vast protein database used and updated by millions of researchers around the world. It has now predicted the structures of some 200 million proteins, almost all that are known to science.

Despite his success, Jumper remains modest about AlphaFold’s achievements. “That doesn’t mean that we’re certain of everything in there,” he says. “It’s a database of predictions, and it comes with all the caveats of predictions.”

A hard problem

Proteins are the biological machines that make living things work. They form muscles, horns, and feathers; they carry oxygen around the body and ferry messages between cells; they fire neurons, digest food, power the immune system; and so much more. But understanding exactly what a protein does (and what role it might play in various diseases or treatments) involves figuring out its structure—and that’s hard.

Proteins are made from strings of amino acids that chemical forces twist up into complex knots. An untwisted string gives few clues about the structure it will form. In theory, most proteins could take on an astronomical number of possible shapes. The task is to predict the correct one.

Jumper and his team built AlphaFold 2 using a type of neural network called a transformer, the same technology that underpins large language models. Transformers are very good at paying attention to specific parts of a larger puzzle.

But Jumper puts a lot of the success down to making a prototype model that they could test quickly. “We got a system that would give wrong answers at incredible speed,” he says. “That made it easy to start becoming very adventurous with the ideas you try.”

They stuffed the neural network with as much information about protein structures as they could, such as how proteins across certain species have evolved similar shapes. And it worked even better than they expected. “We were sure we had made a breakthrough,” says Jumper. “We were sure that this was an incredible advance in ideas.”

What he hadn’t foreseen was that researchers would download his software and start using it straight away for so many different things. Normally, it’s the thing a few iterations down the line that has the real impact, once the kinks have been ironed out, he says: “I’ve been shocked at how responsibly scientists have used it, in terms of interpreting it, and using it in practice about as much as it should be trusted in my view, neither too much nor too little.”

Do any projects stand out in particular?

Honeybee science

Jumper brings up a research group that uses AlphaFold to study disease resistance in honeybees. “They wanted to understand this particular protein as they look at things like colony collapse,” he says. “I never would have said, ‘You know, of course AlphaFold will be used for honeybee science.’”

He also highlights a few examples of what he calls off-label uses of AlphaFold (“in the sense that it wasn’t guaranteed to work”) where the ability to predict protein structures has opened up new research techniques. “The first is very obviously the advances in protein design,” he says. “David Baker and others have absolutely run with this technology.”

Baker, a computational biologist at the University of Washington, was a co-winner of last year’s chemistry Nobel, alongside Jumper and Hassabis, for his work on creating synthetic proteins to perform specific tasks—such as treating disease or breaking down plastics—better than natural proteins can.

Baker and his colleagues have developed their own structure prediction tool, RoseTTAFold, inspired by AlphaFold. But they have also experimented with AlphaFold Multimer to predict which of their designs for potential synthetic proteins will work.

“Basically, if AlphaFold confidently agrees with the structure you were trying to design, then you make it, and if AlphaFold says ‘I don’t know,’ you don’t make it. That alone was an enormous improvement.” It can make the design process 10 times faster, says Jumper.

Another off-label use that Jumper highlights: Turning AlphaFold into a kind of search engine. He mentions two separate research groups that were trying to understand exactly how human sperm cells hooked up with eggs during fertilization. They knew one of the proteins involved but not the other, he says: “And so they took a known egg protein and ran all 2,000 human sperm surface proteins, and they found one that AlphaFold was very sure stuck against the egg.” They were then able to confirm this in the lab.

“This notion that you can use AlphaFold to do something you couldn’t do before—you would never do 2,000 structures looking for one answer,” he says. “This kind of thing I think is really extraordinary.”
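
For a sense of what that kind of computational screen looks like, here is a minimal sketch. The `predict_complex()` helper is hypothetical; a real pipeline would wrap a predictor like AlphaFold Multimer and read out an interface-confidence score such as ipTM.

```python
# Minimal sketch of using a structure predictor as a "search engine":
# screen every candidate partner against one known bait protein and
# rank the pairs by the model's interface confidence.

def predict_complex(bait_seq: str, candidate_seq: str) -> float:
    """Placeholder for a real structure-prediction call; a real pipeline
    would run AlphaFold Multimer and return a 0-1 confidence (e.g., ipTM)."""
    return 0.0  # swap in an actual prediction run

def screen_partners(bait_seq: str, candidates: dict[str, str]) -> list[tuple[str, float]]:
    """Rank candidates by predicted confidence that they bind the bait.
    Only the top few hits would be worth validating in the lab."""
    scored = [(name, predict_complex(bait_seq, seq)) for name, seq in candidates.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```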

Five years on

When AlphaFold 2 came out, I asked a handful of early adopters what they made of it. Reviews were good, but the technology was too new to know for sure what long-term impact it might have. I caught up with one of those people to hear his thoughts five years on.

Kliment Verba is a molecular biologist who runs a lab at the University of California, San Francisco. “It’s an incredibly useful technology, there’s no question about it,” he tells me. “We use it every day, all the time.”

But it’s far from perfect. A lot of scientists use AlphaFold to study pathogens or to develop drugs. This involves looking at interactions between multiple proteins or between proteins and even smaller molecules in the body. But AlphaFold is known to be less accurate at predicting structures that involve multiple proteins, or how those interactions change over time.

Verba says he and his colleagues have been using AlphaFold long enough to get used to its limitations. “There are many cases where you get a prediction and you have to kind of scratch your head,” he says. “Is this real or is this not? It’s not entirely clear—it’s sort of borderline.”

“It’s sort of the same thing as ChatGPT,” he adds. “You know—it will bullshit you with the same confidence as it would give a true answer.”

Still, Verba’s team uses AlphaFold (both 2 and 3, because they have different strengths, he says) to run virtual versions of their experiments before running them in the lab. Using AlphaFold’s results, they can narrow down the focus of an experiment—or decide that it’s not worth doing.

It can really save time, he says: “It hasn’t really replaced any experiments, but it’s augmented them quite a bit.”

New wave  

AlphaFold was designed to be used for a range of purposes. Now multiple startups and university labs are building on its success to develop a new wave of tools more tailored to drug discovery. This year, a collaboration between MIT researchers and the AI drug company Recursion produced a model called Boltz-2, which predicts not only the structure of proteins but also how well potential drug molecules will bind to their target.  

Last month, the startup Genesis Molecular AI released another structure prediction model called Pearl, which the firm claims is more accurate than AlphaFold 3 for certain queries that are important for drug development. Pearl is interactive, so that drug developers can feed any additional data they may have to the model to guide its predictions.

AlphaFold was a major leap, but there’s more to do, says Evan Feinberg, Genesis Molecular AI’s CEO: “We’re still fundamentally innovating, just with a better starting point than before.”

Genesis Molecular AI is pushing margins of error down from less than two angstroms, the de facto industry standard set by AlphaFold, to less than one angstrom—one 10-millionth of a millimeter, or the width of a single hydrogen atom.

“Small errors can be catastrophic for predicting how well a drug will actually bind to its target,” says Michael LeVine, vice president of modeling and simulation at the firm. That’s because chemical forces that interact at one angstrom can stop doing so at two. “It can go from ‘They will never interact’ to ‘They will,’” he says.

With so much activity in this space, how soon should we expect new types of drugs to hit the market? Jumper is pragmatic. Protein structure prediction is just one step of many, he says: “This was not the only problem in biology. It’s not like we were one protein structure away from curing any diseases.”

Think of it this way, he says. Finding a protein’s structure might previously have cost $100,000 in the lab: “If we were only a hundred thousand dollars away from doing a thing, it would already be done.”

At the same time, researchers are looking for ways to do as much as they can with this technology, says Jumper: “We’re trying to figure out how to make structure prediction an even bigger part of the problem, because we have a nice big hammer to hit it with.”

In other words, they want to make everything into nails? “Yeah, let’s make things into nails,” he says. “How do we make this thing that we made a million times faster a bigger part of our process?”

What’s next?

Jumper’s next act? He wants to fuse the deep but narrow power of AlphaFold with the broad sweep of LLMs.  

“We have machines that can read science. They can do some scientific reasoning,” he says. “And we can build amazing, superhuman systems for protein structure prediction. How do you get these two technologies to work together?”

That makes me think of a system called AlphaEvolve, which is being built by another team at Google DeepMind. AlphaEvolve uses an LLM to generate possible solutions to a problem and a second model to check them, filtering out the trash. Researchers have already used AlphaEvolve to make a handful of practical discoveries in math and computer science.    

Is that what Jumper has in mind? “I won’t say too much on methods, but I’ll be shocked if we don’t see more and more LLM impact on science,” he says. “I think that’s the exciting open question that I’ll say almost nothing about. This is all speculation, of course.”

Jumper was 39 when he won his Nobel Prize. What’s next for him?

“It worries me,” he says. “I believe I’m the youngest chemistry laureate in 75 years.” 

He adds: “I’m at the midpoint of my career, roughly. I guess my approach to this is to try to do smaller things, little ideas that you keep pulling on. The next thing I announce doesn’t have to be, you know, my second shot at a Nobel. I think that’s the trap.”

The State of AI: Chatbot companions and the future of our privacy

Welcome back to The State of AI, a new collaboration between the Financial Times and MIT Technology Review. Every Monday, writers from both publications debate one aspect of the generative AI revolution reshaping global power.

In this week’s conversation, MIT Technology Review’s senior reporter for features and investigations, Eileen Guo, and FT tech correspondent Melissa Heikkilä discuss the privacy implications of our new reliance on chatbots.

Eileen Guo writes:

Even if you don’t have an AI friend yourself, you probably know someone who does. A recent study found that one of the top uses of generative AI is companionship: On platforms like Character.AI, Replika, or Meta AI, people can create personalized chatbots to pose as the ideal friend, romantic partner, parent, therapist, or any other persona they can dream up. 

It’s wild how easily these relationships can develop, according to the people who have them. And multiple studies have found that the more conversational and human-like an AI chatbot is, the more likely it is that we’ll trust it and be influenced by it. This can be dangerous, and the chatbots have been accused of pushing some people toward harmful behaviors—including, in a few extreme examples, suicide. 

Some state governments are taking notice and starting to regulate companion AI. New York requires AI companion companies to create safeguards and report expressions of suicidal ideation, and last month California passed a more detailed bill requiring AI companion companies to protect children and other vulnerable groups. 

But tellingly, one area the laws fail to address is user privacy.

This is despite the fact that AI companions, even more so than other types of generative AI, depend on people to share deeply personal information, from their day-to-day routines and innermost thoughts to questions they might not feel comfortable asking real people.

After all, the more users tell their AI companions, the better the bots become at keeping them engaged. This is what MIT researchers Robert Mahari and Pat Pataranutaporn called “addictive intelligence” in an op-ed we published last year, warning that the developers of AI companions make “deliberate design choices … to maximize user engagement.” 

Ultimately, this provides AI companies with something incredibly powerful, not to mention lucrative: a treasure trove of conversational data that can be used to further improve their LLMs. Consider how the venture capital firm Andreessen Horowitz explained it in 2023: 

“Apps such as Character.AI, which both control their models and own the end customer relationship, have a tremendous opportunity to  generate market value in the emerging AI value stack. In a world where data is limited, companies that can create a magical data feedback loop by connecting user engagement back into their underlying model to continuously improve their product will be among the biggest winners that emerge from this ecosystem.”

This personal information is also incredibly valuable to marketers and data brokers. Meta recently announced that it will deliver ads through its AI chatbots. And research conducted this year by the security company Surfshark found that four of the five AI companion apps it examined in the Apple App Store were collecting data such as user or device IDs, which can be combined with third-party data to create profiles for targeted ads. (The only one that said it did not collect data for tracking services was Nomi, which told me earlier this year that it would not “censor” chatbots from giving explicit suicide instructions.) 

All of this means that the privacy risks posed by these AI companions are, in a sense, required: They are a feature, not a bug. And we haven’t even talked about the additional security risks presented by the way AI chatbots collect and store so much personal information in one place.

So, is it possible to have prosocial and privacy-protecting AI companions? That’s an open question. 

What do you think, Melissa, and what is top of mind for you when it comes to privacy risks from AI companions? And do things look any different in Europe? 

Melissa Heikkilä replies:

Thanks, Eileen. I agree with you. If social media was a privacy nightmare, then AI chatbots put the problem on steroids. 

In many ways, an AI chatbot creates what feels like a much more intimate interaction than a Facebook page. The conversations we have are only with our computers, so there is little risk of your uncle or your crush ever seeing what you write. The AI companies building the models, on the other hand, see everything. 

Companies are optimizing their AI models for engagement by designing them to be as human-like as possible. But AI developers have several other ways to keep us hooked. The first is sycophancy, or the tendency for chatbots to be overly agreeable. 

This feature stems from the way the language model behind the chatbots is trained using reinforcement learning. Human data labelers rate the answers generated by the model as either acceptable or not. This teaches the model how to behave. 

Because people generally like answers that are agreeable, such responses are weighted more heavily in training. 
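
As a toy illustration of that weighting dynamic (all numbers invented): if raters score an agreeable answer higher than a candid one, the agreeable answer simply pulls harder on the model during training.

```python
# Toy sketch of the dynamic described above: higher-rated answers get
# more training weight, nudging the model toward agreeableness.
# The answers and scores here are invented for illustration.

ratings = [
    {"answer": "You're absolutely right!", "rater_score": 0.9},
    {"answer": "Actually, the evidence cuts against that.", "rater_score": 0.6},
]

total = sum(r["rater_score"] for r in ratings)
for r in ratings:
    weight = r["rater_score"] / total  # share of the training signal
    print(f"{r['answer']!r}: {weight:.0%} of the weight")
# The agreeable answer gets 60% of the weight, the candid one 40%.
```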

AI companies say they use this technique because it helps models become more helpful. But it creates a perverse incentive. 

After encouraging us to pour our hearts out to chatbots, companies from Meta to OpenAI are now looking to monetize these conversations. OpenAI recently told us it was looking at a number of ways to meet its $1 trillion spending pledges, including advertising and shopping features. 

AI models are already incredibly persuasive. Researchers at the UK’s AI Security Institute have shown that they are far more skilled than humans at persuading people to change their minds on politics, conspiracy theories, and vaccine skepticism. They do this by generating large amounts of relevant evidence and communicating it in an effective and understandable way. 

This feature, paired with their sycophancy and a wealth of personal data, could be a powerful tool for advertisers—one that is more manipulative than anything we have seen before. 

By default, chatbot users are opted in to data collection. Opt-out policies place the onus on users to understand the implications of sharing their information. It’s also unlikely that data already used in training will be removed. 

We are all part of this phenomenon whether we want to be or not. Social media platforms from Instagram to LinkedIn now use our personal data to train generative AI models. 

Companies are sitting on treasure troves that consist of our most intimate thoughts and preferences, and language models are very good at picking up on subtle hints in language that could help advertisers profile us better by inferring our age, location, gender, and income level.

We are being sold the idea of an omniscient AI digital assistant, a superintelligent confidante. In return, however, there is a very real risk that our information is about to be sent to the highest bidder once again.

Eileen responds:

I think the comparison between AI companions and social media is both apt and concerning. 

As Melissa highlighted, the privacy risks presented by AI chatbots aren’t new—they just “put the [privacy] problem on steroids.” AI companions are more intimate and even better optimized for engagement than social media, making it more likely that people will offer up more personal information.

Here in the US, we are far from solving the privacy issues already presented by social networks and the internet’s ad economy, even without the added risks of AI.

And without regulation, the companies themselves are not following privacy best practices either. One recent study found that major AI companies train their LLMs on user chat data by default unless users opt out, while several don’t offer opt-out mechanisms at all.

In an ideal world, the greater risks of companion AI would give more impetus to the privacy fight—but I don’t see any evidence this is happening. 

Further reading 

FT reporters peer under the hood of OpenAI’s five-year business plan as it tries to meet its vast $1 trillion spending pledges

Is it really such a problem if AI chatbots tell people what they want to hear? This FT feature asks what’s wrong with sycophancy 

In a recent print issue of MIT Technology Review, Rhiannon Williams spoke to a number of people about the types of relationships they are having with AI chatbots.

Eileen broke the story for MIT Technology Review about a chatbot that was encouraging some users to kill themselves.

How Google’s Web Guide Helps SEO

Google’s Web Guide is an experiment launched in July 2025 that uses AI to organize a user’s search results. To try the feature, enable it in Google Labs.

Unlike query fan-out (which guesses what additional information is helpful to a searcher), Web Guide analyzes the content of top-ranking pages and groups them by topic.

AI then summarizes each category, providing an overview of the pages.

Perhaps unintentionally, Web Guide is handy for search engine optimization by revealing Google’s understanding of keywords.

Targeted queries

Organic search results order web pages by ranking signals. Yet searchers cannot easily discern the pages’ content type or topics without visiting each one. Web Guide provides a summary, thus implying how Google interprets a query.

For example, Web Guide groups the search results for “how to build a website” by the following topics:

  • “Comprehensive guides to building a website”
  • “Building websites with no-code builders”
  • “Creating websites with Google Sites”
  • “Website building with Squarespace”
  • “Building websites with Wix”
  • “Building websites with Canva”
  • “Website development with HTML, CSS, and JavaScript”
  • “Learning web development: courses & tutorials”
  • “Choosing website builders”
  • “Community advice on website building (Reddit threads and forums)”
  • “Understanding domain names and hosting”
  • “Web design principles and best practices”

Creators looking to search-optimize an article or course on website building can use the list for topics to include.

Web Guide can also identify competitors. For example, searching “waterproof sneakers” in Web Guide generates a section of the best-known brands:

[Image: Google search results for “Waterproof Sneaker,” showing branded results from Nike, Adidas, and On, with descriptions referencing materials like GORE-TEX and RAIN.RDY.]

Web Guide can identify competitors, as shown in this example for “waterproof sneakers.”

It also reveals alternative keywords to target, such as “water resistant” and “water shoes”:

Water-Resistant and Water Shoes

Some sneakers offer water resistance or are designed as full water shoes, with specific technologies like HDry® membrane providing complete waterproofing and breathability, while others prioritize quick drying.

Brand search

Searching for a brand name in Web Guide provides insight into what Google knows about the company and the URLs that impact its understanding. For example, searching “home chef” in Web Guide generates a separate section for the prices of that service. AI summarizes each ranking page.

Web Guide results also help brands ensure off-site consistency and identify which user-generated content to monitor. For example, brands that change pricing can use Web Guide to find a list of URLs to update.

[Image: Google search results for “Home Chef Pricing & Plans,” with listings from Home Chef, MiumMium, YouTube, and Reddit highlighting meal costs starting around $9.99 per serving and comparisons with grocery stores.]

Searching for “home chef” in Web Guide returns a section on pages that address the service’s prices.

Competitors

Queries in Web Guide reveal its preference among competitors. Take “Home Chef” and “Green Chef,” for example. Searching “home chef vs green chef” reveals Web Guide’s AI prefers the latter:

Green Chef typically comes out ahead due to its organic ingredients, health-conscious dietary plans, and sustainability efforts, whereas Home Chef offers greater affordability, customization, and convenience with quick-prep meals.

The URLs listed below the initial summary are also AI-summarized, offering a list of publications and authors to contact for clarifications or enhancements.

[Image: Google search results for “home chef vs green chef,” with comparison content from meal and review sites covering differences in meal plans, pricing, ingredients, and dietary options.]

Queries in Web Guide reveal Google’s preference for top competitors, such as this comparison of “Home Chef” and “Green Chef.”

Web Guide may or may not become public. Many such Google Labs experiments never do. While aimed at consumers, it implicitly helps search optimizers by revealing how Google’s AI interprets a query or understands a brand.

ChatGPT Adds Shopping Research For Product Discovery via @sejournal, @MattGSouthern

OpenAI launched shopping research in ChatGPT, a feature that creates personalized buyer’s guides by researching products across the web. The tool is rolling out today on mobile and web for logged-in users on Free, Go, Plus, and Pro plans.

The company is offering nearly unlimited usage through the holidays.

What’s New

Shopping research works differently from standard ChatGPT responses. Users describe what they need, answer clarifying questions about budget and preferences, and receive a buyer’s guide after a few minutes.

The feature pulls information including price, availability, reviews, specs, and images from across the web. You can guide the research by marking products as “Not interested” or “More like this” as options appear.

OpenAI’s announcement states:

“Shopping research is built for that deeper kind of decision-making. It turns product discovery into a conversation: asking smart questions to understand what you care about, pulling accurate, up-to-date details from high-quality sources, and bringing options back to you to refine the results.”

The company says the tool performs best in categories like electronics, beauty, home and garden, kitchen and appliances, and sports and outdoor.

Technical Details

Shopping research is powered by a shopping-specialized GPT-5 mini variant post-trained on GPT-5-Thinking-mini.

OpenAI’s internal evaluation shows shopping research reached 52% product accuracy on multi-constraint queries, compared with 37% for ChatGPT Search.

Product accuracy measures how well responses meet user requirements for attributes like price, color, material, and specs. The company designed the system to update and refine results in real time based on user feedback.
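
OpenAI hasn’t published the metric’s exact definition, but a multi-constraint accuracy check of this general shape might look like the sketch below. The field names, the price-as-a-cap rule, and the scoring are all assumptions for illustration.

```python
# Hypothetical sketch of a multi-constraint product-accuracy check.
# Not OpenAI's actual evaluation; fields and pass criteria are assumed.

def constraint_satisfied(product: dict, field: str, expected) -> bool:
    """One constraint passes if the product's attribute matches the ask."""
    value = product.get(field)
    if field == "price":
        return value is not None and value <= expected  # price as a cap
    return value == expected

def product_accuracy(recommendations: list[dict], constraints: dict) -> float:
    """Fraction of recommended products meeting every user constraint."""
    if not recommendations:
        return 0.0
    hits = sum(
        all(constraint_satisfied(p, f, v) for f, v in constraints.items())
        for p in recommendations
    )
    return hits / len(recommendations)

# Example: "a blue cotton shirt under $30"
constraints = {"color": "blue", "material": "cotton", "price": 30}
results = [
    {"color": "blue", "material": "cotton", "price": 24},  # meets all three
    {"color": "navy", "material": "cotton", "price": 28},  # fails color
]
print(product_accuracy(results, constraints))  # 0.5
```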

Privacy & Data Sharing

OpenAI states that user chats are never shared with retailers. Results are organic and based on publicly available retail sites.

Merchants who want to appear in shopping research results can follow an allowlisting process through OpenAI.

Limitations

OpenAI acknowledges the feature isn’t perfect. The model may make mistakes about product details like price and availability. The company encourages users to visit merchant sites for the most accurate information.

Why This Matters

This feature pulls more of the product comparison journey into one place.

As shopping research handles more of the “which one should I buy?” work inside ChatGPT, some of that early-stage discovery could happen without a traditional search click.

For retailers and affiliate publishers, that raises the stakes for inclusion in these results. Visibility may depend on how well your products and pages are represented in OpenAI’s shopping system and allowlisting process.

Looking Ahead

Shopping research in ChatGPT is available to logged-in users starting today. OpenAI plans to add direct purchasing through ChatGPT for merchants participating in Instant Checkout, though no timeline was provided.



What Optmyzr’s Three-Year Study Reveals About Seasonality Adjustments During BFCM via @sejournal, @brookeosmundson

Every Q4, the same message shows up in our accounts:

“Use seasonality adjustments to get ready for Black Friday and Cyber Monday.”

On paper, it sounds reasonable. You expect conversion rates to rise, so you give Smart Bidding a heads up and tell it to bid more aggressively during the peak.

Optmyzr’s latest study puts a pretty big dent in that narrative.

Over three BFCM cycles from 2022 through 2024, Fred Vallaeys and the Optmyzr team analyzed performance for up to 6,000 advertisers per year, split into two cohorts: those who used seasonality bid adjustments and those who did not.

The question was simple: do these adjustments actually help during Black Friday and Cyber Monday, or are we just making Google bid higher for no meaningful gain?

Based on the data, seasonality adjustments often hurt efficiency and rarely deliver the breakthrough many advertisers expect.

Below is a breakdown of the study and what it means for PPC managers heading into peak season.

Key Findings from Optmyzr’s BFCM Seasonality Study

The study compared performance across three BFCM periods (2022–2024), defined as the Wednesday before Black Friday through the Wednesday after Cyber Monday. Each year’s results were then measured against a pre-BFCM baseline.

The accounts were grouped into:

  • Advertisers who did not use seasonality bid adjustments
  • Advertisers who did apply them

Across all three years, consistent patterns emerged from their study.

#1: Smart Bidding already adjusts for BFCM without manual prompts

For advertisers who skipped seasonality adjustments, Smart Bidding still responded to the conversion rate spike:

  • 2022: Conversion rate up 17.5%
  • 2023: Conversion rate up 11.9%
  • 2024: Conversion rate up 7.5%

In other words, the algorithm did exactly what it was designed to do. It detected higher intent and increased bids without needing an external nudge.

#2: Seasonality adjustments inflated CPCs far more than necessary

Seasonality adjustments tell Google’s system to raise bids based on your predicted conversion rate increase.

Optmyzr notes that:

When you apply a seasonality adjustment, you are effectively telling Google: “I expect conversion rate to increase by X%. Raise bids immediately by X%.”

And Smart Bidding acts as if you’re exactly right. It usually doesn’t soften that prediction or test into it.

The study showed that this is why CPCs climbed much faster for advertisers who used adjustments:

CPC inflation (no adjustment vs. with adjustment)

  • 2022: +17% vs. +36.7%
  • 2023: +16% vs. +32%
  • 2024: +17% vs. +34%

Adjustments consistently doubled CPC inflation, even though Smart Bidding was already raising bids based on real-time conversion signals.

#3: ROAS dropped for advertisers using seasonality adjustments

When CPC increases outpace conversion rate increases, ROAS inevitably suffers.

ROAS change (no adjustment vs. with adjustment)

  • 2022: -2% vs. -17%
  • 2023: -1.5% vs. -10%
  • 2024: +5.7% vs. -15.7%

The “no adjustment” group maintained stable ROAS, even improving in 2024. The “with adjustment” group saw steep declines every year.

Why Do Seasonality Adjustments Struggle During BFCM?

Optmyzr explains this dynamic as a precision issue.

When you apply a seasonality adjustment, you are making a specific prediction about the conversion lift. If you estimate the lift at +40% and the real lift ends up being +32–35%, that gap translates directly into overbidding.

Fred Vallaeys writes:

Smart Bidding takes this literally. It does not hedge your bet. It assumes you have perfect foresight.

That’s the core problem.
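
Here is that arithmetic as a small sketch with invented numbers: bids (and therefore CPCs) rise with the predicted lift, conversions only rise with the actual lift, and the gap between the two comes straight out of ROAS.

```python
# Illustration of overbidding from a misestimated seasonality lift.
# Simplifying assumption: CPCs scale with the predicted conversion-rate
# lift, while conversions scale with the real one. Numbers are invented.

baseline_cpc = 1.00          # dollars per click
baseline_cvr = 0.05          # conversions per click
revenue_per_conversion = 60  # dollars

predicted_lift = 0.40  # you tell Google to expect +40%
actual_lift = 0.33     # the real BFCM lift lands at +33%

adjusted_cpc = baseline_cpc * (1 + predicted_lift)
actual_cvr = baseline_cvr * (1 + actual_lift)

baseline_roas = revenue_per_conversion / (baseline_cpc / baseline_cvr)
adjusted_roas = revenue_per_conversion / (adjusted_cpc / actual_cvr)

print(f"Baseline ROAS: {baseline_roas:.2f}")                     # 3.00
print(f"With adjustment: {adjusted_roas:.2f}")                   # ~2.85
print(f"ROAS change: {adjusted_roas / baseline_roas - 1:+.1%}")  # about -5%
```

Even this modest seven-point gap between predicted and actual lift costs about 5% of ROAS in the toy example, before any other inefficiencies are counted.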

Black Friday and Cyber Monday are also in the category of highly predictable retail events. Google has years of historical BFCM data to model expected shifts. As a result, Optmyzr concludes:

Seasonality adjustments work best when Google cannot anticipate the spike.

BFCM is not one of those situations. It’s practically encoded into Google’s models.

The Trade-Off: More Revenue, Lower Efficiency

The study did show that advertisers using seasonality adjustments often drove higher revenue growth:

Revenue growth (no adjustment vs. with adjustment)

  • 2022: +25% vs. +50.5%
  • 2023: +30.3% vs. +52.8%
  • 2024: +33.8% vs. +39.9%

In 2022 and 2023, the incremental revenue jump was significant. But again, those gains came with notable ROAS declines.

This supports a practical interpretation:

  • If your brand’s priority is aggressive market share capture, top-line revenue or inventory liquidation, seasonality adjustments can deliver more volume.
  • If your brand’s priority is profitable performance, adjustments tend to work against that goal during BFCM.

When Seasonality Adjustments Do Make Sense

In the study, Optmyzr made it very clear: seasonality adjustments themselves aren’t the problem. The misuse of them is.

They work well in scenarios where you genuinely have more insight into the spike than the platforms do, such as:

  • A short flash sale
  • A new one-time promotion with no historical precedent
  • A large, concentrated email push
  • Niche events with little global relevance

Situations where they may not make the most sense:

  • Black Friday and Cyber Monday (supported by their data study)
  • Christmas shopping windows
  • Valentine’s Day for gift categories

These events are already modeled extensively by Google’s bidding systems.

What Should PPC Managers Do With This Data?

If you’re looking to make some changes to your PPC accounts this holiday season, here are a few ways to apply these findings in a practical way.

#1: Default to not using seasonality adjustments for BFCM

For the majority of advertisers, letting Smart Bidding handle the conversion rate spike naturally leads to steadier ROAS and fewer surprises.

The data supports this approach across three consecutive years.

#2: If leadership insists on volume, be explicit about the trade-off

You can lean on Optmyzr’s findings to set expectations, not just express an opinion.

For example:

  • “Optmyzr’s three-year analysis shows that seasonality adjustments can increase revenue but typically reduce ROAS by 10-17 percentage points.”
  • “We can use them if revenue volume is the priority, but we will need to prepare for much lower cost efficiency.”

These examples keep the conversation focused on the business, not just the tactical levers you pull.

#3: Spend your energy on guardrails, not the predictions

In the study, Optmyzr reminds advertisers that trusting the algorithm doesn’t mean blindly letting it run without any oversight.

Instead of guessing the exact uplift, your value during peak season comes from:

  • Smart budget pacing
  • Hourly monitoring (with automated alerts, of course!)
  • Bid caps when necessary
  • Audience and device segmentation checks
  • Creative and offer readiness

These are some of the key areas where human judgment beats prediction.

Final Thoughts On Optmyzr’s Study

Optmyzr’s study doesn’t argue that seasonality bid adjustments are bad. What it does argue is that context is everything.

For predictable, high-volume retail events like BFCM, Google’s bidding systems already have the signal they need. Adding your own forecast often leads to overshooting, inflated CPCs, and unnecessary efficiency loss.

For unique or brand-specific spikes, adjustments remain valuable.

This research gives PPC managers something we rarely get during BFCM: solid data to support a more measured, less reactive approach. If nothing else, it gives you the backup you need the next time someone asks:

“Should we turn on seasonality adjustments this Black Friday?”

Your answer can be confident, data-driven, and clear.

Google’s Mueller Questions Need For LLM-Only Markdown Pages via @sejournal, @MattGSouthern

Google Search Advocate John Mueller has pushed back on the idea of building separate Markdown or JSON pages just for large language models (LLMs), saying he doesn’t see why LLMs would need pages that no one else sees.

The discussion started when Lily Ray asked on Bluesky about “creating separate markdown / JSON pages for LLMs and serving those URLs to bots,” and whether Google could share its perspective.

Ray asked:

Not sure if you can answer, but starting to hear a lot about creating separate markdown / JSON pages for LLMs and serving those URLs to bots. Can you share Googleʼs perspective on this?

The question draws attention to a developing trend where publishers create “shadow” copies of important pages in formats that are easier for AI systems to understand.

There’s a more active discussion on this topic happening on X.

What Mueller Said About LLM-Only Pages

Mueller replied that he isn’t aware of anything on Google’s side that would call for this kind of setup.

He notes that LLMs have worked with regular web pages from the beginning:

I’m not aware of anything in that regard. In my POV, LLMs have trained on – read & parsed – normal web pages since the beginning, it seems a given that they have no problems dealing with HTML. Why would they want to see a page that no user sees? And, if they check for equivalence, why not use HTML?

When Ray followed up about whether a separate format might help “expedite getting key points across to LLMs quickly,” Mueller argued that if file formats made a meaningful difference, you would likely hear that directly from the companies running those systems.

Mueller added:

If those creating and running these systems knew they could create better responses from sites with specific file formats, I expect they would be very vocal about that. AI companies aren’t really known for being shy.

He said some pages may still work better for AI systems than others, but he doesn’t think that comes down to HTML versus Markdown:

“That said I can imagine some pages working better for users and some better for AI systems, but I doubt that’s due to the file format, and it’s definitely not generalizable to everything. (Excluding JS which still seems hard for many of these systems).”

Taken together, Mueller’s comments suggest that, from Google’s point of view, you don’t need to create bot-only Markdown or JSON clones of existing pages just to be understood by LLMs.

How Structured Data Fits In

Other individuals in the thread drew a line between speculative “shadow” formats and cases where AI platforms have clearly defined feed requirements.

A reply from Matt Wright pointed to OpenAI’s eCommerce product feeds as an example where JSON schemas matter.

In that context, a defined spec governs how ChatGPT ingests and displays product data. Wright explains:

Interestingly, the OpenAI eCommerce product feeds are live: JSON schemas appear to have a key role in AI search already.

That example supports the idea that structured feeds and schemas are most important when a platform publishes a spec and asks you to use it.

Additionally, Wright points to a thread on LinkedIn where Chris Long observed that “editorial sites using product schemas, tend to get included in ChatGPT citations.”
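
For teams acting on that observation, schema.org’s Product type is the documented starting point. Here is a minimal sketch of generating the JSON-LD; the values are placeholders, and markup alone is no guarantee of inclusion anywhere.

```python
# Minimal sketch: emit schema.org Product JSON-LD for an editorial page.
# Values are placeholders; consult schema.org/Product and each platform's
# own documentation for the properties that actually matter to it.
import json

product_markup = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Trail Sneaker",
    "description": "Waterproof trail sneaker with a breathable membrane.",
    "offers": {
        "@type": "Offer",
        "price": "129.00",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
}

# Embed the output in the page as:
# <script type="application/ld+json"> ... </script>
print(json.dumps(product_markup, indent=2))
```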

Why This Matters

If you’re questioning whether to build “LLM-optimized” Markdown or JSON versions of your content, this exchange can help steer you back to the basics.

Mueller’s comments reinforce that LLMs have long been able to read and parse standard HTML.

For most sites, it’s more productive to keep improving speed, readability, and content structure on the pages you already have, and to implement schema where there’s clear platform guidance.

At the same time, the Bluesky thread shows that AI-specific formats are starting to emerge in narrow areas such as product feeds. Those are worth tracking, but they’re tied to explicit integrations, not a blanket rule that markdown is better for LLMs.

Looking Ahead

The conversation highlights how fast AI-driven search changes are turning into technical requests for SEO and dev teams, often before there is documentation to support them.

Until LLM providers publish more concrete guidelines, this thread points you back to work you can justify today: keep your HTML clean, reduce unnecessary JavaScript where it makes content hard to parse, and use structured data where platforms have clearly documented schemas.



EU Plan To Simplify GDPR Targets AI Training And Cookie Consent via @sejournal, @MattGSouthern

The European Commission has proposed a “Digital Omnibus” package that would relax parts of the GDPR, the AI Act, and Europe’s cookie rules in the name of competitiveness and simplification.

If you work with EU traffic or rely on European data for analytics, advertising, or AI features, it’s worth tracking this proposal even though nothing has changed in law yet.

What The Digital Omnibus Would Change

The Digital Omnibus would revise several laws at once.

On AI, the proposal would push back stricter rules for high-risk systems from August 2026 to December 2027. It would also lighten documentation and reporting obligations for some systems and move more oversight to the EU AI Office.

Regarding data protection, the Commission aims to clarify when information is no longer considered ‘personal,’ making it easier to share and reuse anonymized and pseudonymized datasets, especially for AI training.

Privacy group noyb says this new wording isn’t just about clarifying the rules. They believe the proposal introduces a more subjective approach, hinging on what a controller claims it can or plans to do. Noyb warns this change could exclude parts of the adtech and data-broker industry from GDPR protections.

Cookies, Consent, And Browser Signals

The cookie section is likely to be the most visible change for your day-to-day work if the proposal moves forward.

The Commission wants to cut “banner fatigue” by exempting some non-risk cookies from consent pop-ups and shifting more control into browser-level settings that apply across sites.

In practice, that would mean fewer consent banners for low-risk uses, such as certain analytics or strictly functional storage, once categories are defined.

The proposal would also require websites to respect standardized, machine-readable privacy signals from browsers when those standards exist.
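
Global Privacy Control (GPC) is one existing signal of this kind: participating browsers send a `Sec-GPC: 1` request header. A server-side check might look like this sketch; the surrounding application logic is assumed.

```python
# Illustrative sketch: honoring a browser privacy signal server-side.
# Browsers with Global Privacy Control enabled send "Sec-GPC: 1".

def tracking_allowed(request_headers: dict[str, str]) -> bool:
    """Return False when the browser signals an opt-out via GPC."""
    return request_headers.get("Sec-GPC", "").strip() != "1"

# Example with hypothetical request headers:
headers = {"Sec-GPC": "1", "User-Agent": "ExampleBrowser/1.0"}
if not tracking_allowed(headers):
    # Skip setting analytics/advertising cookies for this request.
    print("Privacy signal detected: disable non-essential tracking.")
```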

AI Training & Data Rights

One of the most contested pieces of the Digital Omnibus is how it treats data used to train AI systems.

The package would allow companies including Google, Meta, and OpenAI to use Europeans’ personal data to train AI models under a broadened legal basis.

Privacy groups have argued that this kind of training should rely on explicit opt-in consent, rather than the more flexible approach they see in the proposal.

Noyb warns that long-running behavioral data, such as social media histories, could be used to train AI systems with only an opt-out model that is difficult for people to exercise in practice.

Why This Matters

This proposal is worth keeping on your radar if you’re responsible for analytics, consent, or AI-driven products that reach EU users.

Over time, you might observe smaller, browser-driven consent experiences for EU traffic, along with a different compliance approach for AI features that depend on behavioral data.

For now, nothing in your cookie banners, GA4 setup, or AI workflows needs to change solely because of the Digital Omnibus.

Looking Ahead

The Digital Omnibus is an early signal that the EU is re-balancing its digital rulebook around AI and competitiveness, not privacy and enforcement alone.

Key items to monitor include Parliament’s amendments to AI training and data language, cookie and browser-signal provisions for CMPs and browsers, and changes to AI training and consent for EU users.



The Behaviors And Mindset Of Marketers Who Win With Performance Max via @sejournal, @MenachemAni

Performance Max (like the more upper-funnel Demand Gen) is different enough from other Google Ads campaigns that it requires a different approach, even if the underlying search behavior and marketing principles are the same as they’ve always been.

For what it’s worth, Performance Max is typically not the first campaign to launch in any account. We typically start with Search and/or Shopping before layering on Performance Max when it makes sense, e.g., testing and scaling.

But when the time comes to make it work, it takes a specific mindset. And if your Google Ads methods and principles are still stuck in 2015, you’re not going to get very far.

Here’s how to tailor your approach and become a mentality monster for Performance Max.

Performance Max At Its Most Basic Level

A strong mindset for modern PPC begins with knowledge and education. If you still don’t understand the fundamental differences between Performance Max and legacy campaign types (like Search and Shopping), that’s step one.

The TL;DR is simple: Performance Max is driven by algorithms, not inputs or controls. There’s a certain degree of surrendering to the system that goes with it, and trying to exert control when there’s none to claim will only end up with a large chunk of wasted spend.

If you think you can be the exception to the rule and force Performance Max into traditional campaign structures, all you’ll do is choke the algorithm and spend money on poor-quality conversions. This has a compounding effect where the system then believes those are valid conversions and will try to bring you more of the same.

Here are five core truths to keep in mind:

1. You don’t control targeting. Performance Max simply does not go where you tell it to. At best, you can provide initial direction in the form of audience signals. But it will eventually start to make its own decisions about which channels to show your ads on and which audiences to pursue. Even keywords act as guidance rather than rules the system follows strictly.

2. You don’t decide which headlines get paired with which creatives. With Performance Max, you’ll still need to build all the pieces of your ads: responsive search ads, video and static creatives, product feeds with robust descriptors, and so on. But how those get mixed and matched isn’t up to you. Google’s system will test different combinations with different audiences before settling on what works best.

3. You don’t get full visibility into every query or placement. There’s no question that Performance Max is capable of delivering great results. If you want that, then you simply have to accept that you must give up a certain degree of visibility into where your ads show and why. You may not like it, but this campaign only works when you set things up properly and trust the system (while still supervising and verifying its output).

4. Data, not content, is king. Performance Max runs on data, and Google expects you to provide far more data than it will. Accounts with more conversion data will perform better because Google has more user signals to decode. With clearer first-party inputs, Performance Max is more likely to deliver the conversions you want. The clearer your audience signals are, the easier it is to quickly move out of the learning phase. And a more complete and accurate product feed will go a long way in getting your products in front of people who want them.

5. That being said, reporting is getting better but can still be frustrating. We only recently got access to things like asset group reporting, search terms reports and negative keywords for Performance Max. It’s far more visibility than we had a few years ago, but Google is still some distance off the ideal balance. I’d advise you to make peace with the fact that reporting won’t be perfect and attribution will be even murkier than usual.

Fortunately, there’s plenty that you can control. Those factors just happen to be broader marketing principles and strategic direction:

  • Positioning, offer, and messaging strategy.
  • Quality and depth of your product feed.
  • Strength of your audience signals.
  • Depth of your first-party data inputs, e.g., conversion tracking, customer lists, data feeds.
  • Relevance of your ad copy, creatives, and landing pages.
  • Bidding strategy and goals.
  • Campaign and asset group structure at a high level.


Read more: Should Advertisers Rethink The ‘For Vs. Against’ Stance On Performance Max?

Traits Of PPC Managers Who Struggle With Performance Max

I see PPC managers every day who are so set in their ways that all they can do is complain about some part of Google’s machine learning. While it’s perfectly fine to stick with Search and Shopping, what’s not okay is bringing that mindset to Performance Max and expecting results anyway. And there are some behaviors that show up most frequently.

  • They require granular control over everything. Wanting to dictate exactly how the system should operate is a red flag when managing Performance Max. These managers have a natural distrust of all things machine learning and want to deploy perfect Exact Match keywords, complicated manual bidding strategies, and specific traffic sculpting techniques.
  • They believe their experience is a guarantee of success. But they don’t put in the effort to stay up-to-date on market and technological developments. These are typically old school marketers (like me) who haven’t kept up with the modern pace of Google Ads or feel entitled to success because of their tenure (unlike me).
  • They specialize in Google Ads account management and little else. Modern PPC demands that account managers have a basic level of skill in areas like copywriting, landing page theory, conversion rate optimization, product feed management, market and audience research, and offer positioning. People who refuse to treat Google Ads as one piece of a wider marketing puzzle are learning this the hard way.
  • They don’t have the diamond hands needed to trust their strategy. “Eyes on, hands off” is our approach. People who push back at the first sign of below-average output tend to make changes that reset the learning period, which only delays Google’s ability to start delivering good conversions. Since it can take three to six weeks (in my experience) to get to a good position with Performance Max, you need to know when not to make changes. Get early buy-in from clients (and the budget needed to ride it out) as you work through this early period.
  • They take a “set it and forget it” approach to automation and machine learning. Part of exiting the learning period in Performance Max quickly is keeping an eye on early results and providing data inputs so the system learns what you want more/less of. Don’t just ride out the post-launch period without tracking what Google’s bringing to the plate.
  • They expect the system to magically understand what the client wants. One of the toughest parts of modern PPC is persuading clients to provide access to data that Google needs in order to understand what success looks like on the business side. The flipside is that without this input, Google will simply make guesses until it finds something you like. This is especially true for lead-gen brands like plumbers and contractors.

Quick disclaimer: Some industries require a granular level of control, either due to regulatory and compliance mandates or because Google simply doesn’t have enough search and user volume to make informed decisions in that niche. Accounts operating in areas like pharmaceuticals, legal services, and similar niches need a higher level of control than mass market verticals like apparel or beverages.

The PPC Manager Who Wins With Performance Max

Algorithmic campaigns aren’t suitable for every account. Sometimes, it’s just better to stick to Search and Shopping. But when there is an opportunity to scale with Performance Max, there’s a specific type of person you want in charge of the process.

  • They know where they’re more useful. Marketers who are willing to hand over control of ad operations to the system are able to focus on impactful areas where machines still struggle to create differentiated output: creative, ad copy, landing pages and their UX, strategy, and data sourcing and interpretation.
  • They accept that they’re only as good as their last campaign. Good PPC managers in the modern era don’t just treat Performance Max as its own campaign. They understand that just because one campaign worked a certain way doesn’t mean the next one will, too. What you want is someone who’s ready and willing to learn with every new project and iteration.
  • They understand the value of data and how to source it. Marketers who focus on building an ecosystem of data inputs and learning get better results with Performance Max because they give Google more information to base its decisions on. Someone who knows where to find those and how to convince clients that they’re mission-critical is worth their weight in gold.
  • They know how to stick to the plan. When you put in work only for a campaign to return poor results in the first week, it’s tempting to burn everything down and try something new. Marketers who build a plan for those first weeks and stick to it have the patience and confidence needed to eventually get Performance Max to a position of power.
  • They excel at client communication. A lead-gen client that refuses to share its customer data is never going to get good results from Performance Max. Good marketers can see that and will recommend traditional Search instead of creating additional friction by pushing for CRM access. Another underrated trait is proactively setting expectations with clients and communicating with them throughout the campaign.

PPC-Adjacent Skills To Develop For Performance Max Success

With Google Ads demanding a more holistic marketing approach, so much of your success with Performance Max begins outside of the ad account. With the system taking over much of the button-pushing that we used to do, here’s where you should be upskilling in order to cement your future in PPC.

Why I’m Bullish: Performance Max Is The Start Of The Future

Added balance between machine learning and human control is Google telling us that we only have one choice: learn to work together on these algorithmic campaigns. Performance Max has changed significantly from when it was first released, and so has Google’s attitude.

Newer features in Performance Max, like negative keywords and improved reports, help refine campaigns and offer advertisers more of what we’ve been asking for. But this can be dangerous if you don’t make the right decisions – you might see that video ads are not performing as well and remove them, only to find that their role is to push certain conversions down the line.

As it stands, Performance Max today is perfectly viable for virtually any type of business – a far cry from its early use case being limited to big-budget ecommerce and retail (how viable it is for a specific business still depends on factors such as budget, expertise, risk tolerance, and data availability).

So, while you may not necessarily need it today or every day, you should be adapting to this new direction if your top priority is to protect your business, career, and clients.


The Founder-Led Growth Loop: How To Amplify And Measure Executive Voice For Real ROI via @sejournal, @purnavirji

In this series (here and here), I’ve covered why founder-led marketing works and the systems you need to stay consistent, based on the playbook I co-authored for LinkedIn (my employer).

You’ve built the content engine and the operational frameworks to avoid burnout. Now comes the final, most critical part: proving it works.

Your founder provides the authentic voice. Your job as the marketer is to amplify that voice to the entire market and build the measurement framework that proves to the board, “This is working.”

This is how you turn a content strategy into a scalable, predictable, full-funnel growth loop.

Part 1: Amplify What’s Already Working

Your founder’s organic content is resonating, but it’s only reaching their first-degree network. Why guess what might work when you can use data to amplify what’s already working?

This is the most efficient paid strategy you can run, because paid works better when it’s built on trust. Our playbook data shows that startups whose directors post actively already generate 33% more leads through their paid campaigns.

Your secret weapon is Thought Leader Ads (TLAs).

TLAs are a LinkedIn ad format that lets you promote posts from individuals – founders, employees, even customers – rather than just your company page. They look and feel like organic posts: authentic, human, and scroll-stopping.

In general, TLAs are a high-performing format, driving 1.5x higher click-through rates (CTRs), 30% more efficient cost per click (CPC), and 2x follower growth.

Apply them to startups and the impact is even bigger:

  • 7.6x more engagement than any other paid ad format.
  • 5x higher video engagement with video TLAs than regular sponsored video ads.

This isn’t just a top-of-funnel awareness play. You can use TLAs to build a full-funnel machine:

  • Top-of-Funnel: Amplify your founder’s best “scar story” or “contrarian take” post to your entire Ideal Customer Profile.
  • Mid-Funnel: Retarget everyone who engaged with that TLA with a more direct offer, like a Conversation Ad or a Lead Gen Form for a webinar.
  • Bottom-of-Funnel: Add this engaged audience to your nurture sequences and track them as they become sales-qualified leads.

The foundation is your founder’s best organic posts. From there, you can plug them into a full-funnel paid strategy.

Part 2: Build The Measurement Framework

This strategy feels right, but you have to prove it.

The biggest challenge in founder-led marketing is that the most important metrics – trust, reputation, resonance – don’t show up on a simple dashboard. They show up in your deal velocity, your DMs, and the way people talk about you when you’re not in the room.

There are ways you can start to track these on LinkedIn. Let’s break it down.

First 90 Days: Track Leading Indicators

Validate whether your content is resonating before it drives pipeline:

  • Engagement quality: Comments from ideal customer profiles (ICPs), DMs received, reposts by peers.
  • Audience growth: Follower count, especially from target segments.
  • Conversation starters: Number of inbound messages or replies sparked by content.
  • Profile metrics: Track who’s viewing your profile after seeing your posts.

LinkedIn recently expanded its analytics for individual members, giving you more visibility into how your content performs. Under the “Analytics” tab, you can now track:

  • Profile views from a post.
  • Followers gained from a post.
  • Audience demographics (job title, industry, location).
  • Premium button clicks (if you have a custom CTA).

These metrics help you move beyond vanity metrics to start measuring resonance – what’s landing, with whom, and why.

What not to do: Obsess over engagement metrics, delete underperforming posts, or let your founder compare themselves to established thought leaders. These habits will drain motivation before your systems are strong enough to carry them through the dip.

Next 90 Days: Track Momentum

Track how your content is influencing relationships and reputation:

  • Prospect mentions: Train your sales team to log every time a prospect mentions your founder’s content during calls.
  • Dark social mentions: Track when your content gets shared in private peer networks like Slack groups or email threads.
  • Content-influenced deals: Create a CRM field to tag every prospect who mentions your posts.

Scott Albro, TOPO founder, does this in Salesforce by creating a “content-influenced” deal stage and tagging every prospect who mentions posts, comments, or competitor reactions. Then he measures deal velocity and pipeline.

Irina Novoselsky, CEO of Hootsuite, shared her results in the playbook: “I just did the math on my daily LinkedIn commitment over the last 3 months—10M+ impressions generated. But most importantly, 37% of our monthly leads are influenced by my social presence.”

Her team saw measurable business impact:

  • Executive presence was mentioned more frequently in sales calls in Q1 2025 than in all of 2024.
  • Deals closed faster when buyers referenced her content.
  • Enterprise opportunities influenced by her social presence had higher ACV.

Kacie Jenkins, former SVP of Marketing at Sendoso, found that when a prospect followed one of their Director+ executives on LinkedIn, they saw 11% higher win rates and 120% larger closed-won deal sizes.

Peep Laja, CEO of Wynter, tracks self-reported attribution: “About 80% of people signing up for Wynter or scheduling a demo say they found me on LinkedIn.”

6 Months Onwards: Business Impact Metrics

Track your lagging indicators:

  • Increasing inbound pipeline: Gal Aga’s rule is “if 20%+ of your pipeline mentions your content, you’ve won.”
  • Increasing deal velocity: Deals with content-influenced leads close faster due to pre-established trust.
  • Attracting talent: Job applicants cite your posts.
  • Owning your category: You’re increasingly referenced in industry conversations.

Connect The Paid Loop

This final step connects amplification and measurement. How do you prove your TLA spend is driving revenue?

Use LinkedIn’s Conversions API (CAPI) to connect your CRM and website data directly to LinkedIn. This gives you visibility into offline actions and helps you attribute pipeline.

LinkedIn’s revenue attribution tools let you measure impact at the business, campaign, and company level. One tech company using revenue attribution found 36% higher win rates and 37% shorter deal cycles.

Startup advisor Canberk Beker sums it up: “When founders connect their organic presence to paid strategy – and measure both direct and influenced pipeline – they see outsized ROI. We’ve proven that TLAs lift demo requests and drive cross-channel conversions.”

Your Role As The Growth Multiplier

A founder-led strategy is a game-changer for sales and marketing.

Your founder’s job is to be the authentic voice. Your job as the marketer is to build the machine around them.

By connecting an authentic organic strategy with a high-powered amplification lever and a sophisticated measurement framework, you create a complete growth loop.

This is the modern marketing engine, one that builds trust at scale and proves its impact on the bottom line.

All data, quotes, and examples cited above without a source link are taken from the “Founder-Led Sales and Marketing Never Ends” playbook.


Canonical URLs: definitive guide to canonical tags 

Imagine telling someone that www.mysite.com/blog/myarticle and www.mysite.com/myarticle are actually the same page. To you, they’re the same, but to Google, even a small difference in the URL makes them separate pages. That is where the canonical tag steps in. In this guide, we will walk you through what a canonical URL is, how URL canonicalization works, when to use it, and which mistakes to avoid so that search engines always understand your preferred page version.

What is a canonical URL?

A canonical URL is the main, preferred, or official version of a webpage that you want search engines like Google to crawl and index. It helps search engines determine which version of a page to treat as the primary one when multiple URLs lead to similar or duplicate content. As a result, it helps you avoid duplicate content issues and protects your SEO ranking signals.

All of the following URLs can show the same page, but you should set only one as the canonical URL:

  • https://www.mysite.com/product/shoes
  • https://mysite.com/product/shoes?ref=instagram
  • https://m.mysite.com/product/shoes
  • https://www.mysite.com/product/shoes?color=black

What is a canonical tag?

A canonical tag (also called a rel="canonical" tag) is a small HTML snippet placed inside the <head> section of a webpage to tell search engines which URL is the canonical or master version. It acts like a clear label saying, “Index this page, not the others.” This prevents duplicate content issues, consolidates ranking signals, and supports proper canonicalization across your site.

Here’s an example of a canonical tag in action, using the shoes page from above:

<link rel="canonical" href="https://www.mysite.com/product/shoes" />

This tag should be placed on any alternate or duplicate version, pointing back to the main page you want indexed.

How does URL canonicalization work?

Canonicalization is the process of selecting the representative or canonical URL of a piece of content. From a group of identical or nearly identical URLs, this is the version that search engines treat as the main page for indexing and ranking.

Once you understand that, canonicalization becomes much easier to visualize. Think of it as a three-step workflow.

How the canonicalization process works

Here’s how the process works, in three steps:

Search engines detect duplicate or similar URLs

Google groups URLs that return the same (or almost the same) content. These could come from:

  • URL parameters
  • HTTP vs. HTTPS versions
  • Desktop vs. mobile URLs
  • Filtered or sorted pages
  • Regional versions
  • Accidental duplicates like staging URLs

You signal which URL is canonical

You can guide search engines using canonical signals like:

  • The rel="canonical" tag
  • 301 redirects
  • Internal links pointing to one preferred version
  • Consistent hreflang usage
  • XML sitemaps listing the preferred URL
  • HTTPS over HTTP

The strongest and clearest hint is the canonical tag placed in the head of the page.

Google selects one canonical URL

Google uses your signals, along with its own evaluation, to determine the primary URL. While Google typically follows canonical tags, it may override them if it detects stronger signals such as redirects, internal linking patterns, or user behavior.

Once Google settles on the canonical URL, search engines will:

  • Consolidate link equity into the canonical page
  • Index the canonical URL
  • Treat all non-canonical URLs as duplicates
  • Reduce crawl waste
  • Avoid showing similar pages in search results

Canonical tags are a hint, not a directive. Google may still distribute link equity differently if it deems the canonical tag unreliable.

Reasons why canonicalization happens

Canonicalization becomes necessary when different URLs lead to the same content. Some common reasons are:

Region variants

For example, you have one product page for the USA and one for the UK, like: https://example.com/product/shoes-us and https://example.com/product/shoes-uk.

If the content is almost identical, use one canonical link or a clear regional setup to avoid confusion.

Pro tip: For regional variants, combine canonical tags with hreflang to specify language/region targeting.
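
For the shoes example above, that combination might look like this on the US page – a minimal sketch, with each regional page self-canonicalizing and listing its alternates:

<link rel="canonical" href="https://example.com/product/shoes-us" />
<link rel="alternate" hreflang="en-us" href="https://example.com/product/shoes-us" />
<link rel="alternate" hreflang="en-gb" href="https://example.com/product/shoes-uk" />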

Device variants

You might serve separate URLs for mobile and desktop, such as https://m.example.com/product/shoes and https://www.example.com/product/shoes.

Canonical tags help search engines understand which URL is the primary version.
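
Google’s documented pattern for separate mobile URLs pairs two tags: the desktop page declares the mobile alternate, and the mobile page canonicalizes back to desktop. A sketch using the URLs above:

On https://www.example.com/product/shoes (desktop):

<link rel="alternate" media="only screen and (max-width: 640px)" href="https://m.example.com/product/shoes" />

On https://m.example.com/product/shoes (mobile):

<link rel="canonical" href="https://www.example.com/product/shoes" />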

Parameter variants

Sorting and filtering often create many URLs that show similar content, like:

https://example.com/shoes?sort=price or https://example.com/shoes?color=black&size=7

A single canonical URL, such as https://example.com/shoes, tells search engines which page should carry the main ranking signals.
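
In the <head> of each filtered or sorted page, that would look like:

<link rel="canonical" href="https://example.com/shoes" />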

Also read: Optimizing ecommerce product variations for SEO and conversions

Accidental variants

Maybe a staging or demo version of the site is left crawlable, or both https://example.com/page and https://example.com/page/ return the same content.

Canonical tags and proper URL canonicalization help avoid these unintentional duplicates.

Some duplicate content on a site is normal. The goal of canonicalization in SEO is not to eliminate every duplicate, but to show search engines which URL you want them to treat as the primary one.

In practice

Canonicalization comes down to a few key things:

Placement

The canonical tag is placed in the <head> of the HTML, for example:

<link rel="canonical" href="https://www.example.com/preferred-page" />

Each page should have at most one canonical tag, and it should point to the clean, preferred canonical URL.

Identification

Search engines examine several signals to determine the canonical version of a page. The rel="canonical" tag is important, but they also consider 301 redirects, internal links, sitemaps, hreflang, and whether the page is served on HTTPS. When these signals are consistent, it is easier for Google to pick the right canonicalized URL.

Crawling and indexing

Once search engines understand which URL is canonical, they primarily crawl and index that version, folding duplicates into it. Link equity and other signals are consolidated to the canonical page, which improves stability in rankings and makes your canonical tag SEO setup more effective.

The main rule for canonicalization is simple: if multiple URLs display the same content, choose one, make it your canonical URL, and clearly signal that choice with a proper canonical tag.

Google’s John Mueller puts it simply: “I recommend doing this kind of self-referential rel=canonical because it really makes it clear for us which page you want to have indexed or what this URL should be when it’s indexed.”

And that’s exactly why canonical tags matter; they tell search engines which version of a page is the real one. This keeps your SEO signals clean and prevents your site from competing with itself.

They’re important because they:

  • Avoid duplicate content issues: Canonical tags inform Google which URL should be indexed, preventing similar or duplicate pages from confusing crawlers or diluting rankings
  • Consolidate link equity: Canonicalization works similarly to internal linking; both are techniques used to direct authority to the page that matters most. Instead of splitting ranking signals across duplicate URLs, all information is consolidated into a single canonical URL
  • Improve crawl efficiency: Search engines don’t waste time crawling unnecessary duplicate pages, which helps them discover your important content faster
  • Enhance user experience: Users land on the correct, up-to-date version of your page, not a filtered, parameterized, or accidental duplicate

When should you use canonical tags?

Canonical tags are useful in all sorts of everyday SEO work. Here are the most common scenarios where you’ll want to use a rel=canonical tag to signal your preferred URL.

URL versions

If your page loads under multiple URL formats, with or without “www,” HTTP vs. HTTPS, and with or without a trailing slash, search engines may index each version separately. A canonical tag helps you standardize the preferred version so Google doesn’t treat them as separate pages.

Duplicate content

Ecommerce sites, blogs with tag archives, and category-driven pages often generate duplicate or near-duplicate content by design. If the same product or article appears under multiple URLs (filters, parameters, tracking codes, etc.), canonical tags help Google understand which canonical URL is the authoritative one. This prevents cannibalization and protects your canonical SEO setup.

Also read: Ecommerce SEO: how to rank higher & sell more online

Syndicated content

If your content is republished on partner sites or aggregators, always use a canonical tag that points back to your original version. This ensures your page retains the ranking signals, not the syndicated copy, and search engines know exactly where the content was originally published.
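
On the syndicated copy, the tag points across domains back to your original – a minimal sketch, with a hypothetical partner page carrying:

<link rel="canonical" href="https://www.yoursite.com/original-article" />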

If syndication partners don’t honor your canonical tag, consider using noindex or negotiating link attribution.

Paginated pages

Long lists or multi-page articles often create a chain of URLs like /page/2/, /page/3/, and so on. These pages contribute to the same topic, but you usually don’t want them competing with each other in search. Canonical tags keep the series tidy: in most cases, each page should reference itself, or the whole series can point to a fast, crawlable “view-all” version if one exists. Avoid pointing every page at page 1, as that can hide deeper content from search engines.

Pro tip: For paginated content, use self-referencing canonicals (each page points to itself) unless you have a ‘view-all’ page that loads quickly and is crawlable.
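
With the self-referencing approach, page 2 of a series would carry a tag like this (path hypothetical):

<link rel="canonical" href="https://www.example.com/blog/page/2/" />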

Also read: Pagination & SEO: best practices

Site migrations

When you change domains, restructure URLs, or move from HTTP to HTTPS, using consistent canonical tags helps reinforce which pages replace the old ones. It signals to search engines which canonicalized URL should inherit ranking power. During migrations, canonical tags act as a safety net to prevent duplicate versions from competing with each other.

How do you implement canonical URLs?

URL canonicalization is all about giving search engines a clear signal about which version of a page is the preferred or canonical URL. You can implement it in several ways.

Using the rel="canonical" tag

The most common way (as shown multiple times in this blog post) to set a canonical URL is by adding a rel="canonical" tag in the head section of your page. It looks like this:

<link rel="canonical" href="https://www.example.com/preferred-url" />

This tag tells search engines which URL should carry all ranking signals and appear in search results. Ensure that every duplicate or alternate version links to the same preferred URL, and that the canonical tag is consistent throughout the site.

You can also use rel="canonical" in HTTP headers for non-HTML content such as PDFs. This is helpful when you cannot place a tag in the page itself.

Pro tip: While supported for PDFs, Google may not always honor canonical HTTP headers. Use them in conjunction with other signals (e.g., sitemaps).
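
For example, if the same PDF is reachable at several URLs, each copy could send a header like this (file path hypothetical):

Link: <https://www.example.com/downloads/catalog.pdf>; rel="canonical"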

Also, ensure the canonical tag is as close to the top of the head section as possible so that search engines can see it early. Each page should have only one canonical tag, and it should always point to a clean, accessible URL. Avoid mixing signals. The canonical URL, your internal links, and your sitemap entries should all match.

Keeping your domain signals consistent

Google used to let you choose whether your URLs appear with or without www via a “preferred domain” setting, but that feature was retired from Search Console in 2019. Today, you reinforce the preference with your other canonical signals: 301 redirects from the non-preferred host to the preferred one, canonical tags that always use the preferred version, and internal links and sitemap entries that match it. This prevents search engines from treating www and non-www versions as different URLs.

Redirects (301 redirects)

A 301 redirect is one of the strongest signals you can send. It permanently tells browsers and search engines that one URL has moved to another and that the new URL should be treated as the canonical URL.

Use 301 redirects when:

  • You merge duplicate URLs
  • You change your site structure
  • You migrate to HTTPS
  • You want to consolidate link equity from outdated pages

The difference is that redirects replace the old URL entirely, while canonical tags suggest a preference without removing the duplicate.
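
At the protocol level, a 301 is just a status code plus a Location header. A sketch of the response a server might send when merging a duplicate URL (URLs hypothetical):

HTTP/1.1 301 Moved Permanently
Location: https://www.example.com/product/shoes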

With Yoast SEO Premium, you can manage redirects effortlessly right inside your WordPress dashboard. The built-in redirect manager feature of the SEO plugin helps you avoid unnecessary 404s and prevents visitors from landing on dead ends, keeping your site structure clean and your user experience smooth.

Additional canonicalization techniques

There are a few more ways to support your canonical setup.

  • XML sitemaps: Always include only canonical URLs in your sitemap. This helps search engines understand which URLs you want indexed
  • Hreflang annotations: For multi-language or multi-region sites, hreflang tags help search engines serve the correct regional version while still respecting your canonical preference
  • Link HTTP headers: For files like PDFs or other non-HTML content, using a rel="canonical" HTTP header helps you specify the preferred URL server-side

Each of these methods reinforces your canonical signals. When you use them together, search engines have a much clearer understanding of your canonicalized URLs.

Implementing canonicalization in WordPress with Yoast

Manually adding a rel="canonical" tag to the head of every duplicate page can be fiddly and error-prone. You need to edit templates or theme files, keep tags consistent with your sitemap and internal linking, and remember special cases, such as PDFs or paginated series. Modifying site code and HTML is risky when you have numerous pages or multiple editors working on the site.

Yoast SEO makes this easier and safer. The plugin automatically generates sensible canonical URL tags for all your pages and templates, eliminating the need for manual theme file edits or code additions. You can still override that choice on a page-by-page basis in the Yoast SEO sidebar: open the post or page, go to Advanced, and paste the full canonical URL in the Canonical URL field, then save.

  • Automatic coverage: Yoast automatically adds canonical tags to pages and archives by default, which helps prevent many common duplicate content issues
  • Manual override: For special cases, use the Yoast sidebar > Advanced > Canonical URL field to set a custom canonical. This accepts full URLs and updates when you save the post
  • Edge cases handled: Yoast will not output a canonical tag on pages set to noindex, and it follows best practices for paginated series and archives
  • Developer options: If you need custom behavior, you can filter the canonical output programmatically using the wpseo_canonical filter or use Yoast’s developer API
  • Cross-domain and non-HTML: Yoast supports cross-site canonicals, and you can use rel="canonical" in HTTP headers for non-HTML files when needed

Both Yoast SEO and Yoast SEO Premium include canonical URL handling, and the Premium version adds extra automation and controls to streamline larger sites.

Must read: How to change the canonical URL in Yoast SEO for WordPress

rel="canonical": one URL to rule them all

Canonical URLs may seem like a small technical detail, but they play a huge role in helping search engines understand your site. When Google finds multiple URLs displaying the same content, it must select one version to index. If you do not guide that choice, Google will make the decision on its own, and that choice is not always the version you intended. That can lead to split ranking signals, wasted crawl activity, and frustrating drops in visibility.

Using canonical URLs gives you back that control. It tells search engines which page is the primary version, which ones are duplicates, and where all authority signals should be directed. From filtering URLs to regional variants to accidental duplicates that slip through the cracks, canonicals keep everything tidy and predictable.

The good news is that canonicalization does not have to be complicated. A simple rel="canonical" tag, consistent URL handling, smart redirects, and clean sitemap signals are enough to prevent most issues. And if you are working in WordPress, Yoast SEO takes care of almost all of this automatically, so you can focus on creating content instead of wrestling with code.

At the end of the day, canonical URLs are about clarity. Show search engines the version that matters, remove the noise, and keep your authority consolidated in one place. When your signals are clear, your rankings have a solid foundation to grow.