AI chatbots can sway voters better than political advertisements

In 2024, a Democratic congressional candidate in Pennsylvania, Shamaine Daniels, used an AI chatbot named Ashley to call voters and carry on conversations with them. “Hello. My name is Ashley, and I’m an artificial intelligence volunteer for Shamaine Daniels’s run for Congress,” the calls began. Daniels didn’t ultimately win. But maybe those calls helped her cause: New research reveals that AI chatbots can shift voters’ opinions in a single conversation—and they’re surprisingly good at it. 

A multi-university team of researchers has found that chatting with a politically biased AI model was more effective than political advertisements at nudging both Democrats and Republicans to support presidential candidates of the opposing party. The chatbots swayed opinions by citing facts and evidence, but they were not always accurate—in fact, the researchers found, the most persuasive models said the most untrue things. 

The findings, detailed in a pair of studies published in the journals Nature and Science, are the latest in an emerging body of research demonstrating the persuasive power of LLMs. They raise profound questions about how generative AI could reshape elections. 

“One conversation with an LLM has a pretty meaningful effect on salient election choices,” says Gordon Pennycook, a psychologist at Cornell University who worked on the Nature study. LLMs can persuade people more effectively than political advertisements because they generate much more information in real time and strategically deploy it in conversations, he says. 

For the Nature paper, the researchers recruited more than 2,300 participants to engage in a conversation with a chatbot two months before the 2024 US presidential election. The chatbot, which was instructed to advocate for one of the two major-party candidates, was surprisingly persuasive, especially when discussing candidates’ policy platforms on issues such as the economy and health care. Donald Trump supporters who chatted with an AI model favoring Kamala Harris became slightly more inclined to support Harris, moving 3.9 points toward her on a 100-point scale. That was roughly four times the measured effect of political advertisements during the 2016 and 2020 elections. The AI model favoring Trump moved Harris supporters 2.3 points toward Trump.

In similar experiments conducted during the lead-ups to the 2025 Canadian federal election and the 2025 Polish presidential election, the team found an even larger effect. The chatbots shifted opposition voters’ attitudes by about 10 points.

Long-standing theories of politically motivated reasoning hold that partisan voters are impervious to facts and evidence that contradict their beliefs. But the researchers found that the chatbots, which used a range of models including variants of GPT and DeepSeek, were more persuasive when they were instructed to use facts and evidence than when they were told not to do so. “People are updating on the basis of the facts and information that the model is providing to them,” says Thomas Costello, a psychologist at American University, who worked on the project. 

The catch is, some of the “evidence” and “facts” the chatbots presented were untrue. Across all three countries, chatbots advocating for right-leaning candidates made a larger number of inaccurate claims than those advocating for left-leaning candidates. The underlying models are trained on vast amounts of human-written text, which means they reproduce real-world phenomena—including “political communication that comes from the right, which tends to be less accurate,” according to studies of partisan social media posts, says Costello.

In the other study published this week, in Science, an overlapping team of researchers investigated what makes these chatbots so persuasive. They deployed 19 LLMs to interact with nearly 77,000 participants from the UK on more than 700 political issues while varying factors like computational power, training techniques, and rhetorical strategies. 

The most effective way to make the models persuasive was to instruct them to pack their arguments with facts and evidence and then give them additional training by feeding them examples of persuasive conversations. In fact, the most persuasive model shifted participants who initially disagreed with a political statement 26.1 points toward agreeing. “These are really large treatment effects,” says Kobi Hackenburg, a research scientist at the UK AI Security Institute, who worked on the project. 

But optimizing persuasiveness came at the cost of truthfulness. When the models became more persuasive, they increasingly provided misleading or false information—and no one is sure why. “It could be that as the models learn to deploy more and more facts, they essentially reach to the bottom of the barrel of stuff they know, so the facts get worse-quality,” says Hackenburg.

The chatbots’ persuasive power could have profound consequences for the future of democracy, the authors note. Political campaigns that use AI chatbots could shape public opinion in ways that compromise voters’ ability to make independent political judgments.

Still, the exact contours of the impact remain to be seen. “We’re not sure what future campaigns might look like and how they might incorporate these kinds of technologies,” says Andy Guess, a political scientist at Princeton University. Competing for voters’ attention is expensive and difficult, and getting them to engage in long political conversations with chatbots might be challenging. “Is this going to be the way that people inform themselves about politics, or is this going to be more of a niche activity?” he asks.

Even if chatbots do become a bigger part of elections, it’s not clear whether they’ll do more to amplify truth or fiction. Usually, misinformation has an informational advantage in a campaign, so the emergence of electioneering AIs “might mean we’re headed for a disaster,” says Alex Coppock, a political scientist at Northwestern University. “But it’s also possible that means that now, correct information will also be scalable.”

And then the question is who will have the upper hand. “If everybody has their chatbots running around in the wild, does that mean that we’ll just persuade ourselves to a draw?” Coppock asks. But there are reasons to doubt a stalemate. Politicians’ access to the most persuasive models may not be evenly distributed. And voters across the political spectrum may have different levels of engagement with chatbots. “If supporters of one candidate or party are more tech savvy than the other,” the persuasive impacts might not balance out, says Guess.

As people turn to AI to help them navigate their lives, they may also start asking chatbots for voting advice whether campaigns prompt the interaction or not. That may be a troubling world for democracy, unless there are strong guardrails to keep the systems in check. Auditing and documenting the accuracy of LLM outputs in conversations about politics may be a first step.

The Rise of Global Tariffs, Explained

A new book from a noted economist seeks to explain the rise of global tariffs and trade barriers.

Branko Milanovic is a former lead economist at the World Bank and currently teaches at the City University of New York and the London School of Economics. He’s an authority on income inequality.

His latest book, “The Great Global Transformation: The United States, China, and the Remaking of the World Economic Order,” is forthcoming in March 2026. The U.K. edition, subtitled “National Market Liberalism in a Multipolar World” and released in June, made the Financial Times’ “Best books of 2025: Economics” list.

The book draws on the author’s original research to argue that the world’s two leading economies — the U.S. and China — have changed in opposite directions over the past 50 years, resulting in “the greatest reshuffling of incomes since the Industrial Revolution,” and that, in turn, is causing political changes that reverse the trend towards globalization.

Trade Wars

First, he demonstrates the relative changes in income between the U.S. and China, as well as Asia and the West generally. He illuminates the data with 35 charts and graphs. Then he analyzes past economic theories, showing they can’t explain whether trade promotes or reduces conflict between nations.

Cover of The Great Global Transformation

The theories, he says, failed to foresee that U.S. wage earners and investors would merge into a new labor class that combines high salaries with investment returns via equity stakes such as stock options.

Next, he provides an overview of China’s internal economic and political evolution, especially under Xi Jinping, its president. Milanovic sees China’s opening to private enterprise as different from that of the former communist regimes in Russia and Eastern Europe. He believes China will continue to stand apart on the international stage.

He believes these factors lead to increased nationalism and a retreat from globalization, resulting in trade wars, tariffs, sanctions, and border walls. In Milanovic’s view, “Economic coercion is now considered a normal part of the foreign policy toolkit.”

Despite its specialized terminology, Milanovic’s writing is clear even for lay readers. He offers many provocative ideas in just under 200 pages, but no practical business guidance.

However, the book steps back from simplistic daily news bites and places them in the context of a bigger picture. Milanovic draws parallels in political trends across countries and points out that what seems like flip-flopping or policy reversals by Xi and others may be “course corrections” towards a long-term goal. He notes that U.S. nostalgia for post-war dominance is an exception, not the norm.

Merchants who sell or source internationally may find “The Great Global Transformation” fresh and useful as they navigate growth.

YouTube Title A/B Testing Rolls Out Globally To Creators via @sejournal, @MattGSouthern

YouTube is rolling out title A/B testing globally to all creators with access to advanced features, expanding the testing capability beyond the select group that had early access.

The announcement came via the platform’s Creator Insider channel, clarifying how the feature works and addressing common questions from creators.

Title A/B testing joins thumbnail testing in YouTube’s “Test and Compare” tool. You can test up to three titles, three thumbnails, or combinations of both on a single video.

How Title A/B Testing Works

The A/B testing tool compares performance across experiment variations over a set period of up to two weeks.

After testing concludes, YouTube notifies creators of the results. If there’s a clear winner, that option becomes the default shown to all viewers. If all options performed similarly, the first combination becomes the default.

You can override the automatic selection at any time through the metadata editor or YouTube analytics page.

Why YouTube Uses Watch Time Over CTR

YouTube optimizes test results based on watch time rather than click-through rate.

In the Creator Insider video, the company explained:

“We want to ensure that your A/B test experiment gets the highest viewer engagement, so we’re optimizing for overall watch time over other metrics like CTR. We believe that this metric will best inform our creators content strategy decisions and support their chances of success.”

Understanding Test Results

YouTube tests deliver one of three possible outcomes.

“Winner” means one version outperformed others at driving watch time per impression. YouTube believes this version will lead to better performance.

“Performed the same” indicates all options earned similar shares of watch time. While small differences may appear, they aren’t statistically meaningful. You can choose whichever option you prefer.

“Inconclusive” can occur when no clear performance difference exists between options, or when the video doesn’t generate enough impressions for a reliable comparison. Higher view counts increase the likelihood of a decisive result.
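The three outcomes can be illustrated with a toy calculation. Note that the 5% lift threshold and the 2,000-impression minimum below are invented for the example; YouTube has not published its actual decision rules, only that it scores variants by watch time per impression:

```python
# Toy sketch of classifying an A/B test by watch time per impression.
# The thresholds are illustrative assumptions, not YouTube's algorithm.

def classify_test(variants, min_impressions=2000, lift_threshold=0.05):
    """variants: list of (name, total_watch_seconds, impressions)."""
    # Too few impressions for a reliable comparison -> inconclusive.
    if any(imp < min_impressions for _, _, imp in variants):
        return "inconclusive", None
    # Score each variant by watch time earned per impression.
    scored = [(name, watch / imp) for name, watch, imp in variants]
    scored.sort(key=lambda x: x[1], reverse=True)
    best, runner_up = scored[0], scored[1]
    # Declare a winner only if the lead is clearly outside noise.
    if best[1] >= runner_up[1] * (1 + lift_threshold):
        return "winner", best[0]
    return "performed the same", None  # creator may pick either option

print(classify_test([("Title A", 90000, 3000), ("Title B", 60000, 3000)]))
# 30 s/impression vs. 20 s/impression -> ('winner', 'Title A')
```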

Screenshot from: YouTube.com/CreatorInsider, December 2025.

Impression Distribution & Viewer Experience

YouTube distributes impressions as evenly as possible across test variations, though identical distribution isn’t guaranteed.

During active tests, viewers consistently see the same title-thumbnail combination across their home feed, watch page, and other YouTube surfaces. This prevents confusion from seeing different versions of the same video.

YouTube addressed a common concern about tests making previously-watched videos appear new. The company notes that watch history and the red progress bar on thumbnails remain the reliable indicators of what you’ve already watched.

Why This Matters

Title testing gives you data to inform creative decisions. Combined with thumbnail testing, you can now optimize both elements that influence whether viewers click on your videos.

The watch time metric means successful titles attract viewers who actually engage with your content, not just those who click and leave.

Looking Ahead

Title A/B testing requires access to YouTube’s advanced features, which you can enable through account verification. The feature works on long-form videos and is currently desktop-only.

YouTube first announced title A/B testing alongside thumbnail testing at Made on YouTube in September.


Featured Image: FotoField/Shutterstock

2026 Marketing Forecast for PPC Leaders [Webinar] via @sejournal, @hethr_campbell

The strategies that worked in 2025 will not carry your campaigns through the new year.

Buyer behavior is evolving, budgets demand tighter discipline, and channels like calls, text, and voice agents are becoming essential conversion paths. As the marketing landscape shifts, the question is no longer whether you should adapt but how fast.

The Strategic Shifts Every Marketer Needs To Refine By Q2

Join Emily Popson, VP of Marketing at CallRail, for a clear and data-driven look at the five marketing priorities that will shape performance in 2026 and what PPC teams must adjust now to stay competitive.

You’ll Learn How To

  • Allocate marketing and advertising budgets in ways that drive measurable revenue
  • Use your audience’s real words to build stronger ads and landing pages
  • Create campaigns that meet buyers where they are in 2026
  • Evaluate text, call, and voice channels within your optimization mix
  • Build operational confidence that supports scale into Q2

Why Attend?

This session gives you a grounded view of what top-performing marketers are doing differently and where outdated assumptions are slowing teams down.

You will gain practical frameworks, real-world examples, and data-backed insights to refine your PPC strategy and prepare for the months ahead.

Register now to secure your seat and strengthen your 2026 marketing strategy.

🛑 Cannot make it live? Register anyway and the full recording will be sent to you after the event.

Google Maps Lets Users Post Reviews With Nicknames via @sejournal, @MattGSouthern

Google Maps now lets users leave business reviews under a custom nickname instead of their real name. The feature is part of a four-feature Maps update and is rolling out globally on Android, iOS, and desktop.

Local SEO agency Whitespark was among the first to document the change in detail, describing it as one of the more notable shifts to Google’s review system in years.

What Changed

Google’s support documentation outlines the new setting. Users can enable a custom display name and picture for posting through their Maps or Google profile. Once enabled, that identity appears on reviews, photos, videos, and Q&A posts across Maps.

The feature works retroactively. If you edit your nickname later, past contributions update to show the new name.

Whitespark notes that people have long created Google accounts with aliases. This is the first time Google has offered a dedicated posting identity separate from your main account profile and documented it officially.

How It Affects Spam Detection

Google’s blog post says its existing review protections remain in place. Reviews written under a nickname are still tied to an account and its history. Businesses can still report reviews they believe violate policies.

Whitespark calls this “pseudonymous rather than truly anonymous.” The public display name differs, but Google still sees the underlying account and contribution history.

Why This Matters

Expect to see more nicknames and illustration-based profile pictures in review feeds. Whitespark highlights industries like legal, medical, and financial services where clients often hesitate to post under their real name. This could increase review volume in those categories.

If you work with businesses in privacy-sensitive categories, you may want to update review request templates to mention the nickname option.

Looking Ahead

The nickname feature is live or rolling out for most users, though some local SEOs, such as Joy Hawkins, report they don’t yet see it in their own profiles.


Featured Image: Roman Samborskyi/Shutterstock

Google Adds AI-Powered Configuration To Search Console via @sejournal, @MattGSouthern

Google is rolling out an experimental feature that lets Search Console users configure the Search results Performance report using natural language instead of manual filter selection.

The feature, called AI-powered configuration, translates plain-language requests into the appropriate filters and settings. You can describe the analysis you want to see, and the system handles the technical setup.

What’s New

The AI-powered configuration feature handles three types of report setup:

1. Filters

You can narrow data by query, page, country, device, search appearance, or date range through natural language.

A request like “Show me queries on phone searches that contain the word ‘sports’ in the last 6 months” applies the relevant filters automatically.

2. Comparisons

Complex date range comparisons that previously required manual configuration can now be set up through prompts like “Compare traffic for my pages that contain ‘/blog’ in this quarter to the same quarter last year.”

3. Metric Selection

The feature can display specific combinations of the four available metrics (Clicks, Impressions, Average CTR, and Average Position) based on your requests.

For example, you can ask, “Show me the Average CTR and Average Position of my queries in Spain in the last 28 days.”
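Conceptually, the feature maps a plain-language request onto the report’s existing filters and metrics. The sketch below shows that mapping with a crude keyword matcher; the parsing logic and field names are invented for illustration and bear no relation to Google’s implementation:

```python
# Hypothetical sketch of translating a natural-language request into
# Performance report settings. Field names and parsing rules are invented
# for illustration; Google's actual system is an LLM, not a regex matcher.
import re

METRICS = ("clicks", "impressions", "average ctr", "average position")

def configure_report(request: str) -> dict:
    text = request.lower()
    config = {"filters": {}, "metrics": []}
    # Metric selection: pick out any of the four report metrics mentioned.
    for metric in METRICS:
        if metric in text:
            config["metrics"].append(metric)
    # Device filter.
    if "phone" in text or "mobile" in text:
        config["filters"]["device"] = "MOBILE"
    # Query-contains filter, e.g. "contain the word 'sports'".
    m = re.search(r"contain the word '([^']+)'", text)
    if m:
        config["filters"]["query_contains"] = m.group(1)
    # Simple relative date range, e.g. "last 6 months" or "last 28 days".
    m = re.search(r"last (\d+) (day|month)s?", text)
    if m:
        config["filters"]["date_range"] = f"last_{m.group(1)}_{m.group(2)}s"
    return config

print(configure_report(
    "Show me queries on phone searches that contain the word 'sports' "
    "in the last 6 months"))
```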

Limitations

Google noted several limitations.

The feature works only with Performance reports for Search results; it doesn’t support Discover or News reports.

AI-powered configuration is designed only for configuring filters, comparisons, and metrics. It can’t sort tables or export data.

Google also cautioned that the AI can misinterpret requests. You should review suggested filters before analyzing data to make sure they match the query you intended.

Why This Matters

This feature could reduce the manual effort required to set up complex filter combinations in Search Console.

The ability to request custom date comparisons or multi-filter configurations through natural language removes several steps from the reporting process.

The accuracy caveat matters, especially for higher-stakes reporting. You’ll want to verify that the AI understood your request correctly before you rely on the data for decisions or client reporting.

Looking Ahead

Google is rolling out AI-powered configuration to a limited set of websites and will expand availability over time. No timeline for broader access was provided.

You can share feedback through the buttons in Search Console or by posting in the Google Search Central Community.


Featured Image: IB Photography/Shutterstock

The New Structure Of AI Era SEO via @sejournal, @DuaneForrester

People keep asking me what it takes to show up in AI answers. They ask in conference hallways, in LinkedIn messages, on calls, and during workshops. The questions always sound different, but the intent is the same. People want to know how much of their existing SEO work still applies. They want to know what they need to learn next and how to avoid falling behind. Mostly, they want clarity (hence my new book!). The ground beneath this industry feels like it moved overnight, and everyone is trying to figure out if the skills they built over the last twenty years still matter.

They do. But not in the same proportions they used to. And not for the same reasons.

When I explain how GenAI systems choose content, I see the same reaction every time. First, relief that the fundamentals still matter. Then a flicker of concern when they realize how much of the work they treated as optional is now mandatory. And finally, a mix of curiosity and discomfort when they hear about the new layer of work that simply did not exist even five years ago. That last moment is where the fear of missing out turns into motivation. The learning curve is not as steep as people imagine. The only real risk is assuming future visibility will follow yesterday’s rules.

That is why this three-layer model helps. It gives structure to a messy change. It shows what carries over, what needs more focus, and what is entirely new. And it lets you make smart choices about where to spend your time next. As always, feel free to disagree with me, or support my ideas. I’m OK with either. I’m simply trying to share what I understand, and if others believe things to be different, that’s entirely OK.

This first set contains the work every experienced SEO already knows. None of it is new. What has changed is the cost of getting it wrong. LLM systems depend heavily on clear access, clear language, and stable topical relevance. If you already focus on this work, you are in a good starting position.

You already write to match user intent. That skill transfers directly into the GenAI world. The difference is that LLMs evaluate meaning, not keywords. They ask whether a chunk of content answers the user’s intent with clarity. They no longer care about keyword coverage or clever phrasing. If your content solves the problem the user brings to the model, the system trusts it. If it drifts off topic or mixes multiple ideas in the same chunk/block, it gets bypassed.

Featured snippets prepared the industry for this. You learned to lead with the answer and support it with context. LLMs treat the opening sentences of a chunk as a kind of confidence score. If the model can see the answer in the first two or three sentences, it is far more likely to use that block. If the answer is buried under a soft introduction, you lose visibility. This is not stylistic preference. It is about risk. The model wants to minimize uncertainty. Direct answers lower that uncertainty.

This is another long-standing skill that becomes more important. If the crawler cannot fetch your content cleanly, the LLM cannot rely on it. You can write brilliant content and structure it perfectly, and none of it matters if the system cannot get to it. Clean HTML, sensible page structure, reachable URLs, and a clear robots.txt file are still foundational. Now they also affect the quality of your vector index and how often your content appears in AI answers.
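As a concrete example, a clear robots.txt makes the access policy deliberate rather than accidental. GPTBot is OpenAI’s real crawler user agent, but whether to allow AI crawlers is a policy choice, and the sitemap URL below is a placeholder:

```
# Allow general crawling, and state the policy for AI crawlers explicitly.
User-agent: *
Allow: /

# OpenAI's crawler (allow or disallow depending on your own policy)
User-agent: GPTBot
Allow: /

Sitemap: https://www.example.com/sitemap.xml
```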

Updating fast-moving topics matters more today. When a model collects information, it wants the most stable and reliable view of the topic. If your content is accurate but stale, the system will often prefer a fresher chunk from a competitor. This becomes critical in categories like regulations, pricing, health, finance, and emerging technology. When the topic moves, your updates need to move with it.

This has always been at the heart of SEO. Now it becomes even more important. LLMs look for patterns of expertise. They prefer sources that have shown depth across a subject instead of one-off coverage. When the model attempts to solve a problem, it selects blocks from sources that consistently appear authoritative on that topic. This is why thin content strategies collapse in the GenAI world. You need depth, not coverage for the sake of coverage.

This second group contains tasks that existed in old SEO but were rarely done with discipline. Teams touched them lightly but did not treat them as critical. In the GenAI era, these now carry real weight. They do more than polish content. They directly affect chunk retrieval, embedding quality, and citation rates.

Scanning used to matter because people skim pages. Now chunk boundaries matter because models retrieve blocks, not pages. The ideal block is a tight 100 to 300 words that covers one idea with no drift. If you pack multiple ideas into one block, retrieval suffers. If you create long, meandering paragraphs, the embedding loses focus. The best performing chunks are compact, structured, and clear.
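A simple chunker can make the 100-to-300-word target concrete. The merge strategy below, combining paragraphs until a block is big enough to stand alone, is a simplifying assumption; production retrieval pipelines vary widely:

```python
# Sketch: split a document into retrieval-sized blocks on paragraph
# boundaries, targeting the 100-300 word range discussed above.
# The merge strategy is an illustrative assumption, not a standard.

def chunk_paragraphs(text, min_words=100, max_words=300):
    chunks, current, count = [], [], 0
    for para in (p.strip() for p in text.split("\n\n") if p.strip()):
        words = len(para.split())
        # Adding this paragraph would overflow the block: close it first.
        if count and count + words > max_words:
            chunks.append("\n\n".join(current))
            current, count = [], 0
        current.append(para)
        count += words
        if count >= min_words:  # block is big enough to stand alone
            chunks.append("\n\n".join(current))
            current, count = [], 0
    if current:  # trailing short block
        chunks.append("\n\n".join(current))
    return chunks

doc = "\n\n".join("word " * 80 for _ in range(4))  # four 80-word paragraphs
for c in chunk_paragraphs(doc):
    print(len(c.split()), "words")  # two blocks of 160 words each
```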

This used to be a style preference. You chose how to name your product or brand and tried to stay consistent. In the GenAI era, entity clarity becomes a technical factor. Embedding models create numeric patterns based on how your entities appear in context. If your naming drifts, the embeddings drift. That reduces retrieval accuracy and lowers your chances of being used by the model. A stable naming pattern makes your content easier to match.

Teams used to sprinkle stats into content to seem authoritative. That is not enough anymore. LLMs need safe, specific facts they can quote without risk. They look for numbers, steps, definitions, and crisp explanations. When your content contains stable facts that are easy to lift, your chances of being cited go up. When your content is vague or opinion-heavy, you become less usable.

Links still matter, but the source of the mention matters more. LLMs weigh training data heavily. If your brand appears in places known for strong standards, the model builds trust around your entity. If you appear mainly on weak domains, that trust does not form. This is not classic link equity. This is reputation equity inside a model’s training memory.

Clear writing always helped search engines understand intent. In the GenAI era, it helps the model align your content with a user’s question. Clever marketing language makes embeddings less accurate. Simple, precise language improves retrieval consistency. Your goal is not to entertain the model. Your goal is to be unambiguous.

This final group contains work the industry never had to think about before. These tasks did not exist at scale. They are now some of the largest contributors to visibility. Most teams are not doing this work yet. This is the real gap between brands that appear in AI answers and brands that disappear.

The LLM does not rank pages. It ranks chunks. Every chunk competes with every other chunk on the same topic. If your chunk boundaries are weak or your block covers too many ideas, you lose. If the block is tight, relevant, and structured, your chances of being selected rise. This is the foundation of GenAI visibility. Retrieval determines everything that follows.

Your content eventually becomes vectors. Structure, clarity, and consistency shape how those vectors look. Clean paragraphs create clean embeddings. Mixed concepts create noisy embeddings. When your embeddings are noisy, they lose queries by a small margin and never appear. When your embeddings are clean, they align more often and rise in retrieval. This is invisible work, but it defines success in the GenAI world.
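The geometry of that argument can be shown with a toy example. Real embeddings come from a model and have hundreds of dimensions; the 2-D vectors below are hand-made purely to illustrate why a mixed-topic chunk scores lower against a focused query:

```python
# Toy illustration of why mixed-topic chunks retrieve worse.
# These 2-D vectors are invented; real embeddings are model-generated.
import math

def cosine(a, b):
    """Cosine similarity: how closely two vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

query_pricing = (1.0, 0.0)          # pretend axis 1 = "pricing"
chunk_pricing_only = (0.95, 0.05)   # focused chunk: almost pure pricing
chunk_mixed = (0.5, 0.5)            # chunk blending pricing with another topic

print(round(cosine(query_pricing, chunk_pricing_only), 3))  # 0.999
print(round(cosine(query_pricing, chunk_mixed), 3))         # 0.707
```

The mixed chunk loses to the focused one on every pricing query, even though it does mention pricing; that small, consistent margin is what keeps noisy embeddings out of answers.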

Simple formatting choices change what the model trusts. Headings, labels, definitions, steps, and examples act as retrieval cues. They help the system map your content to a user’s need. They also reduce risk, because predictable structure is easier to understand. When you supply clean signals, the model uses your content more often.

LLMs evaluate trust differently than Google or Bing. They look for author information, credentials, certifications, citations, provenance, and stable sourcing. They prefer content that reduces liability. If you give the model clear trust markers, it can use your content with confidence. If trust is weak or absent, your content becomes background noise.

Models need structure to interpret relationships between ideas. Numbered steps, definitions, transitions, and section boundaries improve retrieval and lower confusion. When your content follows predictable patterns, the system can use it more safely. This is especially important in advisory content, technical content, and any topic with legal or financial risk.

The shift to GenAI is not a reset. It is a reshaping. People are still searching for help, ideas, products, answers, and reassurance. They are just doing it through systems that evaluate content differently. You can stay visible in that world, but only if you stop expecting yesterday’s playbook to produce the same results. When you understand how retrieval works, how chunks are handled, and how meaning gets modeled, the fog lifts. The work becomes clear again.

Most teams are not there yet. They are still optimizing pages while AI systems are evaluating chunks. They are still thinking in keywords while models compare meaning. They are still polishing copy while the model scans for trust signals and structured clarity. When you understand all three layers, you stop guessing at what matters. You start shaping content the way the system actually reads it.

This is not busywork. It is strategic groundwork for the next decade of discovery. The brands that adapt early will gain an advantage that compounds over time. AI does not reward the loudest voice. It rewards the clearest one. If you build for that future now, your content will keep showing up in the places your customers look next.


My new book, “The Machine Layer: How to Stay Visible and Trusted in the Age of AI Search,” is now on sale at Amazon.com. It’s the guide I wish existed when I started noticing that the old playbook (rankings, traffic, click-through rates) was quietly becoming less predictive of actual business outcomes. The shift isn’t abstract. When AI systems decide which content gets retrieved, cited, and trusted, they’re also deciding which expertise stays visible and which fades into irrelevance. The book covers the technical architecture driving these decisions (tokenization, chunking, vector embeddings, retrieval-augmented generation) and translates it into frameworks you can actually use. It’s built for practitioners whose roles are evolving, executives trying to make sense of changing metrics, and anyone who’s felt that uncomfortable gap opening between what used to work and what works now.

The Machine Layer
Image Credit: Duane Forrester


This post was originally published on Duane Forrester Decodes.


Featured Image: Master1305/Shutterstock

How CMOs Should Prioritize SEO Budgets In 2026 Q1 And H1 via @sejournal, @TaylorDanRW

Search evolved quickly throughout 2025 as AI systems became a primary route for information discovery, which, in turn, reduced the consistency and predictability of traditional organic traffic for many brands.

As blue‑link visibility tightened and click‑through rates became more erratic, CMOs found themselves under growing pressure to justify marketing spend while still demonstrating momentum. This shift required marketing leaders to think more seriously about resilience across their owned channels. It is no longer viable to rely solely on rankings.

Brands need stable visibility across AI surfaces, stronger and more coherent content operations, and cleaner technical foundations that support both users and AI systems.

Q1 and H1 2026 are the periods in which these priorities need to be funded and executed.

Principles For 2026 SEO Budgeting In Q1/H1

A well‑structured SEO budget for early 2026 is built on a clear set of principles that guide both stability and experimentation.

Protect A Baseline Allocation For Core SEO

This includes technical health, site performance, information architecture, and the ongoing maintenance of content. These activities underpin every marketing channel, and cutting them introduces unnecessary risk at a time when discovery patterns are shifting.

Create A Separate Experimental Pot For AI Discovery

As AI Overviews and other generative engines influence how users encounter brands, it becomes important to ring‑fence investment for testing answer‑led content, entity development, evolving schema patterns, and AI measurement frameworks. Without a dedicated pot, these activities either stall or compete with essential work.

Invest In Measurement That Explains Real User Behavior

Because AI visibility remains immature and uneven, analytics must capture how users move through journeys, where AI systems mention the brand, and which content shapes those outcomes.

This level of insight strengthens the CMO’s ability to defend and adjust budgets later in the year.

Where To Put Money In Q1

Q1 is the moment to stabilize the foundation while preparing for new patterns in discovery. The work done here shapes the results achieved in H1.

Technical Foundations

Begin with site health. Improve performance, resolve crawl barriers, modernize internal linking, and strengthen information architecture. AI systems and LLMs rely heavily on clean and consistent signals, so a strong technical environment supports every subsequent content, GEO, and measurement initiative.

Entity‑Rich, Question‑Led Content

Users are now expressing broader and more layered questions, and AI engines reward content that defines concepts clearly, addresses common questions in detail, and builds meaningful topical depth. Invest in structured content programs aligned to real customer problems and journeys, placing emphasis on clarity, usefulness, and authority rather than chasing volume for its own sake.

Early GEO Experimentation

There is considerable overlap between SEO and LLM inclusion because both rely on strong technical foundations, consistent entity signals, and helpful content that is easy for systems to interpret. LLM discovery should be seen as an extension of SEO rather than a standalone discipline, since most of the work that strengthens SEO also strengthens LLM inclusion by improving clarity, coherence, and relevance.

Certain sectors are beginning to experience new nuances. One example is Agentic Commerce Protocol (ACP), which is influencing how AI systems understand products, evaluate them, and, in some cases, transact with them.

Whether we refer to this area as GEO, AEO, or LLMO, the principle is the same: brands are now optimizing for multiple platforms and an expanding set of discovery engines, each with its own interpretation of signals.

Q1 is the right time to assess how your brand appears across these systems. Review answer hubs, evaluate your entity relationships, and examine how structured signals are interpreted. This initial experimentation will inform where budget should be expanded in H1.

H1 View: Scaling What Works

H1 is when early insights from Q1 begin to mature into scalable programs.

Rolling Winning Experiments Into BAU

When early LLM discovery or structured content initiatives show clear signs of traction, they should be incorporated into business‑as‑usual SEO. Formalizing these practices allows them to grow consistently without requiring new budget conversations every quarter.

Cutting Low‑ROI Tools And Reinvesting In People And Process

Many organizations overspend on tools that fail to deliver meaningful value.

H1 provides the opportunity to review tool usage, identify duplication, and retire underused platforms. Redirecting that spend toward people, content quality, and operational improvements generally produces far stronger outcomes. The AI race that nearly every tool provider has entered will begin to quiet down, and the tools that drive clear value will emerge from the noise.

Adjusting Budget Mix As Data Emerges

By the latter part of H1, the business should have clearer evidence of where visibility is shifting and which activities genuinely influence discovery and engagement. Budgets should then be adjusted to support what is working, maintain core SEO activity, expand successful content areas, and reduce investment in experiments that have not produced results.

CMO Questions Before Sign‑Off

As CMOs review their SEO budgets for 2026, the final stage of sign‑off should be shaped by a balanced view of both offensive and defensive tactics, ensuring the organization invests in movement as well as momentum.

Defensive tactics protect what the brand has already earned: stability in rankings, continuity of technical performance, dependable content structures, and the preservation of existing visibility across both search and AI‑driven experiences.

Offensive tactics, on the other hand, are designed to create new points of visibility, unlock new categories of demand, and strengthen the brand’s presence across emerging discovery engines.

A balanced budget needs to fund both, because without defense the brand becomes fragile, and without offense it becomes invisible.

Movement refers to the activities that help the brand adapt to evolving discovery environments. These include early LLM discovery experiments, entity expansion, and the modernization of content formats.

Momentum represents the compounding effect of sustained investment in core SEO and consistent optimization across key journeys.

CMOs should judge budgets by their ability to generate both: movement that positions the brand for the future, and momentum that sustains growth.

With that in mind, CMOs may wish to ask the following questions before approving any budget:

  • To what extent does this budget balance defensive activity, such as technical stability and content maintenance, with offensive initiatives that expand future visibility?
  • How clearly does the plan demonstrate where movement will come from in early 2026, and how momentum will be protected and strengthened throughout H1?
  • Which elements of the program directly enhance the brand’s presence across AI surfaces, GEO, and other emerging discovery engines?
  • How effectively does the proposed content strategy support both immediate user needs and longer‑term category growth?
  • How will we track changes in brand visibility across multiple platforms, including traditional search, AI‑driven answers, and sector‑specific discovery systems?
  • What roles do teams, processes, and first‑party data play in sustaining movement and momentum, and are they funded appropriately?
  • What reporting improvements will allow the leadership team to judge the success of both defensive and offensive investments by the end of H1?

Featured Image: N Universe/Shutterstock

What is link building in SEO?

Link building is the practice of earning links from other websites to your own. These links act as signals of trust and authority for search engines, helping your pages rank higher in search results. Quality matters more than quantity. A few relevant, high-authority links are far more valuable than many low-quality ones. Modern link building focuses on creating genuinely useful content, building genuine relationships, and earning links naturally, rather than manipulating rankings.

Key takeaways

  • Link building helps establish content credibility through acquiring backlinks from other websites.
  • It focuses on quality over quantity, emphasizing trust and relevance in search engine rankings.
  • Effective link building involves engaging with digital PR and fostering genuine relationships with sources.
  • Producing valuable content and fostering connections leads to high-quality links and improved online visibility.
  • Today, AI-driven search evaluates authority based on context, relevance, and structured data, not just backlinks.

Link building means earning hyperlinks from other sites to show search engines your content is trustworthy and valuable. Now, it’s more like digital PR, focusing on relationships, credibility, and reputation, not just quantity. AI-powered search also considers citations, structured data, and context alongside backlinks. By prioritizing quality, precision, and authority, you build lasting online visibility. Ethical link building remains one of the most effective ways to enhance your brand’s search presence and reputation.

Link building is a core SEO tactic. It helps search engines find, understand, and rank your pages. Even great content may stay hidden if search engines can’t reach it through at least one link.

Links from other sites help Google discover and index your pages. The more relevant and trusted those links are, the stronger your reputation becomes. This guide covers the basics of link building, its connection to digital PR, and how AI-driven search evaluates trust and authority.

If you are new to SEO, check out our Beginner’s guide to SEO for a complete overview.

A link, or hyperlink, connects one page on the internet to another. It helps users and search engines move between pages.

For readers, links make it easy to explore related topics. For search engines, links act like roads, guiding crawlers to discover and index new content. Without inbound links, a website can be challenging for search engines to discover or assess.

You can learn more about how search engines navigate websites in our article on site structure and SEO.

A link in HTML

In HTML, a link looks like this:

<a href="https://yoast.com/wordpress/plugins/seo/">Yoast SEO plugin for WordPress</a>

The first part contains the URL, and the second part is the clickable text, called the anchor text. Both parts matter for SEO and user experience, as they inform both people and search engines about what to expect when they click.

Internal and external links

There are two main types of links that affect SEO. Internal links connect pages within your own website, while external links come from other websites and point to your pages. External links are often called backlinks.

Both types of links matter, but external links carry more authority because they act as endorsements from independent sources. Internal linking, however, plays a crucial role in helping search engines understand how your content fits together and which pages are most important.

To learn more about structuring your site effectively, refer to our guide on internal linking for SEO.

Anchor text

The anchor text describes the linked page. Clear, descriptive anchor text helps users understand where a link will direct them and provides search engines with more context about the topic.

For example, “SEO copywriting guide” is much more useful and meaningful than “click here.” The right anchor text improves usability, accessibility, and search relevance. You can optimize your own internal linking by using logical, topic-based anchors.
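As a plain-HTML illustration (the path in these links is a made-up placeholder), the contrast looks like this:

```html
<!-- Descriptive anchor text: users and crawlers know what to expect -->
<a href="/seo-copywriting-guide/">SEO copywriting guide</a>

<!-- Vague anchor text: no context about the destination -->
<a href="/seo-copywriting-guide/">click here</a>
```

Both links point to the same page; only the first one tells readers and search engines what that page is about.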

For more examples, read our anchor text best practices guide.

Link building is the process of earning backlinks from other websites. These links serve as a vote of confidence, signaling to search engines that your content is valuable and trustworthy.

Search engines like Google still use backlinks as a key ranking signal; however, the focus has shifted away from quantity to quality and context. A single link from an authoritative, relevant site can be worth far more than dozens from unrelated or low-quality sources.

Effective link building is about establishing genuine connections, rather than accumulating as many links as possible. When people share your content because they find it useful, you gain visibility, credibility, and referral traffic. These benefits reinforce one another, helping your brand stand out in both traditional search and AI-driven environments, where authority and reputation are most crucial.

Link quality over quantity

Not all links are created equal. A high-quality backlink from a well-respected, topic-relevant website has far more impact than multiple links from small or unrelated sites.

Consider a restaurant owner who earns a link from The Guardian’s food section. That single editorial mention is far more valuable than a dozen random directory links. Google recognizes that editorial links earned for merit are strong signals of expertise, while low-effort links from unrelated pages carry little or no value.

High-quality backlinks typically originate from websites with established reputations, clear editorial guidelines, and active audiences. They fit naturally within the content and make sense to readers. Low-quality links, on the other hand, can make your site appear manipulative or untrustworthy. Building authority takes time, but the reward is a reputation that search engines and users can rely on.

Read more about this long-term approach in our post on holistic SEO.

Shady techniques

Because earning high-quality links can take time, some site owners resort to shortcuts, such as buying backlinks, using link farms, or participating in private blog networks. These tactics may yield quick results, but they violate Google’s spam policies and can result in severe penalties.

When a site’s link profile looks unnatural or manipulative, Google may reduce its visibility or remove it from results altogether. Recovering from such penalties can take months. It is far safer to focus on ethical, transparent methods. In short, you’re better off avoiding these risky link building tricks, as quality always lasts longer than trickery.

The most effective way to earn strong backlinks is to create content that others genuinely want to reference and link to. Start by understanding your audience and their challenges. Once you know what they are looking for, create content that provides clear answers, unique insights, or helpful tools.

For example, publishing original data or research can attract links from journalists and educators. Creating detailed how-to guides or case studies can help establish connections with blogs and businesses that want to cite your expertise. You can also build relationships with people in your industry by commenting on their content, sharing their work, and offering collaboration ideas.

Newsworthy content is another proven approach. Announce a product launch, partnership, or study that has real value for your audience. When you provide something genuinely useful, you will find that links and citations follow naturally.

Structured data also plays an important role. By using Schema markup, you help search engines understand your brand, authors, and topics, making it easier for them to connect mentions of your business across the web.
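As a sketch of what such markup can look like (the organization name and URLs below are invented placeholders, and which Schema.org properties you use depends on your content), a JSON-LD Organization snippet placed in a page’s HTML might be:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Co",
  "url": "https://www.example.com/",
  "logo": "https://www.example.com/logo.png",
  "sameAs": [
    "https://www.linkedin.com/company/example-co",
    "https://www.youtube.com/@exampleco"
  ]
}
</script>
```

The `sameAs` entries help search and AI systems connect scattered mentions of your brand across the web to a single entity.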

For a more detailed approach, visit our step-by-step guide to link building.

Search is evolving quickly. Systems like Google Gemini, ChatGPT, and Perplexity no longer rely solely on backlinks to determine authority. They analyze the meaning and connections behind content, paying attention to context, reputation, and consistency.

Links still matter, but they are part of a wider ecosystem of trust signals. Mentions, structured data, and author profiles all contribute to how search and AI systems understand your expertise. This means that link building is now about being both findable and credible.

To stay ahead, make sure your brand and authors are clearly represented across your site. Use structured data to connect your organization, people, and content. Keep your messaging consistent across all channels where your brand appears. When machines and humans can both understand who you are and what you offer, your chances of visibility increase.

You can read more about how structured data supports this process in our guide to Schema and structured data.

There are many ways to put link building into action. A company might publish a research study that earns coverage from major industry blogs and online magazines. A small business might collaborate with local influencers or community organizations that naturally reference its website, thereby increasing its online presence. Another might produce in-depth educational content that other professionals use as a trusted resource.

Each of these examples shares the same principle: links are earned because the content has genuine value. That is the foundation of successful link building. When people trust what you create and see it as worth sharing, search engines take notice, too.

In conclusion

Link building remains one of the most effective ways to establish visibility and authority. Today, success depends on more than collecting backlinks. It depends on trust, consistency, and reputation.

Consider link building as an integral part of your digital PR strategy. Focus on creating content that deserves attention, build relationships with credible sources, and communicate your expertise clearly and effectively. The combination of valuable content, ethical outreach, and structured data will help you stand out across both Google Search and AI-driven platforms.

When you build content for people first, the right links will follow.

5 Reasons To Use The Internet Archive’s New WordPress Plugin via @sejournal, @martinibuster

The Internet Archive, also known as the Wayback Machine, is generally regarded as a place to view old web pages, but its value goes far beyond that. There are five ways Archive.org can help a website improve its user experience and SEO, and the Wayback Machine’s new WordPress plugin makes it easy to benefit from the Internet Archive automatically.

1. Copyright, DMCA, And Business Disputes

The Internet Archive can serve as an independent timestamped record to prove ownership of content or to defend against false claims that someone else wrote the content first. The Internet Archive is an independent non-profit organization and there is no way to fake an entry, which makes it an excellent way to prove who was first to publish disputed content.

2. The Worst Case Scenario Backup

Losing an entire website to hardware failure, ransomware, a vulnerability, or even a datacenter fire is always within the realm of possibility. While it’s best practice to keep an up-to-date backup stored off the server, unforeseen mistakes can happen.

The Internet Archive does not offer a convenient way to download a full website’s content, but there are services that facilitate it. Using these services to pull previous content from expired domains and bring it back to the web was once a popular spammer technique. I haven’t used any of these services and can’t vouch for them, but a search will turn them up.

3. Fix Broken Links

Sometimes a URL gets lost in a website redesign, or a page is purposely removed and you find out later that it is popular and people are linking to it. What do you do?

Something like this happened to me: I changed domains and decided I didn’t need certain pages. A few years later, I discovered that people were still linking to those pages because they were still useful. The Internet Archive made it easy to reproduce the old content on the new domain, recovering PageRank that would otherwise have been lost.

Archived copies make it possible to revive old pages on the current website. But you can’t do that unless the page was archived, and the new plugin makes sure that happens for every web page.

4. Can Indicate Trustworthiness

This isn’t about search algorithms or LLMs. This is about trust with other sites and site visitors. Spammy sites tend to not be around very long. A documented history on Archive.org can be a form of proof that a site has been around for a long time. A legitimate business can point to X years of archived pages to prove that they are an established business.

5. Identify Link Rot

The Internet Archive Wayback Machine Link Fixer plugin provides an easy way to archive your web pages at Archive.org. When you publish a new page or update an older page the Wayback Machine WordPress plugin will automatically create a new archive page.

But one of the useful features of the plugin is that it automatically scans all outbound links and tests them to see if the linked pages still exist. The plugin can automatically update the link to a saved page at the Internet Archive.
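The plugin’s internals aren’t shown here, but the kind of check it describes can be sketched against the Wayback Machine’s public Availability API (`https://archive.org/wayback/available?url=…`). The helper names below are mine, not the plugin’s, and the sample payload mirrors the JSON shape that API returns:

```python
import json
from urllib.request import urlopen

AVAILABILITY_API = "https://archive.org/wayback/available?url={url}"

def closest_snapshot(payload):
    """Return the closest archived URL from an Availability API payload, or None."""
    snap = payload.get("archived_snapshots", {}).get("closest")
    if snap and snap.get("available"):
        return snap["url"]
    return None

def check_link(url):
    """Look up a URL's closest snapshot (live network call; sketch only)."""
    with urlopen(AVAILABILITY_API.format(url=url)) as resp:
        return closest_snapshot(json.load(resp))

# A payload in the shape the Availability API returns:
sample = {
    "url": "example.com",
    "archived_snapshots": {
        "closest": {
            "available": True,
            "url": "http://web.archive.org/web/20240101000000/http://example.com/",
            "timestamp": "20240101000000",
            "status": "200",
        }
    },
}

print(closest_snapshot(sample))
print(closest_snapshot({"archived_snapshots": {}}))  # no archive found -> None
```

A link-checking tool would run something like `check_link` for each outbound URL and, when the live page is gone, swap in the archived version.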

The official plugin lists these features and benefits:

  • “Automatically scans for outbound links in post content
  • Checks the Wayback Machine for existing archives
  • Creates new snapshots if no archive exists
  • Redirects broken or missing links to archived versions
  • Archives your own posts on updates
  • Works on both new and existing content
  • Helps maintain long-term content reliability and SEO”

I don’t know what they mean by maintaining SEO, but one benefit they don’t mention is that it keeps users happy, and that’s always a plus.

Wayback Machine Is Useful For Competitor Analysis

The Internet Archive makes it so easy to see how a competitor has changed over the years. It’s also a way to catch competitors who are copying or taking “inspiration” from your content when they do their annual content refresh.

The Wayback Machine can let you see what services or products a competitor offered and how they were offered. It can also give a peek into what changed during a redesign which tells something about what their competitive priorities are.

Takeaways

  • The Internet Archive provides practical benefits for website owners beyond simply viewing old pages.
  • Archived snapshots help address business disputes, lost content, broken links, and long-term site credibility.
  • Competitor history and past site versions become easy to evaluate through Archive.org.
  • The Wayback Machine WordPress plugin automates archiving and helps manage link rot.
  • Using the Archive proactively can improve user experience and support SEO-adjacent needs, even if indirectly.

The five reasons in this article show that the Internet Archive is useful for SEO, competitor research, improving the user experience, and maintaining trust. The Internet Archive’s new WordPress plugin makes archiving and link-checking easy because it’s completely automatic. Taken together, these strengths make the Archive a useful part of keeping a website reliable, recoverable, and easier for people to use.

The Internet Archive Wayback Machine Link Fixer is a project created by Automattic and the Internet Archive, which means it’s a high-quality, trusted plugin for WordPress.

Download The Internet Archive WordPress Plugin

Check it out at the official WordPress plugin repository: Internet Archive Wayback Machine Link Fixer By Internet Archive

Featured Image by Shutterstock/Red rose 99