Report Links Original Research to Higher B2B ROI via @sejournal, @MattGSouthern

TopRank Marketing and Ascend2 surveyed 797 B2B leaders and found that 97% consider thought leadership essential to success across the entire marketing funnel.

The report frames the findings as building an “Answer Engine” for how buyers discover information across SEO and GenAI answer platforms (ChatGPT, Perplexity, AI search).

The authors analyzed differences between high-ROI and lower-performing marketers. Here’s what stands out.

What The Research Says

Respondents report strong results from original, data-driven content. 93% of teams that use original research say it effectively drives engagement and leads, and 48% call it “very effective.”

When asked to compare formats, 35% rated original research significantly more valuable than AI-generated content for building trust, and another 32% said it’s more impactful overall.

The study positions trusted experts and partners as part of a broader “trust system” that validates research-based content. The takeaway is quality over quantity. Partnerships work best when they add credibility and insight, not just reach.

Formats & Distribution

Marketers point to video, live or virtual events, and interactive experiences as the most effective vehicles for thought leadership.

Topic selection is guided primarily by customer signals. Direct customer feedback leads at 53%, followed by CRM/customer data and market-trend analysis at 44% each. Seasonal moments and industry events also influence planning.

High-performing teams run integrated multi-channel programs that weave SEO, advertising, experts and partners, media, email, and social into a cohesive plan.

Barriers To Success

One persistent barrier is channel concentration. About one-third cite over-reliance on a few channels or tactics as a top reason programs underperform.

Measurement is a key friction point: 41% cite difficulty proving ROI as a cause of underperforming content. High-performing programs use full-funnel analytics linking brand metrics to demand and revenue.

Why This Matters

The survey data questions two common strategies: relying on AI-generated content to foster trust and viewing SEO solely as a top-of-funnel tactic. Original research tends to be more trusted, which aligns with longer B2B sales cycles.

Additionally, successful programs link SEO to multi-channel activation and pipeline development. If your analytics don’t connect search performance with closed-won deals, that disconnect likely accounts for inconsistent ROI.

Looking Ahead

For 2026, center your plan on original research and treat search and GenAI answer platforms as connected discovery surfaces. Pair the research with credible experts, then extend it through video, events, and interactive pieces where it fits.


Featured Image: Roman Samborskyi/Shutterstock

Budget SEO For Capacity, Not Output via @sejournal, @Kevin_Indig

Marketing leaders are still budgeting to grow clicks in 2026, even though AI Overviews cut organic traffic in half and AI Mode kills it almost entirely.

Image Credit: Kevin Indig

Meanwhile, close to 60% of those who responded to my recent poll report their stakeholders don’t understand the value of brand mentions in LLMs.

The SEO budget conversation has to move from “Why isn’t SEO driving more clicks?/What can we do to drive more traffic?” to “What capabilities do we need to build authority in new discovery channels?”

In 2026, the best marketing teams will stop measuring SEO success by clicks and start treating it as what it really is: a capacity and influence system.

1. Traffic-Based ROI Is A Decayed Model

Marketing budgets, on average, rose modestly in the last 12 months. Overall, marketing budgets are up 3.31%. And digital marketing spending specifically is up 7.25%.

SEO gets less than 10% of the marketing budget despite being one of the most efficient channels.

Image Credit: Kevin Indig

And for years, marketers invested this sliver of SEO budget like paid media – spend more, get more clicks. It’s time to let this go. There’s discomfort here, of course: We’re losing a significant leading indicator with traffic stagnation. In theory, SEO now appears to take “longer” to show results.

As Google dials up AI in the search results, organic clicks are destined to shrink. AI surfaces decouple visibility from clicks: your brand can appear in every AI response and get zero measurable traffic. In Semrush’s AI Mode study, 92-94% of AI Mode sessions produced no external clicks. (But that doesn’t mean people buy less. The opposite could be true.) Slowed growth in clicks is not a performance issue of an SEO team; it’s a system feature, and it’s the future of search. Platforms want users to stay within their ecosystems.

The implication: Traffic no longer equals demand. Brand visibility happens upstream, inside AI responses, UGC threads, and recommendation loops that often don’t show up in your analytics.

Image Credit: Kevin Indig

2. SEO Budgets Are Capacity Allocation, Not Spend-To-Output Trading

With paid ads, you’re buying impressions. Double your spend, you roughly double your impressions (with diminishing returns). There’s a direct, measurable relationship.

But most SEO costs are fixed: salaries, tool subscriptions, infrastructure. You pay for capacity regardless of whether your team delivers a 10% or 50% lift.

65% of those surveyed by Search Engine Journal don’t expect a reduction in SEO budget for 2026.

When deciding on next year’s budget, “What ROI do we expect from this spend?” is the outdated question. Instead, answer this one: “What capabilities do we need to earn visibility?”

The variable isn’t spend; it’s prioritization and execution quality:

  • Paid media is transactional: Spend → user impression → user click.
  • SEO is compounding: Optimization → brand visibility → user impressions → brand influence.

Your SEO dollars don’t buy results. They buy the ability to earn trust and surface in the right systems.

3. Design Your SEO Budget Around Influence, Not Output In 2026

Your budget planning must be scenario-based, not traffic-forecasted.

Because your SEO costs are mostly fixed, you can model it out: “If we allocate 40% of capacity to digital PR, 30% to technical SEO, 20% to content operations, and 10% to foundational research, what visibility outcomes can we reasonably expect?”
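That scenario math is simple enough to model directly. Here is a minimal sketch, where the cost base and the allocation split are hypothetical illustrations, not figures from the article:

```python
# Illustrative sketch of scenario-based SEO capacity planning.
# The cost figure and allocation shares are hypothetical examples.

FIXED_ANNUAL_COST = 450_000  # e.g., salaries + tools + infrastructure

# Capacity split mirroring the example in the text.
allocation = {
    "digital_pr": 0.40,
    "technical_seo": 0.30,
    "content_operations": 0.20,
    "foundational_research": 0.10,
}

def capacity_budget(total_cost: float, split: dict) -> dict:
    """Translate a fixed cost base into dollars of capacity per priority."""
    assert abs(sum(split.values()) - 1.0) < 1e-9, "shares must sum to 100%"
    return {area: round(total_cost * share, 2) for area, share in split.items()}

print(capacity_budget(FIXED_ANNUAL_COST, allocation))
```

The point of modeling it this way is that the total is fixed; only the split between priorities moves, which is exactly the capacity-allocation framing the article argues for.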

Allocate resources by priority, not by historical traffic performance. Strategize your resources for the zero-click world ahead:

  1. Digital PR: Third-party signals drive 85% of brand visibility in LLMs. Digital PR and high-quality, topically related backlink investment are crucial. The biggest gains come from pushing link quality and authority toward the upper bound, rather than from sheer volume.
  2. Technical SEO + UX: Get the foundation right. Agents need to review your site and make recommendations or decisions quickly.
  3. Audience + first-party data research: Users are making decisions about brands within the AI Mode outputs – know your audience and which search surfaces they use. Data from one study showed 71% of companies that exceeded revenue goals had documented personas.
  4. Content operations + re-optimizations: Content recency is non-negotiable, and LLMs prefer it. Some evidence shows refreshing every ~90 days could be a competitive edge.
  5. Additive content rich with information gain: Evergreen content is less valuable. Additive content that provides net-new takes, insights, and conversations is rewarded.
  6. Engineering + design support for interactive tools: Once the validation click is earned, you must provide value that’s worth on-page engagement.
  7. Video and custom graphics: Organic low-fi video content and custom graphics are earning highly visible mid-output placement in AIOs. Don’t let restricted resources stop you from investing in this visibility lever.

Your brand’s prioritization could vary based on audience, goals, and – of course – capacity.



Featured Image: Paulo Bobita/Search Engine Journal

Ask An SEO: Do I Need To Rethink My Content Strategy For LLMs? via @sejournal, @MordyOberstein

For this week’s Ask An SEO, the question asked was:

“Do I need to rethink my content strategy for LLMs and how do I get started with that?”

To answer, I’m going to explain the non-linear path down the customer journey funnel and where large language models (LLMs) show up.

From rethinking traffic expectations to conducting an audit on sentiment picked up by LLMs, I will talk about why brand identity matters in building the kind of reputation that both users and machines recognize as authoritative.

You can watch this week’s Ask An SEO video and read the full transcript below.

Editor’s note: The following transcript has been edited for clarity, brevity, and adherence to our editorial guidelines.

Don’t Rush Into Overhauling Your Strategy

Off the bat, I strongly advise not to rush into this. I know there’s an extreme amount of noise and buzz and advice out there on social media that you need to rethink your strategy because of LLMs, but this thing is very, very far from settled.

For example, or most notably, AI Mode is still not in traditional search results. When that happens, when Google moves the AI Mode tab from being a tab into the main search results, the whole ecosystem is set for another upheaval, whatever that looks like, because we don’t actually know what that will look like.

I personally think that Google’s Gemini demo (the one they did way, way back, where they showed customized results for certain types of queries with certain answer formats) might be what AI Mode ends up resembling more than what it does right now, which is purely a text-based output that sort of aligns with ChatGPT.

I think Google will differentiate those two products once it moves AI Mode over from the tab into the main search results. So, things are not settled yet. And if you think they are, they are not.

Rethinking Traffic Expectations From LLMs

The other thing I want you to rethink is the traffic expectations from LLMs.

There’s been a lot of talk about citations and traffic – citations and traffic, citations and traffic. I don’t think citations, and therefore traffic, are the main prize within the LLM ecosystem. I believe mentions are. And I don’t think that’s anything really new, by the way.

Traditionally, the funnel has been messy, and Google’s been talking about that for a long time. Now, you have an LLM that may be a starting point or a step in that messy funnel, but I don’t believe it’s fundamentally different.

I’ll give you an example. If I’m looking for a pair of shoes, I might go to Google and search, [Are these Nike shoes any good?]. I might look at a website, then go to Amazon and look at the actual product.

Then I might go to YouTube, see a review of the product, maybe watch a different one, go back to Amazon, have a look, check Google Shopping to see if it’s cheaper there, and then head back to Amazon to buy it.

Now, you have an LLM thrown into the mix, and that’s really the main difference. Maybe now, the LLM gives me the answer. Or maybe Google gives me the answer. Then I go to Amazon, look at the product, go to Google Shopping to see if it’s cheaper, watch a YouTube review, maybe switch things up a bit, go back to ChatGPT, see if it recommends something different this time, go through the whole process, and eventually buy on Amazon. That’s just me, personally.

It’s important to realize that the paradigm has been around for a while. But if you’re thinking of LLMs as a source of traffic, I highly recommend you don’t. They are not necessarily built for that.

ChatGPT, specifically, is not built for citations or to offer traffic. It’s built to provide answers and to be interactive. You’ll notice you usually don’t get a citation in ChatGPT until the third, fourth, or fifth prompt, whatever it is.

Other LLMs, like AI Mode or Perplexity, are a little bit more citation or link-based, but still, their main commodity is the output, giving you the answer and the ability to explore further.

So, I’m a big believer that the brand mention is far more important than the actual citation, per se. Also, the citation might just be the source of information. If I’m asking, “Are Nike shoes good?” I might get a review from a third-party website, say, the CNET of shoes, and even if I click there, that’s not where I’m going to buy the actual shoe.

So, the traffic in that case isn’t even the desirable outcome for the brand. You want users to end up where they can buy the shoe, not just read a review of it.

The Importance Of Synergy And Context With Content

The next thing is the importance of synergy and context with your content. To be successful with LLMs, it’s not (as I’ve heard people claim) that the top citations are simply the ones that already do well on Google. Not necessarily.

There might be a correlation, but not causation. LLMs are trying to do something different than search engines. They’re trying to synthesize the web to serve as a proxy for the entire web. So, what happens with your content across the web matters way more: How your content is talked about, where it’s talked about, who’s talking about it, and how often it’s mentioned.

That doesn’t mean what’s on your site doesn’t factor in, but it’s weighted differently than with traditional search engines. You need to give the LLM the brand context to realize that you have a digital presence in this area, that you’re someone worth mentioning or citing.

Again, I’d focus more on mentions. That’s not to say citations aren’t important (they are), but mentions tend to carry more weight in this context.

Conducting An Audit

The way to go about this, in my opinion, is to conduct an audit. You need to see how the LLM is talking about the topic.

LLMs are notoriously positive and tend to loop in tiny bits of negative sentiment within otherwise positive answers. I was looking at a recent dataset. I don’t have the formal numbers, but I can tell you they’re built to lean neutral or net positive.

For example, if I ask, “Are the Dodgers good?” the LLM (in this case, I was looking at AI Mode) will say, “Yes, the Dodgers are good…” and go on about that. If I ask, “Are the Yankees good?” and let’s say two or three weeks ago they weren’t doing well, it won’t say, “Yes, the Yankees are good.” It’ll say, “Well, if you look at this and you look at that, overall you might say the Yankees are good.”

Those are two very different answers. They’re both trying to be positive, but you have to read between the lines to understand how the LLM is actually perceiving the brand and what possible user hesitancies or skepticism are bound up in that. Or where are the gaps?

For instance, if I ask, “Is Gatorade a great drink?” and it answers one way, and then I ask, “Is Powerade a good drink?” and it answers slightly differently, you have to notice why that’s happening. Why does it say, “Gatorade is great,” but “Powerade is loved by many”? You have to dig in and understand the difference.

Running an audit helps you see how the LLM is treating your brand and your market. Is it consistently bringing up the same user points of skepticism or hesitation? If I ask, “What’s a good alternative to Folgers coffee?” AI Mode might say, “If you’re looking for a low-cost coffee, Folgers is an option. But if you want something that tastes better at a similar price, consider Brand X.”

That tells you something: There’s a negative sentiment around Folgers and its taste. That’s something you should be picking up on for your content and brand strategy. The only way to know that is to conduct an audit, read between the lines, and understand what the LLM is saying.
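One lightweight way to start such an audit is to collect the answers an LLM gives to your prompts and flag hedged phrasing. The sketch below works over hand-recorded answer strings rather than live API calls, and both the sample answers and the hedging-phrase list are illustrative assumptions, not a published methodology:

```python
# Toy sentiment-audit sketch over saved LLM answers.
# In practice you would collect real outputs from AI Mode, ChatGPT, etc.

# Illustrative hedging phrases that often signal qualified praise.
HEDGES = ("might", "overall", "some say", "loved by many", "if you look")

answers = {
    "Is Gatorade a great drink?": "Gatorade is great for rehydration.",
    "Is Powerade a good drink?": "Powerade is loved by many, though some say it is sweeter.",
}

def hedge_score(text: str) -> int:
    """Count hedging phrases -- a rough proxy for qualified praise."""
    lowered = text.lower()
    return sum(lowered.count(h) for h in HEDGES)

for prompt, answer in answers.items():
    label = "qualified" if hedge_score(answer) else "unqualified"
    print(f"{prompt} -> {label} praise (hedges: {hedge_score(answer)})")
```

A phrase counter like this won’t replace reading between the lines, but it can surface which prompts deserve a closer manual look across a large prompt set.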

Shaping What LLMs Say About Your Brand

The way to get LLMs to say what you want about your brand is to start with a conscious point of view: What do you want LLMs to say about your brand? Which really comes down to: what do you want people to say about your brand?

And the only way to do that is to have a very strong, focused, and conscious brand identity. Who are you? What are you trying to do? Why is that meaningful? Who are you doing it for? And who is interested in you because of it?

Your brand identity is what gives your brand focus. It gives your content marketing focus, your SEO strategy focus, your audience targeting focus, and your everything focus.

Once you know this is who you are, and that is not, you’re not going to write content that’s misaligned with who you are and what you’re trying to do. You’re not going to dilute your brand identity by creating content that’s tangential or inconsistent.

If you want third-party sites and people around the web to pick up who you are and what you’re about, to build that presence, you need a very conscious and meaningful understanding of who you are and what you do.

That way, you know where to focus, where not to, what content to create, what not to, and how to reinforce the idea around the web that you are X and relevant for X.

It sounds simple, but developing all of that, making sure it’s aligned, and auditing all the way through to ensure it’s actually happening … that’s easier said than done.

Final Thoughts

LLMs may shift how your customers find information about your brand, but chasing citations and clicks isn’t a solid strategy.

Despite the chaos in AI and search in the age of LLMs, marketers need to stick to the fundamentals: brand identity, trust, and relevance still matter.

Focus on brand identity to build your reputation, ensuring that both users and search engines recognize your brand as an authority in your niche.


Featured Image: Paulo Bobita/Search Engine Journal

Google Is Not Diminishing The Use Of Structured Data In 2026 via @sejournal, @martinibuster

A recent announcement on the Google Search Central blog gave a Redditor the impression that Google was significantly reducing the use of structured data, causing them to ask if it’s worthwhile to use it anymore.

The person on Reddit posted:

“Google just posted a new update — they’re removing support for some structured data types starting in January 2026. Dataset already works only in Dataset Search, and rich results are getting more selective.

So… is schema still worth it? Or are we moving past it entirely?”

Matt Southern covered the blog post (Google Deprecates Practice Problem Structured Data In Search), focusing on the specific structured data Google was deprecating. Google’s blog post, authored by John Mueller, could, if read quickly, be interpreted as more alarming than it was intended to be.

Google’s announcement explained:

“We’re constantly working to simplify the search results page, so that it’s quick and easy to find the information and websites you’re looking for. As part of this effort, we regularly evaluate all of our existing features to make sure they’re still useful, both for people searching on Google and for website owners.

Through this process, we’ve identified some features that aren’t being used very often and aren’t adding significant value to users. In these cases, we’ve found that other advancements on the search results page are able to get people what they’re looking for more seamlessly. So we’re beginning to phase these lesser-used features out.

For most searches, you likely won’t notice a major difference — most of these features didn’t trigger often and weren’t interacted with much by users. But overall, this update will simplify the page and improve the speed of search results.”

The announcement ends with the following sentence:

“Starting in January 2026, we’ll remove support for the structured data types in Search Console and its API.”

Google’s Search Features Are Always Changing

Someone responded to the initial post to reassure them that Google’s search features, and the structured data that triggers them, are always changing. That’s true. Google Search has consistently been in a state of change, and never more visibly on the front end than it is today with AI search.

Google’s John Mueller weighed in, affirming the commenter’s point that markup types (which include Schema.org structured data) are always changing.

He responded:

“Exactly. Understand that markup types come and go, but a precious few you should hold on to (like title, and meta robots).”

Structured Data Curation Is Automatic

Keeping up with Schema.org structured data is easy with any modern content management system, through plugins or native functionality, because these tools track Google’s structured data guidance. So, in general, it’s not something a publisher or SEO needs to think about. Publishers on WordPress just need to keep their plugins updated.
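For context, this is the kind of markup those plugins generate automatically. A minimal sketch of a schema.org Article JSON-LD block, with placeholder field values chosen for illustration:

```python
# Minimal sketch: emitting a schema.org Article JSON-LD block, the kind
# of markup a CMS plugin typically generates. Field values are placeholders.

import json

def article_jsonld(headline: str, author: str, date_published: str) -> str:
    data = {
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": headline,
        "author": {"@type": "Person", "name": author},
        "datePublished": date_published,
    }
    return f'<script type="application/ld+json">{json.dumps(data)}</script>'

print(article_jsonld("Example Post", "Jane Doe", "2026-01-15"))
```

Because the plugin regenerates this block from post data on every save, deprecated types simply drop out when the plugin updates, which is why the article says curation is effectively automatic.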

Featured Image by Shutterstock/pathdoc

The Download: busting weather myths, and AI heart attack prediction

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Why it’s so hard to bust the weather control conspiracy theory

It was October 2024, and Hurricane Helene had just devastated the US Southeast. Representative Marjorie Taylor Greene of Georgia found an abstract target on which to pin the blame: “Yes they can control the weather,” she posted on X. “It’s ridiculous for anyone to lie and say it can’t be done.”

She was repeating what’s by now a pretty familiar and popular conspiracy theory: that shadowy forces are out there, wielding technology to control the weather and wreak havoc on their enemies. This preposterous claim has grown louder and more common in recent years, especially after extreme weather strikes.

But here’s the thing: While Greene and other believers are not correct, this conspiracy theory—like so many others—holds a kernel of much more modest truth. Read the full story.

—Dave Levitan

This story is part of MIT Technology Review’s series “The New Conspiracy Age,” on how the present boom in conspiracy theories is reshaping science and technology. Check out the rest of the series here.

AI could predict who will have a heart attack 

For all the modern marvels of cardiology, we struggle to predict who will have a heart attack. Many people never get screened at all. Now, startups are applying AI algorithms to screen millions of CT scans for early signs of heart disease.

This technology could be a breakthrough for public health, applying an old tool to uncover patients whose high risk for a heart attack is hiding in plain sight. But it remains unproven at scale, while raising thorny questions about implementation and even how we define disease. Read the full story.

—Vishal Khetpal

This story is from the latest print issue of MIT Technology Review magazine, which is full of fascinating stories about the body. If you haven’t already, subscribe now to receive future issues once they land.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Spending on AI may be to blame for all those tech layoffs
AI isn’t necessarily replacing jobs, but spending on it is gobbling up budgets. (Fast Company $)
+ Junior roles are likely to be the first on the chopping block. (FT $)
+ Are the crazy sums that businesses are sinking into AI sustainable? (WP $)
+ People are worried that AI will take everyone’s jobs. We’ve been here before. (MIT Technology Review)

2 Anti-vaccine activists gathered in Austin over the weekend
They celebrated RFK Jr’s rise and outlined their goals—including eliminating school vaccine mandates. (WP $)
+ We’re on the verge of stopping the next pandemic. But will we? (Vox)
+ How conspiracy theories infiltrated the doctor’s office. (MIT Technology Review)

3 People who’ve experienced AI-induced delusions are forming a movement
They’re pushing for legal action against chatbot makers. (Bloomberg $)
+ The looming crackdown on AI companionship. (MIT Technology Review)

4 AI-generated clips of women being strangled are flooding social media
Many of them appear to have been created using OpenAI’s Sora 2. (404 Media)

5 Tech leaders are obsessed with bioengineering babies
They’re not allowed to, but they’re not letting a little thing like ethics get in the way. (WSJ $)
+ The race to make the perfect baby is creating an ethical mess. (MIT Technology Review)

6 Apple has removed two popular gay dating apps in China 
The country ordered it to take down Blued and Finka from its app store. (Wired $)

7 The UK government is worried China could turn off its buses remotely
It fears hundreds of Chinese-made electric buses on British roads could be at risk. (FT $)

8 How AI is changing the world’s newsrooms 📰
It’s brilliant at analyzing large data sets—but shouldn’t be used to write stories. (NYT $)

9 How to contain an invasive species
Experts argue that too much red tape is getting in the way. (Undark)
+ The weeds are winning. (MIT Technology Review)

10 The world’s largest electric ship is charging up 🚢
Once it’s ready to go, it’ll serve as a ferry in 90-minute bursts. (IEEE Spectrum)

Quote of the day

“We would move heaven and Earth, pun intended, to try to get to the Moon sooner.” 

—Dave Limp, CEO of Blue Origin, says the company is raring to work with NASA to get humans back on the Moon, Ars Technica reports.

One more thing

Design thinking was supposed to fix the world. Where did it go wrong?

In the 1990s, a six-step methodology for innovation called design thinking started to grow in popularity. Key to its spread was its replicable aesthetic, represented by the Post-it note: a humble square that anyone can use in infinite ways.

But in recent years, for a number of reasons, the shine of design thinking has been wearing off. Critics have argued that its short-term focus on novel and naive ideas results in unrealistic and ungrounded recommendations.

Today, some groups are working to reform both design thinking’s principles and its methodologies. These new efforts seek a set of design tools capable of equitably serving diverse communities and solving diverse problems well into the future. It’s a much more daunting—and crucial—task than design thinking’s original remit. Read the full story.

—Rebecca Ackermann

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ These tree-dwelling toads give birth to live young—who knew?!
+ Now’s the time to practice your baking skills ahead of Thanksgiving.
+ Younguk Yi’s glitching paintings are a lot of fun.
+ Place your bets! This fun game follows three balls in a race to the bottom, but who will win?

Reimagining cybersecurity in the era of AI and quantum

AI and quantum technologies are dramatically reconfiguring how cybersecurity functions, redefining the speed and scale with which digital defenders and their adversaries can operate.

The weaponization of AI tools for cyberattacks is already proving a worthy opponent to current defenses. From reconnaissance to ransomware, cybercriminals can automate attacks faster than ever before with AI. This includes using generative AI to create social engineering attacks at scale, churning out tens of thousands of tailored phishing emails in seconds, or accessing widely available voice cloning software capable of bypassing security defenses for as little as a few dollars. And now, agentic AI raises the stakes by introducing autonomous systems that can reason, act, and adapt like human adversaries.

But AI isn’t the only force shaping the threat landscape. Quantum computing has the potential to seriously undermine current encryption standards if developed unchecked. Quantum algorithms can solve the mathematical problems underlying most modern cryptography, particularly public-key systems like RSA and Elliptic Curve, widely used for secure online communication, digital signatures, and cryptocurrency.
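A toy illustration of why factoring threatens RSA: the private key falls out directly once the modulus is factored. The sketch below uses a deliberately tiny textbook modulus so classical trial division succeeds in microseconds; Shor’s algorithm on a sufficiently large quantum computer would make that same factoring step feasible for real key sizes:

```python
# Toy illustration of why factoring breaks RSA. With a real 2048-bit
# modulus, trial division is hopeless; Shor's algorithm would make the
# factoring step feasible on a large quantum computer.

def factor(n: int) -> tuple:
    """Trial-division factoring -- only practical for tiny moduli."""
    for p in range(2, int(n**0.5) + 1):
        if n % p == 0:
            return p, n // p
    raise ValueError("no factor found")

# Tiny textbook RSA example: public key (n, e), message m.
p, q = 61, 53
n, e = p * q, 17
m = 42
c = pow(m, e, n)  # encrypt with the public key

# An attacker who factors n recovers the private exponent d.
fp, fq = factor(n)
phi = (fp - 1) * (fq - 1)
d = pow(e, -1, phi)      # modular inverse (Python 3.8+)
assert pow(c, d, n) == m  # decryption with the recovered key succeeds
```

This is why “harvest now, decrypt later” is rational for attackers: ciphertexts collected today remain breakable retroactively once the factoring step becomes cheap.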

“We know quantum is coming. Once it does, it will force a change in how we secure data across everything, including governments, telecoms, and financial systems,” says Peter Bailey, senior vice president and general manager of Cisco’s security business.

“Most organizations are understandably focused on the immediacy of AI threats,” says Bailey. “Quantum might sound like science fiction, but those scenarios are coming faster than many realize. It’s critical to start investing now in defenses that can withstand both AI and quantum attacks.”

Critical to this defense is a zero trust approach to cybersecurity, which assumes no user or device can be inherently trusted. By enforcing continuous verification, zero trust enables constant monitoring and ensures that any attempts to exploit vulnerabilities are quickly detected and addressed in real time. This approach is technology-agnostic and creates a resilient framework even in the face of an ever-changing threat landscape.

Putting up AI defenses 

AI is lowering the barrier to entry for cyberattacks, enabling hackers even with limited skills or resources to infiltrate, manipulate, and exploit the slightest digital vulnerability.

Nearly three-quarters (74%) of cybersecurity professionals say AI-enabled threats are already having a significant impact on their organization, and 90% anticipate such threats in the next one to two years. 

“AI-powered adversaries have advanced techniques and operate at machine speed,” says Bailey. “The only way to keep pace is to use AI to automate response and defend at machine speed.”

To do this, Bailey says, organizations must modernize systems, platforms, and security operations to automate threat detection and response—processes that have previously relied on human rule-writing and reaction times. These systems must adapt dynamically as environments evolve and criminal tactics change.

At the same time, companies must strengthen the security of their AI models and data to reduce exposure to manipulation from AI-enabled malware. Such risks could include, for instance, prompt injections, where a malicious user crafts a prompt to manipulate an AI model into performing unintended actions, bypassing its original instructions and safeguards.

Agentic AI further ups the ante, with hackers able to use AI agents to automate attacks and make tactical decisions without constant human oversight. “Agentic AI has the potential to collapse the cost of the kill chain,” says Bailey. “That means everyday cybercriminals could start executing campaigns that today only well-funded espionage operations can afford.”

Organizations, in turn, are exploring how AI agents can help them stay ahead. Nearly 40% of companies expect agentic AI to augment or assist teams over the next 12 months, especially in cybersecurity, according to Cisco’s 2025 AI Readiness Index. Use cases include AI agents trained on telemetry, which can identify anomalies or signals from machine data too disparate and unstructured to be deciphered by humans. 

Calculating the quantum threat

As many cybersecurity teams focus on the very real AI-driven threat, quantum is waiting on the sidelines. Almost three-quarters (73%) of US organizations surveyed by KPMG say they believe it is only a matter of time before cybercriminals are using quantum to decrypt and disrupt today’s cybersecurity protocols. And yet, the majority (81%) also admit they could do more to ensure that their data remains secure.

Companies are right to be concerned. Threat actors are already carrying out harvest now, decrypt later attacks, stockpiling sensitive encrypted data to crack once quantum technology matures. Examples include state-sponsored actors intercepting government communications and cybercriminal networks storing encrypted internet traffic or financial records. 

Large technology companies are among the first to roll out quantum defenses. For example, Apple is using cryptography protocol PQ3 to defend against harvest now, decrypt later attacks on its iMessage platform. Google is testing post-quantum cryptography (PQC)—which is resistant to attacks from both quantum and classical computers—in its Chrome browser. And Cisco “has made significant investments in quantum-proofing our software and infrastructure,” says Bailey. “You’ll see more enterprises and governments taking similar steps over the next 18 to 24 months,” he adds. 

As regulations like the US Quantum Computing Cybersecurity Preparedness Act lay out requirements for mitigating quantum threats, including the adoption of PQC algorithms standardized by the National Institute of Standards and Technology, a wider range of organizations will start preparing their own quantum defenses.

For organizations beginning that journey, Bailey outlines two key actions. First, establish visibility. “Understand what data you have and where it lives,” he says. “Take inventory, assess sensitivity, and review your encryption keys, rotating out any that are weak or outdated.”

Second, plan for migration. “Next, assess what it will take to support post-quantum algorithms across your infrastructure. That means addressing not just the technology, but also the process and people implications,” Bailey says.

Adopting proactive defense 

Ultimately, the foundation for building resilience against both AI and quantum is a zero trust approach, says Bailey. By embedding zero trust access controls across users, devices, business applications, networks, and clouds, this approach grants only the minimum access required to complete a task and enables continuous monitoring. It can also minimize the attack surface by confining a potential threat to an isolated zone, preventing it from accessing other critical systems.

Into this zero trust architecture, organizations can integrate specific measures to defend against AI and quantum risks. For instance, quantum-immune cryptography and AI-powered analytics and security tools can be used to identify complex attack patterns and automate real-time responses. 

“Zero trust slows down attacks and builds resilience,” Bailey says. “It ensures that even if a breach occurs, the crown jewels stay protected and operations can recover quickly.”

Ultimately, companies should not wait for threats to emerge and evolve. They must get ahead now. “This isn’t a what-if scenario; it’s a when,” says Bailey. “Organizations that invest early will be the ones setting the pace, not scrambling to catch up.”

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

URL Case Sensitivity and SEO

Google treats the portion of a URL after the domain name as case-sensitive. Thus Google could index domain.com/example and domain.com/EXAMPLE as separate pages, but it treats domain.com/example and DOMAIN.com/example as the same page, because domain names are case-insensitive.

This matters because content management systems typically generate URLs from page titles and often retain capitalization, resulting in three problems for search engine optimization:

  • Duplicate content. Google may index both versions of the URL if they serve the same content.
  • Link equity splitting. Capitalized URLs with lowercase inbound links can split link equity, hurting the ranking of the primary version.
  • Internal errors. Sites hosted on case-sensitive Linux servers (such as Shopify and, often, WooCommerce) can return a 404 error page when users enter the lowercase version of a capitalized URL.

Most web crawlers (and Search Console) do not offer case-sensitive URL filters, making detection challenging. The best way to prevent these problems is to ensure your server redirects all case variants of a URL to the preferred version. Each of these could redirect to domain.com/url, for example:

  • domain.com/URL
  • domain.com/Url
  • domain.com/uRl

Some content management systems will automatically redirect, but always check to confirm. Enter the three URL versions above in your browser. If all load the same version (usually lowercase), no further action is required. If any result in an error page, fix immediately.

Depending on the CMS, plugins and apps can help set up cross-site redirection rules. Software engineer Brian Love’s guide explains how to enforce lowercase URLs.

Choosing URLs

For SEO, there is no URL rule as long as all versions redirect to the preferred version. I like lowercase URLs for simplicity and ease of cross-site redirect setup, although some users prefer capitalized URLs for readability, branding, and ad performance.

Cross-site URL consistency will not produce material ranking increases. But it will help Google understand your site structure, improve crawlability, and clarify what’s indexed.

How To Cultivate Brand Mentions For Higher AI Search Rankings via @sejournal, @martinibuster

Building brand awareness has long been an important but widely overlooked part of SEO. AI search has brought this activity to the forefront. The following ideas should help you form a strategy for earning brand mentions at scale, with the goal of achieving similar ubiquity in AI search results.

Tell People About The Site

SEOs and businesses can become overly concerned with getting links and forget that the more important thing to do is to get the word out about a website. A website must have unique qualities that will positively impress people and make them enthusiastic about the brand. If the site you’re trying to build traffic to lacks those unique qualities then building links or brand awareness can become a futile activity.

User behavior signals have been a part of Google’s algorithms since at least 2004, when the Navboost signals kicked in, and the recent Google antitrust lawsuit shows that user behavior signals continue to play a role. What has changed is that SEOs have noticed that AI search results tend to recommend sites that are recommended by other sites: brand mentions.

The key to all of this has been to tell other sites about your site and make it clear to potential consumers or website visitors what makes your site special.

  • So the first task is always to make a site special in every possible way.
  • The second task is to tell others about the site in order to build word of mouth and top-of-mind brand presence.

Optimizing a website for users and cultivating awareness of that site are the building blocks of the external signals of authoritativeness, expertise, and popularity that Google always talks about.

Downside of Backlink Searches

Everyone knows how to do a backlink search with third-party tools, but a lot of the data consists of garbage-y sites; that’s not the tool’s fault, it’s just the state of the Internet. In any case, a backlink search is limited: it doesn’t surface the conversations real people are having about a website.

In my experience, a better way to do it is to identify all instances of where a site is linked from another site or discussed by another site.

Brand And Link Mentions

Some websites have bookmark and resource pages. These are low-hanging fruit.

Search for a competitor’s links:

example.com site:.com “bookmarks” -site:example.com

example.com site:.com “resources” -site:example.com

The “-site:example.com” operator removes the competitor’s own site from the search results, showing just the pages that mention the site’s full URL, which may or may not be linked.

The TLD segmented variants are:

example.com site:.net "resources" 
example.com site:.org "resources" 
example.com site:.edu "resources" 
example.com site:.ai "resources" 
example.com site:.net "links" 
example.com site:.org "links" 
example.com site:.edu "links" 
example.com site:.ai "links" 
Etc.

The goal is not necessarily to get links. It’s to build awareness of the site and build popularity.

Brand Mentions By Company Name

One way to identify brand mentions is to search by company name using the TLD segmentation technique. Making a broad search for a company’s name will only get you some of the brand mentions. Segmenting the search by TLD will reveal a wider range of sites.

Segmented Brand Mention Search

The following assumes that the competitor’s site is on the .com domain and you’re limiting the search to .com websites.

Competitor's Brand Name site:.com -site:example.com

Segmented Variants:

Competitor's Brand Name site:.org
Competitor's Brand Name site:.edu
Competitor's Brand Name site:reddit.com
Competitor's Brand Name site:.io
etc.

Sponsored Articles

Sponsored articles are indexed by search engines and ranked in AI search surfaces like AI Mode and ChatGPT. These can present opportunities to purchase a sponsored post that enables you to present your message with links that are nofollow and a prominent “sponsored post” disclaimer at the top of the web page – all in compliance with Google and FTC guidelines.

Brand Mentions: Authoritativeness Is Key

The thing that some SEOs never learned is that authoritativeness is important; quite likely, millions of dollars have been wasted on paying for links from low-quality blogs rather than earning mentions on higher-quality sites.

ChatGPT and AI Mode have been found to recommend sites that are mentioned on high-quality, authoritative sites. Do not waste time or money paying for mentions on low-quality sites.

Some Ways To Search

Product/Service/Solution Search

Name Of Product Or Service Or Problem Needing Solving site:.com “sponsored article”
Name Of Product Or Service Or Problem Needing Solving site:.net “sponsored article”
Name Of Product Or Service Or Problem Needing Solving site:.org “sponsored article”
Name Of Product Or Service Or Problem Needing Solving site:.edu “sponsored article”
Name Of Product Or Service Or Problem Needing Solving site:.io “sponsored article”
etc.

Sponsored Post Variant

Name Of Product Or Service Or Problem Needing Solving site:.com “sponsored post”
Name Of Product Or Service Or Problem Needing Solving site:.net “sponsored post”
Name Of Product Or Service Or Problem Needing Solving site:.org “sponsored post”
Name Of Product Or Service Or Problem Needing Solving site:.edu “sponsored post”
Name Of Product Or Service Or Problem Needing Solving site:.io “sponsored post”
etc.

Key insight: Test whether “sponsored post” or “sponsored article” provides better results, or just more results. Using quotation marks, or if necessary the verbatim search tool, stops Google from stemming the query and prevents it from showing a mix of both “post” and “article” results. By forcing Google to be specific, you force it to surface search results it would otherwise omit.

Competitor Search

Competitor’s Brand Name site:.com “sponsored post”
Competitor’s Brand Name site:.net “sponsored post”
Competitor’s Brand Name site:.org “sponsored post”
Competitor’s Brand Name site:.edu “sponsored post”
Competitor’s Brand Name site:.io “sponsored post”
etc.

Pure Awareness Building With Zero Internet Presence

This method of getting the word out is pure gold, especially for B2B but also for professional businesses such as in the legal niches. There are organizations and associations that print magazines or send out newsletters to thousands, sometimes tens of thousands, of people who are an exact match for the people you want to build top of mind brand name recognition with.

Emails and magazines do not have links and that’s okay. The goal is to build name brand recognition with positive associations. What better way than getting interviewed in a newsletter or magazine? What better way than submitting an article to a newsletter or magazine?

Don’t Forget PDF Magazines

Not all magazines are print; many are distributed as PDFs. For example, I subscribe to a surf fishing magazine that is published entirely in a proprietary web format that can only be viewed by subscribers. If I were a fishing company, I would make an effort to meet some of the article authors, in addition to the publishers, at fishing industry conferences where they appear as presenters and in product booths.

This kind of outreach is in-person; it’s called relationship building.

Getting back to the industry organizations and associations, this is an entire topic in itself and I’ll follow up with another article, but many of the techniques covered in this guide will work with this kind of brand building.

Using the filetype search operator in combination with the TLD segmentation will yield some of these kinds of brand building opportunities.

[product/service/keyword/niche] filetype:pdf site:.com newsletter
[product/service/keyword/niche] filetype:pdf site:.org newsletter

1. Segment The Search For Opportunities By TLD
Segmenting by TLD (.net/.com/.org/.us/.edu, etc.) will help you discover different kinds of brand building opportunities. Websites on a Dot Org domain often link to a site for different reasons than a Dot Com website. Dot Org domains represent article-writing projects, free links on a links page, newsletter article opportunities, and charity link opportunities, just to name a few.

2. Consider Segmenting Dot Com Searches
The Dot Com TLD will yield an overabundance of search results, not all of them useful. This makes it imperative to segment the results to find all available opportunities.

Ways to segment the Dot Com are by:

  • A. Kinds of sites (blog/shopping related keywords/product or service keywords/forum/etc.)
    This is pretty straightforward. If you’re looking for brand mentions, be sure to add keywords to the searches that are directly relevant to what your business is about. If your site is about car injuries, then sites about cars, as well as specific makes, models, and kinds of automobiles, are how you would segment a .com search.
  • B. Context – Audience Relevance Not Keyword Match
    Context of a sponsored article is important. This is not about whether the website content matches what your site, business, product, or service is about. What’s important is to identify whether the audience reach is an exact match for the audience that will be interested in your product, business, or service.
  • C. Quality And Authoritativeness
    This is not about third-party metrics related to links. This is just about making a common sense judgment about whether a site where you want a mention is well-regarded by those who are likely to be interested in your brand. That’s it.

Takeaway

The thing I want you to walk away with is that it’s useful to simply tell people about a site and to make as many people as possible aware of it. Identify opportunities to get them to tell a friend. There is no better recommendation than one from a friend or from a trusted organization. This is the true source of authoritativeness and popularity.

Featured Image by Shutterstock/Bird stocker TH

Google AI Overviews Appear On 21% Of Searches: New Data via @sejournal, @MattGSouthern

Ahrefs analyzed 146 million search results to determine which query types trigger AI Overviews. The research tracked AIO appearance across 86 keyword characteristics.

Here’s a concise look at the patterns and how they may affect your strategy.

What The Analysis Found

AI Overviews appear on 20.5% of all keywords. Specific query types show notable variance, with some categories hitting 60% trigger rates while others stay below 2%.

Patterns Observed Across Query Types

Single-word queries activate AIOs only 9.5% of the time, whereas queries with seven or more words trigger them 46.4% of the time. This pattern suggests that Google primarily uses AIOs for complex informational searches rather than simple lookups.

The question format also shows a similar trend: question-based queries result in AIOs 57.9% of the time, while non-question queries have a much lower rate of 15.5%.

The most significant distinctions are based on intent. Informational queries make up 99.9% of all AIO appearances, while navigational queries trigger AIOs just 0.09% of the time. Commercial queries account for 4.3%, and transactional queries for 2.1%.

Patterns Observed Across Industry Categories

Science queries have an AIO rate of 43.6%, while health queries are at 43.0%, and pets & animals reach 36.8%. People & society questions result in AIOs 35.3% of the time.

In contrast, commerce categories exhibit opposite trends. Shopping queries are associated with AIOs only 3.2% of the time, the lowest in the dataset. Real estate remains at 5.8%, sports at 14.8%, and news at 15.1%.

YMYL queries display unexpectedly high trigger rates. Medical YMYL searches trigger AI Overviews 44.1% of the time, financial YMYL hits 22.9%, and safety YMYL reaches 31.0%.

These findings contradict Google’s focus on expert content for topics that could impact health, financial security, or safety.

Queries With Low Presence Of AI Overviews

6.3% of “very newsy” keywords trigger AI Overviews, while 20.7% of non-news queries display AIOs.

The pattern indicates that Google deliberately limits AIOs for time-sensitive content where accuracy and freshness are essential.

Local searches demonstrate a similar trend, with only 7.9% of local queries showing AI Overviews compared to 22.8% for non-local queries.

NSFW content consistently avoids AIOs across categories: adult queries trigger AIOs 1.5% of the time, gambling 1.4%, and violence 7.7%. Drug-related queries have the highest NSFW trigger rate at 12.6%, yet this remains well below the baseline.

Brand vs. Non-Brand

Branded keywords show slight differences compared to non-branded ones. Non-branded queries trigger AIOs 24.9% of the time, whereas branded queries do so 13.1% of the time.

The data indicates that AIOs occur 1.9 times more frequently for generic searches than for brand-specific lookups.

No Correlation With CPC

CPC shows no meaningful correlation with AIO appearance. Keyword cost-per-click values don’t affect trigger rates across any price range tested, with rates hovering between 12.4% and 27.6% regardless of commercial value.

Why This Matters

Publishers focused on informational content encounter the greatest AIO exposure. Question-and-answer content and how-to guides align closely with Google’s trigger criteria, putting educational publishers at the highest risk of traffic loss.

Medical content has the highest category-specific AIO rate, despite concerns about AI accuracy in health advice.

Ecommerce and news publishers are relatively less affected by AIOs. The low trigger rates for shopping and news queries indicate these sectors experience less AI-driven traffic disruption compared to informational sites.

Looking Ahead

Using this data, publishers can review their current keyword portfolios to identify AIO exposure patterns. The most reliable indicators are query intent and length, with industry category and question format also playing significant roles.
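As a rough sketch of how a team might triage a keyword list against those two signals, here is a hypothetical Python heuristic. The rate table is an assumption loosely derived from the study's aggregate percentages, not exact per-query probabilities:

```python
# Illustrative rates drawn from the study's aggregate figures;
# per-intent baselines are assumptions, not measured per-query rates.
INTENT_BASE = {"informational": 0.205, "commercial": 0.043,
               "transactional": 0.021, "navigational": 0.001}
QUESTION_WORDS = {"how", "what", "why", "when", "where",
                  "who", "which", "can", "does", "is"}

def aio_exposure(query: str, intent: str) -> str:
    """Bucket a keyword's AIO exposure using the study's two strongest
    signals: question format (57.9%) and length of 7+ words (46.4%)."""
    words = query.lower().split()
    rate = INTENT_BASE.get(intent, 0.205)
    if words and words[0] in QUESTION_WORDS:
        rate = max(rate, 0.579)   # question-format trigger rate
    if len(words) >= 7:
        rate = max(rate, 0.464)   # long-query trigger rate
    return "high" if rate >= 0.40 else "medium" if rate >= 0.15 else "low"
```

Run against an exported keyword list, a crude bucketing like this can flag which pages are most likely to sit under an AI Overview before any deeper analysis.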

AIO exposure varies considerably across different industry categories, with differences exceeding 40 percentage points between the highest and lowest. Content strategies need to consider this variation at the category level instead of assuming consistent baseline risk across all topics.

For a more in-depth examination of this data, see the full analysis.


Featured Image: Zorion Art Production/Shutterstock

Meta Projected $16B From Scam Ads, Internal Docs Show via @sejournal, @MattGSouthern

Advertisers on Meta may be unknowingly competing against suspected scam ads that stay in auctions at higher “penalty bid” prices.

Internal documents obtained by Reuters estimate that around 10% of Meta’s 2024 ad revenue, approximately $16 billion, would come from scam ads and banned goods.

Although Meta disagrees with these estimates, the real impact for advertisers includes potential increases in CPM, brand safety concerns, and uneven enforcement risks.

What Advertisers Should Know

Meta reportedly displays an estimated 15 billion “higher-risk” scam advertisements daily across Facebook, Instagram, and WhatsApp.

Meta earns about $7 billion annually just from these higher-risk scam ads that show clear signs of fraud, a late 2024 document states.

The company only bans advertisers when automated systems predict they are at least 95% certain to be committing fraud. Advertisers below that threshold face higher ad rates as a penalty but can continue running campaigns.

Internal Review: Easier To Run Scams On Meta Than Google

An internal Meta review concluded it’s easier to advertise scams on its platforms than on Google. The document doesn’t explain why.

Meta restricted anti-scam enforcement in the first half of the year to actions costing no more than 0.15% of total revenue, or approximately $135 million. A manager overseeing the effort wrote: “Let’s be cautious. We have specific revenue guardrails.”

Company spokesman Andy Stone said the internal estimates were “rough and overly-inclusive” and included many legitimate ads. He declined to provide an updated figure.

Meta reduced user reports of scam ads globally by 58% over the past 18 months and removed more than 134 million pieces of scam ad content in 2025, Stone said.

Why This Matters

On Meta’s platforms, internal documents projected about one in ten ad dollars in 2024 came from ads for scams and banned goods.

Meta’s penalty bid system charges suspected scammers higher rates but keeps them in ad auctions. You don’t know when you’re bidding against these inflated rates.

The revenue guardrails mean Meta caps how much fraud enforcement it will do if it impacts financial projections. Small advertisers must be flagged eight times for financial fraud before getting banned. Some large “High Value Accounts” accrued more than 500 strikes without Meta shutting them down.

A Meta presentation estimated the company’s platforms were involved in one-third of all successful scams in the United States.

The SEC is investigating Meta for running ads for financial scams, according to internal documents reviewed by Reuters. The UK Payment Systems Regulator said Meta’s products were linked to 54% of payment-related scam incidents in 2023.

What Meta Says

Stone pushed back on the idea that Meta only takes action when regulators demand it, saying that isn’t how the company operates.

He explained that the 0.15% figure mentioned in strategy documents was based on a revenue forecast and isn’t a strict cutoff. Additionally, testing the penalty bid program revealed a decrease in scam reports and a small dip in total ad revenue.

The main goal was to cut down on scam advertising by making suspicious advertisers less competitive in auctions.

Meta also outlines recent enforcement actions against scam centers in a Newsroom update.

Looking Ahead

Meta plans to lower the share of revenue from scams, illegal gambling, and prohibited goods from an estimated 10.1% in 2024 to 7.3% by the end of 2025. The target is to reach 6% by the end of 2026 and 5.8% in 2027, as outlined in strategy documents.


Featured Image: JarTee/Shutterstock