How To Get Your Content (& Brand) Recommended By AI & LLMs via @sejournal, @andreasvoniatis

The game has changed, and quite recently, too.

Generative engine optimization (GEO), AI Overviews (AIOs), or just an extension of SEO (now being dubbed on LinkedIn as Search Everywhere Optimization) – which acronym is correct?

I’d argue it’s GEO, as you’ll see. And if you’ve ever built your own large language model from scratch, as I did in 2020, you’ll know why.

We’ve all seen various frightening (for some) data points on how click-through rates have fallen off a cliff with Google AIOs, and how LLMs like ChatGPT are eroding Google’s share of search – basically, “SEO is dead” – so I won’t repeat them here.

What I will cover are first principles to get your content (along with your company) recommended by AI and LLMs alike.

Everything I disclose here is based on real-world experiences of AI search successes achieved with clients.

To use an example I can talk about publicly, I’ll go with Boundless, as seen below.

Screenshot by author, July 2025

Tell The World Something New

Imagine the dread a PR agency might feel if it signed up a new business client only to find they haven’t got anything newsworthy to promote to the media – a tough sell. Traditional SEO content is a bit like that.

We’ve all seen and done the rather tired “ultimate content guide to [insert your target topic]” playbooks, which attempt to turn your website into the Wikipedia (a key data source for ChatGPT, it seems) of whatever industry you happen to be in.

And let’s face it, it worked so well, it ruined the internet, according to The Verge.

The fundamental problem with that type of SEO content is that it has no information gain. When trillions of webpages all follow the same “best practice” playbook, they’re not telling the world anything genuinely new.

You only have to look at the Information Gain patent by Google to underscore the importance of content possessing value, i.e., your content must tell the world (via the internet) something new.

BoundlessHQ commissioned a survey on remote work, asking ‘Ideally, where would you like to work from if it were your choice?’

The results provided an original dataset, and this kind of content is high-effort, unique, and value-adding enough to get cited in AI search results.

Of course, it shouldn’t have taken AI to get us producing this kind of content in the first place, as it would be good SEO content marketing in any case. AI has simply forced our hand (more on that later).

After all, if your content isn’t unique, why would journalists mention you? Bloggers link back to you? People share or bookmark your page? AI retrain its models using your content or cite your brand?

You get the idea.

For improved AI visibility, include your data sources and research methods with their limitations, as this level of transparency makes your content more verifiable to AI.

Also, updating your data more regularly than annually will indicate reliability to AI as a trusted information source for citation. What LLM doesn’t want more recent data?
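
One way to make that transparency and freshness machine-readable is schema.org’s Dataset markup. The snippet below is a minimal sketch with placeholder values (the organization name, description wording, and property choices are illustrative assumptions, not a confirmed AI ranking signal):

    <!-- Placeholder values: declares the source, method, limitations, and recency of your data -->
    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "Dataset",
      "name": "Remote work location preferences survey",
      "description": "Survey asking where respondents would ideally work from. Limitation: self-reported answers from an online panel.",
      "creator": { "@type": "Organization", "name": "Example Research Co." },
      "measurementTechnique": "Online panel survey",
      "dateModified": "2025-07-01"
    }
    </script>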

SEO May Not Be Dead, But Keywords Definitely Are

Keywords don’t tell you who’s actually searching. They just tell you what terms trigger ads in Google.

Your content could be appealing to students, retirees, or anyone. That’s not targeting; that’s one size fits all. And in the AI age, one size definitely doesn’t fit all.

So, kiss goodbye to content guides written in one form of English winning traffic across all English-speaking regions.

In that sense, AI has created more jobs for marketers: to win the same traffic as before, you’ll need to create that content separately for each of those English-speaking regions.

Keyword tools also allegedly tell you the search volumes your keywords are getting (if you still want them; we don’t).

So, if you’re planning your content strategy on keyword research, stop. You’re optimizing for the wrong search engine.

What you can do instead is robust market research based on the raw data sources used by LLMs (not the LLM outputs themselves). For example, Grok uses X (Twitter), ChatGPT has publishing partnerships, and so on.

Those discussions are the real topics to build your content strategy around, and their volume is the real content demand.

AI Inputs, Not AI Outputs

I’m seeing some discussions (even recommendations) claiming that creating data-driven or research-based content works for getting AI recommendations.

Given the dearth of the true data-driven content that AI craves, that approach does work, but enjoy it while it lasts: it will only work in the short term.

AI has raised the content bar: people are now more specific in their search patterns, such is their confidence in the technology.

Therefore, content marketers will rise to the challenge to produce more targeted, substantial content.

But, even if you are using LLMs in “deep” mode on a premium subscription to inject more substance and value into your content, that simply won’t make the AI’s quality cut.

Expecting such fanciful results is like asking AI to rehydrate itself using its sweat.

The results of AI are derivative, diluted, and hallucinatory by nature. The hallucinatory nature is one of the reasons why I don’t fear LLMs leading to artificial general intelligence (AGI), but that’s another conversation.

Because of that value degradation, AI providers will not want to risk retraining their models on content founded on AI outputs, for fear of their models becoming dumber.

To create content that AI prefers, you need to be using the same data sources that feed AI engines. It’s long been known that Google started its LLM project over a decade ago, when it began training its models on Google Books and other literature.

While most of us won’t have the budget for an X.com data firehose, you can still find creative ways (as we have), such as commissioning surveys with robust sample sizes.

Meaningful press coverage, media mentions, and good backlinks will be significant enough signals to shift AI into seeing the value of your content and judging it good enough to retrain its models and update its worldview.

And by data-mining the same data sources, you can start structuring content as direct answers to questions.

You’ll also find your content is written to be more conversational to match the search patterns used by your target buyers when they prompt for solutions.

SEO Basics Still Matter

GEO and SEO are not the same. The reverse engineering of search engine results pages to direct content strategy and formulation was effective because rank position is a regression problem.

In AI, there is no rank; there are only winners and losers.

However, there are some heavy overlaps that won’t go away and are even more critical than ever.

Unlike SEO, where more word count was generally better, AI faces the additional constraints of rising energy costs and shortages of computer chips.

That means content needs to be even more efficient for AI than it is for search engines, so that models can break it down and parse its meaning before determining its value.

So, by all means:

  • Code pages for faster loading and quicker processing.
  • Deploy schema to add context to the content (see the sketch after this list).
  • Build a conversational, answer-first content architecture.
  • Use HTML anchor jump links to different sections of your content.
  • Open your content to LLM crawling and use an llms.txt file.
  • Provide programmatic content access via APIs, RSS feeds, or similar.
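
Several of these points can be shown in one snippet. The markup below is a minimal, generic sketch; the headings, IDs, and answer text are invented placeholders, not a template any AI engine is confirmed to require:

    <!-- Jump links: HTML anchors that let a crawler or agent land on a specific section -->
    <nav>
      <a href="#answer">Direct answer</a>
      <a href="#methodology">How we measured</a>
    </nav>

    <!-- Answer-first architecture: the direct, conversational answer leads; detail follows -->
    <section id="answer">
      <h2>Where do remote workers most want to work from?</h2>
      <p>A short, self-contained answer goes here first; supporting data and caveats follow.</p>
    </section>

    <!-- Schema: machine-readable context for the same question and answer -->
    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "FAQPage",
      "mainEntity": [{
        "@type": "Question",
        "name": "Where do remote workers most want to work from?",
        "acceptedAnswer": {
          "@type": "Answer",
          "text": "A concise answer an engine can quote directly."
        }
      }]
    }
    </script>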

These practices are hygiene factors that help make your content more discoverable. They may not be a game-changer for getting your organization cited by AI, but if you can crush GEO, you’ll crush SEO.

Human, Not AI-Written

AI engines don’t cite boring rehashes. They’re too busy doing that job for us, and they cite the sources for their rehash instead.

Now, I have heard arguments that if the quality of the content (let’s assume it even includes information gain) is on point, then AI shouldn’t care whether it was written by AI or a human.

I’d argue otherwise. Because the last thing any LLM creator wants is their LLM to be retrained on content generated by AI.

While it’s unlikely that generative outputs are tagged in any way, it’s pretty obvious to humans when content is AI-written, and it’s also pretty obvious statistically to AI engines, too.

LLMs will have certain tropes that are common to AI-generated writing, like “The future of …”.

LLMs won’t default to generating lived personal experiences or spontaneously generating subtle humour without heavy creative prompting.

So, don’t do it. Keep your content written by humans.

The Future Is New, Targeted, Substantial Value

Getting your content and your company recommended by AI means it needs to tell the world something new.

Make sure it offers information gain based on substantive, non-LLM-derived research (enough to make it worthy of inclusion in LLM models), nails the SEO basics, and stays human-written.

The question now becomes, “What can you do to produce high-effort content good enough for AI without costing the earth?”

Featured Image: Collagery/Shutterstock

Perplexity Looks Beyond Search With Its AI Browser, Comet via @sejournal, @MattGSouthern

Perplexity has launched a web browser, Comet, offering users a look at how the company is evolving beyond AI search.

While Comet shares familiar traits with Chrome, it introduces a different interface model, one where users can search, navigate, and run agent-like tasks from a single AI-powered environment.

A Browser Designed for AI-Native Workflows

Comet is built on Chromium and supports standard browser features like tabs, extensions, and bookmarks.

What sets it apart is the inclusion of a sidebar assistant that can summarize pages, automate tasks, schedule meetings, and fill out forms.

In an interview, Perplexity CEO Aravind Srinivas described Comet as a step toward combining search and automation into a single system.

Srinivas said:

“We think about it as an assistant rather than a complete autonomous agent but one omni box where you can navigate, you can ask formational queries and you can give agentic tasks and your AI with you on your new tab page, on your side car, as an assistant on any web page you are, makes the browser feel like more like a cognitive operating system rather than just yet another browser.”

Perplexity sees Comet as a foundation for agentic computing. Future use cases could involve real-time research, recurring task management, and personal data integration.

Strategy Behind the Shift

Srinivas said Comet isn’t just a product launch; it’s a long-term bet on browsers as the next major interface for AI.

He described the move as a response to growing user demand for AI tools that do more than respond to queries in chat windows.

Srinivas said:

“The browser is much harder to copy than yet another chat tool.”

He acknowledged that OpenAI and Anthropic are likely to release similar tools, but believes the technical challenges of building and maintaining a browser create a longer runway for Perplexity to differentiate.

A Different Approach From Google

Srinivas also commented on the competitive landscape, including how Perplexity’s strategy differs from Google’s.

He pointed to the tension between AI-driven answers and ad-based monetization as a limiting factor for traditional search engines.

Referring to search results where advertisers compete for placement, Srinivas said:

“If you get direct answers to these questions with booking links right there, how are you going to mint money from Booking and Expedia and Kayak… It’s not in their incentive to give you good answers at all.”

He also said Google’s rollout of AI features has been slower than expected:

“The same feature is being launched year after year after year with a different name, with a different VP, with a different group of people, but it’s the same thing except maybe it’s getting better but it’s never getting launched to everybody.”

Accuracy, Speed, and UX as Priorities

Perplexity is positioning Comet around three core principles: accuracy, low latency, and clean presentation.

Srinivas said the company continues to invest in reducing hallucinations and speeding up responses while keeping user experience at the center.

Srinivas added:

“Let there exist 100 chat bots but we are the most focused on getting as many answers right as possible.”

Internally, the team relies on AI development tools like Cursor and GitHub Copilot to accelerate iteration and testing.

Srinivas noted:

“We made it mandatory to use at least one AI coding tool and internally at Perplexity it happens to be Cursor and like a mix of Cursor and GitHub Copilot.”

Srinivas said the browser provides the structure needed to support more complex workflows than a standalone chat interface.

What Comes Next

Comet is currently available to users on Perplexity’s Max plan through early access invites. A broader release is expected, along with plans for mobile support in the future.

Srinivas said the company is exploring business models beyond advertising, including subscriptions, usage-based pricing, and affiliate transactions.

“All I know is subscriptions and usage based pricing are going to be a thing. Transactions… taking a cut out of the transactions is good.”

While he doesn’t expect to match Google’s margins, he sees room for a viable alternative.

“Google’s business model is potentially the best business model ever… Maybe it was so good that you needed AI to kill it basically.”

Looking Ahead

Comet’s release marks a shift in how AI tools are being integrated into user workflows.

Rather than adding assistant features into existing products, Perplexity is building a new interface from the ground up, designed around speed, reasoning, and task execution.

As the company continues to build around this model, Comet may serve as a test case for how users engage with AI beyond traditional search.


Featured Image: Ascannio/Shutterstock 

OpenAI ChatGPT Agent Marks A Turning Point For Businesses And SEO via @sejournal, @martinibuster

OpenAI announced a new way for users to interact with the web to get things done in their personal and professional lives. ChatGPT agent is said to be able to automate planning a wedding, booking an entire vacation, updating a calendar, and converting screenshots into editable presentations. The impact on publishers, ecommerce stores, and SEOs cannot be overstated. This is what you should know and how to prepare for what could be one of the most consequential changes to online interactions since the invention of the browser.

OpenAI ChatGPT Agent Overview

OpenAI ChatGPT agent is based on three core parts: Operator and Deep Research, OpenAI’s two autonomous AI agents, plus ChatGPT’s natural language capabilities.

  1. Operator can browse the web and interact with websites to complete tasks.
  2. Deep Research is designed for multi-step research that is able to combine information from different resources and generate a report.
  3. ChatGPT agent requests permission before taking significant actions and can be interrupted and halted at any point.

ChatGPT Agent Capabilities

ChatGPT agent has access to multiple tools to help it complete tasks:

  • A visual browser for interacting with web pages through the on-page interface.
  • A text-based browser for answering reasoning-based queries.
  • A terminal for executing actions through a command-line interface.
  • Connectors, which are authorized user-friendly integrations (using APIs) that enable ChatGPT agent to interact with third-party apps.

Connectors are like bridges between ChatGPT agent and your authorized apps. When users ask ChatGPT agent to complete a task, connectors enable it to retrieve the needed information and act on it, with direct API access letting the agent interact with and extract information from connected apps.

ChatGPT agent can open a page with a browser (either text or visual), download a file, perform an action on it, and then view the results in the visual browser. ChatGPT connectors enable it to connect with external apps like Gmail or a calendar for answering questions and completing tasks.

ChatGPT Agent Automation of Web-Based Tasks

ChatGPT agent is able to complete entire complex tasks and summarize the results.

Here’s how OpenAI describes it:

“ChatGPT can now do work for you using its own computer, handling complex tasks from start to finish.

You can now ask ChatGPT to handle requests like “look at my calendar and brief me on upcoming client meetings based on recent news,” “plan and buy ingredients to make Japanese breakfast for four,” and “analyze three competitors and create a slide deck.”

ChatGPT will intelligently navigate websites, filter results, prompt you to log in securely when needed, run code, conduct analysis, and even deliver editable slideshows and spreadsheets that summarize its findings.

…ChatGPT agent can access your connectors, allowing it to integrate with your workflows and access relevant, actionable information. Once authenticated, these connectors allow ChatGPT to see information and do things like summarize your inbox for the day or find time slots you’re available for a meeting—to take action on these sites, however, you’ll still be prompted to log in by taking over the browser.

Additionally, you can schedule completed tasks to recur automatically, such as generating a weekly metrics report every Monday morning.”

What Does ChatGPT Agent Mean For SEO?

ChatGPT agent raises the stakes for publishers, online businesses, and SEO, in that making websites Agentic AI–friendly becomes increasingly important as more users become acquainted with it and begin sharing how it helps them in their daily lives and at work.

A recent study about AI agents found that OpenAI’s Operator responded well to structured on-page content. Structured on-page content enables AI agents to accurately retrieve specific information relevant to their tasks, perform actions (like filling in a form), and helps to disambiguate the web page (i.e., make it easily understood). I usually refrain from using jargon, but disambiguation is a word all SEOs need to understand because Agentic AI makes it more important than it has ever been.

Examples Of On-Page Structured Data

  • Headings
  • Tables
  • Forms with labeled input fields
  • Product listings with consistent fields like price, availability, and the product’s name or label in a title (sketched after this list).
  • Authors, dates, and headlines
  • Menus and filters in ecommerce web pages
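
As a rough sketch of two of these, forms and product listings, with every name, price, and field value hypothetical, consistently labeled markup leaves an agent little to guess at:

    <!-- Product listing: consistent, explicitly labeled fields (microdata shown here; JSON-LD works too) -->
    <article itemscope itemtype="https://schema.org/Product">
      <h3 itemprop="name">Example Widget</h3>
      <div itemprop="offers" itemscope itemtype="https://schema.org/Offer">
        Price: <span itemprop="price" content="19.99">$19.99</span>
        <meta itemprop="priceCurrency" content="USD">
        <link itemprop="availability" href="https://schema.org/InStock"> In stock
      </div>
    </article>

    <!-- Form with labeled inputs: the label tells an agent exactly what each field expects -->
    <form action="/newsletter" method="post">
      <label for="email">Work email</label>
      <input id="email" name="email" type="email" required>
      <button type="submit">Subscribe</button>
    </form>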

Takeaways

  • ChatGPT agent is a milestone in how users interact with the web, capable of completing multi-step tasks like planning trips, analyzing competitors, and generating reports or presentations.
  • OpenAI’s ChatGPT agent combines autonomous agents (Operator and Deep Research) with ChatGPT’s natural language interface to automate personal and professional workflows.
  • Connectors extend the agent’s capabilities by providing secure API-based access to third-party apps like calendars and email, enabling task execution across platforms.
  • Agent can interact directly with web pages, forms, and files, using tools like a visual browser, code execution terminal, and file handling system.
  • Agentic AI responds well to structured, disambiguated web content, making SEO and publisher alignment with structured on-page elements more important than ever.
  • Structured data improves an AI agent’s ability to retrieve and act on website information. Sites that are optimized for AI agents will gain the most, as more users depend on agent-driven automation to complete online tasks.

OpenAI’s ChatGPT agent is an automation system that can independently complete complex online tasks, such as booking trips, analyzing competitors, or summarizing emails, by using tools like browsers, terminals, and app connectors. It interacts directly with web pages and connected apps, performing actions that previously required human input.

For publishers, ecommerce sites, and SEOs, ChatGPT agent makes structured, easily interpreted on-page content critical because websites must now accommodate AI agents that interact with and act on their data in real time.

Read More About Optimizing For Agentic AI

Marketing To AI Agents Is The Future – Research Shows Why

Featured Image by Shutterstock/All kind of people

Ex-Google Engineer Launches Athena For AI Search Visibility via @sejournal, @MattGSouthern

A former Google Search engineer is betting on the end of traditional SEO, and building tools to help marketers prepare for what comes next.

Andrew Yan, who left Google’s search team earlier this year, co-founded Athena, a startup focused on helping brands stay visible in AI-generated responses from tools like ChatGPT and Perplexity.

The company launched last month with $2.2 million in funding from Y Combinator and other venture firms.

Athena is part of a new wave of companies responding to a shift in how people discover information. Instead of browsing search results, people are increasingly getting direct answers from AI chatbots.

As a result, the strategies that once helped websites rank in Google may no longer be enough to drive visibility.

Yan told The Wall Street Journal:

“Companies have been spending the last 10 or 20 years optimizing their website for the ‘10 blue links’ version of Google. That version of Google is changing very fast, and it is changing forever.”

Building Visibility In A Zero-Click Web

Athena’s platform is designed to show how different AI models interpret and describe a brand. It tracks how chatbots talk about companies across platforms and recommends ways to optimize web content for AI visibility.

According to the company, Athena already has over 100 customers, including Paperless Post.

The broader trend reflects growing concern among marketers about the rise of a “zero-click internet,” where users get answers directly from AI interfaces and never visit the underlying websites.

Yan’s shift from Google to startup founder underscores how seriously some search insiders are taking this transformation.

Rather than competing for rankings on a search results page, Athena aims to help brands influence the outputs of large language models.

Profound Raises $20 Million For AI Search Monitoring

Athena isn’t the only company working on this.

Profound, another startup highlighted by The Wall Street Journal, has raised more than $20 million from venture capital firms. Its platform monitors how chatbots gather and relay brand-related information to users.

Profound has attracted several large clients, including Chime, and is positioning itself as an essential tool for navigating the complexity of generative AI search.

Co-founder James Cadwallader says the company is preparing for a world where bots, not people, are the primary visitors to websites.

Cadwallader told The Wall Street Journal:

“We see a future of a zero-click internet where consumers only interact with interfaces like ChatGPT. And agents or bots will become the primary visitors to websites.”

Saga Ventures’ Max Altman added that demand for this kind of visibility data has surpassed expectations, noting that marketers are currently “flying completely blind” when it comes to how AI tools represent their brands.

SEO Consultants Are Shifting Focus

The shift is also reaching practitioners. Cyrus Shepard, founder of Zyppy SEO, told The Wall Street Journal that AI visibility went from being negligible at the start of 2025 to 10–15% of his current workload.

By the end of the year, he expects it could represent half of his focus.

Referring to new platforms like Athena and Profound, Shepard said:

“I would classify them all as in beta. But that doesn’t mean it’s not coming.”

While investor estimates suggest these startups have raised just a fraction of what the $90 billion SEO industry is worth, their traction indicates a real need to address the challenges posed by AI search.

What This Means

These startups are early signs of a larger shift in how content is surfaced and evaluated online.

With AI tools synthesizing answers from multiple sources and often skipping over traditional links, marketers face a new kind of visibility challenge.

Companies like Athena and Profound are trying to fill that gap by giving marketers a window into how generative AI models see their brands and what can be done to improve those impressions.

It’s not clear yet which strategies will work best in this new environment, but the race to figure it out has begun.


Featured Image: Roman Samborskyi/Shutterstock

Brave Search API Now Available Through AWS Marketplace via @sejournal, @martinibuster

Brave Search and Amazon Web Services (AWS) announced the availability of the Brave Search API in the new AI Agents and Tools category of the AWS Marketplace.

AI Agents And Tools Category Of AWS Marketplace

AWS is entering the AI agent space with a new marketplace category that enables entrepreneurs to select from hundreds of AI agents and tools.

According to the AWS announcement:

“With this launch, AWS Marketplace becomes a single destination where customers can find everything needed for successful AI agent implementations—this includes not just agents themselves, but also the critical components that make agents truly valuable—knowledge bases that power them with relevant data, third-party guardrails that enhance security, professional services to support implementation, and deployment options that enable agents to seamlessly interoperate with existing software.”

Customers can choose pay-as-you-go pricing or monthly and yearly plans.

Brave Search

Brave is an independent, privacy-focused search engine. The Brave Search API provides LLMs with real-time data, can power agentic search, and can be used to create applications that need access to the Internet.

The Brave Search API already supplies many of the top LLMs with up-to-date search data.

According to Brian Brown, Chief Business Officer at Brave Software:

“By offering the Brave Search API in AWS Marketplace, we’re providing customers with a streamlined way to access the only independent search API in the market, helping them buy and deploy agent solutions faster and more efficiently. Our customers in foundation models, search engines, and publishing are already using these capabilities to power their chatbots, search grounding, and research tools, demonstrating the real-world value of the only commercially-available search engine API at the scale of the global Web.”

Featured Image by Shutterstock/Deemerwha studio

Google Rolls Out Gemini 2.5 Pro & Deep Search For Paid Subscribers via @sejournal, @MattGSouthern

Google is rolling out two enhancements to AI Mode in Labs: Gemini 2.5 Pro and Deep Search.

These capabilities are exclusive to users subscribed to Google’s AI Pro and AI Ultra plans.

Gemini 2.5 Pro Now Available In AI Mode

Subscribers can now access Gemini 2.5 Pro from a dropdown menu within the AI Mode tab.

Screenshot from: blog.google/products/search/deep-search-business-calling-google-search, July 2025.

While the default model remains available for general queries, the 2.5 Pro model is designed to handle more complex prompts, particularly those involving reasoning, mathematics, or coding.

In an example shared by Google, the model walks through a multi-step physics problem involving gravitational fields, showing how it can solve equations and explain its reasoning with supporting links.

Screenshot from: blog.google/products/search/deep-search-business-calling-google-search, July 2025.

Deep Search Offers AI-Assisted Research

Today’s update also introduces Deep Search, which Google describes as a tool for conducting more comprehensive research.

The feature can generate detailed, citation-supported reports by processing multiple searches and aggregating information across sources.

Google stated in its announcement:

“Deep Search is especially useful for in-depth research related to your job, hobbies, or studies.”

Availability & Rollout

These features are currently limited to users in the United States who subscribe to Google’s AI Pro or AI Ultra plans and have opted into AI Mode through Google Labs.

Google hasn’t provided a firm timeline for when all eligible users will receive access, but rollout has begun.

The “experimental” label on Gemini 2.5 Pro suggests continued adjustments based on user testing.

What This Means

The launch of Deep Search and Gemini 2.5 Pro reflects Google’s broader effort to incorporate generative AI into the search experience.

For marketers, the shift raises questions about visibility in a time when AI-generated summaries and reports may increasingly shape user behavior.

If Deep Search becomes a commonly used tool for information gathering, the structure and credibility of content could play a larger role in discoverability.

Gemini 2.5 Pro’s focus on reasoning and code-related queries makes it relevant for more technical users. Google has positioned it as capable of helping with debugging, code generation, and explaining advanced concepts, similar to tools like ChatGPT’s coding features or GitHub Copilot.

Its integration into Search may appeal to users who want technical assistance without leaving the browser environment.

Looking Ahead

The addition of these features behind a paywall continues Google’s movement toward monetizing AI capabilities through subscription services.

While billed as experimental, these updates may provide early insight into how the company envisions the future of AI in search: more automated, task-oriented, and user-specific.

Search professionals will want to monitor how these features evolve, as tools like Deep Search could become more widely adopted.

Anthropic’s New Financial Tool Signals Shift To Offering Specialized Services via @sejournal, @martinibuster

Anthropic announced a new Financial Analysis Solution powered by its Claude 4 and Claude Code models. This is Anthropic’s first foray into a major vertical-focused platform, signaling a shift toward AI providers building tools that directly address common pain points in business workflows and productivity.

Claude For Financial Services

Anthropic’s new Claude service is an AI-powered financial analysis tool targeted at financial professionals. It offers data integration via MCP (Model Context Protocol), secure data handling, and total privacy: no user data is used to train Claude’s generative models.

According to the announcement:

“Claude has real-time access to comprehensive financial information including:

  • Box enables secure document management and data room analysis
  • Daloopa supplies high-quality fundamentals and KPIs from SEC filings
  • Databricks offers unified analytics for big data and AI workloads
  • FactSet provides comprehensive equity prices, fundamentals, and consensus estimates
  • Morningstar contributes valuation data and research analytics
  • PitchBook delivers industry-leading private capital market data and research, empowering users to source investment and fundraising opportunities, conduct due diligence and benchmark performance, faster and with greater confidence
  • S&P Global enables access to Capital IQ Financials, earnings call transcripts, and more–essentially your entire research workflow”

Takeaway:

This launch may signal a shift among AI providers toward building industry-specific tools that solve problems for professionals, rather than offering only general-purpose models that others use to provide the same solutions. Generative AI companies have the ability to stitch together solutions from big data providers in ways that smaller companies can’t.

Read more at Anthropic:

Transform financial services with Claude

Featured Image by Shutterstock/gguy

Nearly 8 In 10 Americans Use ChatGPT For Search, Adobe Finds via @sejournal, @MattGSouthern

A new report from Adobe states that 77% of Americans who use ChatGPT treat it as a search engine.

Among those surveyed, nearly one in four prefer ChatGPT over Google for discovery, indicating a potential shift in user behavior.

Adobe surveyed 800 consumers and 200 marketers or small business owners in the U.S. All participants self-reported using ChatGPT as a search engine.

ChatGPT Usage Spans All Age Groups

According to the findings, usage is strong across demographics:

  • Gen X: 80%
  • Gen Z: 77%
  • Millennials: 75%
  • Baby Boomers: 74%

Notably, 28% of Gen Z respondents say they start their search journey with ChatGPT. This suggests younger users may be leading the shift in default discovery behavior.

Trust In AI Search Is Rising

Adobe’s report indicates growing trust in conversational AI. Three in ten respondents say they trust ChatGPT more than traditional search engines.

That trust appears to influence behavior, with 36% reporting they’ve discovered a new product or brand through ChatGPT. Among Gen Z, that figure rises to 47%.

The top use cases cited include:

  • Everyday questions (55%)
  • Creative tasks and brainstorming (53%)
  • Financial advice (21%)
  • Online shopping (13%)

Why Users Choose AI Over Traditional Search

The most common reason people use ChatGPT for search is its ability to quickly summarize complex topics (54%). Additionally, 33% said it offers faster answers with fewer clicks than Google.

Respondents also report that AI results feel more personalized. A majority (81%) prefer ChatGPT for open-ended, creative questions, while 77% find its responses more tailored than traditional search results.

Marketers Shift Focus To AI Visibility

Adobe’s survey suggests businesses are already responding to the shift. Nearly half of marketers and business owners (47%) say they use ChatGPT for marketing, primarily to create product descriptions, social media copy, and blog content.

Looking ahead, two-thirds plan to increase their investment in “AI visibility,” with 76% saying it’s essential for their brand to appear in ChatGPT results in 2025.

What Works In AI-Driven Discovery

To improve visibility in conversational AI results, marketers report the best-performing content types are:

  • Data-driven articles (57%)
  • How-to guides (51%)

These formats may align well with AI’s tendency to surface factual, instructive, and referenceable information.

Why This Matters

Adobe’s findings highlight the need for marketers to adapt strategies as users turn to AI tools for product discovery.

Instead of replacing SEO, AI visibility can complement it. Brands tailoring content for conversational search may gain an edge in reaching audiences through personalized pathways.


Featured Image: Roman Samborskyi/Shutterstock

OpenAI And Perplexity Set To Battle Google For Browser Dominance via @sejournal, @martinibuster

Credible rumors are circulating that OpenAI is developing a browser. However, the timing of the anonymous tip is curious, because Perplexity coincidentally announced they are releasing a browser named Comet.

It’s a longstanding tradition in Silicon Valley for companies to try to overshadow a rival’s announcement with news of their own, and the timing of OpenAI’s anonymous rumor seems more than coincidental. For example, OpenAI leaked rumors of its own competing search engine on the exact same date Google officially announced Gemini 1.5: February 15, 2024. It’s a thing.

According to Reuters:

“OpenAI is close to releasing an AI-powered web browser that will challenge Alphabet’s (GOOGL.O) market-dominating Google Chrome, three people familiar with the matter told Reuters.

The browser is slated to launch in the coming weeks, three of the people said, and aims to use artificial intelligence to fundamentally change how consumers browse the web. It will give OpenAI more direct access to a cornerstone of Google’s success: user data.”

Perplexity Comet

According to TechCrunch, Perplexity’s Comet browser comes with its Perplexity AI search engine as the default. The browser includes an AI agent called Comet Assistant that can help with everyday tasks like summarizing emails and navigating the web. Comet will be released first to its $200/month subscribers and to a list of VIPs invited to try it out.

There’s something old-school about Google, Perplexity, and OpenAI battling it out for browser dominance, a technological space that continues to matter to users and is perhaps the one constant of the Internet (that, and pop-ups).

ChatGPT Recommendations Potentially Influenced By Hacked Sites via @sejournal, @MattGSouthern

An investigation by SEO professional James Brockbank reveals that ChatGPT may be recommending businesses based on content from hacked websites and expired domains.

The findings aren’t a comprehensive study but the result of personal testing and observations. Brockbank, who serves as Managing Director at Digitaloft, says his report emerged from exploring how brands gain visibility in ChatGPT’s responses.

His analysis suggests that some actors are successfully gaming the system by publishing content on compromised or repurposed domains that retain high authority signals.

This content, despite being irrelevant or deceptive, can surface in ChatGPT-generated business recommendations.

Brockbank wrote:

“I believe that the more we understand about why certain citations get surfaced, even if these are spammy and manipulative, the better we understand how these new platforms work.”

How Manipulated Content Appears In ChatGPT Responses

Brockbank identified two main tactics that appear to influence ChatGPT’s business recommendations:

1. Hacked Websites

In multiple examples, ChatGPT surfaced gambling recommendations that traced back to legitimate websites that had been compromised.

One case involved a California-based domestic violence attorney whose site was found hosting a listicle about online slots.

Other examples included a United Nations youth coalition website and a U.S. summer camp site, both seemingly hijacked to host gambling-related content, including pages using white text on a white background to evade detection.

2. Expired Domains

The second tactic involves acquiring expired domains with strong backlink profiles and rebuilding them to promote unrelated content.

In one case, Brockbank discovered a site with over 9,000 referring domains from sources like BBC, CNN, and Bloomberg. The domain, once owned by a UK arts charity, had been repurposed to promote gambling.

Brockbank explained:

“There’s no question that it’s the site’s authority that’s causing it to be used as a source. The issue is that the domain changed hands and the site totally switched up.”

He also found domains that previously belonged to charities and retailers now being used to publish casino recommendations.

Why This Content Is Surfacing

Brockbank suggests that ChatGPT favors domains with perceived authority and recent publication dates.

Additionally, he finds ChatGPT’s recommendation system may not sufficiently evaluate whether the content aligns with the original site’s purpose.

Brockbank observed:

“ChatGPT prefers recent sources, and the fact that these listicles aren’t topically relevant to what the domain is (or should be) about doesn’t seem to matter.”

Brockbank acknowledges that being featured in authentic “best of” listicles or media placements can help businesses gain visibility in AI-generated results.

However, leveraging hacked or expired domains to manipulate source credibility crosses an ethical line.

Brockbank writes:

“Injecting your brand or content into a hacked site or rebuilding an expired domain solely to fool a language model into citing it? That’s manipulation, and it undermines the credibility of the platform.”

What This Means

While Brockbank’s findings are based on individual testing rather than a formal study, they surface a real concern: ChatGPT may be citing manipulated sources without fully understanding their origins or context.

The takeaway isn’t just about risk; it’s also about responsibility. As platforms like ChatGPT become more influential in how users discover businesses, building legitimate authority through trustworthy content and earned media will matter more than ever.

At the same time, the investigation highlights an urgent need for companies to improve how these systems detect and filter deceptive content. Until that happens, both users and businesses should approach AI-generated recommendations with a dose of skepticism.

Brockbank concluded:

“We’re not yet at the stage where we can trust ChatGPT recommendations without considering where it’s sourced these from.”

For more insights, see the original report at Digitaloft.


Featured Image: Mijansk786/Shutterstock