Perplexity Looks Beyond Search With Its AI Browser, Comet via @sejournal, @MattGSouthern

Perplexity has launched a web browser, Comet, offering users a look at how the company is evolving beyond AI search.

While Comet shares familiar traits with Chrome, it introduces a different interface model: one where users can search, navigate, and run agent-like tasks from a single AI-powered environment.

A Browser Designed for AI-Native Workflows

Comet is built on Chromium and supports standard browser features like tabs, extensions, and bookmarks.

What sets it apart is the inclusion of a sidebar assistant that can summarize pages, automate tasks, schedule meetings, and fill out forms.

You can see it in action in the launch video below:

In an interview, Perplexity CEO Aravind Srinivas described Comet as a step toward combining search and automation into a single system.

Srinivas said:

“We think about it as an assistant rather than a complete autonomous agent but one omni box where you can navigate, you can ask formational queries and you can give agentic tasks and your AI with you on your new tab page, on your side car, as an assistant on any web page you are, makes the browser feel like more like a cognitive operating system rather than just yet another browser.”

Perplexity sees Comet as a foundation for agentic computing. Future use cases could involve real-time research, recurring task management, and personal data integration.

Strategy Behind the Shift

Srinivas said Comet isn’t just a product launch; it’s a long-term bet on browsers as the next major interface for AI.

He described the move as a response to growing user demand for AI tools that do more than respond to queries in chat windows.

Srinivas said:

“The browser is much harder to copy than yet another chat tool.”

He acknowledged that OpenAI and Anthropic are likely to release similar tools, but believes the technical challenges of building and maintaining a browser create a longer runway for Perplexity to differentiate.

A Different Approach From Google

Srinivas also commented on the competitive landscape, including how Perplexity’s strategy differs from Google’s.

He pointed to the tension between AI-driven answers and ad-based monetization as a limiting factor for traditional search engines.

Referring to search results where advertisers compete for placement, Srinivas said:

“If you get direct answers to these questions with booking links right there, how are you going to mint money from Booking and Expedia and Kayak… It’s not in their incentive to give you good answers at all.”

He also said Google’s rollout of AI features has been slower than expected:

“The same feature is being launched year after year after year with a different name, with a different VP, with a different group of people, but it’s the same thing except maybe it’s getting better but it’s never getting launched to everybody.”

Accuracy, Speed, and UX as Priorities

Perplexity is positioning Comet around three core principles: accuracy, low latency, and clean presentation.

Srinivas said the company continues to invest in reducing hallucinations and speeding up responses while keeping user experience at the center.

Srinivas added:

“Let there exist 100 chat bots but we are the most focused on getting as many answers right as possible.”

Internally, the team relies on AI development tools like Cursor and GitHub Copilot to accelerate iteration and testing.

Srinivas noted:

“We made it mandatory to use at least one AI coding tool and internally at Perplexity it happens to be Cursor and like a mix of Cursor and GitHub Copilot.”

Srinivas said the browser provides the structure needed to support more complex workflows than a standalone chat interface.

What Comes Next

Comet is currently available to users on Perplexity’s Max plan through early access invites. A broader release is expected, along with plans for mobile support in the future.

Srinivas said the company is exploring business models beyond advertising, including subscriptions, usage-based pricing, and affiliate transactions.

“All I know is subscriptions and usage based pricing are going to be a thing. Transactions… taking a cut out of the transactions is good.”

While he doesn’t expect to match Google’s margins, he sees room for a viable alternative.

“Google’s business model is potentially the best business model ever… Maybe it was so good that you needed AI to kill it basically.”

Looking Ahead

Comet’s release marks a shift in how AI tools are being integrated into user workflows.

Rather than adding assistant features into existing products, Perplexity is building a new interface from the ground up, designed around speed, reasoning, and task execution.

As the company continues to build around this model, Comet may serve as a test case for how users engage with AI beyond traditional search.


Featured Image: Ascannio/Shutterstock 

OpenAI ChatGPT Agent Marks A Turning Point For Businesses And SEO via @sejournal, @martinibuster

OpenAI announced a new way for users to interact with the web to get things done in their personal and professional lives. ChatGPT agent is said to be able to automate planning a wedding, booking an entire vacation, updating a calendar, and converting screenshots into editable presentations. The impact on publishers, ecommerce stores, and SEOs cannot be overstated. This is what you should know and how to prepare for what could be one of the most consequential changes to online interactions since the invention of the browser.

OpenAI ChatGPT Agent Overview

OpenAI ChatGPT agent is built on three core parts: Operator and Deep Research, OpenAI’s two autonomous AI agents, plus ChatGPT’s natural language capabilities.

  1. Operator can browse the web and interact with websites to complete tasks.
  2. Deep Research is designed for multi-step research that is able to combine information from different resources and generate a report.
  3. ChatGPT agent requests permission before taking significant actions and can be interrupted and halted at any point.

ChatGPT Agent Capabilities

ChatGPT agent has access to multiple tools to help it complete tasks:

  • A visual browser for interacting with web pages through the on-page interface.
  • A text-based browser for answering reasoning-based queries.
  • A terminal for executing actions through a command-line interface.
  • Connectors, which are authorized user-friendly integrations (using APIs) that enable ChatGPT agent to interact with third-party apps.

Connectors are like bridges between ChatGPT agent and your authorized apps. When users ask ChatGPT agent to complete a task, direct API access via the connectors enables it to retrieve the needed information from connected apps and act on it.

ChatGPT agent can open a page with a browser (either text or visual), download a file, perform an action on it, and then view the results in the visual browser. ChatGPT connectors enable it to connect with external apps like Gmail or a calendar for answering questions and completing tasks.

ChatGPT Agent Automation of Web-Based Tasks

ChatGPT agent is able to complete entire complex tasks and summarize the results.

Here’s how OpenAI describes it:

“ChatGPT can now do work for you using its own computer, handling complex tasks from start to finish.

You can now ask ChatGPT to handle requests like “look at my calendar and brief me on upcoming client meetings based on recent news,” “plan and buy ingredients to make Japanese breakfast for four,” and “analyze three competitors and create a slide deck.”

ChatGPT will intelligently navigate websites, filter results, prompt you to log in securely when needed, run code, conduct analysis, and even deliver editable slideshows and spreadsheets that summarize its findings.

….ChatGPT agent can access your connectors, allowing it to integrate with your workflows and access relevant, actionable information. Once authenticated, these connectors allow ChatGPT to see information and do things like summarize your inbox for the day or find time slots you’re available for a meeting—to take action on these sites, however, you’ll still be prompted to log in by taking over the browser.

Additionally, you can schedule completed tasks to recur automatically, such as generating a weekly metrics report every Monday morning.”

What Does ChatGPT Agent Mean For SEO?

ChatGPT agent raises the stakes for publishers, online businesses, and SEO: making websites agentic-AI-friendly becomes increasingly important as more users become acquainted with the tool and begin sharing how it helps them in their daily lives and at work.

A recent study about AI agents found that OpenAI’s Operator responded well to structured on-page content. Structured on-page content enables AI agents to accurately retrieve specific information relevant to their tasks, perform actions (like filling in a form), and helps to disambiguate the web page (i.e., make it easily understood). I usually refrain from using jargon, but disambiguation is a word all SEOs need to understand because Agentic AI makes it more important than it has ever been.

Examples Of On-Page Structured Data

  • Headings
  • Tables
  • Forms with labeled input fields
  • Product listings with consistent fields, such as price, availability, and the product name or label in a title
  • Authors, dates, and headlines
  • Menus and filters in ecommerce web pages
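As an illustrative sketch (not taken from the study; the class names and values are hypothetical), a product listing marked up with consistent, labeled fields might look like this:

```html
<!-- Hypothetical product listing: every field is labeled and consistent,
     so an agent can map name, price, and availability without guessing. -->
<article class="product">
  <h2 class="product-name">Example Widget</h2>
  <dl>
    <dt>Price</dt>        <dd class="price">$19.99</dd>
    <dt>Availability</dt> <dd class="availability">In stock</dd>
  </dl>
  <form action="/cart" method="post">
    <label for="qty">Quantity</label>
    <input id="qty" name="qty" type="number" value="1">
    <button type="submit">Add to cart</button>
  </form>
</article>
```

The same pattern (headings, labeled form inputs, and consistently named fields) applies to the other items in the list above.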

Takeaways

  • ChatGPT agent is a milestone in how users interact with the web, capable of completing multi-step tasks like planning trips, analyzing competitors, and generating reports or presentations.
  • OpenAI’s ChatGPT agent combines autonomous agents (Operator and Deep Research) with ChatGPT’s natural language interface to automate personal and professional workflows.
  • Connectors extend Agent’s capabilities by providing secure API-based access to third-party apps like calendars and email, enabling task execution across platforms.
  • Agent can interact directly with web pages, forms, and files, using tools like a visual browser, code execution terminal, and file handling system.
  • Agentic AI responds well to structured, disambiguated web content, making SEO and publisher alignment with structured on-page elements more important than ever.
  • Structured data improves an AI agent’s ability to retrieve and act on website information. Sites that are optimized for AI agents will gain the most, as more users depend on agent-driven automation to complete online tasks.

OpenAI’s ChatGPT agent is an automation system that can independently complete complex online tasks, such as booking trips, analyzing competitors, or summarizing emails, by using tools like browsers, terminals, and app connectors. It interacts directly with web pages and connected apps, performing actions that previously required human input.

For publishers, ecommerce sites, and SEOs, ChatGPT agent makes structured, easily interpreted on-page content critical because websites must now accommodate AI agents that interact with and act on their data in real time.

Read More About Optimizing For Agentic AI

Marketing To AI Agents Is The Future – Research Shows Why

Featured Image by Shutterstock/All kind of people

Ex-Google Engineer Launches Athena For AI Search Visibility via @sejournal, @MattGSouthern

A former Google Search engineer is betting on the end of traditional SEO and building tools to help marketers prepare for what comes next.

Andrew Yan, who left Google’s search team earlier this year, co-founded Athena, a startup focused on helping brands stay visible in AI-generated responses from tools like ChatGPT and Perplexity.

The company launched last month with $2.2 million in funding from Y Combinator and other venture firms.

Athena is part of a new wave of companies responding to a shift in how people discover information. Instead of browsing search results, people are increasingly getting direct answers from AI chatbots.

As a result, the strategies that once helped websites rank in Google may no longer be enough to drive visibility.

Yan told The Wall Street Journal:

“Companies have been spending the last 10 or 20 years optimizing their website for the ‘10 blue links’ version of Google. That version of Google is changing very fast, and it is changing forever.”

Building Visibility In A Zero-Click Web

Athena’s platform is designed to show how different AI models interpret and describe a brand. It tracks how chatbots talk about companies across platforms and recommends ways to optimize web content for AI visibility.

According to the company, Athena already has over 100 customers, including Paperless Post.

The broader trend reflects growing concern among marketers about the rise of a “zero-click internet,” where users get answers directly from AI interfaces and never visit the underlying websites.

Yan’s shift from Google to startup founder underscores how seriously some search insiders are taking this transformation.

Rather than competing for rankings on a search results page, Athena aims to help brands influence the outputs of large language models.

Profound Raises $20 Million For AI Search Monitoring

Athena isn’t the only company working on this.

Profound, another startup highlighted by The Wall Street Journal, has raised more than $20 million from venture capital firms. Its platform monitors how chatbots gather and relay brand-related information to users.

Profound has attracted several large clients, including Chime, and is positioning itself as an essential tool for navigating the complexity of generative AI search.

Co-founder James Cadwallader says the company is preparing for a world where bots, not people, are the primary visitors to websites.

Cadwallader told The Wall Street Journal:

“We see a future of a zero-click internet where consumers only interact with interfaces like ChatGPT. And agents or bots will become the primary visitors to websites.”

Saga Ventures’ Max Altman added that demand for this kind of visibility data has surpassed expectations, noting that marketers are currently “flying completely blind” when it comes to how AI tools represent their brands.

SEO Consultants Are Shifting Focus

The shift is also reaching practitioners. Cyrus Shepard, founder of Zyppy SEO, told the Wall Street Journal that AI visibility went from being negligible at the start of 2025 to 10–15% of his current workload.

By the end of the year, he expects it could represent half of his focus.

Referring to new platforms like Athena and Profound, Shepard said:

“I would classify them all as in beta. But that doesn’t mean it’s not coming.”

While investor estimates suggest these startups have raised just a fraction of the $90 billion SEO industry, their traction indicates a need to address the challenges posed by AI search.

What This Means

These startups are early signs of a larger shift in how content is surfaced and evaluated online.

With AI tools synthesizing answers from multiple sources and often skipping over traditional links, marketers face a new kind of visibility challenge.

Companies like Athena and Profound are trying to fill that gap by giving marketers a window into how generative AI models see their brands and what can be done to improve those impressions.

It’s not clear yet which strategies will work best in this new environment, but the race to figure it out has begun.


Featured Image: Roman Samborskyi/Shutterstock

Google’s John Mueller Clarifies How To Remove Pages From Search via @sejournal, @MattGSouthern

In a recent installment of SEO Office Hours, Google’s John Mueller offered guidance on how to keep unwanted pages out of search results and addressed a common source of confusion around sitelinks.

The discussion began with a user question: how can you remove a specific subpage from appearing in Google Search, even if other websites still link to it?

Sitelinks vs. Regular Listings

Mueller noted he wasn’t “100% sure” he understood the question, but assumed it referred either to sitelinks or standard listings. He explained that sitelinks, those extra links to subpages beneath a main result, are automatically generated based on what’s indexed for your site.

Mueller said:

“There’s no way for you to manually say I want this page indexed. I just don’t want it shown as a sitelink.”

In other words, you can’t selectively prevent a page from being a sitelink while keeping it in the index. If you want to make sure a page never appears in any form in search, a more direct approach is required.

How To Deindex A Page

Mueller outlined a two-step process for removing pages from Google Search results using a noindex directive:

  1. Allow crawling: First, make sure Google can access the page. If it’s blocked by robots.txt, the noindex tag won’t be seen and won’t work.
  2. Apply a noindex tag: Once crawlable, add a noindex meta tag to the page to instruct Google not to include it in search results.

This method works even if other websites continue linking to the page.
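As a minimal, stdlib-only sketch of both checks (the helper names here are my own, not a Google tool), you could verify that a page carries a robots noindex meta tag and that robots.txt doesn’t block crawlers from ever seeing it:

```python
from html.parser import HTMLParser
import urllib.robotparser

class NoindexDetector(HTMLParser):
    """Flags a <meta name="robots" content="...noindex..."> tag in page HTML."""
    def __init__(self):
        super().__init__()
        self.noindex = False

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        a = dict(attrs)
        if a.get("name", "").lower() == "robots" and "noindex" in a.get("content", "").lower():
            self.noindex = True

def has_noindex(html: str) -> bool:
    parser = NoindexDetector()
    parser.feed(html)
    return parser.noindex

def is_crawlable(robots_txt: str, url: str, agent: str = "Googlebot") -> bool:
    """Step 1 from above: the noindex tag only works if robots.txt allows crawling."""
    rp = urllib.robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return rp.can_fetch(agent, url)

page = '<html><head><meta name="robots" content="noindex, follow"></head></html>'
robots = "User-agent: *\nDisallow: /private/"

print(has_noindex(page))                                      # True
print(is_crawlable(robots, "https://example.com/page"))       # True
print(is_crawlable(robots, "https://example.com/private/x"))  # False
```

If the second check returns False for the page you’re trying to deindex, Google can’t fetch the page, so it will never see the noindex tag.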

Removing Pages Quickly

If you need faster action, Mueller suggested using Google Search Console’s URL Removal Tool, which allows site owners to request temporary removal.

“It works very quickly” for verified site owners, Mueller confirmed.

For pages on sites you don’t control, there’s also a public version of the removal tool, though Mueller noted it “takes a little bit longer” since Google must verify that the content has actually been taken down.

Hear Mueller’s full response in the video below:

What This Means For You

If you’re trying to prevent a specific page from appearing in Google results:

  • You can’t control sitelinks manually. Google’s algorithm handles them automatically.
  • Use noindex to remove content. Just make sure the page isn’t blocked from crawling.
  • Act quickly when needed. The URL Removal Tool is your fastest option, especially if you’re a verified site owner.

Choosing the right method, whether it’s noindex or a removal request, can help you manage visibility more effectively.

Google Answers Question About Structured Data And Logged Out Users via @sejournal, @martinibuster

Someone asked if showing different content to logged-out users than to logged-in users and to Google via structured data is okay. John Mueller’s answer was unequivocal.

This is the question that was asked:

“Will this markup work for products in a unauthenticated view in where the price is not available to users and they will need to login (authenticate) to view the pricing information on their end? Let me know your thoughts.”

John Mueller answered:

“If I understand your use-case, then no. If a price is only available to users after authentication, then showing a price to search engines (logged out) would not be appropriate. The markup should match what’s visible on the page. If there’s no price shown, there should be no price markup.”

What’s The Problem With That Structured Data?

The price is visible to logged-in users, so technically the content (in this case, the product price) is available to those users. It’s a good question because a case can be made that the content shown to Google is available, similar to content behind a paywall; here it’s simply gated behind a login.

But that’s not good enough for Google, and it isn’t really comparable to a paywall because these are two different things. Google judges what “on the page” means based on what logged-out users will see.

Google’s guideline about the structured data matching what’s on the page is unambiguous:

“Don’t mark up content that is not visible to readers of the page.

…Your structured data must be a true representation of the page content.”

This is a question that gets asked fairly frequently on social media and in forums so it’s good to go over it for those who might not know yet.
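To illustrate the guideline (a hedged sketch, not code from Google; the names are placeholders), a JSON-LD Product block for the logged-out view would simply omit the price rather than mark up an offer the visitor can’t see:

```json
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Widget",
  "description": "Details that are visible to logged-out visitors.",
  "brand": { "@type": "Brand", "name": "ExampleCo" }
}
```

An `offers` object with a `price` property would only be appropriate on a view where the price is actually rendered for that visitor.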

Read More

Confirmed CWV Reporting Glitch In Google Search Console

Google’s New Graph Foundation Model Improves Precision By Up To 40X

Featured Image by Shutterstock/ViDI Studio

Brave Search API Now Available Through AWS Marketplace via @sejournal, @martinibuster

Brave Search and Amazon Web Services (AWS) announced the availability of the Brave Search API in the new AI Agents and Tools category of the AWS Marketplace.

AI Agents And Tools Category Of AWS Marketplace

AWS is entering the AI agent space with a new marketplace category that enables customers to select from hundreds of AI agents and tools.

According to the AWS announcement:

“With this launch, AWS Marketplace becomes a single destination where customers can find everything needed for successful AI agent implementations— includes not just agents themselves, but also the critical components that make agents truly valuable—knowledge bases that power them with relevant data, third-party guardrails that enhance security, professional services to support implementation, and deployment options that enable agents to seamlessly interoperate with existing software.”

Customers can choose pay-as-you-go pricing or a monthly or yearly plan.

Brave Search

Brave is an independent, privacy-focused search engine. The Brave Search API provides LLMs with real-time data, can power agentic search, and can be used to build applications that need access to the internet.

The Brave Search API already supplies many of the top LLMs with up-to-date search data.

According to Brian Brown, Chief Business Officer at Brave Software:

“By offering the Brave Search API in AWS Marketplace, we’re providing customers with a streamlined way to access the only independent search API in the market, helping them buy and deploy agent solutions faster and more efficiently. Our customers in foundation models, search engines, and publishing are already using these capabilities to power their chatbots, search grounding, and research tools, demonstrating the real-world value of the only commercially-available search engine API at the scale of the global Web.”
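As a hedged sketch of how an application might call the API (the endpoint and `X-Subscription-Token` header follow Brave’s public API documentation; any parameters beyond `q` and `count` are assumptions), using only the Python standard library:

```python
import urllib.parse
import urllib.request

# Web search endpoint per Brave's public API docs.
API_ENDPOINT = "https://api.search.brave.com/res/v1/web/search"

def build_search_request(query: str, api_key: str, count: int = 5) -> urllib.request.Request:
    """Builds a GET request for the Brave Search API. The JSON response
    contains a `web.results` list an LLM pipeline can ground answers on."""
    params = urllib.parse.urlencode({"q": query, "count": count})
    return urllib.request.Request(
        f"{API_ENDPOINT}?{params}",
        headers={
            "Accept": "application/json",
            "X-Subscription-Token": api_key,  # your Brave API key
        },
    )

req = build_search_request("agentic search", api_key="YOUR_KEY")
print(req.full_url)
# https://api.search.brave.com/res/v1/web/search?q=agentic+search&count=5
# To execute for real: urllib.request.urlopen(req) returns the JSON body.
```

The actual network call is left out so the sketch stays self-contained; plug in a real key and `urlopen` the request to retrieve results.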

Featured Image by Shutterstock/Deemerwha studio

Google Adds Comparison Mode To Search Console’s 24-Hour View via @sejournal, @MattGSouthern

Google has rolled out a new comparison feature in Search Console, letting you analyze hourly performance data against two baselines: the previous 24 hours and the same day one week earlier.

The feature expands on Search Console’s 24-hour performance view, which launched in December. With this new capability, you can compare short-term trends more easily within Search Console’s performance reports.

Building On Near Real-Time Data

The original 24-hour view introduced hourly granularity and reduced the lag in data availability.

Now, the comparison feature adds context to that data. Instead of viewing isolated metrics, you can measure shifts in clicks, impressions, average CTR, and position over time.

The feature appears across Search Console’s performance reports for Search, Discover, and Google News.

How It Works

The comparison mode lives within the same interface as the 24-hour view and operates based on your local timezone.

You can toggle between viewing data for the last 24 hours, the previous 24 hours, and the same day from the week before. Visual indicators show how each metric has changed hour by hour.

Why This Matters

Before this update, the 24-hour view was a valuable but somewhat isolated tool. While it gave fast access to recent performance, there was no way to tell whether a spike or dip was meaningful without exporting the data for external comparison.

Now, you can assess whether fluctuations are part of a broader trend or a one-off anomaly.

For marketers and SEOs, this could help:

  • Validate the impact of content updates or site changes sooner.
  • Spot issues or opportunities that occur at specific times of day.
  • Establish baseline expectations for hourly performance.

News publishers and ecommerce sites with time-sensitive strategies may find this especially useful when timing is critical to outcomes.

Looking Ahead

Over the past year, Search Console has evolved from multi-day delays to near real-time feedback paired with reporting options.

As always, the rollout is gradual, so not all properties may see the new feature immediately. But once live, it fits directly into existing workflows, requiring no additional setup.


Featured Image: Roman Samborskyi/Shutterstock

Cloudflare DDoS Report: 63% Of Known Attacks Blamed On Competitors via @sejournal, @martinibuster

Cloudflare released their 2025 Q2 DDoS Threat Report, which names the top ten sources of DDoS attacks and cites businesses targeting competitors as the largest source of DDoS attacks, according to surveyed respondents who had identified their attackers.

Survey: Who Attacked You?

Cloudflare surveyed customers about DDoS attacks, and 29% claimed to have identified the sources of those attacks. Of those who identified the attackers, 63% pointed to competitors, the largest share of whom were businesses in the crypto, gambling, and gaming industries. Another 21% of the respondents who identified their attackers said they were victims of state-sponsored attacks, and 5% said they had accidentally attacked themselves, something that can happen with server misconfigurations.

This is how Cloudflare explained it:

“When asked who was behind the DDoS attacks they experienced in 2025 Q2, the majority (71%) of respondents said they didn’t know who attacked them. Of the remaining 29% of respondents that claimed to have identified the threat actor, 63% pointed to competitors, a pattern especially common in the Gaming, Gambling and Crypto industries. Another 21% attributed the attack to state-level or state-sponsored actors, while 5% each said they’d inadvertently attacked themselves (self-DDoS), were targeted by extortionists, or suffered an assault from disgruntled customers/users.”

Most Attacked Locations

One would think that the United States would be the most attacked location, given how many businesses and websites are located there. But the most attacked location was China, which climbed from position three to position one. Brazil also climbed four positions to second place. Turkey dropped four positions to land in sixth place, and Hong Kong dropped to seventh place. Vietnam, however, jumped fifteen places to land in eighth place.

Top Ten Most DDoS-Attacked Countries

  1. China
  2. Brazil
  3. Germany
  4. India
  5. South Korea
  6. Turkey
  7. Hong Kong
  8. Vietnam
  9. Russia
  10. Azerbaijan

Top Attacked Industries

Telecommunications was the most attacked industry, followed by Internet and Information Technology Services. Gaming and Gambling were the third and fourth most attacked industries, followed by Banking/Financial and Retail industries.

  1. Telecommunications
  2. Internet
  3. Information Technology and Services
  4. Gaming
  5. Gambling and Casinos
  6. Banking and Financial Services
  7. Retail
  8. Agriculture
  9. Computer Software
  10. Government

Top Country-Level Sources Of DDOS Attacks

Cloudflare’s data shows that Ukraine is the fifth‑largest source of DDoS attacks, but doesn’t say which areas of Ukraine are responsible. When I look at my logs of bot attacks, the Ukrainian‑origin bots are consistently in Russian‑occupied territories. Cloudflare should have made a distinction about this point, in my opinion.

The country of origin doesn’t mean that one country is shiftier than another. For example, the Netherlands ranks as the ninth-largest source of DDoS attacks, and that may be the case because the country has strong user privacy laws that protect VPN users and is well positioned for low latency to both Europe and North America.

Cloudflare also provides the following note about country-level origins:

“It’s important to note that these “source” rankings reflect where botnet nodes, proxy or VPN endpoints reside — not the actual location of threat actors. For L3/4 DDoS attacks, where IP spoofing is rampant, we geolocate each packet to the Cloudflare data center that first ingested and blocked it, drawing on our presence in over 330 cities for truly granular accuracy.”

Top Ten Country Origins Of DDOS Attacks

  1. Indonesia
  2. Singapore
  3. Hong Kong
  4. Argentina
  5. Ukraine
  6. Russia
  7. Ecuador
  8. Vietnam
  9. Netherlands
  10. Thailand

Top ASN Sources Of DDOS Attacks

An ASN (Autonomous System Number) is a unique number assigned to networks or groups of networks that share the same rules for routing internet traffic. SEOs and publishers who track the origin of bad traffic and use .htaccess to block millions of IP ranges will recognize a number of the networks on this list. Hetzner, OVH, Tencent, Microsoft, the Google Cloud Platform, and Alibaba are all usual suspects.
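As a hypothetical sketch of the practice mentioned above (the CIDR ranges here are reserved documentation examples, not real attack sources), an Apache 2.4 `.htaccess` block-list entry might look like:

```apache
# Deny example abusive ranges while allowing everyone else (Apache 2.4 syntax)
<RequireAll>
    Require all granted
    Require not ip 203.0.113.0/24
    Require not ip 198.51.100.0/24
</RequireAll>
```

In practice, publishers doing this at scale maintain long lists of `Require not ip` lines mapped from the ASNs of the hosting networks above.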

According to Cloudflare, Hetzner dropped from first place as the origin of DDoS attacks to third place. DigitalOcean was formerly the number one source of DDoS attacks and was pushed down to position two by Drei‑K‑Tech‑GmbH, which jumped six places to become the leading source of DDoS attacks.

Top Ten Network Sources Of DDOS Attacks

  1. Drei-K-Tech-GmbH
  2. DigitalOcean
  3. Hetzner
  4. Microsoft
  5. Viettel
  6. Tencent
  7. OVH
  8. Chinanet
  9. Google Cloud Platform
  10. Alibaba

DDOS Attacks Could Be Better Mitigated

Cloudflare noted that it has a program that allows cloud computing providers to rapidly respond to bad actors abusing their networks. It’s not just DDoS attacks that originate at cloud and web hosting providers; it’s also bots scanning for vulnerabilities and actively trying to hack websites. If more providers joined Cloudflare’s program, there could be fewer DDoS attacks, and the web would be a much safer place.

This is how Cloudflare explains it:

“To help hosting providers, cloud computing providers and any Internet service providers identify and take down the abusive accounts that launch these attacks, we leverage Cloudflare’s unique vantage point to provide a free DDoS Botnet Threat Feed for Service Providers. Over 600 organizations worldwide have already signed up for this feed, and we’ve already seen great collaboration across the community to take down botnet nodes.”

Read the Cloudflare report:

Hyper-volumetric DDoS attacks skyrocket: Cloudflare’s 2025 Q2 DDoS threat report

Wix Announces AI Visibility Overview Citation & Sentiment Tracking Tool via @sejournal, @martinibuster

Wix is adding support for Generative Engine Optimization (GEO) with a new tool called the AI Visibility Overview, available to users with a Wix Business Manager account in English, with more languages rolling out soon. The AI Visibility Overview enables users to track citations, monitor AI query volume and traffic, and benchmark performance against competitors.

AI Visibility Overview

Wix continues its innovative and forward-thinking approach to adding AI-powered tools that provide users with real-world benefits that help get work done. Tracking AI visibility is an advanced capability that no other CMS offers.

The newly announced tool provides the following benefits for Generative Engine Optimization (GEO):

  • “Manage AI citations & visibility:
    Users can track how often their website is cited by AI platforms in response to relevant queries, as well as add, or remove questions to better reflect their business.
  • Monitor brand sentiment across LLMs:
    GEO empowers users to stay informed on how their brand is perceived by analyzing sentiment, perception, and positioning in AI-generated content.
  • Benchmark visibility and competitive context:
    Users can compare their AI visibility performance to competitors to gain a better understanding of how their visibility stacks up against industry peers, identify growth opportunities, and discover which other sources are being cited in similar contexts.
  • Measure AI-driven traffic & query volume:
    Users can see how much traffic is driven to their site from AI platforms, as well as how frequently people ask about their brand or services in these engines.”

AI Visibility

Business customers are increasingly searching with AI, and Wix’s new AI Visibility Overview arrives at the right time, enabling businesses to keep up with where customers are today and offering a competitive advantage.

Read more about the new tool here:

Wix Analytics: About the AI Visibility Overview

Featured Image by Shutterstock/Roman Samborskyi

Google Rolls Out Gemini 2.5 Pro & Deep Search For Paid Subscribers via @sejournal, @MattGSouthern

Google is rolling out two enhancements to AI Mode in Labs: Gemini 2.5 Pro and Deep Search.

These capabilities are exclusive to users subscribed to Google’s AI Pro and AI Ultra plans.

Gemini 2.5 Pro Now Available In AI Mode

Subscribers can now access Gemini 2.5 Pro from a dropdown menu within the AI Mode tab.

Screenshot from: blog.google/products/search/deep-search-business-calling-google-search, July 2025.

While the default model remains available for general queries, the 2.5 Pro model is designed to handle more complex prompts, particularly those involving reasoning, mathematics, or coding.

In an example shared by Google, the model walks through a multi-step physics problem involving gravitational fields, showing how it can solve equations and explain its reasoning with supporting links.

Screenshot from: blog.google/products/search/deep-search-business-calling-google-search, July 2025.

Deep Search Offers AI-Assisted Research

Today’s update also introduces Deep Search, which Google describes as a tool for conducting more comprehensive research.

The feature can generate detailed, citation-supported reports by processing multiple searches and aggregating information across sources.

Google stated in its announcement:

“Deep Search is especially useful for in-depth research related to your job, hobbies, or studies.”

Availability & Rollout

These features are currently limited to users in the United States who subscribe to Google’s AI Pro or AI Ultra plans and have opted into AI Mode through Google Labs.

Google hasn’t provided a firm timeline for when all eligible users will receive access, but rollout has begun.

The “experimental” label on Gemini 2.5 Pro suggests continued adjustments based on user testing.

What This Means

The launch of Deep Search and Gemini 2.5 Pro reflects Google’s broader effort to incorporate generative AI into the search experience.

For marketers, the shift raises questions about visibility in a time when AI-generated summaries and reports may increasingly shape user behavior.

If Deep Search becomes a commonly used tool for information gathering, the structure and credibility of content could play a larger role in discoverability.

Gemini 2.5 Pro’s focus on reasoning and code-related queries makes it relevant for more technical users. Google has positioned it as capable of helping with debugging, code generation, and explanation of advanced concepts, similar to tools like ChatGPT’s coding features or GitHub Copilot.

Its integration into Search may appeal to users who want technical assistance without leaving the browser environment.

Looking Ahead

The addition of these features behind a paywall continues Google’s movement toward monetizing AI capabilities through subscription services.

While billed as experimental, these updates may provide early insight into how the company envisions the future of AI in search: more automated, task-oriented, and user-specific.

Search professionals will want to monitor how these features evolve, as tools like Deep Search could become more widely adopted.