Google Quietly Signals NotebookLM Ignores Robots.txt via @sejournal, @martinibuster

Google has quietly updated its list of user-triggered fetchers with new documentation for Google NotebookLM. This seemingly minor change matters because it makes clear that Google NotebookLM will not obey robots.txt.

Google NotebookLM

NotebookLM is an AI research and writing tool that lets users add a web page URL; the tool then processes the content so they can ask a range of questions and generate summaries based on it.

Google’s tool can automatically create an interactive mind map that organizes topics from a website and extracts takeaways from it.

User-Triggered Fetchers Ignore Robots.txt

Google User-Triggered Fetchers are web agents that are triggered by users and by default ignore the robots.txt protocol.

According to Google’s User-Triggered Fetchers documentation:

“Because the fetch was requested by a user, these fetchers generally ignore robots.txt rules.”

Google-NotebookLM Ignores Robots.txt

The purpose of robots.txt is to give publishers control over bots that index web pages. But agents like the Google-NotebookLM fetcher aren’t indexing web content; they’re acting on behalf of users who are interacting with the website content through Google’s NotebookLM.

How To Block NotebookLM

Google uses the Google-NotebookLM user agent when extracting website content. So publishers who want to block that access can create rules that automatically block the user agent. For example, a simple solution for WordPress publishers is to use Wordfence to create a custom rule that blocks all website visitors using the Google-NotebookLM user agent.

Another way to do it is with .htaccess using the following rule:


RewriteEngine On
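# Return 403 Forbidden for any request whose User-Agent contains "Google-NotebookLM" (case-insensitive)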
RewriteCond %{HTTP_USER_AGENT} Google-NotebookLM [NC]
RewriteRule .* - [F,L]

Google Lighthouse 13 Launches With Insight-Based Audits via @sejournal, @MattGSouthern

Google has released Lighthouse 13 with a broad audit consolidation that aligns Lighthouse reports with Chrome DevTools’ newer insight model.

The update is available now via npm and Chrome Canary. It will roll into PageSpeed Insights within about a week and is slated for Chrome’s stable channel with version 143.

Google says the update doesn’t change how Lighthouse calculates performance scores. This update targets non-scored audits only.

Lighthouse 13 Details

Audit Consolidation

Lighthouse 13 replaces many legacy audits with “insights” that mirror DevTools.

Notable examples include:

  • CLS And Layout: layout-shifts becomes cls-culprits-insight for clearer identification of layout shift causes.
  • Server And Network: document-latency-insight consolidates redirects, server response time, and text compression checks.
  • Images: image-delivery-insight replaces modern formats, optimized images, responsive images, and efficient animated content audits.
  • LCP: Two insights break down Largest Contentful Paint issues: lcp-discovery-insight and lcp-phases-insight. For interaction work, see interaction-to-next-paint-insight (INP).
  • Third-Party: third-parties-insight replaces the older third-party summary to show external script impact.

Additional replacements address DOM size, duplicated JavaScript, font display, legacy JavaScript, HTTP/2 and modern HTTPS, network dependency trees, render-blocking, caching, and viewport configuration.

Audits Removed Without Replacements

Several audits were removed because they are outdated, inactionable, or low value in modern environments; others were removed because they were costly to run.

Removed audits include:

  • first-meaningful-paint
  • font-size
  • offscreen-images
  • preload-fonts
  • uses-rel-preload
  • no-document-write
  • uses-passive-event-listeners
  • third-party-facades

Minor Differences From Earlier Previews

Google kept non-composited-animations and unsized-images as separate diagnostics to help locate issues that don’t directly cause CLS.

Google also removed font-size and preload-fonts even though those were not in the initial removal list.

Why This Matters

If you rely on Lighthouse for client reporting, you will see fewer line items and more consolidated insights that map to DevTools.

Your scores shouldn’t change just by upgrading, but any automation that keys off audit IDs will need to track the new insight identifiers.

For SEO context, the removal of the font-size audit reflects Google’s position that this is not a current SEO signal, even though legibility remains a UX consideration.

Looking Ahead

Expect Lighthouse and DevTools to stay aligned on the same insight model.

For reporting, consider mapping old audit IDs to the new insights now to avoid broken dashboards when PSI updates.
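
If your dashboards or CI checks key off audit IDs, a small translation layer can smooth the transition. The sketch below is illustrative only: the mappings reflect the replacements described above, but exact identifiers can vary by Lighthouse version, so verify them against the JSON your Lighthouse or PSI version actually emits.

# Illustrative mapping from legacy Lighthouse audit IDs to Lighthouse 13 insight IDs.
# Verify the exact IDs against the report JSON produced by your Lighthouse/PSI version.
LEGACY_TO_INSIGHT = {
    "layout-shifts": "cls-culprits-insight",
    "redirects": "document-latency-insight",
    "server-response-time": "document-latency-insight",
    "uses-text-compression": "document-latency-insight",
    "modern-image-formats": "image-delivery-insight",
    "uses-optimized-images": "image-delivery-insight",
    "uses-responsive-images": "image-delivery-insight",
    "efficient-animated-content": "image-delivery-insight",
    "third-party-summary": "third-parties-insight",
}

def translate_audit_ids(tracked_ids):
    # Return the audit IDs a dashboard should track after upgrading to Lighthouse 13;
    # IDs without a known replacement are passed through unchanged.
    return sorted({LEGACY_TO_INSIGHT.get(audit_id, audit_id) for audit_id in tracked_ids})

# Example: old dashboard keys become the new consolidated insight keys.
print(translate_audit_ids(["layout-shifts", "redirects", "uses-optimized-images"]))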


Featured Image: FotoField/Shutterstock

Multiple WordPress Vulnerabilities Affect 20,000+ Travel Sites via @sejournal, @martinibuster

Two critical vulnerabilities were identified in WP Travel Engine, a travel booking plugin for WordPress that’s installed on more than 20,000 websites. Both vulnerabilities enable unauthenticated attackers to obtain virtually complete control of a website and are rated 9.8 on the CVSS scale, very close to the highest possible score for critical flaws.

WP Travel Engine

The WP Travel Engine is a popular WordPress plugin used by travel agencies to enable users to plan itineraries, select from different packages, and book any kind of vacation.

Improper Path Restriction (Path Traversal)

The first vulnerability comes from improper file path restriction in the plugin’s set_user_profile_image function.

Because the plugin fails to validate file paths, unauthenticated attackers can rename or delete files anywhere on the server. Deleting a file such as wp-config.php disables the site’s configuration and can open the door to remote code execution.
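
To make this class of bug concrete, here is a generic sketch (in Python, not the plugin’s actual PHP) of the kind of path validation whose absence enables traversal: resolve the user-supplied path and refuse anything that escapes the allowed directory. The directory and function names below are hypothetical.

import os

UPLOAD_DIR = "/var/www/site/wp-content/uploads"  # hypothetical allowed directory

def safe_resolve(user_supplied_path):
    # Resolve the user-supplied path and refuse anything outside the allowed directory.
    resolved = os.path.realpath(os.path.join(UPLOAD_DIR, user_supplied_path))
    if not resolved.startswith(UPLOAD_DIR + os.sep):
        raise ValueError("Path escapes the allowed upload directory")
    return resolved

# A value like "../../wp-config.php" resolves outside UPLOAD_DIR and is rejected:
# safe_resolve("../../wp-config.php")  -> raises ValueError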

Local File Inclusion via Mode Parameter

The second vulnerability comes from improper control of the mode parameter, which lets unauthenticated users include and run arbitrary .php files.

This enables an attacker to run malicious code and access sensitive data. Like the first flaw, it has a CVSS score of 9.8 and is rated as critical because it allows unauthenticated code execution that can expose or damage site data.
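
Again as a generic sketch rather than the plugin’s code: include-style parameters such as mode are typically constrained with an allow-list, so only known template names can ever be loaded. The mode values and template paths below are hypothetical.

ALLOWED_MODES = {"grid", "list", "calendar"}  # hypothetical mode values

def resolve_template(mode):
    # Map an allow-listed mode to a fixed template path; reject anything else.
    if mode not in ALLOWED_MODES:
        raise ValueError(f"Unsupported mode: {mode!r}")
    return f"templates/{mode}.php"

# resolve_template("../../../evil") raises ValueError instead of including arbitrary PHP.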

Recommendation

Both vulnerabilities affect versions up to and including 6.6.7. Site owners using WP Travel Engine should update the plugin to the latest version as soon as possible. Both vulnerabilities can be exploited without authentication, so prompt updating is recommended to prevent unauthorized access.

Featured Image by Shutterstock/Hybrid_Graphics

YouTube Lets Some Terminated Creators Request A New Channel via @sejournal, @MattGSouthern

YouTube is piloting a policy change that allows some previously terminated creators to request a new channel after a one-year waiting period.

The program will roll out to eligible creators over the coming weeks and months, according to YouTube’s official announcement.

How The Second Chances Pilot Works

Eligible creators will start seeing an option in YouTube Studio (desktop) to request a new channel when signed in with their previously terminated channel credentials.

The YouTube Team wrote:

“We know many terminated creators deserve a second chance… we’ve had our share of second chances to get things right with our community too.”

Creators become eligible to apply one year after termination. During that year, they can still appeal the original decision if they believe YouTube made a mistake.

What A New Channel Includes

YouTube frames this as a fresh start rather than a restoration of the original channel. Creators can rebuild their community and may re-upload prior videos that comply with current Community Guidelines.

Once the new channel meets the YouTube Partner Program criteria, creators can apply for monetization like any other channel.

Eligibility & Exclusions

When reviewing requests, YouTube says it will consider factors such as whether violations were severe or persistent and whether on- or off-platform activity harmed, or could continue to harm, the YouTube community.

YouTube cites channels that endangered kids’ safety as an example that may be disqualified.

The pilot does not apply to:

  • Creators terminated for copyright infringement
  • Creators who violated Creator Responsibility policies
  • Creators who deleted their YouTube channel or Google account

Appeals Versus New Channel Requests

Appeals remain available for one year after termination. YouTube says appeals are evaluated based on how policies apply at the time of the appeal.

Successful appeals reinstate the original channel with its content and subscribers. If the appeal is unsuccessful, the creator may request a new channel after one year.

Why This Matters For Marketers

The pilot softens the finality of termination without fully removing the consequences.

Creators may re-upload compliant videos, but they’ll still need to rebuild from scratch on new channels.

Looking Ahead

YouTube says it will monitor the pilot and adjust as it learns from early applications.

For more details, see the video announcement from YouTube’s Creator Liaison.

YouTube Launches Brand Pulse Report to Measure Full Brand Impact via @sejournal, @brookeosmundson

For years, marketers have struggled to measure the full picture of how their brand shows up on YouTube.

Paid campaigns have their own dashboards. Creator collaborations usually live in separate, manual spreadsheets. Organic and user-generated content rarely make it into the same conversation.

YouTube’s new Brand Pulse Report, announced today, looks to change that. It aims to offer brands a unified view of how their presence is represented and performing across every corner of the platform.

Read on to understand more about the report and how to use it to your advantage.

A Closer Look at the Brand Pulse Report

YouTube describes Brand Pulse as a new, AI-powered measurement solution that detects and quantifies a brand’s presence across the platform. It looks not just at paid placements, but also at creator videos, organic uploads, and even user-generated content.

It uses what YouTube calls multi-modal AI, meaning it analyzes videos across multiple dimensions:

  • Audio: detecting spoken mentions of a brand
  • Visuals: identifying logos, packaging, or even product shots
  • Text: reading brand mentions in titles, captions, or descriptions

This allows the tool to recognize where a brand appears, intentionally or organically, and then tie those signals back to viewer engagement metrics like “Total Unique Viewers” and “Share of Watch Time”.

For brands, that means visibility into where and how they show up across YouTube, even in content they didn’t create or sponsor directly.

Why the Brand Pulse Report is So Notable

Marketers have long been asking for better ways to measure YouTube’s brand impact beyond paid media.

Brand Pulse answers that request by connecting the dots between paid, organic, and creator-driven exposure. It gives the respective teams a more complete picture of influence.

YouTube also notes that the tool will show how brand exposure on the platform drives “Search Lift”, allowing advertisers to see how YouTube content contributes to increases in branded search queries. This connection between upper-funnel video exposure and mid-funnel intent is one of the most interesting aspects of the rollout.

As Google Ads Liaison Ginny Marvin explained on LinkedIn, the Brand Pulse Report is “helping brands finally connect the dots,” showing how paid and organic videos together influence real behaviors, not just views or likes.

YouTube’s move here mirrors a broader industry shift toward holistic measurement: tying together paid and organic activity to give brands a single narrative of influence.

Similar efforts are underway in Connected TV, social, and retail media, where advertisers increasingly want to understand how their brand performs in context, not just in isolation.

For YouTube, Brand Pulse also reinforces its positioning as more than just a performance or creator platform. It’s a brand-building ecosystem: one where paid, creator, and user content coexist in ways that shape real consumer behavior.

What Does This Mean For Brand and Media Teams?

For advertisers, this report could help solve one of the most persistent blind spots in video marketing: the inability to quantify the ripple effect of brand exposure.

Historically, a creator video might boost product awareness, a pre-roll ad might reinforce it, and organic search might capture it.

But, those signals lived in isolation.

Brand Pulse promises to bring those touchpoints together under one lens.

This unified visibility could help teams by:

  • Highlighting how paid campaigns amplify creator and organic reach
  • Revealing where brand mentions are naturally gaining traction
  • Benchmarking visibility against competitors within the same category
  • Informing where future collaborations or ad placements could drive incremental reach

For many teams, it may also reshape how budgets are allocated.

For example, if the data consistently shows that paid YouTube campaigns drive organic or creator-based lift, it strengthens the case for reinvesting more heavily at the brand-building stage. Previously, teams often relied solely on performance metrics like conversions or click-through rates, which aren’t the main goal for every YouTube campaign.

Additionally, if the Brand Pulse Report shows how well channels perform together, it may strengthen the case for continued investment in all of them by signaling that cutting one channel could indirectly hurt the others.

Current Limitations and Questions to Ponder

The Brand Pulse Report is currently available only to select advertisers, so it’s still in its early days. And while the vision is ambitious, several questions may be top of mind:

  • How accurate is its multi-modal AI? Will it correctly recognize a brand when it’s partially visible, mispronounced, or used in a negative context?
  • Are there any thresholds for brands to reach? For example, how long must a logo or mention appear for it to count as meaningful exposure?
  • Is there risk for attribution overlap? If a viewer sees both a paid ad and an organic mention, how will Brand Pulse avoid double-counting influence?

Marketers should also remain cautious about assuming correlation equals causation. While a lift in search volume or engagement may align with YouTube exposure, controlled testing will still be necessary to validate true impact.

A Move Towards Holistic Measurement

YouTube’s Brand Pulse Report represents a meaningful step toward closing one of the biggest gaps in digital measurement: connecting what people see with how they search, engage, and recall brands later on.

If successful, it could give marketers a truer sense of how awareness efforts on YouTube translate into tangible brand outcomes.

Still, adoption will depend on data accuracy and usability. The potential is significant, but the real proof will come from how well the report balances AI ambition with real-world reliability.

For now, Brand Pulse signals where measurement is headed: beyond impressions and clicks, toward understanding the total presence of a brand across the YouTube ecosystem.

WP Engine Vs Automattic & Mullenweg Is Back In Play via @sejournal, @martinibuster

WP Engine filed a Second Amended Complaint against Automattic and Matt Mullenweg in response to the September 2025 court order that dismissed several counts but gave WP Engine an opportunity to amend and fix issues in its earlier filing. Although Mullenweg blogged last month that the ruling was a “significant milestone,” that’s somewhat of an overstatement because the court had, in fact, dismissed the counts related to antitrust and monopolization with leave to amend, allowing WP Engine to amend and refile its complaint, which it has now done.

WP Engine Versus Automattic Is Far From Over

In last month’s court order, two claims were dismissed outright because of technical issues, not because they lacked merit.

Two Claims That Were Dismissed

  1. Count 4, Attempted Extortion: WP Engine’s lawyers cited a section of the California Penal Code for Attempted Extortion. The Penal Code is criminal law intended for use by prosecutors and cannot serve as the basis for a civil claim.
  2. Count 16, Trademark Misuse, was also dismissed on the technical ground that trademark misuse can only be raised as a defense.

The remaining counts that were dismissed last month were dismissed with leave to amend, meaning WP Engine could correct the identified flaws and refile. WP Engine’s amended complaint shows that Automattic and Matt Mullenweg still have to respond to WP Engine’s claims and that the lawsuit is far from over.

Six Counts Refiled

WP Engine refiled six counts to cure the flaws the judge identified in the September 2025 court order, including its Computer Fraud and Abuse Act claim (Count 3).

  1. Count 3: Computer Fraud and Abuse Act (CFAA)
  2. Count 12: Attempted Monopolization (Sherman Act)
  3. Count 13: Illegal Tying (Sherman Act)
  4. Count 14: Illegal Tying (Cartwright Act)
  5. Count 15: Lanham Act Unfair Competition
  6. Count 16: Lanham Act False Advertising

Note: In the amended complaint, Count 16 is newly numbered; the previous Count 16 (Trademark Misuse) was dismissed without leave to amend.

How The Second Amended Complaint Fixes Issues

The refiled complaint adds further allegations and examples to address the shortcomings identified by the judge in the previous ruling. One major change is the inclusion of clearer market definitions and more detailed allegations of monopoly power.

Clearer Market Definition

The September 2025 order found that WP Engine’s earlier complaint did not adequately define the relevant markets, and the judge gave WP Engine an opportunity to amend. The amended complaint dedicates about 27 pages to defining and describing multiple relevant markets.

WP Engine’s filing now identifies four markets:

  1. Web Content Management Systems (CMS) Market: Encompassing both open-source and proprietary CMS platforms for website creation and management, with alleged monopoly power concentrated in the WordPress ecosystem.
  2. WordPress Web Hosting Services Market: Consisting of hosting providers that specialize in WordPress websites, where Automattic is alleged to influence competition through its control of WordPress.org and trademark enforcement.
  3. WordPress Plugin Distribution Market: Focused on the distribution of plugins through the WordPress.org repository, which WP Engine alleges Automattic controls as an essential and exclusive channel for visibility and access.
  4. WordPress Custom Field Plugin Market: A narrower segment centered on Advanced Custom Fields (ACF) and similar plugins that provide custom field functionality, where WP Engine claims Automattic’s actions directly suppressed competition.

By defining these markets in greater detail over 27 pages, WP Engine addresses the court’s earlier finding that its market definitions were inadequately supported and insufficiently specific.

New Allegations Of Monopoly Power

The September 2025 court order found that WP Engine had not plausibly alleged Automattic’s monopoly power or exclusionary conduct, and allowed WP Engine to amend its complaint.

The amended filing adds detailed assertions intended to show Automattic’s dominance:

  • Automattic allegedly controls access to the official WordPress plugin and theme repositories, which are essential for visibility and functionality within the WordPress ecosystem.
  • Matt Mullenweg’s dual roles as Automattic’s CEO and his control over WordPress.org’s operations are alleged to enable coordinated market exclusion.
  • The complaint cites WordPress’s scale, powering more than 40 percent of global websites, and argues that Automattic exercises significant influence over this ecosystem through its control of WordPress.org and related trademarks.

These new assertions are meant to show that Automattic’s influence over WordPress.org translates into measurable market power, addressing the court’s finding that WP Engine had not yet made that connection.

Expanded Exclusionary Conduct Examples

The court found that WP Engine framed Automattic’s control of WordPress.org and the WordPress trademarks too vaguely to plausibly show exclusionary conduct or resulting antitrust injury.

The amended complaint addresses this by detailing how Automattic and Matt Mullenweg allegedly used threats and actions involving WordPress.org access and distribution to:

  • Block or restrict WP Engine’s access to WordPress.org resources and community channels.
  • Impose conditions on access to WordPress trademarks and resources through alleged threats and leverage.
  • Pressure plugin developers and partners not to collaborate or integrate with WP Engine’s products.
  • Establish an alleged de facto tying arrangement, linking participation in the WordPress.org ecosystem to compliance with Automattic’s control over governance and distribution.

Together, these examples illustrate how WP Engine is attempting to turn previously vague claims of control into specific allegations of exclusionary conduct.

Abundance Of Evidence

Mullenweg sounded upbeat in his response to the September 2025 ruling:

“Just got word that the court dismissed several of WP Engine and Silver Lake’s most serious claims — antitrust, monopolization, and extortion have been knocked out!”

But WP Engine’s Second Amended Complaint makes it clear that those “serious claims” were dismissed with leave to amend, have since been refiled, and are not yet knocked out.

The amended complaint is 175 pages long, perhaps reflecting the comprehensive scope necessary to address the issues the court identified in the September 2025 order. None of this means WP Engine is winning; it simply means the ball is back in play. That outcome directly contradicts Mullenweg’s earlier claim that the antitrust, monopolization, and extortion counts had been “knocked out.”

Featured Image by Shutterstock/Nithid

Microsoft Explains How To Optimize Content For AI Search Visibility via @sejournal, @MattGSouthern

Microsoft has shared guidance on structuring content to increase its likelihood of being selected for AI-generated answers across Bing-powered surfaces.

Much of the advice reiterates established SEO and UX practices such as clear titles and headings, structured layout, and appropriate schema.

The new emphasis is on how content is selected for answers. Microsoft stresses there is “no secret sauce” that guarantees selection, but says structure, clarity, and “snippability” improve eligibility.

As Microsoft puts it:

“In traditional search, visibility meant appearing in a ranked list of links. In AI search, ranking still happens, but it’s less about ordering entire pages and more about which pieces of content earn a place in the final answer.”

Key Differences In AI Search

AI assistants break down pages into manageable parts, carefully assessing each for authority and relevance, then craft responses by blending information from multiple sources.

Microsoft says fundamentals such as crawlability, metadata, internal links, and backlinks still matter, but they are the starting point. Selection increasingly depends on how well-structured and clear each section is.

Best Practices Microsoft Recommends

To help improve the chances of AI selecting your content, Microsoft recommends these best practices:

  • Align the title, meta description, and H1 to clearly communicate the page purpose.
  • Use descriptive H2/H3 headings that each cover one idea per section.
  • Write self-contained Q&A blocks and concise paragraphs that can be quoted on their own.
  • Use short lists, steps, and comparison tables when they improve clarity (without overusing them).
  • Add JSON-LD schema that matches the page type (see the sketch after this list).
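
As a hedged illustration of that last point, the sketch below builds FAQPage JSON-LD from self-contained Q&A blocks. Microsoft’s guidance doesn’t prescribe a specific schema type or tooling; FAQPage is just one example of markup that matches a Q&A-style page, and the questions, answers, and variable names here are placeholders.

import json

# Hypothetical self-contained Q&A blocks pulled from a page (placeholder content).
qa_blocks = [
    ("What is AI search visibility?",
     "How often pieces of your content are selected for AI-generated answers."),
    ("Does structure affect selection?",
     "Clear headings, concise sections, and matching schema improve eligibility."),
]

# Build schema.org FAQPage JSON-LD that matches the page type.
faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": question,
            "acceptedAnswer": {"@type": "Answer", "text": answer},
        }
        for question, answer in qa_blocks
    ],
}

# Emit the payload for a <script type="application/ld+json"> tag in the page head.
print(json.dumps(faq_schema, indent=2))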

What To Avoid

Microsoft recommends avoiding these practices to improve the chances of your content appearing in AI search results:

  • Writing long walls of text that blur ideas together.
  • Hiding key content in tabs, accordions, or other elements that may not render.
  • Relying on PDFs for core information.
  • Putting important information only in images without alt text or HTML alternatives.
  • Making vague claims without providing specific details.
  • Overusing decorative symbols or long punctuation strings; keep punctuation simple.

Why This Matters

The key takeaway is that structure helps selection. When your titles, headings, and schema are aligned, Copilot and other Bing-powered tools can extract a complete idea from your page.

This connects traditional SEO principles to how AI assistants generate responses. For marketers, it’s more of an operational checklist than a new strategy.

Looking Ahead

Microsoft acknowledges there’s no guaranteed way to ensure inclusion in AI responses, but suggests that these practices can make content more accessible for its AI systems.


Featured Image: gguy/Shutterstock

Google AdSense Replaces Ad Networks With Authorized Buyers via @sejournal, @MattGSouthern

Google is updating its demand source management by replacing the Ad Networks blocking control with a new Authorized Buyers control in AdSense.

This change affects how you control which demand sources can bid on your inventory. The transition begins on November 6. Existing blocks will remain in place, and new authorized buyers will be enabled by default.

What’s Changing

Google is discontinuing the Ad Networks blocking control within Brand Safety and introducing a new Authorized Buyers control.

As part of this update, the “Automatically allow new Google-certified ad networks” option is being eliminated. Instead, new authorized buyers will be permitted by default.

The Authorized Buyers list excludes inactive ad networks, test ad networks, and Display & Video 360 (DV360) networks.

Google states that the new page allows you to permit or block authorized buyers and offers improved visibility into parent–child relationships among buyers. However, DV360 accounts are not managed within the new control.

Timeline & Transition

Before the change launches, you can preview the view-only Authorized Buyers page in AdSense by navigating to Brand Safety → Content → Blocking controls → Authorized Buyers.

These controls will be active after November 6. Any modifications made to Ad Networks prior to launch will be saved and reflected in the Authorized Buyers section.

Once the change is live, control access by navigating to Brand Safety → Content → Blocking controls → Authorized Buyers. Here, you can permit or restrict specific authorized buyers and utilize search or filters to locate particular entries.

Google’s detailed “Allow and block authorized buyers” guide illustrates this process.

Ad Review Center & DV360

You’ll no longer be managing authorized buyers through the Ad Review Center. You can still allow or block Google ad accounts in the Advertiser section, including DV360 accounts, which stay outside the new Authorized Buyers system.

Looking Ahead

This update changes the default setting to permit new buyers, so publishers with tighter configurations may need a regular review process to catch and block unwanted buyers.

Preview the interface now to familiarize your team with control locations, then schedule a post-launch review to verify your existing blocks and any new entries. Maintain DV360 workflows in the Ad Review Center, and utilize the parent–child view to see how related buyers influence bidding and revenue.

Search Atlas Announces New Features For Agencies via @sejournal, @martinibuster

Search Atlas held an event last week to showcase new capabilities and improvements to its SEO platform that make it easier for digital marketers to scale SEO and take on more clients.

The new features enable marketers to more easily handle on-page and off-page SEO and paid search, influence and track LLM visibility, and scale Google Business Profile management, and that’s just a sample of the new functionality coming to the platform.

Auto PPC Retargeting

Search Atlas introduced a new retargeting feature in Otto PPC. This new feature is designed for agencies and advertisers that are managing paid media. It simplifies campaign setup with a quick-start wizard for retargeting site visitors, which they claim can be launched in under 60 seconds.

Manick Bhan, founder of Search Atlas, explained:

“The hardest thing about taking paid media business from a client is doing it justice, doing a good job, right? Because every time they get a click, they’re paying for it. The best way that you can show a client ROI on paid media is through retargeting. Run a retargeting campaign, retargeting the traffic that they already have on their website.

We wanted to be able to make this easy for you, so all you have to do is enable it inside Otto PPC, and you’re able to run retargeting campaigns now. So we have a wizard set up for you — just a couple clicks and you can launch a retargeting campaign in less than 60 seconds. It’s that easy.”

GBP Galactic

Search Atlas announced a feature for digital marketers who handle Google Business Profiles for clients. The GBP Galactic feature now has Service Area Business (SAB) support. GBP Galactic offers integration with social media auto-posting to Facebook and Instagram, with plans to add more social networks soon.

Bhan explained the social network autoposting:

“We’ve learned the LLMs they want to see your information not just on your website and GBP profile, they want to see your data in the social media platforms. So what we can do now is, one time, build our GBP posts, and publish to all social networks, which will increase your visibility in the LLMs. And instead of having to use third-party tools to do this, it will be completely integrated.”

Bhan also shared about their citation network:

“We also added support for service area businesses in our citations product, so now you can even build aggregator network citations and put yourself into the aggregator networks for your service businesses… Because normally these aggregator networks, they want an address. We figured out how to do it so we can get you in without one. Pretty cool.

…ChatGPT, Claude, all the LLMs pay for the data from all the aggregator networks. So if you want to put your local business into the aggregators, as well as into all the websites, the aggregator networks are a shortcut to being able to do that and upload directly to ChatGPT.”

LLM Visibility

Another useful feature is LLM visibility tracking and sentiment analysis. LLM visibility is now measurable directly in Search Atlas, which tracks brand presence across ChatGPT, Claude, and other LLMs and can identify visibility trends beyond Google Search.

Expanded Press Release Network

Bhan announced that Signal Genesys, a press release company they acquired last year, has expanded its distribution with a financial news network and a local news media network.

Bhan commented:

“The financial news network costs a whopping $10. And then the news media network costs about $20. So these are really cost-effective, especially for agencies. If you are working with clients and you need to keep prices low for yourselves, there’s a lot of margin in there for you.

And these networks in particular we found were indexed very well in ChatGPT.”

On-Page SEO

An interesting feature launched in their Otto product is a module called Domain Knowledge Network, which helps users build topical relevance through a semantic interface: speak instructions to it, and it will analyze the brand and suggest a content topic structure.

Revamped WordPress Plugin

Their WordPress plugin has been overhauled to make it more user-friendly. It now includes one-click installation to connect WordPress directly to Search Atlas, two-way synchronization that keeps Otto data and WordPress in sync in real time, and auto-publishing that enables SEO fixes generated in Otto to be deployed directly into WordPress.

Universal CMS Integration

Search Atlas is aiming to become CMS-agnostic, able to integrate with any website regardless of the CMS for publishing blog posts and landing pages in one click through their Content Genius feature. Right now Search Atlas can work with Drupal, HubSpot, Magento, Wix, and WordPress. They are also testing to integrate with Joomla, Shopify, and Webflow. Soon they’ll be able to integrate with ClickFunnels, Contentful, Duda, Ghost, and Salesforce.

Near Future: Otto Agent

Otto Agent represents the future of Search Atlas’s agentic revolution, replacing traditional UI-driven workflows with natural-language commands. It’s currently available as a beta program. Users can speak to the platform (via text or voice) to perform SEO actions directly. Otto Agent can execute end-to-end actions: site audits, fixes, title/meta/image optimization, GBP posts, and content generation.

Spending the day listening to their presentations made it evident that Otto Agent typifies Search Atlas’s approach to developing a genuinely useful SEO platform. Having come from an SEO agency background, they understand what agencies need and aren’t waiting for competitors to do things first; they’re just moving forward with features they feel agencies will find useful.

Otto Agent is an example of that forward-looking approach because it’s built on the idea that managing SEO will become agentic, conversational, and autonomous.

I didn’t know that much about Search Atlas before attending the event but now I have a better understanding of why so many agencies embrace Search Atlas.

Featured Image by Shutterstock/Digitala World

Are AI Tools Eliminating Jobs? Yale Study Says No via @sejournal, @MattGSouthern

Marketing professionals rank among the most vulnerable to AI disruption, with Indeed recently placing marketing fourth for AI exposure.

But employment data tells a different story.

New research from Yale University’s Budget Lab finds “the broader labor market has not experienced a discernible disruption since ChatGPT’s release 33 months ago,” undercutting fears of economy-wide job losses.

The gap between predicted risk and actual impact suggests “exposure” scores may not predict job displacement.

Yale notes the two measures it analyzes, OpenAI’s exposure metric and Anthropic’s usage, capture different things and correlate only weakly in practice.

Exposure Scores Don’t Match Reality

Yale researchers examined how the occupational mix changed since November 2022, comparing it to past tech shifts like computers and the early internet.

The occupational mix measures the distribution of workers across different jobs. It changes when workers switch careers, lose jobs, or enter new fields.

Jobs are changing only about one percentage point faster than during early internet adoption, according to the research:

“The recent changes appear to be on a path only about 1 percentage point higher than it was at the turn of the 21st century with the adoption of the internet.”

Sectors with high AI exposure, including Information, Financial Activities, and Professional and Business Services, show larger shifts, but “the data again suggests that the trends within these industries started before the release of ChatGPT.”

Theory vs. Practice: The Usage Gap

The research compares OpenAI’s theoretical “exposure” data with Anthropic’s real usage from Claude and finds limited alignment.

Actual usage is concentrated: “It is clear that the usage is heavily dominated by workers in Computer and Mathematical occupations,” with Arts/Design/Media also overrepresented. This illustrates why exposure scores don’t map neatly to adoption.

Employment Data Shows Stability

The team tracked unemployed workers by duration to look for signs of AI displacement. They didn’t find them.

Unemployed workers, regardless of duration, “were in occupations where about 25 to 35 percent of tasks, on average, could be performed by generative AI,” with “no clear upward trend.”

Similarly, when looking at occupation-level AI “automation/augmentation” usage, the authors summarize that these measures “show no sign of being related to changes in employment or unemployment.”

Historical Disruption Timeline

Past disruptions took years, not months. As Yale puts it:

“Historically, widespread technological disruption in workplaces tends to occur over decades, rather than months or years. Computers didn’t become commonplace in offices until nearly a decade after their release to the public, and it took even longer for them to transform office workflows.”

The researchers also stress their work is not predictive and will be updated monthly:

“Our analysis is not predictive of the future. We plan to continue monitoring these trends monthly to assess how AI’s job impacts might change.”

What This Means

A measured approach beats panic. Both Indeed and Yale emphasize that realized outcomes depend on adoption, workflow design, and reskilling, not raw exposure alone.

Early-career effects are worth watching: Yale notes “nascent evidence” of possible impacts for early-career workers, but cautions that data are limited and conclusions are premature.

Looking Ahead

Organizations should integrate AI deliberately rather than restructure reactively.

Until comprehensive, cross-platform usage data are available, employment trends remain the most reliable indicator. So far, they point to stability over transformation.