Google Users Warned Of Surging Malvertising Campaigns via @sejournal, @MattGSouthern

Cybersecurity researchers are warning of a troubling rise in “malvertising”: the use of online ads to deploy malware, phishing scams, and other attacks.

A report from Malwarebytes found that malvertising incidents in the U.S. surged 42% last fall.

The prime target? Unsuspecting users conducting searches on Google.

Jérôme Segura, senior director of research at Malwarebytes, warns:

“What I’m seeing is just the tip of the iceberg. Hackers are getting smarter and the ads are often so realistic that it’s easy to be duped.”

Poisoned Paid Promotions

The schemes frequently involve cybercriminals purchasing legitimate-looking sponsored ad listings that appear at the top of Google search results.

Clicking these can lead to drive-by malware downloads or credential phishing pages spoofing major brands like Lowe’s and Slack.

Segura said of one recent phishing attack targeting the Lowe’s employee portal:

“You see the brand, even the official logo, and for you it’s enough to think it’s real.”

Undermining User Trust

Part of what makes these malvertising attacks so insidious is that they hijack and undermine user trust in Google as an authoritative search source.

Stuart Madnick, an information technology professor at MIT, notes:

“You see something appearing on a Google search, you kind of assume it is something valid.”

The threats don’t end with poisoned promotions, either. Malicious ads can also sneak through on trusted websites.

Protecting Against Malvertising: For Users

Experts advise several precautions to reduce malvertising risk, including:

  • Carefully vetting search ads before taking any action
  • Keeping device operating systems and browsers updated
  • Using ad-blocking browser extensions
  • Reporting suspicious ads to Google for investigation

Madnick cautioned:

“You should assume that this could happen to you no matter how careful you are.”

Staying vigilant against malvertising exploits will become more critical as cyber attackers evolve their deceptive tactics.

Protecting Against Malvertising: For Websites

While individual users must stay vigilant, websites are also responsible for implementing safeguards to prevent malicious ads from being displayed on their platforms.

Some best practices include:

Ad Verification Services

Many websites rely on third-party ad verification services and malware scanning tools to monitor the ads being served and block those identified as malicious before reaching end users.

Whitelisting Ad Sources

Rather than accepting ads through open real-time bidding advertising exchanges, websites can whitelist only thoroughly vetted and trusted ad networks and sources.
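As a minimal sketch of the allowlisting idea (the ad network domains and the helper function below are hypothetical, not tied to any particular ad platform), a site could reject any creative served from a domain it hasn’t vetted:

```python
from urllib.parse import urlparse

# Hypothetical list of vetted ad networks a site has chosen to trust.
TRUSTED_AD_DOMAINS = {
    "ads.trusted-network-one.example",
    "cdn.trusted-network-two.example",
}

def is_allowed_ad_source(ad_url: str) -> bool:
    """Return True only if the ad is served from an allowlisted domain."""
    host = urlparse(ad_url).hostname or ""
    return host in TRUSTED_AD_DOMAINS

# Creatives from unknown exchanges are rejected before reaching visitors.
print(is_allowed_ad_source("https://ads.trusted-network-one.example/banner.js"))  # True
print(is_allowed_ad_source("https://sketchy-exchange.example/payload.js"))        # False
```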

Review Process

For an added layer of protection, websites can implement a human review process on top of automated malware scanning to manually analyze ads before serving them to visitors.

Continuous Monitoring

Malvertisers constantly update their techniques, so websites must monitor their ad traffic data for anomalies or suspicious patterns that could indicate a malicious campaign.

By implementing multi-layered ad security measures, websites can avoid unknowingly participating in malvertising schemes that put their visitors at risk while protecting their brand reputation.


Featured Image: Bits And Splits/Shutterstock

Why Google Indexes Blocked Web Pages via @sejournal, @martinibuster

Google’s John Mueller answered a question about why Google indexes pages that are disallowed from crawling by robots.txt and why it’s safe to ignore the related Search Console reports about those crawls.

Bot Traffic To Query Parameter URLs

The person asking the question documented that bots were creating links to non-existent query parameter URLs (?q=xyz) pointing to pages that have noindex meta tags and are also blocked in robots.txt. What prompted the question is that Google is crawling the links to those pages, getting blocked by robots.txt (without seeing the noindex robots meta tag), and then reporting them in Google Search Console as “Indexed, though blocked by robots.txt.”

The person asked the following question:

“But here’s the big question: why would Google index pages when they can’t even see the content? What’s the advantage in that?”

Google’s John Mueller confirmed that if Google can’t crawl the page, it can’t see the noindex meta tag. He also made an interesting point about the site: search operator, advising people to ignore its results because “average” users won’t see them.

He wrote:

“Yes, you’re correct: if we can’t crawl the page, we can’t see the noindex. That said, if we can’t crawl the pages, then there’s not a lot for us to index. So while you might see some of those pages with a targeted site:-query, the average user won’t see them, so I wouldn’t fuss over it. Noindex is also fine (without robots.txt disallow), it just means the URLs will end up being crawled (and end up in the Search Console report for crawled/not indexed — neither of these statuses cause issues to the rest of the site). The important part is that you don’t make them crawlable + indexable.”
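To illustrate the mechanics Mueller describes, here is a minimal Python sketch (the robots.txt rules and example.com URL are hypothetical) showing why a robots.txt disallow prevents a crawler from ever reading a noindex tag on the page:

```python
from urllib import robotparser

# Hypothetical robots.txt for an example site.
ROBOTS_TXT = """
User-agent: *
Disallow: /internal-search
"""

parser = robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

url = "https://www.example.com/internal-search?q=xyz"

if not parser.can_fetch("Googlebot", url):
    # The crawler never requests the page, so a noindex meta tag in its HTML
    # is never seen. The URL can still be indexed from links alone, which is
    # what the "Indexed, though blocked by robots.txt" report describes.
    print(f"Blocked by robots.txt; noindex never read: {url}")
else:
    print(f"Crawlable; a noindex meta tag (if present) would be honored: {url}")
```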

Takeaways:

1. Mueller’s answer confirms the limitations of using the site: advanced search operator for diagnostic purposes. One of those reasons is that it’s not connected to the regular search index; it’s a separate thing altogether.

Google’s John Mueller commented on the site search operator in 2021:

“The short answer is that a site: query is not meant to be complete, nor used for diagnostics purposes.

A site query is a specific kind of search that limits the results to a certain website. It’s basically just the word site, a colon, and then the website’s domain.

This query limits the results to a specific website. It’s not meant to be a comprehensive collection of all the pages from that website.”

2. A noindex tag without a robots.txt disallow is fine for these kinds of situations, where a bot is linking to non-existent pages that are getting discovered by Googlebot.

3. URLs with the noindex tag will generate a “crawled/not indexed” entry in Search Console, and those entries won’t have a negative effect on the rest of the website.

Read the question and answer on LinkedIn:

Why would Google index pages when they can’t even see the content?

Featured Image by Shutterstock/Krakenimages.com

Google May Unify Schema Markup & Merchant Center Feed Data via @sejournal, @MattGSouthern

Google revealed it’s working to bridge the gap between two key product data sources that power its shopping results – website markup using schema.org structured data and product feeds submitted via Google Merchant Center.

The initiative, mentioned during a recent “Search Off The Record” podcast episode, aims to achieve one-to-one parity between the product attributes supported by schema.org’s open-source standards and Google’s merchant feed specifications.

Leveraging Dual Product Data Pipelines

In search results, Google leverages structured data markup and Merchant Center product feeds to surface rich product listings.

Irina Tuduce, a longtime Google employee involved with the company’s shopping search infrastructure, says merchants should utilize both options.

Tuduce stated:

“We recommend doing both. Because, as I said, in signing up on the Merchant Center UI, you make sure some of your inventory, the one that you specify, will be in the Shopping results. And you can make sure you’ll be on dotcom on the Shopping tab and Image tab.

And then, if you specify how often you want us to refresh your data, then you can be sure that that information will be refreshed. Otherwise, yeah, you don’t know when we will have the resources to recrawl you and update that information.”

Meanwhile, implementing schema.org markup allows Google to extract product details from websites during the crawling process.
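For readers unfamiliar with what that markup looks like, here is a small Python sketch that builds a schema.org Product snippet as JSON-LD (the product details and URLs are made-up placeholders, not a real listing):

```python
import json

# Hypothetical product record; in practice this would come from your catalog.
product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Wool Sweater",
    "image": "https://www.example.com/images/sweater.jpg",
    "description": "A placeholder product used to illustrate schema.org markup.",
    "sku": "SWTR-001",
    "offers": {
        "@type": "Offer",
        "priceCurrency": "USD",
        "price": "79.99",
        "availability": "https://schema.org/InStock",
    },
}

# Embed the JSON-LD in the page HTML so crawlers can read it.
script_tag = (
    '<script type="application/ld+json">\n'
    + json.dumps(product, indent=2)
    + "\n</script>"
)
print(script_tag)
```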

Reconciling Markup and Feed Discrepancies

However, discrepancies can arise when the product information in a merchant’s schema.org markup doesn’t perfectly align with the details provided via their Merchant Center feed uploads.

Tuduce explained:

“If you don’t have the schema.org markup on your page, we’ll probably stick to the inventory that you specify in your feed specification.”

Google’s initiative aims to resolve such discrepancies.

Simplifying Merchant Product Data Management

Unifying the product attributes across both sources aims to simplify data management and ensure consistent product listings across Google.

Regarding the current inconsistencies between schema.org markup and merchant feed specifications, Tuduce says:

“The attributes overlap to a big extent, but there are still gaps that exist. We will want to address those gaps.”

As the effort progresses, Google plans to keep marketers informed by leveraging schema.org’s active GitHub community and opening the update process to public feedback.

The unified product data model could keep product details like pricing, availability, and variant information consistently updated and accurately reflected across Google’s search results.

Why This Matters

For merchants, consistent product listings with accurate, up-to-date details can boost visibility in Google’s shopping experiences. Streamlined data processes also mean less redundant work.

For consumers, a harmonized system translates to more relevant, trustworthy shopping journeys.

What You Can Do Now

  • Audit current product data across website markup and merchant feeds for inconsistencies.
  • Prepare to consolidate product data workflows as Google’s unified model rolls out.
  • Implement richer product schema markup using expanded vocabulary.
  • Monitor metrics like impressions/clicks as consistent data surfaces.
  • Prioritize product data hygiene and frequent catalog updates.

By aligning your practices with Google’s future plans, you can capitalize on new opportunities for streamlined product data management and enhanced shopping search visibility.

Hear the full discussion on the “Search Off The Record” podcast episode, starting around the 12-minute mark.

New LiteSpeed Cache Vulnerability Puts 6 Million Sites at Risk via @sejournal, @martinibuster

Another vulnerability was discovered in the LiteSpeed Cache WordPress plugin—an Unauthenticated Privilege Escalation that could lead to a total site takeover. Unfortunately, updating to the latest version of the plugin may not be enough to resolve the issue.

LiteSpeed Cache Plugin

The LiteSpeed Cache Plugin is a website performance optimization plugin that has over 6 million installations. A cache plugin stores a static copy of the data used to create a web page so that the server doesn’t have to repeatedly fetch the exact same page elements from the database every time a browser requests a web page.

Storing the page in a “cache” reduces the server load and speeds up the time it takes to deliver a web page to a browser or a crawler.

LiteSpeed Cache also performs other page speed optimizations, like compressing CSS and JavaScript files (minification), inlining the most important CSS for rendering a page directly into the HTML (critical CSS), and other tweaks that together make a site faster.
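As a simplified illustration of the caching concept described above (a conceptual sketch, not LiteSpeed’s actual implementation), here is a minimal Python page cache that serves a stored copy instead of rebuilding the page on every request:

```python
import time

page_cache = {}          # path -> (html, timestamp)
CACHE_TTL_SECONDS = 300  # hypothetical time-to-live for cached pages

def render_page_from_database(path: str) -> str:
    # Stand-in for the expensive work a CMS does on every uncached request:
    # database queries, template rendering, and so on.
    return f"<html><body>Content for {path}</body></html>"

def serve(path: str) -> str:
    cached = page_cache.get(path)
    if cached and time.time() - cached[1] < CACHE_TTL_SECONDS:
        return cached[0]  # serve the static copy, skipping the database work
    html = render_page_from_database(path)
    page_cache[path] = (html, time.time())
    return html

print(serve("/blog/hello-world"))  # built from the database, then cached
print(serve("/blog/hello-world"))  # served from the cache
```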

Unauthenticated Privilege Escalation

An unauthenticated privilege escalation is a type of vulnerability that allows a hacker to attain site access privileges without having to sign in as a user. This makes it easier to hack a site in comparison to an authenticated vulnerability that requires a hacker to first attain a certain privilege level before being able to execute the attack.

Unauthenticated privilege escalation typically occurs because of a flaw in a plugin (or theme), and in this case the cause is a data leak.

Patchstack, the security company that discovered the vulnerability, writes that it can only be exploited under two conditions:

“Active debug log feature on the LiteSpeed Cache plugin.

Has activated the debug log feature once before (not currently active now) and the /wp-content/debug.log file is not purged or removed.”

Discovered By Patchstack

The vulnerability was discovered by researchers at Patchstack, a WordPress security company that offers a free vulnerability warning service and advanced protection for as little as $5/month.

Oliver Sild, founder of Patchstack, explained to Search Engine Journal how this vulnerability was discovered and warned that updating the plugin is not enough; users still need to manually purge their debug logs.

He shared these specifics about the vulnerability:

“It was found by our internal researcher after we processed the vulnerability from a few weeks ago.

Important thing to keep in mind with this new vulnerability is that even when it gets patched, the users still need to purge their debug logs manually. It’s also a good reminder not to keep debug mode enabled in production.”

Recommended Course of Action

Patchstack recommends that users of the LiteSpeed Cache WordPress plugin update to at least version 6.5.0.1.

Read the advisory at Patchstack:

Critical Account Takeover Vulnerability Patched in LiteSpeed Cache Plugin

Featured Image by Shutterstock/Teguh Mujiono

SearchGPT vs. Google: Early Analysis & User Feedback via @sejournal, @MattGSouthern

OpenAI, the company behind ChatGPT, has introduced a prototype of SearchGPT, an AI-powered search engine.

The launch has sparked considerable interest, leading to discussions about its potential to compete with Google.

However, early studies and user feedback indicate that while SearchGPT shows promise, it has limitations and needs more refinement.

Experts suggest it needs further development before challenging current market leaders.

Study Highlights SearchGPT’s Strengths and Weaknesses

SE Ranking, an SEO software company, conducted an in-depth analysis of SearchGPT’s performance and compared it to Google and Bing.

The study found that SearchGPT’s search results are 73% similar to Bing’s but only 46% similar to Google’s.

Interestingly, 26% of domains ranking in SearchGPT receive no traffic from Google, indicating opportunities for websites struggling to gain traction.

The study highlighted some of SearchGPT’s key features, including:

  • The ability to summarize information from multiple sources
  • A conversational interface for refining searches
  • An ad-free user experience

However, the research noted that SearchGPT lacks the variety and depth of Google’s search results, especially for navigational, transactional, and local searches.

The study also suggested that SearchGPT favors authoritative, well-established websites, with backlinks being a significant ranking factor.

Around 32% of all SearchGPT results came from media sources, increasing to over 75% for media-related queries.

SE Ranking notes that SearchGPT needs improvement in providing the latest news, as some news results were outdated.

User Experiences & Limitations Reported By The Washington Post

The Washington Post interviewed several early testers of SearchGPT and reported mixed reviews.

Some users praised the tool’s summarization capabilities and found it more helpful than Google’s AI-generated answers for certain queries.

Others, however, found SearchGPT’s interface and results less impressive than those of smaller competitors like Perplexity.

The article also highlighted instances where SearchGPT provided incorrect or “hallucinated” information, a problem that has plagued other AI chatbots.

While the SE Ranking study estimated that less than 1% of searches returned inaccurate results, The Washington Post says there’s significant room for improvement.

The article also highlighted Google’s advantage in handling shopping and local queries due to its access to specialized data, which can be expensive to acquire.

Looking Ahead: OpenAI’s Plans For SearchGPT and Potential Impact on the Market

OpenAI spokesperson Kayla Wood revealed that the company plans to integrate SearchGPT’s best features into ChatGPT, potentially enhancing the popular language model’s capabilities.

When asked about the possibility of including ads in SearchGPT, Wood stated that OpenAI’s business model is based on subscriptions but didn’t specify whether SearchGPT would be offered for free or as part of a ChatGPT subscription.

Despite the excitement surrounding SearchGPT, Google CEO Sundar Pichai recently reported continued growth in the company’s search revenue, suggesting that Google may maintain its dominant position even with the emergence of new AI-powered search tools.

Top Takeaways

Despite its current limitations, SearchGPT has the potential to shake up online information seeking. As OpenAI iterates based on user feedback, its impact may grow significantly.

Integrating SearchGPT’s best features into ChatGPT could create a more powerful info-seeking tool. The proposed subscription model raises questions about competition with free search engines and user adoption.

While Google’s search revenue and specialized query handling remain strong, SearchGPT could carve out its own niche. The two might coexist, serving different user needs.

For SearchGPT to truly compete, OpenAI must address accuracy issues, expand query capabilities, and continuously improve based on user input. It could become a viable alternative to traditional search engines with ongoing development.


Featured Image: Robert Way/Shutterstock

Google Confirms It’s Okay To Ignore Spam Scores via @sejournal, @martinibuster

Google’s John Mueller answered a Reddit question about how to lower a website’s spam score. His answer reflected an important insight about third-party spam scores and their relation to how Google ranks websites.

What’s A Spam Score?

A spam score is the opinion of a third-party tool that reviews data like inbound links and on-page factors based on whatever the tool’s developers believe are spam-related factors and signals. While there are a few things about SEO that most people can agree on, there is a lot more that digital marketers dispute.

The reality is that third-party tools use undisclosed factors to assign a spam score that is meant to approximate how a search engine, using its own undisclosed metrics, might assess website quality. That’s multiple layers of uncertainty to trust.

Should You Worry About Spam Scores?

The question asked on Reddit was about whether to worry about a third-party spam score and what can be done to achieve a better one.

This is the question:

“My site is less than 6 months old with less than 60 blog posts.

I was checking with some tool it says I have 302 links and 52 referring domains. My worry is on the spam score.

How should I go about reducing the score or how much is the bad spam score?”

Google’s John Mueller answered:

“I wouldn’t worry about that spam score.

The real troubles in your life are apt to be things that never crossed your worried mind, the kind that blindside you at 4 p.m. on some idle Tuesday.”

He then followed up with a more detailed response:

“And to be more direct – Google doesn’t use these spam scores. You can do what you want with them. They’re not going to change anything for your site.

I’d recommend taking the time and instead making a tiny part of your website truly awesome, and then working out what it would take the make the rest of your website like that. This spam score tells you nothing in that regard. Ignore it.”

Spam Scores Tell You Nothing In That Regard

John Mueller is right: third-party spam scores don’t reflect site quality. They’re only opinions based on what a tool’s developers believe, which could be outdated or insufficient; we just don’t know, because the factors used to calculate third-party spam scores are secret.

In any case, there is no agreement about what the ranking factors are, no agreement on what on-page and off-page factors are, and even the idea of “ranking factors” is somewhat debatable, because nowadays Google uses various signals to determine if a site is trustworthy and relies on core topicality systems to understand search queries and web pages. That’s a world away from using ranking factors to score web pages. Can we even agree on whether there’s a difference between ranking factors and signals? Where does something like a (missing) quality signal even fit in a third-party spam metric?

Popular lists of 200 ranking factors often contain factual errors and outdated ideas based on decades-old concepts of how search engines rank websites. We’re in a period when search engines are moving past the concept of “ranking factors” in favor of core topicality systems for understanding web pages (and search queries) and an AI system called SpamBrain that weeds out low-quality websites.

So yes, Mueller makes a valid point when he advises not to worry about spam scores.

Read the discussion on Reddit:

Is site spam score of 1% bad?

Featured Image by Shutterstock/Krakenimages.com

How New Chrome AI Feature Challenges SEO To Evolve via @sejournal, @martinibuster

A Google Chrome Engineer published a LinkedIn post outlining the new Chrome AI History feature and the signals it uses to surface previously visited sites. The post illustrates that natural language browser history search could become a traffic source, and SEO must evolve in response.

History Search Powered By AI

Google recently announced a new opt-in feature in Chrome that uses AI to let users search their browser history and find a page they previously visited. This makes it easier for a previously visited site to earn a return visit from the same person.

Chrome AI History Searches Page Content

Chrome Engineering Leader Addy Osmani wrote a description of the new Chrome AI History feature that contained some undocumented information about how it works, showing how both text and images are used as data sources for the AI to identify a site a user had previously visited.

Chrome’s browser history normally just searches the URL and page title to find something in the search history. “History Search, powered by AI” also looks at the webpage content, including the images.
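As a conceptual contrast (a rough sketch with a made-up history entry, not Chrome’s actual implementation), the difference looks roughly like this:

```python
# Classic history search matches only the URL and title, while a
# content-aware search also considers the page text.
history = [
    {"url": "https://www.example-store.com/sweaters/123",
     "title": "Knit Sweater | Example Store",
     "content": "Burberry-style knit sweater with 'England' lettering across the chest."},
]

def classic_history_search(query):
    q = query.lower()
    return [h for h in history if q in h["url"].lower() or q in h["title"].lower()]

def content_aware_search(query):
    q = query.lower()
    return [h for h in history
            if q in (h["url"] + " " + h["title"] + " " + h["content"]).lower()]

print(classic_history_search("england"))  # [] -- "england" is not in the URL or title
print(content_aware_search("england"))    # finds the page via its content
```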

Osmani shared an example where he identified a page he had previously visited in which the AI used image content to find what he was looking for.

He gave an example of finding a page he visited that’s related to shopping:

“Recently, I was browsing for a new sweater and took a look at a few options across a few sites. I saw some neat Burberry designs. But there was one specific Burberry sweater I liked from a while back that said “England” on it. I can’t remember where I saw it or how to find that page again.

With AI history search, I simply type “Burberry sweater England” and voilà – the exact page appears, even though “England” was only mentioned within an image on the site.”

What does he mean that the word “England” was only mentioned in an image? He doesn’t specifically say the word was in the image metadata, such as the alt attribute. I assumed that’s what he meant, that the word England was in the image metadata. So I found the exact page he was looking at (it’s in a video he embedded in his LinkedIn post), checked the source code, and the word “England” was not in the metadata.

If you watch the video, the AI browser history shows multiple pages, so it’s possible that the AI simply ignored the word “England” and surfaced everything that had a partial match. But Osmani said the page was surfaced because of the image.

Here’s a screenshot from his video:

Screenshot of a page surfaced by Chrome AI Browser History result

Here’s the AI search results showing multiple pages in the results:

Screenshot of Chrome AI Browser history

The above image shows that the AI history surfaced more than one page, and only one of those pages was about a sweater that said England. So it could very well be that the AI history surfaced the England page not because the word England appeared in the image but because the page was relevant for the words Burberry and sweater. Then again, it could be because the word was in the image; this is something that needs clarification.

Osmani then offers two more examples that show how using keywords that appear in the page content will help surface web pages that a user had previously visited.

AI Browser Search Documentation

Google maintains a help page dedicated to this new feature where it lists the following tips that also give more information about how the AI browser search works.

  • “When you search short and simple text, you’ll be matched directly to the page title or URL. You won’t find an AI-powered result.
  • You can rate the best match result. At the bottom of the best match result, select Thumbs up or Thumbs down.
  • If you select Thumbs down, you can provide additional feedback on why the result didn’t meet your needs.
  • You can also search for browsing history in the address bar.”

Takeaways

Chrome AI search enables repeat visits through natural language searches. But when users search with short, simple text, Chrome defaults to plain keyword matching against the page title and URL.

  • Exact keywords are not necessary
  • URLs are not necessary
  • Short simple text is matched via Title tag and URL
  • Keywords in the title tag and URL that match how users will remember the site (the topic) can still be important
  • The ability to rate results shows that this feature will continue to evolve

Chrome AI History is a useful feature and will likely become more prominent as people become more aware of it and more accustomed to using AI built into their browsers and devices. This doesn’t mean it will become useful to stuff keywords into metadata, but it does show how SEO is growing to accommodate more than just search as AI takes a greater role in surfacing web pages.

Featured Image by Shutterstock/Cast Of Thousands

WordPress Just Locked Down Security For All Plugins & Themes via @sejournal, @martinibuster

WordPress announced a major clampdown to protect its theme and plugin ecosystem from password insecurity. These improvements follow a flurry of attacks in June that compromised multiple plugins at the source.

Improves Plugin Developer Security

This WordPress security update fixes a flaw that allowed hackers to use compromised passwords from other breaches to unlock developer accounts that reused the same credentials and had “commit access,” enabling attackers to make changes to plugin code right at the source. This closes a WordPress security gap that allowed hackers to compromise multiple plugins beginning in late June of this year.

Double Layer Of Developer Security

WordPress is introducing two layers of security, one on the individual developer account and a second one on the code commit access. This separates the author security credentials from the code committing environment.

1. Two-Factor Authentication

The first improvement to security is mandatory two-factor authentication (2FA) for all plugin and theme authors, enforced beginning October 1, 2024. WordPress is already prompting users to enable 2FA. Users can also visit this page to configure their two-factor authentication.

2. SVN Passwords

WordPress also announced it will begin using SVN (Subversion) passwords, an additional layer of security for authenticating developers as a part of a version control system. SVN ensures that only authorized individuals can make changes to the code, adding a second layer of security to plugins and themes.

The WordPress announcement explains:

“We’ve introduced an SVN password feature to separate your commit access from your main WordPress.org account credentials. This password functions like an application or additional user account password. It protects your main password from exposure and allows you to easily revoke SVN access without having to change your WordPress.org credentials. Generate your SVN password in your WordPress.org profile.”

WordPress noted that technical limitations prevented it from applying 2FA to existing code repositories, which is why SVN passwords are being used for commit access instead.

Takeaway: Vastly Improved WordPress Security

These changes will result in greater security for the entire WordPress ecosystem and contribute immensely to ensuring that plugins and themes are trustworthy and not compromised at the source.

Read the announcement

Upcoming Security Changes for Plugin and Theme Authors on WordPress.org

Featured Image by Shutterstock/Cast Of Thousands

Google’s AI Overviews Slammed By News Publishers via @sejournal, @MattGSouthern

Since its U.S. launch in May, Google’s AI Overviews feature has created controversy among news publishers.

The generative search tool attempts to directly answer queries by synthesizing information from web sources into AI-generated overviews.

While offering users a new level of convenience, AI Overviews has been criticized for factual inaccuracies, lack of transparency in sourcing content, and disincentivizing clicks to original articles.

Despite an initial scale-back, Google has doubled down – releasing Overviews in six more countries and additional languages in August.

Background on AI Overviews

Google introduced AI Overviews as an experimental opt-in feature that has since been rolled out to general search results.

Instead of listing links to webpages, AI Overviews aim to provide a complete answer using natural language.

Many publishers are concerned that AI Overviews could cannibalize their organic search traffic by satisfying user queries without requiring a click-through.

There are also complaints that Google is repackaging and republishing content without attribution or revenue sharing.

Audience Directors Speak Out

In interviews with the Nieman Journalism Lab at Harvard, seven leading audience strategy experts shared their perspectives on adapting to the AI Overviews disruption.

Veronica de Souza of New York Public Radio emphasized reducing reliance on Google by building direct audience relationships through owned channels like apps and newsletters.

De Souza states:

“We’ve doubled down on converting people to our O&O (owned-and-operated) platforms like our app and newsletters…More transparency about which categories of search queries surface AI Overviews would be a good start.”

Washington Post’s Bryan Flaherty raised concerns about misinformation risks and lack of performance data insights from Google.

Flaherty states:

“If Google loses users due to the quality issues in its results and AI Overviews, users could continue to turn to non-traditional search platforms that don’t have as direct a tie back to sites, like YouTube and TikTok, which will have an impact on traffic.”

Vermont Public’s Mike Dougherty pointed out the lack of clear citations to original sources in Overviews.

Dougherty states:

“This product could so easily put clickable citations into or above the text. It could even write, ‘According to [publisher],…’ the way one news outlet might credit another.”

Scott Brodbeck of Local News Now remained optimistic that quality journalism can outcompete brief AI summaries.

Brodbeck states:

“If you as a news publisher cannot out-compete a brief AI-written summary, I think you have a big problem that’s not just being caused by Google and AI.”

Marat Gaziev of IGN advocated for deeper symbiosis between Google and reputable information providers to uphold accuracy standards.

Gaziev states:

“RAG requires a deep and symbiotic relationship with content publishers and the media industry to ensure that only credible sources are utilized during retrieval and augmentation.”

YESEO founder Ryan Restivo warned about potential carbon impacts from the heavy computing power required at scale.

Restivo states:

“The biggest problem, in my opinion, is the competition entering this space…The amount of compute needed to produce these at scale is hurting our environment.”

LA Times’ Seth Liss speculated Google may eventually prioritize generating answers over linking to external sites.

Liss states:

“If Google decides its best way forward is to keep all of those readers on its own site, there will be a lot of sites that have to figure out other ways to find new audiences.”

Measured Optimism

While most publishers interviewed by Nieman Journalism Lab expressed reservations, some took a more optimistic view.

The consensus is that high-quality, in-depth journalism will draw readers to visit publisher websites for full context beyond a brief AI summary.

There’s also hope that Google will find mutually beneficial ways to incorporate publisher content without usurping it entirely.

The Path Forward

As search evolves, publishers are exploring strategies to adapt, from re-investing in email newsletters and mobile apps to developing AI-focused SEO best practices.

The debate highlights a challenge all publishers share – how to remain discoverable and generate traffic/revenue when search engines can directly answer queries themselves.


Featured Image: Marco Lazzarini/Shutterstock