Google Explains Why Indexed Pages May Not Appear In Search via @sejournal, @MattGSouthern

Google’s Martin Splitt explains why indexed pages may not appear in search results, highlighting relevance and ranking competition.

  • Indexed pages may not appear if other pages are more relevant or user engagement is low.
  • Google’s process involves discovery, crawling, indexing, and ranking for visibility.
  • Focus on high-quality, user-focused content to improve search visibility.
LinkedIn Lists Top 15 In-Demand Skills, Makes Related Courses Free via @sejournal, @MattGSouthern

LinkedIn has published its “Skills on the Rise” report, which lists the 15 fastest-growing skills in the U.S. job market.

To stay competitive, here’s what professionals should focus on.

The Top 15 Skills In Demand for 2025

AI is driving major workplace changes. LinkedIn predicts that by 2030, about 70% of skills in most jobs will significantly change. A quarter of professionals plan to learn new skills this year.

“AI Literacy” is now the most in-demand skill, reflecting the need for workers who can use AI tools across all industries.

While many list “AI” as a skill, this usually means basic familiarity with tools like ChatGPT rather than in-depth expertise.

The complete list of fastest-growing skills identified by LinkedIn includes:

  1. AI Literacy
  2. Conflict Mitigation
  3. Adaptability
  4. Process Optimization
  5. Innovative Thinking
  6. Public Speaking
  7. Solution-Based Selling
  8. Customer Engagement & Support
  9. Stakeholder Management
  10. Large Language Model (LLM) Development & Application
  11. Budget & Resource Management
  12. Go-to-Market (GTM) Strategy
  13. Regulatory Compliance
  14. Growth Strategy
  15. Risk Assessment

The report explains why each skill is gaining importance and lists the most common job titles and industries where each skill is prevalent.

Soft Skills Gaining Importance

While AI skills are essential, there is a growing need for soft skills. These skills are valuable as organizations address complex workplace issues such as return-to-office policies and managing teams from different generations.

For example, “Conflict Mitigation” (ranked #2) is essential for customer service representatives, administrative assistants, and project managers in the technology and internet sectors.

“Adaptability” (ranked #3) is becoming essential for teachers, administrative assistants, and project managers as they face fast technological and economic changes.

Free Learning Resources Available

To help people develop these skills, LinkedIn is offering free access to related LinkedIn Learning courses until April 18. The list includes a link to a recommended course for each skill.

The report also includes in-demand skills lists for 15 job functions and seven additional countries: Australia, Brazil, France, Germany, India, Spain, and the UK.

LinkedIn created a separate list specifically for marketing job functions, as shown below.

Screenshot from: LinkedIn, March 2025.

LinkedIn’s methodology for determining the fastest-growing skills considers three key factors: skill acquisition (the rate at which members add new skills to their profiles), hiring success (the share of a skill possessed by recently hired members), and emerging demand (increased presence of skills in job postings).

See LinkedIn’s full report.

EU Charges Google With DMA Violations: What This Means via @sejournal, @MattGSouthern

The long-brewing conflict between Google and EU regulators has reached a new milestone.

The European Commission has officially issued preliminary findings that Google has violated the Digital Markets Act (DMA) in two key areas that directly impact digital marketers and app developers.

What’s Happening With Google Search?

Despite Google’s algorithm tweaks over the past year, EU regulators aren’t satisfied. They claim Google still gives preferential treatment to its verticals, such as Google Shopping, Hotels, Flights, and other specialized results.

The Commission called out Google for displaying its services “at the top of Google Search results or on dedicated spaces, with enhanced visual formats and filtering mechanisms” that third-party services don’t enjoy.

If you’ve been wondering why your clients’ listings seem pushed down by Google’s products, EU regulators are validating those concerns.

Google Play Also Under Fire

In a separate finding, the Commission claims Google Play doesn’t allow app developers to freely direct users to alternative channels for better deals or direct purchases.

For marketers working with apps or managing app-based clients, this could eventually lead to new opportunities to reach users outside Google’s ecosystem without the steep Play Store fees.

What This Means For Digital Marketers

If the findings are confirmed and Google is forced to make changes, we could see significant shifts in search visibility and ranking opportunities:

  • More prominent placement for third-party comparison sites in travel, shopping, and financial verticals
  • Reduced visual emphasis on Google’s services
  • Potentially more organic visibility for businesses currently competing with Google’s featured elements

For app marketers, we might see new options for communicating with users about direct purchase options and alternatives to Google Play’s payment system.

Timeline and Next Steps

Google now has the opportunity to respond to these preliminary findings, and the company has consistently maintained that its changes already comply with the DMA.

In previous statements, Google’s EMEA competition director cautioned that further modifications could negatively impact user experience.

The Bigger Picture

This escalation follows the DMA’s implementation in March 2024, which designated Google as a “gatekeeper” alongside other tech giants. The law specifically targets large platforms that serve as critical intermediaries between businesses and consumers.

If Google fails to address the Commission’s concerns, it could face penalties of up to 10% of its global annual revenue. This prospect will likely motivate changes to how search results appear in Europe.

We’ll monitor this situation as it develops and provide updates on how changes might impact your search and app marketing strategies.

Google Expands AI Overviews To Thousands More Health Queries via @sejournal, @MattGSouthern

Google is expanding AI overviews to “thousands more health topics,” per an announcement at the company’s health-focused ‘The Check Up’ event.

The event included developments spanning research, wearable technology, and medical records.

Here’s more about how Google is refining health results in Search.

AI Overviews For Health Queries

Google is showing AI overviews for more health-related queries.

Compared to other types of queries, health topics have historically had fewer AI overviews. Now, these overviews will be available for more queries and in more languages.

Google states:

“Now, using AI and our best-in-class quality and ranking systems, we’ve been able to expand these types of overviews to cover thousands more health topics. We’re also expanding to more countries and languages, including Spanish, Portuguese and Japanese, starting on mobile.”

Google notes that health-focused advancements to its Gemini models will be used in summarizing information for health topics.

With these updates, Google claims AI overviews for health queries are “more relevant, comprehensive and continue to meet a high bar for clinical factuality.”

New “What People Suggest” Feature

Google is introducing a new feature for health queries called “What People Suggest.”

It uses AI to organize perspectives from online discussions and to analyze what people with similar health conditions are saying.

For example, someone with arthritis looking for exercise recommendations could use this feature to learn what works for others with the same condition.

See an example below.

Screenshot from: blog.google/technology/health/the-check-up-health-ai-updates-2025/, March 2025.

“What People Suggest” is currently available only on mobile devices in the U.S.

Broader Health AI Initiatives

The search updates were part of a larger set of health technology announcements at The Check Up event. Google also revealed:

  • Medical Records APIs in Health Connect for managing health data across applications
  • FDA clearance for Loss of Pulse Detection on Pixel Watch 3
  • An AI co-scientist built on Gemini 2.0 to help biomedical researchers
  • TxGemma, a collection of open models for AI-powered drug discovery
  • Capricorn, an AI tool for pediatric oncology treatment developed with Princess Máxima Center

Looking Ahead

Hallucination remains a problem for AI models. While upgrades may make Gemini more accurate, it will still sometimes be wrong.

Google’s inclusion of personal experiences alongside medical websites marks a shift, recognizing people value both clinical information and real-world perspectives.

Health publishers should be aware that this could affect search visibility, but it may also increase their chances of appearing for more queries or in the “What People Suggest” section.

New Cybersecurity Bot Attack Defense Helps SaaS Apps Stay Secure via @sejournal, @martinibuster

Cybersecurity company HUMAN has introduced HUMAN Sightline, a new feature for its HUMAN Application Protection service. Sightline enables users to defend their SaaS applications with detailed analyses of attacker activity and to track changes in bot behavior over time. The feature is available as a component of Account Takeover Defense, Scraping Defense, and Transaction Abuse Defense at no additional cost.

HUMAN Application Protection is a malicious-traffic analytics and bot-blocking solution that enables analysts to understand what bots and humans are doing on an application and to block malicious actors.

According to the HUMAN Sightline announcement:

“Customers have long asked us to provide advanced anomaly reporting—or, in other words, to mark anomalies that represent distinct attacks. But when we started down that path, we realized that simply labeling spikes would not provide the information that customers really need…

…We built a secondary detection engine using purpose-built AI that analyzes all the malicious traffic in aggregate after the initial block or allow decision is made. This engine compares every automated request to every other current and past request in order to construct and track “attack profiles,” groups of requests thought to be from the same attacker based on their characteristics and actions.

Beyond visibility, secondary detection allows HUMAN’s detection to adapt and learn to the attacker’s changing behavior. Now that we can monitor individual profiles over time, the system can react to their specific adaptation, which allows us to continue to track and block the attacker. The number of signatures used by the system for each profile increases over time, and this information is surfaced in the portal.”
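The attack-profiling idea described above can be sketched roughly as grouping already-blocked requests by shared fingerprint characteristics, so each group can be tracked over time as a single attacker. This is only an illustrative approximation of the concept, not HUMAN's actual engine; the request fields used as the grouping key (`ja3`, `ua_family`, `path_pattern`) are hypothetical stand-ins.

```python
from collections import defaultdict

def build_attack_profiles(blocked_requests):
    """Group blocked requests into rough 'attack profiles' by shared traits.

    Each request is a dict; the fingerprint fields compared here are
    hypothetical stand-ins for whatever characteristics a real
    detection engine would track.
    """
    profiles = defaultdict(list)
    for req in blocked_requests:
        key = (req["ja3"], req["ua_family"], req["path_pattern"])
        profiles[key].append(req)
    # Return profiles sorted by size, largest (likeliest campaign) first.
    return sorted(profiles.values(), key=len, reverse=True)

requests = [
    {"ja3": "a1", "ua_family": "curl", "path_pattern": "/login"},
    {"ja3": "a1", "ua_family": "curl", "path_pattern": "/login"},
    {"ja3": "b2", "ua_family": "chrome", "path_pattern": "/search"},
]
profiles = build_attack_profiles(requests)
print(len(profiles))     # 2 distinct profiles
print(len(profiles[0]))  # largest profile contains 2 requests
```

In a real system, each profile would accumulate additional signatures over time as the attacker adapts, which matches the behavior HUMAN describes surfacing in its portal.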

Search Engine Journal Asked HUMAN About Its Service

How is this solution implemented?

“HUMAN Sightline will be a new dashboard in HUMAN Application Protection. It will be available in Account Takeover Defense, Scraping Defense, and Transaction Abuse Defense, at no additional cost. No other bot management product on the market has similar capabilities to HUMAN Sightline. HUMAN’s new attack profiling approach segments malicious traffic into distinct profiles, so customers can identify the different profiles that make up each traffic volume. Analysts can understand what each is doing, their sophistication, their capabilities, and the specific characteristics that distinguish them from other humans and bots on the application. This allows HUMAN to bring attack reporting to the next level, serving as both a bot blocking solution and a data-centric, machine learning-driven analyst tool.”

Is it a SaaS solution? Or is it something that lives on a server?

“Our Human Defense Platform safeguards the entire customer journey with high-fidelity decision-making that defends against bots, fraud, and digital threats. HUMAN helps SaaS platforms provide a safe user journey by preserving high-quality customer interactions across online accounts, applications, and websites.”

Is this aimed at enterprise level businesses? How about universities, are they an end user that can implement this solution?

“This solution is aimed at organizations that are interested in expanding its bot traffic analyzing capabilities. Enterprise level businesses and higher education can certainly utilize this solution; again, it depends how committed the organization is to tracking bot traffic. HUMAN has long been helping clients in the higher education sector from evolving cyber threats, and HUMAN Sightline will only benefit these organizations to protect themselves further.”

Read more about HUMAN Sightline:

Human Sightline: A New Era in Bot Visibility

Featured Image by Shutterstock/AntonKhrupinArt

LinkedIn Study: AI Shortens B2B Sales Cycles By 1 Week via @sejournal, @MattGSouthern

A new report shows that B2B sales teams increasingly use AI to improve efficiency and close deals.

Commissioned by LinkedIn and conducted by Ipsos, the survey included 1,250 sales professionals and found that AI is now a key part of sales practices.

Here’s what marketers need to know.

AI Adoption on the Rise

88% of sales professionals use AI weekly, and 56% use it daily. This trend reflects changes in the sales field, where teams must manage complex buying processes.

Karin Kimbrough, LinkedIn’s Chief Economist, notes that companies using AI gain a competitive advantage.

“Companies integrating AI are gaining a competitive edge,” says Kimbrough in the report. “Teams that don’t embrace AI will fall behind.”

Microsoft’s Future of Work report also shows that sales professionals see significant productivity increases from AI.

Key Drivers Of Investment

98% of sales executives plan to invest more in AI this year. They’ll focus on:

  1. Sales intelligence
  2. Sales enablement
  3. AI-powered CRM tools

Methodology Note:
Ipsos surveyed sales professionals in the United States, the United Kingdom, Germany, Australia, India, and Singapore, focusing on mid-market (200–999 employees) and enterprise (1,000+ employees) sectors spanning tech, finance, manufacturing, professional services, and other industries.

Top Three Impact Areas

Sellers exceeding their targets are 2.5 times more likely to use AI daily than those not meeting their goals.

Researchers found three main ways AI improves sales:

  1. Finding Leads
    1. 38% say AI helps to identify leads faster and more accurately.
    2. Sellers save at least 1.5 hours weekly using AI for lead research.
  2. Personalized Messages
    1. AI tools enable faster and more tailored outreach campaigns.
    2. Sellers using AI saw a 28% increase in responses.
  3. Sales Efficiency
    1. AI streamlines data entry and scheduling in CRM systems.
    2. Nearly 69% of sellers say AI shortens their sales cycle by about one week and helps them close more deals.

Looking Ahead

Dan Shapero, LinkedIn COO, advises companies to “start small” and focus on delivering immediate wins as a foundation for long-term AI adoption.

This approach resonates with the growing number of sales executives (39%) who feel “highly confident” about their readiness for future challenges.

In practical terms, sales teams can begin by:

  • Automating routine tasks like updating CRM records or lead qualification.
  • Leveraging real-time insights for targeted outreach (e.g., tracking job changes or company news).
  • Experimenting with generative AI to craft more engaging prospect messages.
  • Regularly training teams on new tools to reduce resistance and smooth adoption.

Shapero states:

“It’s too early to know what your AI strategy is. I think the question you ask yourself is, ‘What is my AI win?’ What’s the one thing that I can do with my team right now that’s going to create value over the next six months? Because the world is changing so quickly, it’s one of these moments to start small, to go big over time.”

For more insights, see the full report.


Featured Image: Screenshot from LinkedIn ROI of AI report, March 2025.

Google Researchers Improve RAG With “Sufficient Context” Signal via @sejournal, @martinibuster

Google researchers introduced a method to improve AI search and assistants by enhancing Retrieval-Augmented Generation (RAG) models’ ability to recognize when retrieved information lacks sufficient context to answer a query. If implemented, these findings could help AI-generated responses avoid relying on incomplete information and improve answer reliability. This shift may also encourage publishers to create content with sufficient context, making their pages more useful for AI-generated answers.

Their research finds that models like Gemini and GPT often attempt to answer questions when retrieved data contains insufficient context, leading to hallucinations instead of abstaining. To address this, they developed a system to reduce hallucinations by helping LLMs determine when retrieved content contains enough information to support an answer.

Retrieval-Augmented Generation (RAG) systems augment LLMs with external context to improve question-answering accuracy, but hallucinations still occur. It wasn’t clearly understood whether these hallucinations stemmed from LLM misinterpretation or from insufficient retrieved context. The research paper introduces the concept of sufficient context and describes a method for determining when enough information is available to answer a question.

Their analysis found that proprietary models like Gemini, GPT, and Claude tend to provide correct answers when given sufficient context. However, when context is insufficient, they sometimes hallucinate instead of abstaining, but they also answer correctly 35–62% of the time. That last discovery adds another challenge: knowing when to intervene to force abstention (to not answer) and when to trust the model to get it right.

Defining Sufficient Context

The researchers define sufficient context as meaning that the retrieved information (from RAG) contains all the necessary details to derive a correct answer. The classification that something contains sufficient context doesn’t require it to be a verified answer. It’s only assessing whether an answer can be plausibly derived from the provided content.

This means that the classification is not verifying correctness. It’s evaluating whether the retrieved information provides a reasonable foundation for answering the query.

Insufficient context means the retrieved information is incomplete, misleading, or missing critical details needed to construct an answer.

Sufficient Context Autorater

The Sufficient Context Autorater is an LLM-based system that classifies query-context pairs as having sufficient or insufficient context. The best-performing autorater model was Gemini 1.5 Pro (1-shot), which achieved 93% accuracy, outperforming other models and methods.
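To make the autorater's role concrete, here is a toy sketch of its interface. The real autorater is an LLM (Gemini 1.5 Pro, 1-shot) prompted to judge whether the context can plausibly answer the query; the crude term-coverage heuristic below is only a placeholder standing in for that LLM call, and the 0.6 threshold is an arbitrary illustrative value.

```python
def sufficient_context(query: str, context: str, threshold: float = 0.6) -> bool:
    """Toy autorater: does the retrieved context plausibly cover the query?

    The paper's autorater is an LLM prompted to make this judgment; this
    term-coverage heuristic merely mimics the same True/False interface.
    """
    query_terms = {t.lower().strip("?.,") for t in query.split()}
    context_terms = {t.lower().strip("?.,") for t in context.split()}
    if not query_terms:
        return False
    # Fraction of query terms that appear somewhere in the context.
    coverage = len(query_terms & context_terms) / len(query_terms)
    return coverage >= threshold

print(sufficient_context("capital of France?", "Paris is the capital of France."))  # True
print(sufficient_context("capital of France?", "France is in Europe."))             # False
```

The point is the output contract: a binary sufficient/insufficient label per query-context pair, which downstream logic can combine with other signals.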

Reducing Hallucinations With Selective Generation

The researchers discovered that RAG-based LLM responses correctly answered questions 35–62% of the time even when the retrieved data had insufficient context. That means sufficient context isn’t strictly necessary for accuracy: the models often returned the right answer without it.

They used their discovery about this behavior to create a Selective Generation method that uses confidence scores and sufficient context signals to decide when to generate an answer and when to abstain (to avoid making incorrect statements and hallucinating).

The confidence scores are self-rated probabilities that the answer is correct. The method balances letting the LLM answer when it is strongly confident it is correct against intervening when the context signal suggests the answer is unreliable, further increasing accuracy.

The researchers describe how it works:

“…we use these signals to train a simple linear model to predict hallucinations, and then use it to set coverage-accuracy trade-off thresholds.
This mechanism differs from other strategies for improving abstention in two key ways. First, because it operates independently from generation, it mitigates unintended downstream effects…Second, it offers a controllable mechanism for tuning abstention, which allows for different operating settings in differing applications, such as strict accuracy compliance in medical domains or maximal coverage on creative generation tasks.”
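The decision logic can be sketched as follows. The paper trains a linear model on these signals to predict hallucinations; the fixed weights and threshold below are illustrative placeholders, not the trained model, and only show how confidence and the sufficient-context signal might combine into an answer/abstain decision.

```python
def selective_generation(confidence: float, has_sufficient_context: bool,
                         abstain_threshold: float = 0.5) -> str:
    """Decide whether to answer or abstain, in the spirit of the paper's
    selective generation: combine a self-rated confidence score with the
    sufficient-context signal to estimate hallucination risk.

    The weighting here is illustrative, not the paper's trained linear model.
    """
    # Higher confidence lowers estimated risk; sufficient context lowers it further.
    risk = (1.0 - confidence) * (0.6 if has_sufficient_context else 1.0)
    return "answer" if risk < abstain_threshold else "abstain"

print(selective_generation(0.9, False))  # answer: trust a highly confident model
print(selective_generation(0.3, False))  # abstain: low confidence, weak context
print(selective_generation(0.4, True))   # answer: sufficient context offsets modest confidence
```

Note how the first case mirrors the paper's finding: a strongly confident model is allowed to answer even without sufficient context, since models got such answers right 35–62% of the time. Adjusting `abstain_threshold` corresponds to the coverage-accuracy trade-off the researchers describe, e.g. a stricter threshold for medical domains.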

Takeaways

Before anyone starts claiming that context sufficiency is a ranking factor, it’s important to note that the research paper does not state that AI will always prioritize well-structured pages. Context sufficiency is one factor, but with this specific method, confidence scores also influence AI-generated responses by intervening with abstention decisions. The abstention thresholds dynamically adjust based on these signals, which means the model may choose to not answer if confidence and sufficiency are both low.

While pages with complete and well-structured information are more likely to contain sufficient context, other factors such as how well the AI selects and ranks relevant information, the system that determines which sources are retrieved, and how the LLM is trained also play a role. You can’t isolate one factor without considering the broader system that determines how AI retrieves and generates answers.

If these methods are implemented into an AI assistant or chatbot, it could lead to AI-generated answers that increasingly rely on web pages that provide complete, well-structured information, as these are more likely to contain sufficient context to answer a query. The key is providing enough information in a single source so that the answer makes sense without requiring additional research.

What are pages with insufficient context? They are pages that are:

  • Lacking enough details to answer a query
  • Misleading
  • Incomplete
  • Contradictory
  • Reliant on prior knowledge the reader may not have
  • Scattered, with the information needed for a complete answer spread across different sections instead of presented in a unified response

Google’s Quality Raters Guidelines (QRG), used by its third-party quality raters, contain concepts similar to context sufficiency. For example, the QRG defines low-quality pages as those that don’t achieve their purpose well because they fail to provide the necessary background, details, or relevant information for the topic.

Passages from the Quality Raters Guidelines:

“Low quality pages do not achieve their purpose well because they are lacking in an important dimension or have a problematic aspect”

“A page titled ‘How many centimeters are in a meter?’ with a large amount of off-topic and unhelpful content such that the very small amount of helpful information is hard to find.”

“A crafting tutorial page with instructions on how to make a basic craft and lots of unhelpful ‘filler’ at the top, such as commonly known facts about the supplies needed or other non-crafting information.”

“…a large amount of ‘filler’ or meaningless content…”

Even if Google’s Gemini or AI Overviews don’t implement the methods in this research paper, many of the concepts it describes have analogues in Google’s Quality Raters Guidelines, which describe the qualities of high-quality web pages that SEOs and publishers who want to rank should internalize.

Read the research paper:

Sufficient Context: A New Lens on Retrieval Augmented Generation Systems

Featured Image by Shutterstock/Chris WM Willemsen

Google’s Mueller Predicts Uptick Of Hallucinated Links: Redirect Or Not? via @sejournal, @MattGSouthern

Website owners and SEO professionals are facing a new problem. AI content generation tools are creating fake URLs when referencing real websites.

This issue was discussed in a recent social media conversation between industry professionals.

Hallucinated Links Causing 404s

On Bluesky, digital marketer Dan Thornton pointed out a pattern of 404 errors from non-existent URLs generated by AI systems.

His question: Should these links be redirected to existing pages?

Thornton states:

“Investigated a number of 404s recorded on a client website.

And a significant amount were generated by an AI service, which appears to have just made up articles, and URLs, in citations. It isn’t even using the right URL structure 🤦‍♂️

Debating the value of redirects and any potential impact.”

Thornton adds:

“On one hand, mistakes by more obscure AI bots might not seem worth correcting for the sake of adding more redirects. On the other, if it’s a relatively small client with a high value for conversions, even a couple of lost sales due to the damage to the brand will be noticeable.”

Google’s Perspective

Predicting an increase in hallucinated links, Google Search Advocate John Mueller offers guidance that can help navigate this issue.

First, he recommends having a good 404 page in place, stating:

“A good 404 page could help explain the value of the site, and where to go for more information. You could also use the URL as a site-search query & show the results on the 404 page, to get people closer.”
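Mueller's site-search suggestion can be sketched as a small helper that turns a hallucinated URL into an internal search query a 404 template could link to. The `/search?q=` endpoint is a hypothetical placeholder for whatever site search a given site actually runs.

```python
from urllib.parse import quote_plus

def search_url_for_404(missing_path: str, site_search_base: str = "/search?q=") -> str:
    """Turn a 404'd path into a site-search URL to suggest on the 404 page.

    A hallucinated link like /blog/ai-seo-guide-2025 becomes a search for
    "ai seo guide 2025". The /search?q= base is a hypothetical placeholder.
    """
    # Keep only the last path segment and turn slug separators into spaces.
    slug = missing_path.rstrip("/").rsplit("/", 1)[-1]
    query = slug.replace("-", " ").replace("_", " ")
    return site_search_base + quote_plus(query)

print(search_url_for_404("/blog/ai-seo-guide-2025"))
# /search?q=ai+seo+guide+2025
```

A 404 handler could render this link ("Were you looking for…?") alongside the explanatory note about the site's value that Mueller recommends, getting visitors from a fabricated URL closer to real content.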

Before investing in solutions, he recommends collecting data.

Mueller states:

“I wonder if this is going to be a more common thing? It’s tempting to extrapolate from one off [incidents], but perhaps it makes sense to collect some more data before spending too much on it.”

In a follow-up comment, Mueller predicted:

“My tea leaves say that for the next 6-12 months we’ll see a slight uptick of these hallucinated links being clicked, and then they’ll disappear as the consumer services adjust to better grounding on actual URLs.”

Don’t Hope For Accidental Clicks

Mueller provided a broader perspective, advising SEO professionals to avoid focusing on minor metrics.

He adds:

“I know some SEOs like to over-focus on tiny metrics, but I think sites will be better off focusing on a more stable state, rather than hoping for accidental by-clicks. Build more things that bring real value to the web, that attract & keep users coming back on their own.”

What This Means

As AI adoption grows, publishers may need to develop new strategies for mitigating hallucinations.

Ammon Johns, recognized as a pioneer in the SEO industry, offers a potential solution to consider.

In response to Thornton, he suggests:

“I think any new custom 404 page should include a note to anyone that arrived there from an AI prompt to explain hallucinations and how AI makes so many of them you’ve even updated your site to warn people. Always make your market smarter – education is the ultimate branding.”

It’s too early to recommend a specific strategy.

Mueller advises monitoring these errors and their impact before making major changes.


Featured Image: Iljanaresvara Studio/Shutterstock