LinkedIn Shares 7 Insights For Powerful Online Engagement via @sejournal, @martinibuster

LinkedIn shared insights with Search Engine Journal about how to effectively plan and roll out new features, based on its experience launching new AI features. The insights are useful whether you’re planning a content strategy or adding new features to your business.

I spoke with Prashanthi Padmanabhan, Head of Engineering for LinkedIn Premium. LinkedIn recently rolled out a massive change for its premium subscribers that analyzes comments, articles, videos, and posts and suggests how the information is useful for the member, as well as a new job seeker experience.

What happened behind the scenes, and the takeaways from it, offer insights that are useful to anyone who publishes or sells online.

Prashanthi Padmanabhan, Head of Engineering for LinkedIn Premium

Image/LinkedIn

Creating A Foundation For Success

I asked Prashanthi about her takeaways on planning and creating these features, and her answer consisted of three points:

  1. Anchor your strategy to your mission
  2. Think through how your plans add value to your audience or customers
  3. Get member feedback from day one

Here is what she shared:

“There are three main takeaways for me from this experience so far. The first is to anchor your strategy to your mission. A robust product strategy and roadmap should always be anchored in the company’s overarching mission. By aligning every decision on our roadmap with this purpose, we ensure our efforts directly contribute to member success.

The next is about thinking through how to leverage technical innovations. As part of the engineering team, we embrace cutting-edge technologies like Generative AI. These innovations allow us to craft elegant and practical solutions that cater to our members’ needs. Our commitment lies in delivering features that truly add value to our members’ experiences.

Last, but not least, is to incorporate member feedback early and often. We strongly believe that our members’ feedback and sentiments are invaluable. From the moment our product faces our customers, it’s Day 1. We build and roll out features through iterative development, relying on a blend of internal reviews and in-product feedback to gauge quality.

For instance, our initial foray into AI-powered writing suggestions for LinkedIn profiles and messages provided valuable insights from our members’ point of view. By listening to our members and adapting based on their actions, we will continue to refine features to meet—and ideally exceed—their expectations.”

Map Your Plans To Users’ Needs, Not Trends

There are always many ideas for what a business can do for its users. But what’s the right way to assess whether something is worth doing?

Prashanthi answered that she and her team started with understanding members’ needs as an ongoing, iterative process. This is a great insight for anyone who works online and wants to go beyond what competitors are doing.

Another insight that everyone should pay attention to is that LinkedIn didn’t look at what others are doing; it focused on what its users might find useful. A lot of SEO and online content projects begin with competitor research, which, in my opinion, leads to unoriginal content that is the opposite of the unique experiences Google wants to show in the search engine results pages (SERPs).

She answered:

“The process of identifying the right features to add begins with a deep understanding of our members’ and customers’ needs. We do this by validating our hypotheses through research and feedback. However, it’s not a one-time task; it’s an ongoing, iterative process. At LinkedIn, we rely on a combination of data, success metrics, and member feedback to gauge how well we’re meeting those needs. As we evolve our products, alignment to our mission, data insights, and feedback help guide our overall development journey.

For example, when we recognized that Generative AI could revolutionize technology, we didn’t simply follow trends. Instead, we asked ourselves: Could this technology truly benefit our members? If so, how could we integrate it into our Premium platform? For instance, we explored using it to simplify tasks like helping to write when starting a blank page or extracting key insights from LinkedIn feed posts.

It’s important to note that LinkedIn Premium is intentionally designed to enhance member productivity and experience based on their individual goals. So the features we add to Premium should map to their specific needs – for job seekers that could be helping them stand out to find the right job, getting the right insights for creators to help them build their audiences, and giving businesses a platform to build and grow their brand.”

The Importance Of The Why, What, & When

Every business faces the question: What do we do next, and how do we do it? Prashanthi offered her insights on what to focus on in order to maximize the chances of a successful outcome.

Prashanthi shared:

“Our product engineering principles at LinkedIn are rooted in three fundamental elements: starting with the “why,” aligning on the “what,” and optimizing for the “when.” We found these principles are a solid guide for navigating through the complex process of creating impactful products that resonate with our members.

The why is determined by delving into the site’s purpose and identifying the target audience—those who will benefit most from the site’s offerings. This clarity on the “why” sets the foundation for subsequent decisions.

With the “why” firmly in mind, now align on the “what.” This step involves defining the set of features and capabilities the site needs. We ask ourselves, what functionalities are essential to address the identified needs and then go from there. Carefully curating this feature set can help get a better feel for how they align with members’ requirements.

The final step is optimizing for the “when.” Engineering teams often grapple with the delicate balance between craftsmanship and time-to-market. Rather than waiting indefinitely for perfection, embrace early testing, such as releasing a minimum viable product (MVP) to gather feedback promptly. Metrics such as site visitor volume, engagement duration, and return frequency guide the assessment of the site’s value. It’s a dynamic dance between precision and speed, all aimed at delivering an exceptional experience.”

What Is A Good User Experience?

The concept of user experience can be subjective; we all have an idea of what it might be. I wanted to find out from Prashanthi, as head of engineering, how one translates the concept of a good user experience into an actual user experience online.

Her answer emphasized the importance of keeping things as simple and intuitive as possible, plus consistency.

She shared:

“For me, a good user experience means a product is simple, intuitive, and trustworthy. As an engineering team, translating the concept of a good user experience into reality requires meticulous attention to detail throughout the process. At LinkedIn this starts at the very beginning when we are transforming product and design specifications into a technical design. It’s essential to focus on simplicity and the consistency of the user experience across the entire product, so it’s intuitive to use with less cognitive load.

I’m also a big fan of clear and concise messaging (copy) for our customers as they help to build trust; in fact, when users run into issues, the clarity and usefulness of error messages and support resources make a huge difference.

I’ve found that customers are forgiving when your product works well and fast most of the time, and during times when there are issues, clear guidance on how they can best navigate that situation is critical. When it comes to reliability and performance, it’s simple – the product should work reliably every single time. A high-performance product gives users instant gratification as people care a lot about productivity and saving time, so they should be able to trust that the product will always work, and work fast.”

Importance Of Commitment To Improvement

A majority of LinkedIn’s users indicated that the new features are useful. I asked Prashanthi what the takeaway is for online businesses that want to increase the helpfulness of their business in their own way, whether that’s an ecommerce site, recipe blog, or product review or comparison site.

Her answer suggests that creating content or features that resonate with users is a key to increasing the helpfulness of a website, something that’s super important for any online business today.

She offered the following insights:

“We’re extremely excited that early tests show that 90% of subscribers with access to our popular AI-powered job experience find it useful! This positive feedback underscores our commitment to creating features that genuinely resonate with our members. Rather than focusing on technology for technology’s sake, prioritizing how this tech can genuinely benefit our members seems to be resonating.

As professionals we know that job hunting can be an isolating and overwhelming experience, so we’ve introduced AI-assistant features designed to support and guide members throughout their job search journey, leveraging the knowledge from our Economic Graph. Our goal is to provide a virtual handhold, enabling job seekers to efficiently and confidently identify roles that align with their skills and aspirations. The overwhelmingly positive response reinforces that we’re moving in the right direction.

Our product development journey is guided by a combination of essential factors:

  • Product intuition
  • Technical innovation
  • Data insights
  • Customer feedback

These elements apply universally to any product we create. It’s essential to recognize that achieving success doesn’t happen overnight. Instead, it requires a culture of rapid experimentation and continuous learning. We understand that perfection isn’t attainable on the first try, but our commitment to improvement drives us forward.”

How To Decide What’s Helpful For Users?

Being unique and helpful is important for ranking in today’s search engines. But how does one go about reimagining the user’s experience? It can be difficult for someone inside the business to understand what users may need.

I asked what advice she would give an online business, whether that’s an ecommerce site or a product review site, that is contemplating what it can do better to serve its users.

She suggested the following steps:

“When we create new products, it’s essential to consider what other people need. So, right at the start, finding ways to bring more of the outside into development is critical. In the initial phases of developing our product strategy and roadmap for Premium, our user experience research and marketing teams conducted a combination of qualitative (stories) and quantitative (numbers) research to develop a deeper understanding of specific needs and related sentiments. This kind of research helps refine the personas we are building products for and clearly articulates the specific jobs and goals people are trying to accomplish with our products. For any business, this process can really humanize the product development process by helping to build a clear picture of the people that the product is designed for. It’s like getting to know them as real individuals.

But don’t just stop there. Once a basic version of the product (MVP) is ready, test it with a small group and pay attention to how well it works and what is said by the users. At LinkedIn, we involve our engineers in this process so they can learn about member’s needs and hear feedback first hand. As an engineering leader, I really enjoy sitting in these research sessions!—it makes the problems the team and I are solving feel more real. It’s better than just reading a list of product requirements.”

Cultivate Empathy For Online Success

A lot of times I read posts on social media where someone describes how they did their keyword research, hired experts for content, and did many things to demonstrate expertise, experience, authoritativeness, and trustworthiness – but nothing about empathizing with the site visitors, something that Prashanthi suggested was key to creating quality user experiences.

Reading some of LinkedIn’s descriptions of what they do, I saw a reference to a “user-focused lens” and I was curious about what that means to LinkedIn and what the end goal of that is.

She answered:

“Looking through a user-focused lens is about really connecting with our members and understanding their needs and experiences, with the goal being that what we create is functional as well as a joy to use.

As product builders, our most important job is to build ones that solve our member’s needs and create value for them at every touch point. For me, the only way to internalize what this means is to put ourselves in our members’ shoes and empathize with their needs. And this is where all product development functions, especially engineering, staying close to the member experience, sentiments, feedback, etc. will go a long way in developing a member-centric product development culture.

For example, when discussing features like AI-powered writing assistants, some members have reflected on how they consider themselves novice writers and how useful they find our thought-starters and suggested message drafts. When I hear these sentiments, it gives me confidence that the products we are building are helping make their lives easier, taking them a step closer to their goals and, in turn, making our jobs and purpose more meaningful.”

User-Focused Online Experiences

Prashanthi’s answers show the value of a user-centric approach to everything we do online. Anchoring your content strategy to your mission, cultivating the quality of empathy, and listening to your site visitors are all important.

The information she shared is adaptable to any scenario in online marketing, whether that business is built on sales, content, recipes, or reviews.

Google Explains How It Processes Queries & Ranks Content via @sejournal, @martinibuster

Google’s Gary Illyes published a new How Search Works video that gives an inside look at how search queries are interpreted and how content is ranked. Gary’s presentation shows an outline of the ranking process that every SEO should know and understand.

Goal Of Ranking

Gary begins the presentation by emphasizing that the goal in search is to provide results with webpages that are high quality, trustworthy, and relevant.

Later in the video, he refines the meaning of the word relevance by emphasizing relevance to the user, which is different from plain semantic relevance. Relevance to the user can mean personalization, which can include previous searches, topicality, and geolocation. I use the word topicality in the sense of a query being topical, as in trending interest.

Many SEOs are focused on the semantic meaning of words but another way of thinking about relevance is in relation to the user, which can encompass a lot of factors.

Search Query Parsing

Gary next moves on to the first step of how Google ranks webpages: interpreting the search query. This begins with cleaning up the search query by removing stop words, identifying entities that need stop words, and query expansion.

Stop words are words like “and”, “in”, “is”, “on”, and “the” that are stripped out of search queries because they appear frequently and don’t add anything meaningful to what the user means. In general, there’s also a practical reason for removing stop words: it reduces database bloat, and their absence improves processing time.

Gary Illyes mentioned that some phrases need stop words, so that’s something they look out for too, using the example of the Statue of Liberty, where the word “of” is important to the meaning.

Query Expansion

This is the part where search queries are combined with other similar queries. Gary uses the example of “car dealership” being the same as “auto dealership,” which means that a webpage about one can rank for queries about the other, even if the phrase doesn’t appear on the webpage.

Once the query is understood, the parsed query is then sent to the index for ranking.
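To make these parsing steps concrete, here is a minimal sketch in Python, assuming a made-up stop-word list, exception-phrase list, and synonym map; it only illustrates the general idea and is not Google’s actual implementation.

# Illustrative only: a toy query parser, not Google's pipeline.
STOP_WORDS = {"a", "an", "and", "in", "is", "of", "on", "the"}

# Phrases where a stop word carries meaning and must be kept (hypothetical list).
EXCEPTION_PHRASES = {"statue of liberty"}

# Hypothetical synonym map used for query expansion.
SYNONYMS = {"car": {"auto"}, "auto": {"car"}}

def parse_query(query):
    """Lowercase the query, drop stop words (except ones that belong to a
    known exception phrase), then expand terms with simple synonyms."""
    text = query.lower().strip()

    # Protect stop words that are part of a recognized phrase, like "of".
    protected = set()
    for phrase in EXCEPTION_PHRASES:
        if phrase in text:
            protected |= set(phrase.split())

    terms = [t for t in text.split() if t not in STOP_WORDS or t in protected]

    expanded = set(terms)
    for term in terms:
        expanded |= SYNONYMS.get(term, set())
    return expanded

print(parse_query("the statue of liberty"))    # {'statue', 'of', 'liberty'}
print(parse_query("car dealership in boston")) # {'car', 'auto', 'dealership', 'boston'}

In practice, a search engine does this with learned models and far richer synonym data, but the flow (normalize, strip or keep stop words, expand) mirrors what Gary describes.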

Ranking

Gary says that a large number of matching webpages are sent to the index to be ranked.

He mentions the following considerations:

  • Relevancy to the user
  • Hundreds of factors determine relevance
  • Webpage content is the most important factor
  • Other factors include user location, language and device type
  • Quality of the webpage and the site are taken into ranking consideration
  • Quality = uniqueness of content
  • Relative importance of the page on the Internet
  • Surfaced search features are query-dependent

Relevancy To The User

Gary mentions that the ranking “largely depends on the relevancy of the results to the user,” which is a deceptively simple statement that has a lot of meaning. As I mentioned earlier, many SEOs focus on semantic relevance, but the part about relevance to the user is super important because search queries have multiple meanings and contexts that go beyond semantic relevance. Gary’s presentation mentions these other ways of understanding relevance to the user.

Important points that he mentions are:

“Hundreds of factors determine relevance…

…actual contents of the page being the most important one,”

…user’s location, language and device type”

That’s not a complete list but it shows how determining relevance is more complex than anchor text, entity analysis, user intent analysis and semantic keyword SEO.

Quality Of Webpages And Sites

It’s interesting that Gary chose to emphasize the uniqueness of the content not just as a quality factor but as an important factor. Many SEOs interpret the word “uniqueness” literally in the sense of a word-for-word duplication of other content. But unique has another meaning in the sense of something being unlike other things altogether.

I see SEOs list the things they do to create high-ranking content, and it makes me cringe when they include “competitor analysis” as part of that process because that’s the first step in creating content that is similar to what’s already in the search results, based on the idea that this is what Google ranks, so let’s give Google more of it. The resulting content can be the exact opposite of unique and not at all what Google is looking for, resulting in the “discovered not indexed” designation in Search Console.

Uniqueness is something that Googlers have been emphasizing for decades and it’s something to think deeply about.

Search Features

Google shows many different kinds of search features and Gary Illyes says that they’re query dependent, meaning that different queries trigger different features.

He said:

“Based on the user’s query, the Search features that appear on the Search result pages may also change.”

Takeaways

Gary covered a lot of topics in a snack-sized video whose brevity belies the importance of the information shared in it.

Watch Gary’s presentation:

How Google Search serves pages

Featured image: a screenshot of Google’s video/modified by author

Labeled: A New Wave Of AI Content Labeling Efforts via @sejournal, @Kevin_Indig

The new oil isn’t data or attention. It’s words. The differentiator to build next-gen AI models is access to content when normalizing for computing power, storage, and energy.

But the web is already getting too small to satiate the hunger for new models.

Some executives and researchers say the industry’s need for high-quality text data could outstrip supply within two years, potentially slowing AI’s development.

Even fine-tuning doesn’t seem to work as well as simply building more powerful models. A Microsoft research case study shows that effective prompts can outperform a fine-tuned model by 27%.

We were wondering whether the future would consist of many small, fine-tuned models or a few big, all-encompassing ones. It seems to be the latter.

There is no AI strategy without a data strategy.

Hungry for more high-quality content to develop the next generation of large language models (LLMs), model developers are starting to pay for natural content and reviving their efforts to label synthetic data.

For content creators of any kind, this new flow of money could carve the path to a new content monetization model that incentivizes quality and makes the web better.

AI Content Labeling Efforts (Image Credit: Lyna ™)


KYC: AI

If content is the new oil, social networks are oil rigs. Google invested $60 million a year in using Reddit content to train its models and surface Reddit answers at the top of search. Pennies, if you ask me.

YouTube CEO Neal Mohan recently sent a clear message to OpenAI and other model developers that training on YouTube is a no-go, defending the company’s massive oil reserves.

The New York Times, which is currently running a lawsuit against OpenAI, published an article stating that OpenAI developed Whisper to train models on YouTube transcripts, and Google uses content from all of its platforms, like Google Docs and Maps reviews, to train its AI models.

Generative AI data providers like Appen or Scale AI are recruiting (human) writers to create content for LLM model training.

Make no mistake, writers aren’t getting rich writing for AI.

For $25 to $50 per hour, writers perform tasks like ranking AI responses, writing short stories, and fact-checking.

Applicants must have a Ph.D. or master’s degree or be currently attending college. Data providers are clearly looking for experts and “good” writers. But the early signs are promising: Writing for AI could be monetizable.

Job advertisement for a creative writing expert in AI model training (Image Credit: Kevin Indig)

Model developers look for good content in every corner of the web, and some are happy to sell it.

Content platforms like Photobucket sell photos for five cents to one dollar apiece. Short-form videos can get $2 to $4; longer films cost $100 to $300 per hour of footage.

With billions of photos, the company struck oil in its backyard. Which CEO can withstand such a temptation, especially as content monetization is getting harder and harder?

From Free Content:

Publishers are getting squeezed from multiple sides:

  • Few are prepared for the death of third-party cookies.
  • Social networks send less traffic (Meta) or deteriorate in quality (X).
  • Most young people get news from TikTok.
  • SGE looms on the horizon.

Ironically, labeling AI content better might help LLM development because it’s easier to separate natural from synthetic content.

In that sense, it’s in the interest of LLM developers to label AI content so they can exclude it from training or use it the right way.

Labeling

Drilling for words to train LLMs is just one side of developing next-gen AI models. The other one is labeling. Model developers need labeling to avoid model collapse, and society needs it as a shield against fake news.

A new movement of AI labeling is rising despite OpenAI dropping watermarking due to low accuracy (26%). Instead of labeling content themselves, which seems futile, big tech (Google, YouTube, Meta, and TikTok) pushes users to label AI content with a carrot/stick approach.

Google uses a double-pronged approach to fight AI spam in search: prominently showing forums like Reddit, where content is most likely created by humans, and penalties.

From AIfficiency:

Google is surfacing more content from forums in the SERPs to counterbalance AI content. Verification is the ultimate AI watermarking. Even though Reddit can’t prevent humans from using AI to create posts or comments, chances are lower because of two things Google search doesn’t have: Moderation and Karma.

Yes, Content Goblins have already taken aim at Reddit, but most of the 73 million daily active users provide useful answers.1 Content moderators punish spam with bans or even kicks. But the most powerful driver of quality on Reddit is Karma, “a user’s reputation score that reflects their community contributions.” Through simple up or downvotes, users can gain authority and trustworthiness, two integral ingredients in Google’s quality systems.

Google recently clarified that it expects merchants not to remove AI metadata from images using the IPTC metadata protocol.

When an image has a tag like compositeSynthetic, Google might label it as “AI-generated” anywhere, not just in shopping. The punishment for removing AI metadata is unclear, but I imagine it working like a link penalty.

IPTC is the same format Meta uses for Instagram, Facebook, and WhatsApp. Both companies give IPTC metatags to any content coming out of their own LLMs. The more AI tool makers follow the same guidelines to mark and tag AI content, the more reliably detection systems work.

When photorealistic images are created using our Meta AI feature, we do several things to make sure people know AI is involved, including putting visible markers that you can see on the images, and both invisible watermarks and metadata embedded within image files. Using both invisible watermarking and metadata in this way improves both the robustness of these invisible markers and helps other platforms identify them.
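For a quick sense of how such a label can be detected, here is a deliberately crude sketch; the file name is a placeholder, and a real workflow would use a proper metadata parser (for example, a dedicated EXIF/XMP tool) rather than a raw byte search.

# Crude illustration: look for the "compositeSynthetic" digital source type
# marker inside an image file's embedded metadata (XMP is stored as plain text
# in the file). "example-image.jpg" is a placeholder path.
MARKER = b"compositeSynthetic"

def looks_ai_labeled(path):
    with open(path, "rb") as f:
        return MARKER in f.read()

print(looks_ai_labeled("example-image.jpg"))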

The downsides of AI content are small when the content looks like AI. But when AI content looks real, we need labels.

While advertisers try to get away from the AI look, content platforms prefer it because it’s easy to recognize.

For commercial artists and advertisers, generative AI has the power to massively speed up the creative process and deliver personalized ads to customers on a large scale – something of a holy grail in the marketing world. But there’s a catch: Many images AI models generate feature cartoonish smoothness, telltale flaws, or both.

Consumers are already turning against “the AI look,” so much so that an uncanny and cinematic Super Bowl ad for Christian charity He Gets Us was accused of being born from AI – even though a photographer created its images.

YouTube started enforcing new guidelines for video creators that say realistic-looking AI content needs to be labeled.

Challenges posed by generative AI have been an ongoing area of focus for YouTube, but we know AI introduces new risks that bad actors may try to exploit during an election. AI can be used to generate content that has the potential to mislead viewers – particularly if they’re unaware that the video has been altered or is synthetically created. To better address this concern and inform viewers when the content they’re watching is altered or synthetic, we’ll start to introduce the following updates:

  • Creator Disclosure: Creators will be required to disclose when they’ve created altered or synthetic content that’s realistic, including using AI tools. This will include election content.
  • Labeling: We’ll label realistic altered or synthetic election content that doesn’t violate our policies, to clearly indicate for viewers that some of the content was altered or synthetic. For elections, this label will be displayed in both the video player and the video description, and will surface regardless of the creator, political viewpoints, or language.

The biggest imminent fear is fake AI content that could influence the 2024 U.S. presidential election.

No platform wants to be the Facebook of 2016, which saw lasting reputational damage that impacted its stock price.

Chinese and Russian state actors have already experimented with fake AI news and tried to meddle with the Taiwanese and upcoming U.S. elections.

Now that OpenAI is close to releasing Sora, which creates hyperrealistic videos from prompts, it’s not a far jump to imagine how AI videos can cause problems without strict labeling. The situation is tough to get under control. Google Books already features books that were clearly written with or by ChatGPT.

An open e-book on a computer screen displaying text related to technology, innovation, and AI content labeling (Image Credit: Kevin Indig)

Takeaway

Labels, whether mental or visual, influence our decisions. They annotate the world for us and have the power to create or destroy trust. Like category heuristics in shopping, labels simplify our decision-making and information filtering.

From Messy Middle:

Lastly, the idea of category heuristics, numbers customers focus on to simplify decision-making, like megapixels for cameras, offers a path to specify user behavior optimization. An ecommerce store selling cameras, for example, should optimize their product cards to prioritize category heuristics visually. Granted, you first need to gain an understanding of the heuristics in your categories, and they might vary based on the product you sell. I guess that’s what it takes to be successful in SEO these days.

Soon, labels will tell us whether content was written by AI or not. In a public survey of 23,000 respondents, Meta found that 82% of people want labels on AI content. Whether common standards and punishments work remains to be seen, but the urgency is there.

There is also an opportunity here: Labels could shine a spotlight on human writers and make their content more valuable, depending on how good AI content becomes.

On top of that, writing for AI could be another way to monetize content. While current hourly rates don’t make anyone rich, model training adds new value to content. Content platforms could find new revenue streams.

Web content has become extremely commercialized, but AI licensing could incentivize writers to create good content again and untie themselves from affiliate or advertising income.

Sometimes, the contrast makes value visible. Maybe AI can make the web better after all.


For Data-Guzzling AI Companies, the Internet Is Too Small

The Power Of Prompting

Inside Big Tech’s Underground Race To Buy AI Training Data

OpenAI Gives Up On Detection Tool For AI-Generated Text

IPTC Photo Metadata

Labeling AI-Generated Images on Facebook, Instagram and Threads

How The Ad Industry Is Making AI Images Look Less Like AI

How We’re Helping Creators Disclose Altered Or Synthetic Content

Addressing AI-Generated Election Misinformation

China Is Targeting U.S. Voters And Taiwan With AI-Powered Disinformation

Google Books Is Indexing AI-Generated Garbage

Our Approach To Labeling AI-Generated Content And Manipulated Media


Featured Image: Paulo Bobita/Search Engine Journal

How To Build Authorship As A Travel Brand via @sejournal, @TaylorDanRW

The advice to produce content that aligns with Google’s E-E-A-T guidelines is universal amongst SEO professionals, but guidance on how to achieve this is hard to come by.

During the Helpful Content Update of September 2023, many travel websites were impacted.

I ran a small study of more than 100 travel websites and found that more websites in travel benefited from the update than lost out – but those that did see declines saw large traffic drops.

From my analysis, many of the travel websites in my sample that saw declines were producing lots of content and, on paper, doing a lot of the box-ticking things you see on most content checklists, but they lacked any validation that the authors were qualified to talk about the topics being published.

The Importance Of Authorship In Travel Content

When people find your travel brand online through your website or other content, they are more likely to buy something or think about buying if they believe your brand is real and that experts are behind the advice and services you provide.

This makes your brand look more believable and trustworthy, which is very important in nurturing a potential customer through their journey and influencing their research and buying stages.

Being seen as an expert and showing who creates your content helps your brand’s image and increases the chances of people buying from you. It also plays a big role in how well you do in search engine rankings.

Factors like being trustworthy and knowledgeable help Google decide if your website’s content is good, and whilst E-E-A-T is only a guideline laid out in the Quality Rater Guidelines, demonstrating these elements lends itself to a positive user experience and helpful content.

How Your Expertise Can Influence The Travel User Journey

The journey travelers take from researching to booking their vacations is complex and varied, far from a simple, straight path.

It moves through different stages, influenced by various factors such as personal preferences, recommendations, advertisements, and budget considerations.

Potential travelers might start by dreaming about destinations seen on social media, then move on to read reviews and compare prices on various sites.

As they gather information, their plans might change due to discoveries or insights, leading to multiple rounds of consideration before they finally make bookings.

Image made by author, March 2024

Writing useful content and showing that you really know what you’re talking about can make people trust you more.

A large part of what motivates people to buy is trust, and to build that trust, they need to be certain that the information you’re providing (as well as the service) matches their expectations of what they are buying.

These expectations are not only set by your brand and your content but also by what your competitors are saying.

Having an author tied to the brand who is recognized and validated as an expert in the topic they’re writing about can go a long way toward building trust for your brand.

How Does Google Judge Authorship?

Google uses a method called reconciliation to determine whether different pieces of content across the Internet have been written by, and can be connected to, a single author.

To determine authors and authorship, Google faces a number of challenges: many authors may share the same name (especially if you have a common name like mine), and without clear defining markers or consistent linking to a centralized location, these signals can be confusing.

The process of reconciliation involves analyzing various signals and data points, from author bylines and dedicated author pages to structured data.

To aid Google in the reconciliation process, websites should work to validate authors by linking to personal websites and social media profiles in their article-level author bios and author pages. This consistency helps Google reduce mistakes in the reconciliation process.

This does add a layer of process complexity for some businesses, as linking to personal social profiles or professional personal profiles brings a risk to the business should the individual post content the business doesn’t agree with, and this can cause conflict.

To learn more about how Google handles author identification, read this article from 2021, which breaks down what John Mueller has said on the matter.

Displaying Authorship Through Structured Data

Years ago, Google used rel=”author” to identify authors, and now Google identifies the main entity behind content through on-page signals (e.g., a clearly marked content origin) and structured data/schema markup.

You can use various Schema types and attributions to identify and label authors.

While Google only officially supports a handful of schemas for SERP decoration and features, making use of all structured data (where relevant) has correlated with improvements in ranking and search performance.

We can also see correlative improvements when visually mapping entities using the Knowledge Graph API.

Person Schema (schema.org/Person)

This schema is used to describe a person. Attributes you might use include:

  • name: The name of the person.
  • jobTitle: The job title of the person (e.g., executive director, writer, journalist).
  • worksFor: An organization the person works for (can be used to tie the author to a brand or website).
  • url: URL of the person’s official website or social profile.
  • image: URL of an image of the person.
  • sameAs: An array of URLs that you can use to link the person to their social media profiles, Wikipedia page, etc.

CreativeWork Schema (schema.org/CreativeWork)

This schema is a broad category that includes articles, blog posts, videos, etc. It can be used to define the relationship between an author and their work. Attributes include:

  • author: The author of the content, which can be linked to the Person schema.
  • publisher: The organization responsible for publishing the work, which can be linked to the Organization schema.
  • datePublished: The date on which the content was published.
  • headline: A headline or title of the content.

Article Schema (schema.org/Article)

A more specific type of CreativeWork focused on articles. Attributes similar to CreativeWork can be used here, with additional emphasis on:

  • articleSection: High-level section name(s) that the article belongs to (e.g., Technology, Lifestyle).

A relatively standard example schema for authors on your travel website would be:

{
  "@context": "http://schema.org",
  "@type": "Person",
  "name": "Bob Bobbins",
  "jobTitle": "Travel Writer",
  "worksFor": {
    "@type": "Organization",
    "name": "Curacao.com",
    "url": "https://www.curacao.com"
  },
  "url": "https://www.curacao.com/authors/bob-bobbins",
  "image": "https://www.curacao.com/authors/bobbobbins.jpg",
  "sameAs": [
    "https://www.grenada.com/authors/bob-bobbins",
    "https://www.turkscaicos.com/authors/bob-bobbins",
    "https://www.travelamericas.com/authors/bob-bobbins"
  ]
}
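For completeness, here is a hedged sketch of how that Person object might be referenced from the Article/CreativeWork markup described above; the headline and date are placeholder values for illustration, not a prescribed implementation.

{
  "@context": "http://schema.org",
  "@type": "Article",
  "headline": "Example: A Guide To Curacao's Beaches",
  "articleSection": "Travel Guides",
  "datePublished": "2024-03-01",
  "author": {
    "@type": "Person",
    "name": "Bob Bobbins",
    "url": "https://www.curacao.com/authors/bob-bobbins"
  },
  "publisher": {
    "@type": "Organization",
    "name": "Curacao.com",
    "url": "https://www.curacao.com"
  }
}

Using the same author URL here, on the author page, and in the sameAs links gives Google a consistent set of signals to reconcile the author entity across pages.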

Building Travel Authority

Outside of the SEO benefits, creating helpful content with a validated author profile can impact the user buying process and help your brand play a pivotal role in the user’s travel research and booking journey.

The helpful content updates have proven that authorship and expertise are more important than ever.

And coupled with Google’s Hidden Gems initiative, there is a lot of opportunity for challenger travel brands to compete for search queries at various stages of the customer booking journey – queries that have historically been dominated by the major travel brands.



Featured Image: Opat Suvi/Shutterstock

Leaked: Google Gemini Availability In Android Search via @sejournal, @martinibuster

A student and researcher who leaks hidden Android features discovered a setting deep within the Android root files that enables Google Gemini directly from Google search in a way that resembles Apple iOS. The discovery raises questions about why the setting is there and whether it could be connected to a general rollout of AI in search, rumored to be happening in May 2024.

Gemini: What SEO Could Be Up Against

There are only rumors that some form of AI search will be rolled out. But if Google rolls out Gemini access as a standard feature, the following gives an idea of what the search community would have to look forward to.

Gemini is Google’s most powerful AI model, with advanced training, technology, and features that in many ways go far beyond existing models.

For example, Gemini is the first AI model to be natively trained to be multimodal. Multimodal means the ability to work with images, text, video, and audio, and to pull knowledge from each of the different forms of media. All previous AI models were trained to be multimodal by building separate components and then joining the separate parts together. According to Google, the old way of training for multimodality didn’t work well for complex reasoning tasks. Gemini, however, is pre-trained with multimodality, which enables it to have complex reasoning abilities that exceed those of all previous models.

Another example of the advanced capabilities of Gemini is the unprecedented scale of its context window. A context window is the amount of data a language model can consider simultaneously in order to make a decision, and it is one measure of how powerful the language model is. Context windows are measured in “tokens,” which represent the smallest units of information.

Comparison Of Context Windows

  • ChatGPT has a maximum context window of 32k
  • GPT-4 Turbo has a context window of 128k
  • Gemini 1.5 Pro has a context window of one million tokens.

To put that context window into perspective, it allows Gemini to take in the entire text of the three Lord of the Rings books, or ten hours of video, and answer questions about any of it. In comparison, OpenAI’s largest context window of 128k can consider the 198-page Robinson Crusoe or approximately 1,600 tweets.

Internal Google research has shown that its advanced technology enables context windows as high as 10 million tokens.
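For a rough sense of scale, a common rule of thumb is that one token corresponds to roughly 0.75 English words (about four characters). The sketch below applies that heuristic to the context windows mentioned above; the 0.75 words-per-token figure and the 300 words-per-page figure are approximations for illustration only.

# Back-of-the-envelope conversion from tokens to approximate words and pages.
# Assumptions (illustrative only): ~0.75 words per token, ~300 words per page.
WORDS_PER_TOKEN = 0.75
WORDS_PER_PAGE = 300

context_windows = {
    "ChatGPT (32k)": 32_000,
    "GPT-4 Turbo (128k)": 128_000,
    "Gemini 1.5 Pro (1M)": 1_000_000,
    "Google research (10M)": 10_000_000,
}

for model, tokens in context_windows.items():
    words = tokens * WORDS_PER_TOKEN
    pages = words / WORDS_PER_PAGE
    print(f"{model}: ~{words:,.0f} words, ~{pages:,.0f} pages")

# For example, 1 million tokens works out to roughly 750,000 words,
# which is why Gemini 1.5 Pro can hold several novels at once.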

Leaked Functionality Resembles iOS Implementation

What was discovered is that Android contains a way to access Gemini AI directly from the search bar in the Google app, in the same way it’s available on Apple mobile devices.

The official directions for the Apple device mirror the functionality that the researcher discovered hidden in Android.

This is how the iOS Gemini access is described:

“On iPhones, you can chat with Gemini in the Google app. With a tap of the Gemini tab, unlock a whole new way to learn, create images and get help while you’re on the go. Interact with it through text, voice, images, and your camera to get help in new ways.”

The researcher who leaked the Gemini functionality in Google search discovered it hidden within Android. Enabling this function caused a toggle to appear in the Google search bar that makes it easy for users to swipe to directly access Gemini AI functionality exactly the same way as in iOS.

Enabling this functionality requires rooting an Android phone, which means accessing the operating system at the most fundamental level of files.

According to the person who leaked the information, one of the requirements for the toggle is that Gemini should already be enabled as the mobile assistant. An app called GMS Flags must also be installed in order to obtain the ability to toggle Google app features on and off.

The requirements are:

“Required things –

Rooted devices running Android 12+

Google App latest beta version from Play Store or Apkmirror

GMS Flags app installed with root permission granted. (GitHub)

Gemini should be available for you already in your Google app.”

Screenshot Of New Search Toggle

A screenshot highlighting the ‘toggle’ button in a user interface with a red arrow pointing towards it, with a Google search bar visible in the background and a snippet of a finance-related application at the bottom.

Screenshot Of Gemini Activated In Google Search

The person who uncovered this functionality tweeted:

“Google app for Android to soon get toggle to switch between Gemini and Search [just like on iOS]”

Google Set To Announce Official Rollout Of SGE?

There have been rumors that Google is set to announce the official rollout of Google Search Generative Experience at the May 2024 I/O conference, where Google regularly announces new features coming to search (among other announcements).

Eli Schwartz recently posted on LinkedIn about the rumored SGE rollout:

“That date did not come from Google PR; however, as of last week, that is the current planned launch date internally. Of course, the timeline could still change, given that it’s still 53 days away. Throughout the last year, multiple launch dates have been missed.

…Also, it’s important to elaborate on what exactly “launch” means.

Right now, the only way to see SGE, unless you’re in the beta experiment, is if you’re opted into the labs.

Launching means that they’ll show SGE to people who have not opted in, but the scale of that could vary widely.”

It’s unknown whether this hidden toggle is a placeholder for a future version of the Google search app or something that enables the rollout of SGE at a future date.

However, this hidden toggle does offer a possible clue for those who are curious about how Google may roll out an AI-based front end to search, and whether the toggle connects in some way to that functionality.

Read how to root to enable Gemini in Android search:

How to enable the material bottom navigation search bar and Gemini toggle in Google Discover on Android [ROOT]

OpenAI context window list

Featured Image by Shutterstock/Mojahid Mottakin

Will AI Replace SEO Specialists? via @sejournal, @wburton27

With the expansion of generative AI and its integration into most SEO workflows and processes, coupled with the significant impact of layoffs in the tech sector, one has to ponder: is AI poised to replace all of our jobs?

In January, Google laid off hundreds of workers as it shifted its investments and focus to AI development. The tech giant is not alone; brands like UPS and Duolingo, to name a few, are doing the same thing.

Is this a new trend, or is it something to be really concerned about?

Let’s explore how AI is unlikely to replace SEO specialists completely, but it will certainly transform how we work.

A Closer Look At How AI Is Transforming The Field Of SEO

Before AI went mainstream, much SEO work was manual, and certain tasks took a lot of time to perform.

For example, optimizing a landing page could take thirty minutes to a couple of hours, depending on your experience and skill level.

Producing a content strategy took a good amount of time (i.e., a week or more), depending on the site, competition, search engine results pages (SERPs), etc. But now, with AI, SEO pros can do things quickly and more efficiently.

Here’s how AI can help us become more efficient. But be careful to also acknowledge the limitations of AI. A humanized approach, incorporating AI where appropriate, is a win-win situation.

Enhancement Of Tools To Drive Better Efficiency

AI has definitely enhanced some of the tools we use to perform our jobs, making tasks like keyword research, competitor analysis, and content optimization more efficient and effective.

AI algorithms can process copious amounts of data faster than humans, providing insights that can inform our SEO strategies.

For example, AI tools can help SEO specialists discover new keyword opportunities, analyze the performance of their content, and identify gaps and areas for improvement more quickly and easily than we could in the past.

AI tools can also automate some tedious and repetitive tasks that SEO specialists perform, such as generating titles and metadata, checking for broken links, optimizing images, finding the semantic relationships between keywords, identifying search trends, and predicting user behavior.

Content Creation And Optimization

One of the biggest benefits I have seen with AI is that it is particularly good at ideating content topics and even helping to draft content.

However, human oversight is crucial to ensure the content remains high-quality, accurate, and relevant to users while adhering to brand voice and E-E-A-T principles.

AI tools can help SEO specialists generate content ideas based on user intent, search trends, and competitor analysis. They can also help provide suggestions for headlines, subheadings, images, content briefs, and links.

However, AI tools cannot replace the human element of content creation, which requires creativity, empathy, and persuasion.

Humans must still create and review content to avoid potential legal and ethical issues, negative PR outcomes, and factual inaccuracies. With the March update, Google took aim at “scaled content abuse” and applied manual actions to many websites producing a large amount of AI content without human input.

SEO and content editors still need to review, edit, and approve any output from generative AI tools to ensure that it meets the expectations and needs of their target audience.

You can’t just take the content from your AI platform as-is – without making it useful, relevant, and factual – and hope it will rank, because it probably won’t, especially for competitive phrases.

Changing The SEO Landscape

With the rise of AI, and with AI powering Google’s Search Generative Experience (SGE), SEO could potentially go through one of the biggest changes that has ever happened to the industry.

As search engines increasingly use AI to refine their algorithms, SEO specialists need to adapt their strategies. AI can help them stay ahead of these changes by predicting trends and identifying new optimization opportunities, such as SGE snippets.

For example, AI tools can help SEO specialists not only monitor and analyze the impact of algorithm updates, but also provide recommendations for adjusting SEO tactics accordingly. They can also help leverage new features and formats that search engines introduce, such as SGE featured snippets.

By leveraging AI tools, SEO specialists can optimize content for these new formats, increasing their chances of ranking higher and attracting more qualified traffic to their clients and their own sites. This success hinges on interpreting the data and putting together a winning SEO strategy.

Human Insight And Creativity

Despite the advancements in AI, human insight and creativity remain essential. Understanding audience needs, crafting compelling messages, and strategic thinking are areas where humans excel and are critical in SEO.

AI tools can provide data and insights but cannot replace the human ability to interpret and apply them.

SEO specialists still need to use their judgment and experience to decide which SEO strategies and tactics are best suited for their goals and context.

They also need to use their creativity and storytelling skills to create content that engages and persuades their audience and builds trust and loyalty.

AI tools cannot replicate the human emotion and connection vital for a successful SEO strategy.

Ethical Considerations And Best Practices

AI tools must be used responsibly and in accordance with search engine guidelines. SEO specialists play a key role in ensuring the ethical use of AI and adherence to best practices to avoid penalties.

As SEO professionals, we need to be aware of the potential risks and challenges of using AI tools, such as data privacy, bias, and quality issues. We also must ensure that the data we use and the content we generate with AI tools are accurate, relevant, and trustworthy.

AI’s Enhancements And Boundaries In SEO

AI has certainly made it easier and more efficient to complete SEO tasks, such as on-page optimization and coding, which frees up some of our time to work on strategic growth opportunities.

These advancements are not perfect and do have some limitations, including:

  • AI is dependent on being trained on pre-existing information and data. It lacks the ability to innovate beyond known information unless it has been trained on it.
  • The lack of human experience and wisdom. AI cannot match the nuanced understanding and contextual insight that experienced SEO professionals bring.
  • Requirement for direct inputs. AI’s effectiveness is contingent on the quality of the inputs it receives, and it can struggle with subtle strategy shifts that we humans can easily navigate.

Wrapping Up

AI will continue to become an invaluable tool for SEO specialists, but it won’t replace the need for human expertise, creativity, and strategic thinking.

The role of SEO specialists will evolve, with a greater emphasis on managing and interpreting AI-generated data and insights – and less on manual and repetitive tasks that the machines can now do with human oversight.

SEO specialists who actively learn and embrace AI with a human-centric approach to refine their skill sets will gain a competitive edge and a brighter future in the SEO industry.



Featured Image: Stokkete/Shutterstock

Google Explains How It Chooses Canonical Webpages via @sejournal, @martinibuster

In a Google Search Central video, Google’s Gary Illyes explained the part of webpage indexing that involves selecting canonicals. He explained what a canonical means to Google, gave a thumbnail explanation of webpage signals, mentioned the centerpiece of a page, and described what Google does with the duplicates, which suggests a new way of thinking about them.

What Is A Canonical Webpage?

There are several ways of considering what canonical means: the publisher’s and the SEO’s viewpoint from our side of the search box, and what canonical means from Google’s side.

Publishers identify what they feel is the “original” webpage, while the SEO’s conception of canonicals is about choosing the “strongest” version of a webpage for ranking purposes.

Canonicalization for Google is an entirely different thing from what publishers and SEOs think it is, so it’s good to hear it from a Googler like Gary Illyes.

Google’s official documentation about canonicalization uses the word deduplication to reference the process of choosing a canonical and lists five typical reasons why a site might have duplicate pages.

Five Reasons For Duplicate Pages

  1. “Region variants: for example, a piece of content for the USA and the UK, accessible from different URLs, but essentially the same content in the same language
  2. Device variants: for example, a page with both a mobile and a desktop version
  3. Protocol variants: for example, the HTTP and HTTPS versions of a site
  4. Site functions: for example, the results of sorting and filtering functions of a category page
  5. Accidental variants: for example, the demo version of the site is accidentally left accessible to crawlers”

Canonicals can be considered in three different ways and there are at least five reasons for duplicate pages.

Gary describes one more way to think of canonicals.

Signals Are Used For Choosing Canonicals

Illyes shares one more definition of a canonical, this time from the indexing point of view, and talks about the signals that are used for selecting canonicals.

Gary explains:

“Google determines if the page is a duplicate of another already known page and which version should be kept in the index, the canonical version.

But in this context, the canonical version is the page from a group of duplicate pages that best represents the group according to the signals we’ve collected about each version.”

Gary stops to explain duplicate clustering and then returns to talking about signals a short while later.

He continued:

“For the most part, only canonical pages appear in Search results. But how do we know which page is canonical?

So once Google has the content of your page, or more specifically the main content or centerpiece of a page, it will group it with one or more pages featuring similar content, if any. This is duplicate clustering.”

I just want to stop here to note that Gary refers to the main content as the “centerpiece of a page,” which is interesting because there’s a concept introduced by Google’s Martin Splitt called the Centerpiece Annotation. He didn’t really explain what the Centerpiece Annotation is, but this bit that Gary shared helps.

The following is the part of the video where Gary talks about what signals actually are.

Illyes explains what “signals” are:

“Then it compares a handful of signals it has already calculated for each page to select a canonical version.

Signals are pieces of information that the search engine collects about pages and websites, which are used for further processing.

Some signals are very straightforward, such as site owner annotations in HTML like rel=”canonical”, while others, like the importance of an individual page on the internet, are less straightforward.”

Duplicate Clusters Have One Canonical

Gary next explains that one page is chosen as the canonical to represent each cluster of duplicate pages in the search results. Every cluster of duplicates has one canonical.

He continues:

“Each of the duplicate clusters will have a single version of the content selected as canonical.

This version will represent the content in Search results for all the other versions.

The other versions in the cluster become alternate versions that may be served in different contexts, like if the user is searching for a very specific page from the cluster.”
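As a mental model of the clustering and selection Gary describes, here is a simplified sketch; the grouping key and the signal weights are invented for illustration and are not Google’s actual signals or algorithm.

# Toy illustration of duplicate clustering and canonical selection.
# The cluster key and signal weights are made up; Google's real signals
# and clustering are far more sophisticated and are not public.
from collections import defaultdict

pages = [
    {"url": "https://example.com/widget", "content": "blue widget specs", "rel_canonical": True, "importance": 0.9},
    {"url": "http://example.com/widget", "content": "blue widget specs", "rel_canonical": False, "importance": 0.4},
    {"url": "https://example.com/widget?color=blue", "content": "blue widget specs", "rel_canonical": False, "importance": 0.5},
    {"url": "https://example.com/about", "content": "about our company", "rel_canonical": False, "importance": 0.7},
]

def cluster_key(page):
    # Stand-in for real near-duplicate detection of the main content ("centerpiece").
    return page["content"].strip().lower()

def signal_score(page):
    # Stand-in for "a handful of signals": an explicit hint plus page importance.
    return (2.0 if page["rel_canonical"] else 0.0) + page["importance"]

clusters = defaultdict(list)
for page in pages:
    clusters[cluster_key(page)].append(page)

for group in clusters.values():
    canonical = max(group, key=signal_score)
    alternates = [p["url"] for p in group if p is not canonical]
    print("canonical:", canonical["url"], "alternates:", alternates)

The point of the sketch is simply that every cluster ends up with exactly one canonical, and the rest become alternates that can still surface for more specific queries.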

Alternate Versions Of Webpages

That last part is really interesting and important to consider because it can help a site rank for multiple variations of a keyword, particularly for ecommerce webpages.

Sometimes the content management system (CMS) creates duplicate webpages to account for variations of a product like the size or color of a product which then can impact the description. Those variations can be chosen by Google to rank in the search results when that variant page more closely serves as a match for a search query.

This is important to think about because it might be tempting to redirect or noindex variant webpages to keep them out of the search index, out of fear of the (non-existent) keyword cannibalization problem. Adding a noindex to variant pages can backfire because there are scenarios where those variants are the best pages to rank for a more nuanced search query that specifies colors, sizes, or version numbers that differ from the canonical page.
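
As a hypothetical illustration of that trade-off (the product, URLs, and markup below are made up), compare leaving a variant page indexable with noindexing it:

```python
# Hypothetical <head> markup for a color variant of a product page.

# Option A: leave the variant indexable. Even if Google folds it into a
# duplicate cluster, Gary's quote above notes that alternate versions "may be
# served in different contexts," such as a query naming the exact color.
variant_indexable_head = '<link rel="canonical" href="https://example.com/widget">'

# Option B: noindex the variant. A noindexed page is kept out of the index
# entirely, so it can never be served, even for a query it matches best.
variant_noindexed_head = '<meta name="robots" content="noindex">'

print(variant_indexable_head)
print(variant_noindexed_head)
```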

Top Takeaways About Canonicals (And More) To Remember

There is a lot of information packed in Gary’s discussion of canonicals, including some side topics about the main content.

Here are seven takeaways to consider:

  1. The main content is referred to as the Centerpiece
  2. Google calculates a “handful of signals” for each page it discovers.
  3. Signals are data that are used for “further processing” after webpages are discovered.
  4. Some signals are in control of the publisher, like hints (and presumably directives). The hint that Illyes mentioned is the rel=canonical link attribute.
  5. Other signals are outside of the control of the publisher, like the importance of the page in the context of the Internet.
  6. Some duplicate pages can serve as alternate versions
  7. Alternate versions of webpages can still rank, which is useful for both Google and the publisher.

Watch the Search Central Episode about indexing:

How Google Search indexes pages

Featured image from Google video/altered by author

Google’s Indexing Process: When Is “Quality” Determined? via @sejournal, @MattGSouthern

In a recent video, Google’s Gary Illyes, a search team engineer, shared details about how the search engine assesses webpage quality during indexing.

This information is timely, as Google has steadily raised the bar for “quality” content.

Quality: A Key Factor in Indexing & Crawling Frequency

Illyes described the indexing stage, which involves analyzing a page’s textual content, tags, attributes, images, and videos.

During this stage, Google also calculates various signals that help determine the page’s quality and, consequently, its ranking in search results.

Illyes explains:

“The final step in indexing is deciding whether to include the page in Google’s index. This process, called index selection, largely depends on the page’s quality and the previously collected signals.”

This detail is especially relevant for publishers and SEO professionals struggling to get content indexed.

You could be doing everything right from a technical standpoint. However, your pages won’t get indexed if they don’t meet a certain quality threshold.

Further, Google has previously confirmed that high-quality content is crawled more frequently, which is crucial for staying competitive in search results.

One of Google’s goals for the year is to conserve crawling resources by prioritizing pages that “deserve” to be crawled, emphasizing the urgency of meeting Google’s quality standard.

Signals & Duplicate Content Handling

Illyes touched on how Google analyzes signals.

Some signals, like the rel=”canonical” annotation, are straightforward, while others, such as a page’s importance on the internet, are more complex.

Google also employs “duplicate clustering,” where similar pages are grouped, and a single canonical version is selected to represent the content in search results. The canonical version is determined by comparing the quality signals collected about each duplicate page.

Additional Indexing Insights

Along with the insight into quality assessment, Illyes shared these notable details:

  1. HTML Parsing and Semantic Issues: Illyes discussed how Google parses the HTML of a webpage and fixes any semantic issues it encounters. If unsupported tags are used within the <head> element, it can cause indexing problems (see the sketch after this list).
  2. Main Content Identification: Illyes mentioned that Google focuses on the “main content or centerpiece of a page” when analyzing it. This suggests that optimizing the primary content of a webpage is more important than incremental technical changes.
  3. Index Storage: Illyes revealed that Google’s search database is spread across thousands of computers. This is interesting context regarding the scale of Google’s infrastructure.
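
The first point is easy to illustrate. Below is a small sketch using Python’s standard html.parser that flags elements inside <head> that are not standard head-level elements. The allowed set follows the HTML spec, not anything Google has published, and this is not how Google’s parser works; it simply shows the kind of mistake Illyes is describing.

```python
from html.parser import HTMLParser

# Head-level elements allowed by the HTML spec (an assumption for this sketch,
# not a list published by Google).
ALLOWED_IN_HEAD = {"title", "base", "link", "meta", "style", "script", "noscript", "template"}

class HeadChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_head = False
        self.unsupported = []

    def handle_starttag(self, tag, attrs):
        if tag == "head":
            self.in_head = True
        elif self.in_head and tag not in ALLOWED_IN_HEAD:
            self.unsupported.append(tag)

    def handle_endtag(self, tag):
        if tag == "head":
            self.in_head = False

html_doc = "<html><head><title>Demo</title><div>misplaced</div></head><body></body></html>"
checker = HeadChecker()
checker.feed(html_doc)
print("Unsupported tags inside <head>:", checker.unsupported)  # ['div']
```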

Watch the full video below:

Why SEJ Cares

As Google continues prioritizing high-quality content in its indexing and ranking processes, SEO professionals should be aware of how it assesses quality.

By knowing the factors that influence indexing, such as relevance, quality, and signal calculation, SEO professionals have a clearer picture of what to aim for to meet Google’s indexing threshold.

How This Can Help You

To ensure your content meets Google’s quality standards, consider the following actionable steps:

  1. Focus on creating comprehensive content that addresses your audience’s needs and pain points.
  2. Identify current search demand trends and align your content with these topics.
  3. Ensure your content is well-structured and easy to navigate.
  4. Implement schema markup and other structured data to help Google better understand context (a minimal sketch follows below).
  5. Regularly update and refresh your content to maintain relevance and value.

You can potentially increase your indexed pages and crawling frequency by prioritizing quality, relevance, and meeting search demand.
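
For point 4 in the list above, here is a minimal sketch of emitting Article structured data as JSON-LD using Python’s standard library. The field values are placeholders; schema.org and Google’s structured data documentation define which types and properties are actually supported.

```python
import json

# Placeholder Article structured data; swap in your real values.
article_schema = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Example headline",
    "datePublished": "2024-06-01",
    "author": {"@type": "Person", "name": "Example Author"},
}

# JSON-LD is embedded in the page's <head> inside a script tag.
json_ld_tag = (
    '<script type="application/ld+json">'
    + json.dumps(article_schema, indent=2)
    + "</script>"
)
print(json_ld_tag)
```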


FAQ

What does Google’s ‘index selection’ process involve?

The index selection process is the final step in Google’s indexing, where it decides whether to include the page in the search index.

This decision is based on the page’s quality and various signals collected during the initial assessment.

If the page doesn’t meet the quality threshold set by Google, it risks not being indexed. For this reason, the emphasis on generating high-quality content is critical for visibility in Google’s search engine.

How does Google handle duplicate content, and what role do quality signals play in this process?

Google handles duplicate content through a process called “duplicate clustering,” where similar pages are grouped. Then, a canonical version is selected to represent the group in search results.

The canonical version is selected based on the quality signals associated with each duplicate page. These signals can include attributes like the proper use of the rel=”canonical” tag or more complex factors like a page’s perceived importance on the Internet.

Ultimately, the chosen canonical version reflects Google’s assessment of which page is most likely to provide the best value to users.


Featured Image: YouTube.com/GoogleSearchCentral, April 2024. 

New Study On Perplexity AI Offers Good News For SEO via @sejournal, @martinibuster

Research by BrightEdge shows that traffic to Perplexity is surging and reveals where the opportunities lie for optimizing for that traffic, particularly for ecommerce.

What Is Perplexity AI?

Perplexity is a self-described Answer Engine founded by researchers and engineers from OpenAI, Facebook, Quora, Microsoft, and Databricks. It has backing from many of the most influential investors and engineers in Silicon Valley, which has helped propel it to become one of the top innovators in the new wave of AI search engines.

Perplexity contains an index that is ranked by its own version of PageRank. It’s a combination of a search engine and a chatbot, with the chatbot serving as the interface for receiving queries and the AI search engine on the backend. But it retains the functionality of a chatbot in that it can perform tasks like writing an essay.

What sets Perplexity.ai apart from competitors, as will be shown below, is that it shows a generous number of citations to websites.

Surge In Traffic

One of the key insights from the BrightEdge research is that Perplexity.ai has surged in referral traffic by a whopping 40% since January, indicating that users are interested in trying something different from the usual ten blue links.

Using its proprietary BrightEdge Generative Parser, the company was able to detect AI search experiences, which showed that users are comfortable with an AI search engine.

The takeaway here is that the search marketing industry is right back to where it was over two decades ago when Google first appeared. SEOs like myself and others were testing to see what activities were useful and which were not.

Most people in search came into it when the technology was relatively mature and don’t know what it’s like to confront the unknown or even how to do it. The only difference between then and now is that today we know about research papers and patents. Back then we had no idea until roughly 2005.

BrightEdge’s report reflected on this period of transition:

“For marketers, who rely on organic search strategies to reach customers, new AI-first search engines like Perplexity and ChatGPT signal a tectonic shift in how brands market and sell their products. However, the newness of these AI-driven platforms means they frequently undergo dynamic changes, making them difficult to track and adapt to.”

In an ad-supported model, the organic search results compete with advertising for the most valuable search queries, but that’s not the case with Perplexity. BrightEdge sees Perplexity as an opportunity for search marketers because it is an ad-free model that sends organic search traffic.

Overlap With Google Search Generative Experience (SGE)

An interesting data point that surfaced in BrightEdge’s research is that there was a “significant” overlap between Perplexity’s search results and Google’s SGE results. Perhaps not surprisingly, the strongest overlap was in health-related search queries, likely because there’s a limited number of sites that are qualified to create content on health and medical topics.

But what may sound discouraging is that Reddit shows up across most search query topics except for healthcare and finance, two YMYL (your money/your life) topics.

Overlap In B2B Search Results

Another area of overlap with Google’s SGE is due to Perplexity’s tendency to rank authoritative sites in topics like Healthcare and Education and in review and local search sites in relation to Restaurants and Travel. Big brands like Yelp and TripAdvisor are winners in Perplexity.

Overlap In Travel Search Results

Yahoo, MarketWatch, and CNN are frequently seen in finance-related search queries.

There is less overlap for ecommerce queries, apart from Wikipedia and Amazon, which both search engines rank.

According to BrightEdge:

“Google uses Quora and Consumer Reporters for third-party product information, while Perplexity references Reddit. Overall, Perplexity is most likely to reference product sites, whereas Google SGE will also include informational resources such as lifestyle and news sites.”

That’s good news for ecommerce sites that sell actual products, a genuine bright spot.

BrightEdge cites the following opportunities with Perplexity:

  • Perplexity’s share of search is rising at a rate of 39% per month.
  • Perplexity’s search results offer an average of 5.28 website citations.
  • Perplexity AI shows more citations in Travel and Restaurant queries than Google SGE.
  • BrightEdge encourages search marketers to take advantage of opportunities in optimizing for AI-driven search engines.

Jim Yu, Founder and Executive Chairman of BrightEdge, said:

“Optimizing emerging search platforms is essential for marketers because their impact will be seismic – just 1% of the global organic search market equates to approximately $1.2B in ad revenue per year… AI-first engines are steadily gaining ground and carving out their own areas of expertise, making it critical for the marketing community to master multiple search platforms.

There is too much revenue at stake to get left behind, which is why we’re closely tracking the development of these engines and all things AI search – from traffic trends and queries to result quality and more.”

Read more about optimizing for Perplexity at BrightEdge:

The Ultimate Guide to Perplexity

Featured Image by Shutterstock/rafapress

Google Responds To Criticism Over Forums At Top Of Search Results via @sejournal, @MattGSouthern

Google’s discussions and forums carousel in search results has sparked concern among SEO professionals, who worry that the prominence of forum content could lead to misinformation and scams.

Google’s Search Liaison, Danny Sullivan, has acknowledged the issue and stated that feedback has been passed along for further evaluation.

Sullivan also addressed the broader concern regarding forum content, noting that while some may dislike it, many users appreciate and actively seek it out.

This article explores the implications of the new carousel and its potential opportunities and challenges.

Concerns Raised Regarding Forum Content In Search Results

The introduction of the discussions and forums carousel has made some question Google’s commitment to surfacing reliable information.

Lily Ray, a prominent figure in the SEO community, raised this issue on Twitter, stating, “Isn’t this a bit dangerous for Google?”

She pointed out that Reddit, in particular, has been “overtaken by affiliate spam and scammers.”

Google’s Response

In response, Sullivan explained that the carousel “appears automatically if the systems think it might be relevant and useful.”

However, some users pushed back on this explanation.

Twitter user @sc_kkw argued, “If they actively seek it out, let them. It’s much easier for a user to type ‘Reddit’ at the end of their search than it is for someone who doesn’t want forum answers to sift through and find a reputable website now.”

Sullivan maintained that the goal is to show relevant content, whether from forums, blogs, or websites.

He provided an example of a personal search experience where forum results quickly solved an issue with smart window blinds, demonstrating the potential value of this content.

Potential Improvements On The Way?

Sullivan assured Ray that her concern had been understood and passed on to the search team.

He outlined potential improvements, such as adjusting the frequency of forum content for specific queries or adding disclaimers to clarify that forum participants may not be medical professionals.

Why SEJ Cares

The inclusion of the discussions and forums carousel in search results, particularly for YMYL queries, has implications for both users and publishers:

  1. User trust: If forum content containing misinformation or scams appears prominently in search results, it could erode user trust in Google’s ability to provide reliable information.
  2. Discouraged publishers: SEO professionals and creators who have invested time and resources into creating high-quality, authoritative content may feel discouraged if forum content consistently outranks their work.
  3. Public health and well-being: The spread of misinformation through forum content could potentially harm users who rely on search results for accurate medical information.

How This Can Help You

Despite the concerns raised, the inclusion of forum content in search results can present opportunities, such as:

  1. Identify content gaps: Analyzing the questions and discussions in forum results can help you identify gaps in your content and create targeted, authoritative resources to address user needs.
  2. Engage with the community: Participating in relevant forums and providing helpful, accurate information can help establish your brand as a trustworthy authority in your niche, potentially increasing visibility and traffic.
  3. Adapt your content strategy: Consider incorporating user-generated content, such as expert interviews or case studies, to provide firsthand experiences and perspectives that users find valuable in forum discussions.

In Summary

Google’s discussions and forums carousel in search results has raised concerns among SEO professionals. Google acknowledged the feedback and is considering potential improvements.

This development presents challenges and opportunities for SEO professionals to identify content gaps, engage with the community, and adapt content strategies to serve users’ needs better.


Featured Image: pathdoc/Shutterstock