21 AI Use Cases For Turning Inbound Calls Into Marketing Data [+Prompts] via @sejournal, @calltrac

This post was sponsored by CallTrackingMetrics. The opinions expressed in this article are the sponsor’s own.

If you’ve been enjoying having random conversations with ChatGPT, or trying your hand at tricking a car dealership chatbot into giving you a new car for $1, just wait until you start using safe AI professionally.

Marketers are finding lots of ways to use generative AI for things like SEO research, copywriting, and summarizing survey results.

But one of the most natural and safe fits for AI is marketing data discovery during conversational call tracking.

Don’t believe us?

Here are a ton of AI marketing use cases that make perfect sense for your teams to start using.

A Quick Call Tracking Definition

Call tracking is the act of using unique phone numbers to tie a conversation to its marketing source, and collect other caller data, such as:

  • Location of caller.
  • New or returning caller.
  • Website activity associated with the caller.

It can help attribute sales to:

  • Best performing marketing materials.
  • Best performing local website landing pages.
  • Best performing PPC campaigns.

Manually tracking and analyzing each conversation can take hours, and often, important nuances are missed.

This is where AI can help speed up marketing insight discovery and automatically update contact and sales pipelines.

All you need is a prompt.

What Prompt Or Quick Recipe Can I Use To Get AI Insights From Call Tracking?

Your automatically logged call transcriptions + an AI prompt = automated conversation intelligence.

Once you have this setup configured, you can drastically speed up your first-party data collection.

To get more specific, prompts have two main parts: the question you want answered, and how you want the AI to answer it. As an example:

The question: What prompted the Caller to reach out?

The prompt [how should AI answer]: You are a helpful Sales agent responsible for identifying what marketing channel prompted the contact to call. If the contact did not identify what prompted their call please only respond with “None”.

Below are some example responses on what a contact might say:

  • Podcast ad.
  • Social post.
  • Friend or family recommendation.
  • Stopped by event booth.
  • Read reviews online.

1 – 18. How To Use AI To Update Customer Contact Fields

Starting off boring, but powerful: Generative AI can take your customer conversations and automate data entry tasks, such as updating caller profiles to keep them relevant and qualified.

Image created by CallTrackingMetrics, March 2024

Impressive? No.

But the time savings add up quickly, and let your team work on the things they like (that make the company money) instead of manually filling out wrap-up panels after a call.

What Contact Information Can AI Automatically Update?

  1. Name – You’re going to get a name from caller ID, which is a great start, but is it the name your caller prefers? Is it up to date or is it still the name of a former customer who left their company to chase their dreams? With a quick AI prompt, you can make sure you’re greeting the right person when they call back.
  2. Email Address – It might be a default value for form submissions, but getting an email address from a caller can take a lot of back and forth. AI isn’t going to ask for that last part again, or require you to read it back to them to verify. It’s just going to do it.
  3. Company Name – You might be using a sales intelligence tool like ZoomInfo to pull this kind of thing from a database. Still, you might also enjoy the accuracy of extracting directly from the words of your prospect.
  4. Buyer Role – Maybe not a basic field, but one AI can fill out nonetheless (much like other custom fields below!). Give your AI a list to choose from like a researcher, influencer, or decision maker. Sure would be nice to know how much influence they actually have without having to ask directly.
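Here’s a hedged sketch of how the four fields above could be pulled out of a transcript as structured JSON, again assuming the OpenAI Python SDK and a model that supports JSON-object responses. The prompt wording and field names are illustrative assumptions, not CallTrackingMetrics’ implementation.

```python
# Sketch: extract contact fields from a transcript as JSON. Field names and
# prompt wording are assumptions for illustration only.
import json
from openai import OpenAI

client = OpenAI()

FIELDS_PROMPT = (
    "Extract the caller's preferred name, email address, company name, and "
    "buyer role (researcher, influencer, or decision maker) from the transcript. "
    'Return JSON with the keys "name", "email", "company", and "buyer_role"; '
    "use null for anything the caller did not state."
)

def extract_contact_fields(transcript: str) -> dict:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_object"},  # ask for machine-readable output
        messages=[
            {"role": "system", "content": FIELDS_PROMPT},
            {"role": "user", "content": transcript},
        ],
    )
    return json.loads(response.choices[0].message.content)
```

The same pattern works for the tags and custom fields discussed below: change the prompt and the keys, and the rest of the plumbing stays identical.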

Can AI Automatically Tag Conversations In My CRM?

Of course!

In CRMs and sales enablement tools, tags are used to categorize and segment your conversations for further analysis or follow-up.

Popular tags for call tracking include marking someone as a new or returning caller.

You can set a tag manually. You can set a tag using an if/then trigger. And because of what this whole thing is about, you can update tags using AI.

Image created by CallTrackingMetrics, March 2024

Use AI to automatically add tags to your prospect’s profile, based on their actual calls.

  1. Spam – Sure, you can mark something spam yourself, but why not let AI do it for you so you can move on to real work?
  2. Product Tags – What was the caller asking about? Add product tags to calls for further analysis, or to jump right into the sales pitch when they call back.
  3. Lifecycle Tags – Have AI examine what kinds of questions your prospect is asking and qualify them along a scale of just learning to ready to buy. Or even, mark them as an existing customer.
  4. Target Account – Did the caller mention their company size? Maybe you asked them about revenue or tech stack. If you let AI know what your ideal customer looks like, it’ll help you quickly identify them when you’re talking to one.

Can Generative AI Score Leads In My CRM?

Yes! However, if 100% of your calls end in sales, skip this part.

For the rest of us, phone, text, and chat leads range from “never going to buy anything” to “ready to give you my credit card info.”

You need a way to gauge which leads are closer to “ready.” This is where lead scoring comes in.

Image created by CallTrackingMetrics, March 2024

While there are lots of ways to score your conversations, you can use AI to sift through the transcription and qualify a lead for you.

For call scoring, this often looks like a score of 1 to 5.

So, here are a few examples of how AI can automatically score your leads from transcripts and chat logs.

  1. Readiness to Buy – The most classic approach to scoring is asking, “How likely is this lead to buy?” A score of 1 is unqualified, and a score of 5 is they’re already paying us.
  2. Ideal Customer Fit – Just like adding a target account tag above, train your AI on what a good customer looks like, and it can also give you a score. How closely does this caller fit your ideal profile?
  3. Coaching – Not everything has to be about the lead. Sometimes we want to grade our own team. How well did your sales team stick to the script? Were they friendly? Let AI roll it up into a score for you.
  4. Follow-up Priority – Aggregate readiness to buy, customer fit, and other inputs to decide on how aggressively to follow up with your leads.
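As a rough sketch of the 1-to-5 approach (again assuming the OpenAI Python SDK rather than any particular vendor’s scoring feature), you can ask for a single digit and coerce the reply to an integer before it lands in your CRM:

```python
# Sketch: score "readiness to buy" from a transcript on a 1-5 scale.
from openai import OpenAI

client = OpenAI()

SCORING_PROMPT = (
    "You are a sales analyst. Score the caller's readiness to buy from 1 "
    "(unqualified) to 5 (already a paying customer). Respond with a single digit."
)

def readiness_score(transcript: str) -> int:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": SCORING_PROMPT},
            {"role": "user", "content": transcript},
        ],
    )
    reply = response.choices[0].message.content.strip()
    digits = [c for c in reply if c.isdigit()]
    return int(digits[0]) if digits else 1  # fall back to "unqualified"
```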

Can Generative AI Capture & Update Custom Fields From Phone Calls & Chat Logs?

Your company is likely not the same as every other company using call tracking to get customer insights.

You’ll want some flexibility to determine what’s important to you, not what your call-tracking provider has determined to be important.

With custom fields, you get to put your creativity and strategy together with AI’s scalability to automate pretty much anything.

Image created by CallTrackingMetrics, March 2024

AI can accurately assess and notate:

  1. Product Familiarity – You’ve tagged a call with a product name, but how much time do you need to spend educating the prospect vs. selling them?
  2. Related Products – What else could you be selling this person?
  3. Appointments – If your team runs on appointments or demos, having an AI add a calendar date to a custom field opens up a world of automated possibilities.
  4. Next Steps – Follow up with an email, a call, or an appointment confirmation text. Have AI pull the best next step from your conversation.

19 – 21. How To Use Generative AI To Take Action On Automatically Updated Sales Contacts

Ok, so there are some time-savings when you use call tracking and AI to update fields.

If that’s not quite exciting enough, let’s see what you can actually do with those automated fields.

Image created by CallTrackingMetrics, March 2024

19. Automate Advertising Optimization

Use conversion data to inform your decisions.

Throw AI into the mix, and you go from A to optimized without lifting a finger.

How?

The tags and fields your AI just updated become qualifiers, so you send only the signals that matter to your business over to platforms like Google Ads, where their machine learning will go wild finding more of the same. Where you might have been stuck sending a simple conversion (like any call with talk time over 90 seconds), now you can send only conversions with a score of three or better for readiness to buy, plus a product tag.
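A minimal sketch of that qualification step is below. The field names and the `send_offline_conversion` helper are placeholders, not a real Google Ads client call; the point is simply that only calls clearing the AI-generated score and carrying a product tag get passed along as conversions.

```python
# Sketch: qualify a call before sending it to an ad platform as a conversion.
# `send_offline_conversion` is a hypothetical stand-in for your Google Ads
# offline-conversion upload; field names are assumptions.
def send_offline_conversion(call: dict) -> None: ...

def maybe_send_conversion(call: dict) -> None:
    qualified = (
        call.get("talk_time_seconds", 0) > 90      # the old "simple" signal
        and call.get("readiness_score", 0) >= 3    # AI-scored readiness to buy
        and call.get("product_tags")               # AI-applied product tag present
    )
    if qualified:
        send_offline_conversion(call)
```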

20. Better Personalization In Your CRM

To kick things off, your AI automatically scraped the conversation for an email address, so now you can add a new contact to an email-centric tool like HubSpot immediately at the end of the conversation.

Have you updated product tags? Use that as a great trigger to enroll them in a highly relevant email drip.

Feed your call scores and product tags into your CRM’s lead scoring system and add complexity to a usually surface-level approach. Or do something as easy as sync their company name to their record so you can personalize outreach.

21. Following Up & Closing Deals

You’re not having AI fill out custom fields for fun, you’re doing it to make your job easier.

And one of your primary jobs is following up after a conversation to get someone closer to purchasing.

Agreed on a time for your next meeting? Send that date field to your favorite scheduling tool and get a calendar invite in their inbox. Or maybe you had a softer “call me next week” agreement? Use that to send the caller to an outbound dialer that’s set to call as soon as you log in the next week.

How To Use AI For Analyzing Calls

Moving beyond data entry, when you give AI a call transcription to work with, it can pull out insights to help your team get better.

In the time it would take you to read through one eight-minute phone conversation, AI has analyzed your whole day’s worth of calls and is off taking whatever the robot equivalent of a coffee break is.

What can AI do to upgrade your conversation intelligence? Unfortunately, after 16 use cases, we’re bumping up against our word count and we’ll have to save that for part two: Another Ton of AI Use Cases for Call Tracking.


Image Credits

Featured Image: Image by CallTrackingMetrics Used with permission.

Aifficiency: What’s Really Behind Google’s Deal With Reddit via @sejournal, @Kevin_Indig

Vetted.ai is one of many AI chatbots that are gaining traction in Google search.

It’s not much, maybe 10,000 monthly visits, but Google found the site to be the best result for ~100 keywords.

Trend: growing.

Organic traffic and top 3 keywords for vetted.ai (Image Credit: Kevin Indig)

A click on one of Vetted AI’s results opens a blank chatbot page that quickly fills with content about the search query.

Vetted’s AI chatbot (Image Credit: Kevin Indig)

Vetted AI is a shopping assistant that targets long-tail queries like [zep vs draino], [vornado mvh vs vh200], or [ugg ansley vs dakota].

It programmatically creates content for any product comparison you can think of.

Screenshot from search for [zep vs drano], Google, March 2024 (Image Credit: Kevin Indig)

Now contrast that with another site that gained traction in the SERPs over the last months: Reddit (see result No. 3 in the screenshot above).

Reddit is the opposite of AI chatbots: human, experiential, and unoptimized content.


Sundar Pich-AI

Google’s $60 million deal with Reddit is more than it seems at first.

Google is surfacing more content from forums in the SERPs to counter-balance AI content.

Verification is the ultimate AI watermarking.

Even though Reddit can’t prevent humans from using AI to create posts or comments, chances are lower because of two things Google search doesn’t have: Moderation and Karma.

Yes, Content Goblins have already taken aim at Reddit, but most of the 73 million daily active users provide useful answers.

Content moderators punish spam with bans or even kicks.

But the most powerful driver of quality on Reddit is Karma, “a user’s reputation score that reflects their community contributions.”

Through simple up or downvotes, users can gain authority and trustworthiness – two integral ingredients in Google’s quality systems.

The press release claims that the deal enables Google to “facilitate more content-forward displays of Reddit information that will make our products more helpful for our users and make it easier to participate in Reddit communities and conversations.”

That might be true, but the real reason for the Google-Reddit deal is Karma.

Google already has access to Reddit’s content: It crawls reddit.com many times every day and can use that content to train its machine learning models. It most likely already has.

Google also already shows Reddit content prominently in search through high organic ranks (Hidden Gem update) and the Discussions & Forums SERP Feature.

Reddit.com has seen the fastest growth in search of any domain and is now one of the web’s largest.

Reddit is at the same traffic level as Amazon (Image Credit: Kevin Indig)

But with access to Reddit’s API, Google can access Karma to train models on content humans value and potentially surface better Reddit answers in search, for example, by filtering out spammy or guideline-violating posts.
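Purely as an illustration of that idea (nothing public describes how Google would actually use Karma), a pre-filter on community signals might look like the sketch below. The field names are assumptions, not Reddit’s real API schema.

```python
# Illustrative only: keep posts the community valued and moderators didn't remove,
# e.g., as a pre-filter before training or surfacing content. Field names are assumed.
def community_filter(posts: list[dict], min_score: int = 10) -> list[dict]:
    return [
        post for post in posts
        if post.get("score", 0) >= min_score and not post.get("removed_by_moderator")
    ]
```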

The implications go way beyond model training and lead right into the heart of what users want out of search.

Authentic Reviews

Reddit’s Karma is especially valuable when it comes to surfacing authentic experiences in search.

Google still has miles to go when it comes to product reviews.

A recent article from HouseFresh shows how big brands are not testing as thoroughly as they might pretend.

As a team that has dedicated the last few years to testing and reviewing air purifiers, it’s disheartening to see our independent site be outranked by big-name publications that haven’t even bothered to check if a company is bankrupt before telling millions of readers to buy their products.

You might also recall a research study from Germany that found “having a separate low-quality review section to support a site’s primary content is a successful and lucrative business model.” Publishers make money with reviews, often to survive.

A recent study by Schultheiß et al. investigates the compatibility between SEO and content quality on medical websites with a user study. The study finds an inverse relationship between a page’s optimization level and its perceived expertise, indicating that SEO may hurt at least subjective page quality.

Reddit, on the other hand, is unoptimized and human. You could say it’s in-ai-fficient.

Users go to Reddit, the prime source for in-ai-fficient reviews, when they want opinions from other users instead of optimized reviews from publishers or affiliates. For some products, people want reviews from other people.

There is a speed benefit of using Reddit’s API, too.

Similar to indexing APIs, Google gets all Reddit content via API instead of having to cough up crawl budget for the massive domain.

Wikipedia holds ~60 million pages.

Reddit had over 300 million posts in 2020 alone. The site is big and getting bigger.

Google’s announcement of the partnership also notes:

“First, we’re pleased to announce a new Cloud partnership that enables Reddit to integrate new AI-powered capabilities using Vertex AI. Reddit intends to use Vertex AI to enhance search and other capabilities on the Reddit platform.”

It’s ironic that Reddit uses Google’s Vertex AI (enterprise Gemini models) to improve its search capabilities as part of the deal since it was Reddit’s poor search function that drove so many users to search for Reddit content on Google in the first place.

The Hidden Gems update, Google’s Discussions & Forums SERP Feature, and the eventual Google deal might have never happened without such demand for Reddit results.

Ai-fficient

Aifficiencies are incremental improvements from AI. Instead of doing new things, the biggest value add from AI so far is doing things faster and better.

I use ChatGPT a lot to come up with better spreadsheet formulas and macros.

I never learned RegEx well, and my JavaScript/Python skills merely prevent me from embarrassing myself in front of developers.

With LLMs, I can solve these problems quickly and independently.

This week, I categorized almost 20,000 keywords into eight core topics for a client, in about an hour and for less than $20. AI is NOS for no-code.
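For anyone who wants to reproduce that kind of batch categorization, here’s a rough sketch assuming the OpenAI Python SDK; the prompt, batch size, and topic list are placeholders rather than the exact setup described above.

```python
# Sketch: assign each keyword to one of a fixed set of topics, in batches.
from openai import OpenAI

client = OpenAI()

TOPICS = ["topic A", "topic B", "topic C"]  # your core topics go here

def categorize(keywords: list[str], batch_size: int = 100) -> dict[str, str]:
    results: dict[str, str] = {}
    for i in range(0, len(keywords), batch_size):
        batch = keywords[i:i + batch_size]
        prompt = (
            f"Assign each keyword to exactly one of these topics: {', '.join(TOPICS)}.\n"
            "Return one line per keyword in the format: keyword -> topic\n\n"
            + "\n".join(batch)
        )
        reply = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        ).choices[0].message.content
        for line in reply.splitlines():
            if "->" in line:
                keyword, topic = (part.strip() for part in line.split("->", 1))
                results[keyword] = topic
    return results
```

Cost scales with the total tokens processed, which is how a run over 20,000 short keywords can stay this cheap.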

Websites are leveraging Aifficiencies the same way: not new but better.

Reddit uses Vertex AI, but examples like Zillow (I cover more in the second State of AI report) have already pulled the trigger.

Amazon and eBay lower the barrier to entry by allowing merchants to snap a picture of a product and automagically write title and description, including product features, with AI.

Amazon creates helpful AI review summaries (inspiration for SGE?).

Redfin allows visitors to customize interior design with AI.

AI makes products better instead of creating new ones.

However, when it comes to product advice and reviews, we want unoptimized, in-ai-fficient information. Just raw, authentic feedback.

We don’t want a biased list of products based on which ones rake in the biggest affiliate cut.

Users have grown up and can tell that most reviews on Google have financial incentives, just like the pitches “I think this resource would be helpful for your audience” and “Let’s connect and find business synergies” are really about backlinks and closed deals.

User Intent: Amateur Experience

One way to make reviews more authentic is to use amateur opinions.

If I were in the reviews business, I’d interview non-professionals and feature their opinions in reviews – but only for products with a lower need for expertise.

Google’s addition of “experience” in the quality rater guidelines is a direct hint that not every query demands expertise from professionals.

Some queries have a demand for low expertise and high experience. Just like we can use SERP Features to infer the user intent(s) for a query, a prominent Reddit result tells us that searchers value opinions and experiences from non-experts.

For example, when searching for the best GPU that fits your computer hardware, you probably want to hear from an expert. When searching for card games for couples, you most likely want to hear from other couples.

Google wants to surface more results that are not optimized for embeddings and backlinks but for human signals like Karma.

Helpful Content Updates are steps toward more authentic content. The Google-Reddit deal is the sequel.


https://www.sec.gov/Archives/edgar/data/1713445/000162828024006294/reddits-1q423.htm

https://blog.google/inside-google/company-announcements/expanded-reddit-partnership/

https://housefresh.com/david-vs-digital-goliaths/#lwptoc1

https://investors.zillowgroup.com/investors/news-and-events/news/news-details/2023/Zillows-new-AI-powered-natural-language-search-is-a-first-in-real-estate/default.aspx

https://www.aboutamazon.com/news/small-business/amazon-sellers-generative-ai-tool

https://innovation.ebayinc.com/tech/features/magical-listing-tool-harnesses-the-power-of-ai-to-make-selling-on-ebay-faster-easier-and-more-accurate/

https://www.redfin.com/news/introducing-redfin-redesign-ai-tool/


Featured Image: Lyna ™

Google Testing AI Tool That Finds & Rewrites Quality Content via @sejournal, @martinibuster

Google is paying small publishers to use its AI platform to rewrite content published on news sites that the platform targets for large-scale content parasitism. People on Twitter have expressed unfavorable opinions about it.

Google is paying small independent publishers to use a generative AI platform that targets the content of other news sites for summarization and republication. According to a paywalled article published on AdWeek, the tool is a beta test that requires the publishers to post a set number of articles per day in order to receive their payment.

Small publishers use a tool that shows them content chosen by the AI, which they can select in order to produce content “at no cost.”

Why Google’s New Tool Is Problematic

It’s common for a news organization to pick up on news that’s broken by another news organization, and if it’s done by a capable journalist, they put their own spin on it. It’s the way things are done.

Google’s tool, however, appears to resemble a programmatic way to plagiarize content called article spinning. Article spinning is an automated tactic that uses website feeds to input published content from other sites, which is then rewritten by the computer program, typically by replacing words with synonyms. AI, however, can spin content with greater nuance, essentially summarizing the content in a different tone by replacing entire sentences and paragraphs with content that’s the same as the original but expressed in a different manner.

What’s different is that this tool is something that Google itself is testing, and that’s problematic not just because Google is the de facto gatekeeper for online content, but also because the tool places a target on specific news organizations to have their content spun by small independent publishers.

On the one hand that could be a good thing because it could generate inbound links to the original publisher of the news. Free links, that’s a win-win, right?

But it’s not, because news publishers don’t really benefit from links to content that has a shelf life of roughly 48 hours, at most. News is a hamster wheel of constant publication in the service of keeping the wheel spinning to keep the business afloat. It’s a nonstop process that can easily be undermined by wholesale content dilution.

And that’s really at the heart of the problem with Google’s AI tool: it dilutes the value that an organization creates by hiring professionals to create the “value add” content that Google frequently says it wants to publish. And that is what makes Google’s AI tool hypocritical at best and cynical at worst, because Google is encouraging the creation of high-quality content while simultaneously undermining it.

An army of publishers programmatically copying every published news article doesn’t look so good to the original publisher, especially if their content is overwhelmed by the parasitic AI in Google News, in the search results and by user preference for their local online news publisher who is republishing the news from the bigger publishers.

Reaction To Google’s AI News Tool

Technology journalist Brian Merchant (who writes for The Atlantic and has published a book) posted a virtual thumbs down on Twitter, a sentiment that was unanimously seconded.

He tweeted:

“The nightmare begins — Google is incentivizing the production of AI-generated slop.

If you are a news outlet who has accepted this meager deal, and especially if you are publishing AI-generated articles without disclaimers, you should be deeply ashamed.”

Brian followed up that tweet with his observation of what’s wrong with Google’s AI tool for small news sites:

“If we in the media have learned ANYTHING from the last 10 years it is that we do not in fact have to settle for whatever scraps big tech throws us, and in fact it will screw us in the end—why would you participate in automating your field out of existence for like $30k a year???”

Merchant retweeted a comment by technology journalist Alex Kantrowitz:

“This is sad. Is this the web Google wants?”

Another person tweeted:

“It’s all about unchecked and untaxed profit.

Of course this is what Google wants – corporations selling out their employees and Americans in general for a quick buck.”

A person who works for Microsoft invoked the concept of “autophagy,” which is when an organism begins consuming itself, such as when it is starving.

She tweeted:

“The quality of news content will decline and hurt search. Autophagy is a real threat to information quality and it looks like no one is taking it seriously.”

The Future Of Content

This isn’t just a “news” problem; it’s a problem for everyone who earns a living publishing online content. What can be used for news publications can easily be adapted for product reviews, recipes, entertainment, and virtually any topic that affiliates publish content on.

How do you feel about Google’s new tool? Will it help small publishers compete against bigger sites or is it just the onset of autophagy in the body of online publishing?

NYTimes “Paid Someone To Hack OpenAI’s Products” via @sejournal, @martinibuster

OpenAI requested that a judge dismiss parts of the copyright infringement lawsuit filed by The New York Times arguing that, among other things, The New York Times hired someone to hack OpenAI in order to manufacture a basis for filing the lawsuit.

OpenAI filed a request for partial or full dismissal of four counts in the lawsuit filed by The New York Times.

New York Times Allegedly Hired Someone To Hack OpenAI

Among the explanations of why some portions of the lawsuit should be dismissed is the claim that The New York Times hired someone to specifically “hack” OpenAI in a way that a normal person would never actually use OpenAI, and in violation of the terms of use.

According to OpenAI:

“The truth, which will come out in the course of this case, is that the Times paid someone to hack OpenAI’s products. It took them tens of thousands of attempts to generate the highly anomalous results that make up Exhibit J to the Complaint.

They were able to do so only by targeting and exploiting a bug (which OpenAI has committed to addressing) by using deceptive prompts that blatantly violate OpenAI’s terms of use.”

OpenAI goes on to claim that The New York Times took extraordinary steps that were not in any way the normal manner of using OpenAI’s products in order to obtain “verbatim passages” from The New York Times, including providing portions of the text that they were trying to get OpenAI to reproduce.

They also call The New York Times’s allegations that the news industry is threatened by OpenAI “pure fiction” saying,

“Normal people do not use OpenAI’s products in this way. The Times’s suggestion that the contrived attacks of its hired gun show that the Fourth Estate is somehow imperiled by this technology is pure fiction.

So too is its implication that the public en masse might mimic its agent’s aberrant activity.”

The part about “its agent’s aberrant activity” is a reference to the “hired gun” that OpenAI claims The New York Times employed to create a situation where OpenAI output verbatim text.

OpenAI’s filing implies that The New York Times is trying to “monopolize facts” and the “rules of language,” which is a reference to the idea that using text data to train AI models, which then generate new content, does not infringe on copyright because that’s a transformative use.

Consequence Of Allegations Against NYTimes

Artists are having a hard time in court arguing copyright infringement because AI is increasingly seen as transformative, the principle under which copyrighted material is transformed with new meaning or repurposed, as in a parody, commentary, or the creation of something entirely new from it.

The Electronic Frontier Foundation says about the principle of transformative use:

“The law favors ‘transformative’ uses — commentary, either praise or criticism, is better than straight copying — but courts have said that even putting a piece of an existing work into a new context (such as a thumbnail in an image search engine) counts as ‘transformative.’ ”

If the allegations that OpenAI makes against The New York Times are correct, what do you think are the chances that OpenAI will prevail and the current status quo for AI will remain?

Featured Image by Shutterstock/hakanyalicn

Does AI Democratize SEO Or Amplify Incompetence? via @sejournal, @martinibuster

Generative AI has introduced significant shifts in the SEO industry, with some experiencing success integrating AI into their daily workflows while others, not so much. How is AI best used in SEO tasks? What are its capabilities and limitations, and can the subjective nature of SEO negatively affect the outcome?

AI-Automated SEO

There are some tasks that an AI can do reasonably well, like creating meta description tags and title elements at scale.

But the utility of AI becomes questionable when it comes to other aspects of search optimization.

AI has been put into service for analyzing search engine results pages (SERPs), keyword research, content scoring based on keyword use, keyword-based article creation, competitive research, as well as just creating content based on keywords.

It’s an act of faith to trust that the people behind the software understand SEO. But who do you trust if SEO is so subjective that people can’t even agree on the proper use of title tags and headings?

Even the concept of SERP analysis can go south depending on the experience of the person doing it. For example, there is a school of thought that the way to beat the competition is to understand why their content is ranking and then use those data points as the ingredients for creating an exact copy that is better, which is oxymoronic.

Obviously, you can’t make something better by making an exact copy of it that’s better. Yet, that’s the underlying logic of the Skyscraper Content Tactic that (ironically) is a copycat of the 10X Content Tactic, which are popular with those who are new to SEO. And as absurd as that tactic is, it’s at the heart of how some AI tools execute SERP analysis.

Clearly, some AI tools can amplify the inexperience of those who created the tools as well as those who use AI tools.

Julia McCoy, president of Content at Scale (LinkedIn), agrees.

She explained:

“AI is absolutely the most incredible advancement of technology that we’ve seen in the last 200 years.

We’re seeing a ton of AI tools designed for content optimization or writing generation that offer incredible efficiencies—they can streamline processes, give you powerfully detailed insights for optimization and ranking improvements, and even generate entire articles that are nearly ready to publish.

But, you’ve got to know how to use them. And you need to know who built them.

I think it’s crucial to acknowledge: no tool can transform an amateur into an expert overnight. Just as Malcolm Gladwell’s skills stem from years of honing his craft—not a tool that landed in his lap overnight—the path from budding learner to seasoned, proficient expert requires time, experience and a deep knowledge of the industry.

While AI has democratized access to advanced techniques making higher-level strategies accessible—it cannot instill wisdom nor insight where there is none. It amplifies capabilities, but also shortcomings. We need to remember that human intuition is complemented by technology, not replaced by it.”

AI Amplifies The User’s SEO Skill

Why is it that some people have success with AI and others do not? In my opinion, AI is just a tool like a paint brush. The talent and skill belongs to the person, not the tool.

A less experienced SEO will analyze a webpage by extracting the keywords from the content, the headings and the title tag. A more experienced SEO will analyze the webpage by understanding what questions it answers.

The importance of skill and experience is evident with AI image generators, where some users are able to create amazingly lifelike works of art while others make images of people with seven fingers on each hand.

Does AI Democratize SEO?

There is an idea that AI can empower an SEO beginner to perform at the same level as someone with decades of experience but that’s not how AI works right now, as Julia suggested earlier.

I asked Brenda Malone, an SEO Technical Strategist and Web Developer (LinkedIn), for her opinion on AI and the potential for democratizing SEO.

Brenda shared:

“I don’t necessarily think it will totally democratize the SEO discipline as it exists today.

I think the over-abundance of AI SaaS tools will serve to overwhelm the inexperienced SEO professionals, while further empowering experienced SEO professionals who know how to exploit specific AI tools in order to make more qualified human analyses.

What I think AI’s effect on the SEO industry for the short-term will turn out to be is a decrease in the number of professionals needed because a lot of the data-gathering will be automated.

Current SEO professionals cannot afford to be Luddites, and should instead dig deep into AI to identify tasks that are related to SEO activities and develop analysis specializations because the days of getting away with merely implementing meta and title tags for ranking are gone.”

The ability of AI to amplify makes a person more efficient. For example, deep analysis of data is a snap for AI. But it’s also great for tedious tasks like performing an analysis, generating a bullet-point list of major takeaways from the data, then creating a presentation from that data.

Takeaways

Generative AI like ChatGPT and Claude can have a significant impact on SEO, amplifying what can be done and streamlining the workflow. However, it also amplifies the shortcomings of its users. Experienced SEOs can leverage generative AI to enhance their work, while those who are new to SEO might not experience the full potential, although they may benefit from SaaS tools depending on the experience of the publishers of those tools.

  • Generative AI amplifies the user’s SEO skill and experience
  • Generative AI may not necessarily democratize SEO
  • But SaaS AI tools can benefit users who are new to SEO
  • Review the founders and creators of SaaS SEO tools to understand their experience and skill levels
  • AI is the future, don’t be a Luddite

Featured Image by Shutterstock/Krakenimages.com

Google Launches “Help Me Write” AI Assistant For Chrome Browser via @sejournal, @MattGSouthern

Google is releasing a new AI feature in the Chrome web browser that can help you compose written content.

The “Help Me Write” tool, announced last month and launching this week, assists with crafting everything from online reviews to inquiries and classified ads.

Everyday Writing Made Easier

Utilizing Google’s Gemini model, Help Me Write generates text based on the context of the website you’re browsing and the text field you’re writing in.

For example, when selling an item online, Help Me Write may take a brief product description and expand it into a polished, detailed post.

Google states in an announcement:

“The tool will understand the context of the webpage you’re on to suggest relevant content.

For example, if you’re writing a review for a pair of running shoes, Chrome will pull out key features from the product page that support your recommendation so it’s more valuable to potential shoppers.”

Examples Of Help Me Write In Action

To demonstrate how Help Me Write works, Google provided the following examples.

Example One


When given the prompt “moving to a smaller place selling airfryer for 50 bucks,” the tool generated a post reading in part: “I’m moving to a smaller place and won’t have any room for my air fryer. It’s in good condition and works great. I’m selling it for $50.”

Example Two


Given the prompt “plane lands at 9 – ask to check in early,” it composed a hotel inquiry: “My flight is scheduled to arrive at 9 am, and I would like to check in as soon as possible. Is there any way I can check in early?”

Example Three


For the prompt “write a request to return a defective bike helmet that has a line crack despite not stated as covered in the product warranty,” Help Me Write suggested: “I would like to return a bike helmet that I recently purchased. The helmet developed a crack along the line where the helmet is joined together. This crack was not caused by an impact or other damage…”

How to Enable “Help Me Write”

To enable Help Me Write, Chrome users can navigate to the “Experimental AI” section of their browser settings.

This feature, integrated into the latest Chrome M122 update, is now available for English-language users in the United States on Mac and Windows PCs.

You can turn the feature off and on at any time.

Google Announces Gemma: Laptop-Friendly Open Source AI via @sejournal, @martinibuster

Google released an open source large language model based on the technology used to create Gemini that is powerful yet lightweight, optimized to be used in environments with limited resources like on a laptop or cloud infrastructure.

Gemma can be used to create a chatbot, content generation tool and pretty much anything else that a language model can do. This is the tool that SEOs have been waiting for.

It is released in two versions, one with two billion parameters (2B) and another one with seven billion parameters (7B). The number of parameters indicates the model’s complexity and potential capability. Models with more parameters can achieve a better understanding of language and generate more sophisticated responses, but they also require more resources to train and run.

The purpose of releasing Gemma is to democratize access to state-of-the-art artificial intelligence that is trained to be safe and responsible out of the box, with a toolkit to further optimize it for safety.

Gemma By DeepMind

The model is developed to be lightweight and efficient, which makes it ideal for getting it into the hands of more end users.

Google’s official announcement noted the following key points:

  • “We’re releasing model weights in two sizes: Gemma 2B and Gemma 7B. Each size is released with pre-trained and instruction-tuned variants.
  • A new Responsible Generative AI Toolkit provides guidance and essential tools for creating safer AI applications with Gemma.
  • We’re providing toolchains for inference and supervised fine-tuning (SFT) across all major frameworks: JAX, PyTorch, and TensorFlow through native Keras 3.0.
  • Ready-to-use Colab and Kaggle notebooks, alongside integration with popular tools such as Hugging Face, MaxText, NVIDIA NeMo and TensorRT-LLM, make it easy to get started with Gemma.
  • Pre-trained and instruction-tuned Gemma models can run on your laptop, workstation, or Google Cloud with easy deployment on Vertex AI and Google Kubernetes Engine (GKE).
  • Optimization across multiple AI hardware platforms ensures industry-leading performance, including NVIDIA GPUs and Google Cloud TPUs.
  • Terms of use permit responsible commercial usage and distribution for all organizations, regardless of size.”

Analysis Of Gemma

According to an analysis by Awni Hannun, a machine learning research scientist at Apple, Gemma is optimized to be highly efficient in a way that makes it suitable for use in low-resource environments.

Hannun observed that Gemma has a vocabulary of 250,000 (250k) tokens versus 32k for comparable models. This means Gemma can recognize and process a wider variety of words, allowing it to handle tasks with complex language. His analysis suggests that this extensive vocabulary enhances the model’s versatility across different types of content. He also believes that it may help with math, code, and other modalities.

It was also noted that the “embedding weights” are massive (750 million). The embedding weights are a reference to the parameters that help in mapping words to representations of their meanings and relationships.

An important feature he called out is that the embedding weights, which encode detailed information about word meanings and relationships, are used not just in processing the input but also in generating the model’s output. This sharing improves the efficiency of the model by allowing it to better leverage its understanding of language when producing text.

For end users, this means more accurate, relevant, and contextually appropriate responses (content) from the model, which improves its use in content generation as well as for chatbots and translations.

He tweeted:

“The vocab is massive compared to other open source models: 250K vs 32k for Mistral 7B

Maybe helps a lot with math / code / other modalities with a heavy tail of symbols.

Also the embedding weights are big (~750M params), so they get shared with the output head.”
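The ~750M figure lines up with the quoted vocabulary size multiplied by a hidden dimension in the low thousands; the hidden size below is an assumption for illustration, not a number from the analysis.

```python
# Back-of-the-envelope check: embedding parameters = vocab_size * hidden_size.
# The hidden size of ~3,000 is an assumed round number for illustration.
vocab_size = 250_000
hidden_size = 3_000
print(vocab_size * hidden_size)  # 750,000,000, i.e. ~750M parameters
```

Because that same matrix is shared with the output head, those parameters are paid for once but used both to read the input tokens and to produce the output distribution.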

In a follow-up tweet he also noted an optimization in training that translates into potentially more accurate and refined model responses, as it enables the model to learn and adapt more effectively during the training phase.

He tweeted:

“The RMS norm weight has a unit offset.

Instead of “x * weight” they do “x * (1 + weight)”.

I assume this is a training optimization. Usually the weight is initialized to 1 but likely they initialize close to 0. Similar to every other parameter.”
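A minimal sketch of what that unit offset looks like in a generic RMSNorm layer (for illustration; this is not Gemma’s actual code):

```python
import numpy as np

def rms_norm(x: np.ndarray, weight: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    # Scale x by the reciprocal of its root-mean-square, then by a learned weight.
    # The "unit offset" means multiplying by (1 + weight) instead of weight, so a
    # parameter initialized near zero still gives an identity-like scale at the start.
    rms = np.sqrt(np.mean(x ** 2, axis=-1, keepdims=True) + eps)
    return (x / rms) * (1 + weight)
```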

He followed up that there are more optimizations in data and training but that those two factors are what especially stood out.

Designed To Be Safe And Responsible

An important key feature is that it is designed from the ground up to be safe, which makes it ideal for deployment. Training data was filtered to remove personal and sensitive information. Google also used reinforcement learning from human feedback (RLHF) to train the model for responsible behavior.

It was further debugged with manual red-teaming and automated testing, and checked for unwanted and dangerous capabilities.

Google also released a toolkit for helping end-users further improve safety:

“We’re also releasing a new Responsible Generative AI Toolkit together with Gemma to help developers and researchers prioritize building safe and responsible AI applications. The toolkit includes:

  • Safety classification: We provide a novel methodology for building robust safety classifiers with minimal examples.
  • Debugging: A model debugging tool helps you investigate Gemma’s behavior and address potential issues.
  • Guidance: You can access best practices for model builders based on Google’s experience in developing and deploying large language models.”

Read Google’s official announcement:

Gemma: Introducing new state-of-the-art open models

Featured Image by Shutterstock/Photo For Everything

Google Launches Gemini Business & Enterprise For Workspace Users via @sejournal, @MattGSouthern

Google launches AI assistant Gemini for Workspace, available in 2 pricing tiers for business and enterprise.

  • Google rebranded its AI assistant Duet as Gemini and made it more widely available.
  • Gemini allows natural conversations to generate ideas, summaries, and more for Workspace users.
  • Google launched Gemini Business and Gemini Enterprise pricing plans for small teams and heavy AI users.

5 Questions Answered About The OpenAI Search Engine via @sejournal, @martinibuster

It was reported that OpenAI is working on a search engine that would directly challenge Google. But details missing from the report raise questions about whether OpenAI is creating a standalone search engine or if there’s another reason for the announcement.

OpenAI Web Search Report

The report published on The Information relates that OpenAI is developing a Web Search product that will directly compete with Google. A key detail of the report is that it will be partly powered by Bing, Microsoft’s search engine. Apart from that there are no other details, including whether it will be a standalone search engine or be integrated within ChatGPT.

All reports note that it will be a direct challenge to Google so let’s start there.

1. Is OpenAI Mounting A Challenge To Google?

OpenAI is said to be using Bing search as part of the rumored search engine, a combination of GPT-4 with Bing Search, plus something in the middle to coordinate between the two.

In that scenario, what OpenAI is not doing is developing its own search indexing technology, it’s using Bing.

What’s left then for OpenAI to do in order to create a search engine is to devise how the search interface interacts with GPT-4 and Bing.

And that’s a problem that Bing has already solved by using what Microsoft calls an orchestration layer. Bing Chat uses retrieval-augmented generation (RAG) to improve answers by adding web search data as context for the answers that GPT-4 creates. For more information on how orchestration and RAG work, watch the keynote by Kevin Scott, Chief Technology Officer at Microsoft, at the Microsoft Build 2023 event (starting at the 31:45 mark).
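In code, the RAG pattern described above is fairly simple. The sketch below assumes the OpenAI Python SDK and a hypothetical `bing_web_search` helper standing in for whatever retrieval backend is available; it is an illustration of the pattern, not Microsoft’s actual orchestration layer.

```python
# Sketch of retrieval-augmented generation: fetch web snippets, then answer
# with them as context. `bing_web_search` is a hypothetical placeholder.
from openai import OpenAI

client = OpenAI()

def bing_web_search(query: str) -> list[str]:
    """Hypothetical: return a handful of text snippets for the query."""
    raise NotImplementedError("plug in your own search backend here")

def answer_with_rag(query: str) -> str:
    snippets = bing_web_search(query)
    context = "\n\n".join(snippets)
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Answer the question using only the provided web results."},
            {"role": "user", "content": f"Web results:\n{context}\n\nQuestion: {query}"},
        ],
    )
    return response.choices[0].message.content
```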

If OpenAI is creating a challenge to Google Search, what exactly is left for OpenAI to do that Microsoft isn’t already doing with Bing Chat? Bing is an experienced and mature search technology, an expertise that OpenAI does not have.

Is OpenAI challenging Google? A more plausible answer is that Bing is challenging Google through OpenAI as a proxy.

2. Does OpenAI Have The Momentum To Challenge Google?

ChatGPT is the fastest growing app of all time, currently with about 180 million users, achieving in two months what took years for Facebook and Twitter.

Yet despite that head start Google’s lead is a steep hill for OpenAI to climb.  Consider that Google has approximately 3 to 4 billion users worldwide, absolutely dwarfing OpenAI’s 180 million.

Assuming that all 180 million OpenAI users performed an average of four searches per day, the total could reach 720 million searches per day.

Statista estimates that there are 6.3 million searches on Google per minute, which equals over 9 billion searches per day.

If OpenAI is to compete, they’re going to have to offer a useful product with a compelling reason to use it. For example, Google and Apple have a captive audience in their mobile device ecosystems that embed them into the daily lives of their users, both at work and at home. It’s fairly apparent that it’s not enough to create a search engine to compete.

Realistically, how can OpenAI achieve that level of ubiquity and usefulness?

OpenAI is facing an uphill battle against not just Google but Microsoft and Apple, too. If we count Internet of Things apps and appliances, then Amazon joins that list of competitors that already have a presence in billions of users’ daily lives.

OpenAI does not have the momentum to launch a search engine to compete against Google because it doesn’t have the ecosystem to support integration into users’ lives.

3. OpenAI Lacks Information Retrieval Expertise

Search is formally referred to as Information Retrieval (IR) in research papers and patents. No amount of searching in the Arxiv.org repository of research papers will surface papers authored by OpenAI researchers related to information retrieval. The same can be said for searching for information retrieval (IR) related patents. OpenAI’s list of research papers also lacks IR related studies.

It’s not that OpenAI is being secretive. OpenAI has a long history of publishing research papers about the technologies they’re developing. The research into IR does not exist. So if OpenAI is indeed planning on launching a challenge to Google, where is the smoke from that fire?

It’s a fair guess that search is not something OpenAI is developing right now. There are no signs that it is even flirting with building a search engine, there’s nothing there.

4. Is The OpenAI Search Engine A Microsoft Project?

There is substantial evidence that Microsoft is furiously researching how to use LLMs as a part of a search engine.

All of the following research papers are classified as belonging to the fields of Information Retrieval (aka search), Artificial Intelligence, and Natural Language Computing.

Here are few research papers just from 2024:

Enhancing human annotation: Leveraging large language models and efficient batch processing
This is about using AI for classifying search queries.

Structured Entity Extraction Using Large Language Models
This research paper describes a way to extract structured information from unstructured text (like webpages). It’s like turning a webpage (unstructured data) into a machine-understandable format (structured data).

Improving Text Embeddings with Large Language Models (PDF version here)
This research paper discusses a way to get high-quality text embeddings that can be used for information retrieval (IR). Text embeddings are representations of text in a form that algorithms can use to understand the semantic meanings and relationships between words.

The above research paper explains the use:

“Text embeddings are vector representations of natural language that encode its semantic information. They are widely used in various natural language processing (NLP) tasks, such as information retrieval (IR), question answering…etc. In the field of IR, the first-stage retrieval often relies on text embeddings to efficiently recall a small set of candidate documents from a large-scale corpus using approximate nearest neighbor search techniques.”
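A toy version of that first-stage retrieval step is sketched below, with a placeholder `embed` function and brute-force cosine similarity standing in for a real embedding model and approximate nearest neighbor index.

```python
# Toy first-stage retrieval: embed query and documents, recall the closest docs
# by cosine similarity. `embed` is a placeholder for an embedding model call.
import numpy as np

def embed(texts: list[str]) -> np.ndarray:
    """Hypothetical embedding function returning one vector per text."""
    raise NotImplementedError

def top_k(query: str, docs: list[str], k: int = 5) -> list[str]:
    doc_vecs = embed(docs).astype(float)
    query_vec = embed([query])[0].astype(float)
    # Cosine similarity = dot product of L2-normalized vectors.
    doc_vecs /= np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    query_vec /= np.linalg.norm(query_vec)
    scores = doc_vecs @ query_vec
    best = np.argsort(scores)[::-1][:k]
    return [docs[i] for i in best]
```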

There’s more research by Microsoft that relates to search, but these are the ones that are specifically related to search together with large language models (like GPT-4.5).

Following the trail of breadcrumbs leads directly to Microsoft as the technology powering any search engine that OpenAI is supposed to be planning… if that rumor is true.

5. Is Rumor Meant To Steal Spotlight From Gemini?

The rumor that OpenAI is launching a competing search engine was published on February 14th. The next day, on February 15th, Google announced the launch of Gemini 1.5, after announcing Gemini Advanced on February 8th.

Is it a coincidence that OpenAI’s announcement completely overshadowed the Gemini announcement the next day? The timing is incredible.

At this point the OpenAI search engine is just a rumor.

Featured Image by Shutterstock/rafapress

Revolutionizing SEO With Google’s Search Generative Experience via @sejournal, @VincentTerrasi

The advent of Google’s Search Generative Experience (SGE) is revolutionizing online search, ushering in a new era of contextualization and intuition in information discovery.

This technological advancement is fundamentally changing SEO strategies, requiring professionals to adopt a new approach to content creation.

The impact on users is equally significant, with AI greatly facilitating access to search results.

This article proposes an advanced thematic mapping approach to maximize the effectiveness of these technologies in SEO.

It also discusses the knowledge of large language models (LLMs) such as OpenAI’s GPT, Google’s Bard, and Microsoft’s Bing AI, highlighting their limitations and potential in SEO content creation.

The Arrival Of Google SGE

Google SGE marks a revolutionary shift in online search. This innovation sees Google take a more contextual and intuitive approach to information retrieval.

This development has a significant impact on how SEO professionals need to think and plan their content strategies.

The user experience is also changing, as AI-guided search results are easier to find.

Answers are quickly accessible without having to sift through multiple tabs and pages.

Understanding how this AI works and extracting its knowledge using new methods is essential to position yourself effectively and understand its limitations.

Understanding Large Language Models (LLMs)

LLMs such as GPT, Bard, and Bing AI are powerful tools with impressive natural language generation and understanding capabilities.

However, these models have limitations, particularly when it comes to understanding specific contexts and updating information.

Ontologies around bikes: LLM example (Image from author, January 2024)

SEO project staff need to understand these limitations in order to maximize their content creation efficiency.

There are two types of knowledge: that which comes from the data used for training and that which is in the search engine index and is used as part of the answers.

To illustrate this, I’d like to show you how we can map this knowledge.

Thematic Mapping Importance

Thematic mapping is a critical tool in SEO that organizes and structures content in a logical and intuitive way.

It ensures that all facets of a topic are covered, increasing the relevance and quality of the content. Using an LLM for thematic mapping offers unique advantages in generating new ideas and perspectives.

Structuring A Thematic Map

Topic mapping is the practice of grouping related ideas and topics into clusters to facilitate the creation of coherent and comprehensive content.

This approach not only helps to organize ideas in a logical way but also to identify gaps in existing content.

Topic Map Architecture

Choice Of Topic And Keywords

Choose a niche topic and identify the relevant keywords. You can start with the LLM of your choice, but I prefer one approach.

That approach is to use Google’s AI, such as PaLM 2, if you have a good command of Google’s tools (for your information, I have set up a training course on Data Marketing Labs).

Here’s a very simplified prompt to get the ontologies present in ChatGPT: 

  • Give me a list in a table of the ontologies around “YOUR CONCEPT.”
Screenshot from ChatGPT 4, January 2024

Brainstorming

For each expression, you’ll ask the LLM to brainstorm by creating several passages related to the expression.

I’m often asked why I make several passages. The answer is simply that, depending on the creativity threshold and the answers in Google’s index, the LLM may give slightly different answers, and this allows the full field of possibilities to be covered.

Here’s a picture where I’m using PaLM 2 to generate topics that the AI knows perfectly well and that are searched for by web users. The more topics you ask for, the better the coverage of your topic.

Screenshot from SGE Simulator, January 2024

Extraction

Next, you can extract all the entities mentioned in relation to a concept.

In the beginning, I used Python libraries, but now you can use an LLM because the task is very simple for it.

In the end, everything is stored in an array, and you can count the number of times a concept appears. I’d like to explain this part in more detail.

In this image, you can see that for each of the previous topics, I’m doing five generations to try to recover as many relevant elements as possible, which will help me generate the mind map.

For each generated text, I use PaLM 2 to extract the important terms, and everything is stored in a table.

Screenshot from PaLM 2, January 2024

As you can see, each step takes place in a Google Colab, which provides ready-to-use tables and makes it easy to sort your data with a few clicks.

Screenshot from Google Colab, January 2024
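For readers who prefer code to screenshots, here is a rough outline of the loop described above; `generate_passage` and `extract_entities` are stand-ins for the PaLM 2 calls, not the author’s exact prompts.

```python
# Rough outline: several generations per topic, entity extraction on each
# passage, then a frequency count of the extracted concepts.
from collections import Counter

def generate_passage(topic: str) -> str: ...      # LLM call, e.g. PaLM 2
def extract_entities(text: str) -> list[str]: ... # LLM or NLP library call

def concept_counts(topics: list[str], runs: int = 5) -> Counter:
    counts: Counter = Counter()
    for topic in topics:
        for _ in range(runs):                  # multiple passes capture variation
            passage = generate_passage(topic)
            counts.update(extract_entities(passage))
    return counts

# The most frequent concepts become the nodes of the thematic map.
```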

Next, you can build the thematic map from this prompt.

Screenshot from author, January 2024

Here’s a possible result:

**Step 1: Topical Map Creation**

**Topic:** Bike

**Nouns/Predicates:**

**1. Bike Types**
– Peloton bike
– Electric bike
– Mountain bike
– Road bike

**2. Bike Components**
– Sound bike boy bell
– Bike clips
– Bike pedals
– Road bike pedals

**3. Bike Activities**
– Run walk bike regularly
– Human fully aware bike

**4. Bike Accessories**
– Mountain bike tires

**5. Bike Safety**

**Step 2: Semantic Graph**

| Subtopic 1 | Subtopic 2 | Semantic Connection |
|---|---|---|
| Peloton bike | Electric bike | Both are types of bikes that are powered by electricity. |
| Electric bike | Sound bike boy bell | Both are components that can be added to a bike. |
| Sound bike boy bell | Human fully aware bike | Both are related to the safety of the rider. |
| Human fully aware bike | Run walk bike regularly | Both are related to the physical activity of riding a bike. |
| Run walk bike regularly | Mountain bike tires | Both are related to the terrain that a bike can be ridden on. |
| Mountain bike tires | Road bike pedals | Both are components that can be added to a bike. |
| Road bike pedals | Bike clips | Both are components that can be added to a bike. |

**Step 3: Mind Map**

**Topic:** Bike

**Subcategories:**

1. Bike Types
– Peloton bike
– Electric bike
– Mountain bike
– Road bike
– BMX bike

2. Bike Components
– Sound bike boy bell
– Bike clips
– Bike pedals
– Road bike pedals
– Bike seat

3. Bike Activities
– Run walk bike regularly
– Human fully aware bike
– Bike racing
– Bike touring
– Bike commuting

4. Bike Accessories
– Mountain bike tires
– Bike helmet
– Bike lights
– Bike pump
– Bike rack

5. Bike Safety
– Bike brakes
– Bike reflectors
– Bike locks
– Bike insurance
– Bike maintenance

6. Bike Brands
– Specialized
– Trek
– Giant
– Cannondale
– Scott

7. Bike Shops
– REI
– Performance Bike
– Dick’s Sporting Goods
– Walmart
– Target

8. Bike Events
– Tour de France
– Giro d’Italia
– Vuelta a España
– Tour of California
– USA Cycling National Championships

9. Bike Clubs
– The League of American Bicyclists
– Sierra Club
– National Bike Club
– Adventure Cycling Association
– Bikers Against Child Abuse

10. Bike Advocacy
– People for Bikes
– Rails-to-Trails Conservancy
– Bike Walk Alliance
– National Center for Safe Routes to School
– International Bicycle Fund

Visualization

You can visualize the whole thing using mind-mapping tools, and the results are very instructive.

For easy visualization, there is a free Assistant in ChatGPT called AI Diagrams.

I invite you to search for it, and then you can request the generation of your mind map from the table from the previous steps.

ChatGPT 4 bike mind map (Screenshot from ChatGPT 4, January 2024)
Bike mind map results from prompt (Screenshot from ChatGPT 4, January 2024)

Now I’d like to suggest some advanced actions to anticipate Google SGE.

Advanced Strategies For Google SGE

  • Use Google’s PaLM 2 to check that the map covers all aspects of your topic.
  • Integrate the map into your content research and writing process.
  • Turn each sub-theme into a web page, blog post, or other form of content and link them together to create an interconnected web of content.

This methodology provides a robust way to understand the role of generative AI in search engines and optimize for generative AI search features.

It’s enriched with detailed examples and explanations and focuses not just on topic optimization but on prioritizing content quality and targeting specific search intentions.

Human intervention remains critical to search intent and content quality.

The combination of a skilled writer and AI can enhance content optimization, using tools to maximize the efficiency and relevance of your ecosystem.

With the advent of generative AI, any SEO professional can build their own tool.

More resources: 


Featured Image: Summit Art Creations/Shutterstock