OpenAI's content deal will enhance ChatGPT with the ability to show real-time content, with links, in response to queries. OpenAI quietly took a step toward gaining more search engine-style functionality as part of a content licensing deal that may have positive implications for publishers and SEO.
Content Licensing Deal
OpenAI agreed to a content licensing deal with the Financial Times, a global news organization with offices in London, New York, continental Europe, and Asia.
Content licensing deals between AI organizations and publishers are generally about getting access to high quality training data. The training data is then used by language models to learn connections between words and concepts. This deal goes far beyond that use.
ChatGPT Will Show Direct Quotes With Attribution
What makes this content licensing deal between The Financial Times and OpenAI notable is that there is a reference to giving attribution to content within ChatGPT.
The announced licensing deal explicitly states that ChatGPT will be able to directly quote the licensed content and link to it.
Further, the licensing deal is intended to help improve ChatGPT’s “usefulness”, which is vague and can mean many things, but it takes on a slightly different meaning when used in the context of attributed answers.
The Financial Times agreement states that the licensing deal is for use in ChatGPT when it provides “attributed content” which is content with an attribution, commonly a link to where the content appeared.
This is the part of the announcement that references attributed content:
“The Financial Times today announced a strategic partnership and licensing agreement with OpenAI, a leader in artificial intelligence research and deployment, to enhance ChatGPT with attributed content, help improve its models’ usefulness by incorporating FT journalism, and collaborate on developing new AI products and features for FT readers. “
And this is the part of the announcement that mentions ChatGPT offering users attributed quotes and links:
“Through the partnership, ChatGPT users will be able to see select attributed summaries, quotes and links to FT journalism in response to relevant queries.”
The Financial Times Group CEO was even more explicit about OpenAI’s intention to show content and links in ChatGPT:
“This is an important agreement in a number of respects,” said FT Group CEO John Ridding. “It recognises the value of our award-winning journalism and will give us early insights into how content is surfaced through AI. …this partnership will help keep us at the forefront of developments in how people access and use information.
OpenAI understands the importance of transparency, attribution, and compensation…”
Brad Lightcap, COO of OpenAI, directly referenced showing real-time news content in ChatGPT, but more importantly, he referenced OpenAI exploring new ways to show content to its user base.
Lastly, the COO stated that they embraced disruption, which means innovation that creates a new industry or paradigm, usually at the expense of an older one, like search engines.
Lightcap is quoted:
“We have always embraced new technologies and disruption, and we’ll continue to operate with both curiosity and vigilance as we navigate this next wave of change.”
Showing direct quotes of Financial Times content with links to that content is very similar to how search engines work. This is a big change to how ChatGPT works and could be a sign of where ChatGPT is going in the future, a functionality that incorporates online content with links to that content.
Something Else That Is Possibly Related
Someone on Twitter recently noticed a search-related change connected to ChatGPT.
This change involves an SSL security certificate that was added for a subdomain of ChatGPT.com. ChatGPT.com is a domain name that was snapped up by someone to capitalize on the 2022 announcement of ChatGPT by OpenAI. OpenAI eventually acquired the domain and it’s been redirecting to ChatGPT.
The change that was noticed is to the subdomain: search.chatgpt.com.
This is significant news for publishers and search marketers because ChatGPT could become a source of valuable traffic if OpenAI takes ChatGPT in the direction of providing attributed summaries and direct quotes.
How Can Publishers Get Traffic From ChatGPT?
Questions remain about attributed quotes with links in response to relevant queries. Here are some unknowns about ChatGPT attributed links.
Does this mean that only licensed content will be shown and linked to in ChatGPT?
Will ChatGPT incorporate and use most web data without licensing deals in the same way that search engines do?
OpenAI may adopt an opt-in model where publishers could use a notation in robots.txt or in metadata to opt in to receiving traffic from ChatGPT (see the sketch after this list of questions).
Would you opt into receiving traffic from ChatGPT in exchange for allowing your content to be used for training?
How would the calculus change for SEOs and publishers if their competitors are all receiving traffic from ChatGPT?
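There is no official opt-in notation yet, but publishers can already control OpenAI's existing GPTBot crawler through robots.txt. The snippet below is a minimal sketch of how to check what a site currently permits; the domain is a placeholder, and the assumption that a future ChatGPT search feature would follow the same GPTBot directives is mine, not OpenAI's.

```python
from urllib.robotparser import RobotFileParser

# Check whether a site's robots.txt currently allows OpenAI's GPTBot crawler.
# GPTBot is OpenAI's existing training-data crawler; whether a future ChatGPT
# search feature would honor the same directive is an assumption, not
# something OpenAI has announced. example.com is a placeholder domain.
parser = RobotFileParser()
parser.set_url("https://www.example.com/robots.txt")
parser.read()

if parser.can_fetch("GPTBot", "https://www.example.com/some-article"):
    print("GPTBot is allowed to crawl this URL.")
else:
    print("GPTBot is blocked for this URL.")
```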
Appeasing the advertising algorithms on social media and Google has always been a mix of art and science – but there are plenty of rewards, not only in terms of active engagement but also in terms of lowered ad costs.
So far, AI tools have proved useful as an algorithm-cracker for the biggest digital marketing and ad agencies.
The problem is that small businesses and boutique agencies are still working to find their entry point with AI to pinpoint the recipe for ad success – across creation, targeting, testing, and reporting. This raises some thorny questions:
What are the key AI tools marketers should be using?
How can they be used between creative, ad deployment, and measurement?
How much should a business plan to invest in AI tools?
Is this more or less expensive than traditional advertising strategies?
I recently interviewed Logan Welbaum, an ex-Google and Meta employee who worked directly with the ad platforms at both companies.
Today, he is the founder and CEO of Plai, a Y Combinator-backed platform focused on advertising automation.
Welbaum has a precise understanding of social media ad algorithms and, specifically, how AI can be used to optimize ad strategies in line with them.
Image from Logan Welbaum, April 2024
Below is a lightly edited transcript of my questions and Logan Welbaum’s answers.
How Google And Meta Leverage User Interaction Data
Greg Jarboe: “From your experience at Google and Meta, how were large companies utilizing AI for ad optimization? What key functionalities did these solutions offer?”
Logan Welbaum: “Platforms such as Google and Facebook leverage user interaction data to optimize their ad serving. Their AI technologies offer advertisers the highest level of performance for their advertising campaigns.
Their approach is different from that of ad agencies and Plai, which own inventory data and optimize campaign creation. They offer features that allow for the uploading of additional data and the targeting of more specific audiences, because the inclusion of more data and signals enhances the performance of their AI systems.”
How Can Small Businesses Leverage AI-powered Solutions?
Jarboe: “How can smaller businesses, without the resources of large corporations, leverage similar AI-powered solutions for their advertising needs? Are there affordable, accessible options available?”
Welbaum: “Understanding ad metrics and following changes in social media ad algorithms requires experience. Plai trains our AI model to learn our experience in digital marketing and observation from our existing campaigns, so that it can assist small businesses.”
Jarboe: “Beyond basic automation, what unique functionalities can AI offer smaller businesses in ad creation, targeting, and measurement that traditional methods lack?”
Welbaum: “Ultimately, AI can save small businesses and agencies time and money. AI allows them to generate and analyze ad content in a much shorter amount of time and with fewer people than it would normally take. AI can build detailed plans and lists for targeting and help track the success of a campaign.
With Plai, small businesses can use our tools to launch things like video or Facebook ads in seconds and keep track of each ad’s progress all the way through its lifecycle. Our tech is a text-to-advertising approach, so a small business can just plug in their criteria and Plai will generate an ad with relevant keywords and images or videos.”
Jarboe: “Can you elaborate on how your platform specifically uses AI to address the needs of smaller businesses in ad creation, deployment, and measurement?”
Welbaum: “Taking Google Ads as an example, Google Ad Manager gives customers full control over a campaign but requires experience and knowledge to create a performing campaign. On the other hand, they have a product like Smart campaigns for those with little experience in digital marketing, but it lacks control and transparency.
Plai utilizes AI to guide small businesses to create a performing campaign. This allows advertisers to save time and reduce mistakes, but also gives control if they want to update AI’s recommendations.
Similarly, in terms of measurement, Google Ad Manager offers a comprehensive view of a campaign through an extensive array of metrics, whereas Smart campaign provides a more limited set of metrics. Our AI analyzes all available metrics, interprets them, and then delivers actionable insights in language that doesn’t require any digital marketing expertise.”
What Are The Challenges For Small Businesses Arising From Social Media Ad Algorithms?
Jarboe: “What are the key challenges small businesses face when navigating social media ad algorithms? How does Plai utilize AI to help them overcome these challenges?”
Welbaum: “SMBs want to grow revenue and run their businesses. Ads that are relevant and creative perform better and win SMBs a cheaper ad cost because of it.”
Jarboe: “Can you provide concrete examples of how smaller businesses have achieved success using Plai’s AI-powered advertising solutions? What metrics demonstrate the effectiveness of this approach?”
Welbaum: “Plai customers have seen a 90% decrease in ad cost as well as 105% higher click-through rates than the industry standard. Not only do Plai customers spend less on ads and see better click-through results, but the platform also allows them to generate ads more quickly and effectively, reaching their desired target audience and ultimately generating more business.”
How Will AI-Powered Advertising Tools Evolve In The Next Few Years?
Jarboe: “In your opinion, how will the landscape of AI-powered advertising tools evolve in the coming years? What new functionalities can we expect?”
Welbaum: “Ad creative will improve dramatically for brands of any size with any budget.”
Jarboe: “While AI offers significant advantages, are there any potential drawbacks or limitations smaller businesses should be aware of when employing AI for advertising?”
Welbaum: “We automate everything from a single prompt, but we still provide features and options for customers to take over and control or make edits. Still having that control is essential.”
How Can Small Businesses Prepare To Integrate AI Into Marketing Strategies?
Jarboe: “Looking ahead, how can smaller businesses best prepare to fully integrate AI into their overall marketing strategies?”
Welbaum: “Small businesses can prepare to implement AI into their marketing strategies through the creative process and general optimizations – these two elements of advertising will naturally be a first step for AI to fit within SMBs.”
Jarboe: “ChatGPT, while impressive, represents just one facet of AI. Can you elaborate on other AI applications beyond language models that can be beneficial for smaller businesses in advertising?”
Welbaum: “Recommendation algorithms, such as Amazon’s ‘customers also bought’ feature, serve as one example. Plai has a wide range of clients. This enables our model to identify successful strategies among a subset of customers and then recommend those insights across our entire customer base.”
Jarboe: “How important is it for smaller businesses to possess a basic understanding of social media ad algorithms to effectively utilize AI-powered advertising tools?”
Welbaum: “SMBs want to grow revenue and run their businesses. Ads that are relevant and creative perform better and win SMBs a cheaper ad cost because of it.”
AI Is Leveling The Playing Field In Advertising
When it comes to advertising, AI offers significant advantages – and it’s not just for big businesses.
With advertising platforms leaning into AI technology, incorporating it into your process can help you navigate their algorithms. Automated campaigns are just part of the equation. AI can power more efficient analysis and unlock new customer insights. As the technology improves and more platforms and service providers lean in, it will become more accessible.
Incorporating AI into your marketing approach can help you stay competitive as a small business.
DeepL, the makers of the DeepL Translator, announced a new product called DeepL Write, a real-time AI editor powered by their own Large Language Model (LLM) that improves content at the draft stage while preserving the writer's tone and voice.
Unlike many other AI writing tools, DeepL Write is not a content generator. It's an editor that suggests which words to choose and how best to phrase ideas, and proofreads your documents so they sound professional and hit the right tone and voice, plus the usual spelling, grammar, and punctuation improvements.
According to DeepL:
“Unlike common generative AI tools that auto-populate text, or rules-based grammar correction tools, DeepL Write Pro acts as a creative assistant to writers in the drafting process, elevating their text with real-time, AI-powered suggestions on word choice, phrasing, style, and tone.
This unique approach sparks a creative synergy between the user and the AI that transforms text while preserving the writer’s authentic voice. DeepL Write Pro’s strength lies in its ability to give writers a sophisticated boost in their communication, regardless of language proficiency—empowering them to find the perfect words for any situation or audience.”
Enterprise Grade Security
DeepL Write also comes with TLS (Transport Layer Security) encryption. TLS is a protocol used to encrypt data sent between an app and a server. It's commonly used for email and instant messaging, and it's also the technology behind HTTPS, which keeps websites secure.
In addition to keeping documents secure, DeepL Write also comes with a text deletion feature to ensure that nothing is stored online.
Standalone and With DeepL Translator Integration
DeepL Write is available as a standalone app and as part of a suite together with DeepL Translator. The integration with DeepL Translator makes it an advanced tool for creating documentation that can be rewritten into another language in the right tone and style.
At this time DeepL Write Pro is available in English and German, with more languages becoming available soon.
The standalone product is available in a free version with limited text improvements and a Pro version that costs $10.99 per month.
DeepL Write Pro comes with the following features:
Maximum data security
Unlimited text improvements
Unlimited use of alternatives
Unlimited use of writing styles
Team administration
There is also an enterprise tier named DeepL Write for Business, which is for organizations that need it for 50 or more users.
DeepL Write Pro
Many publishers and search marketers who depended on AI for generating content reported losing rankings during the March Google core algorithm update. Naturally, many publishers are hesitant to give AI a try for generating content.
DeepL Write Pro offers an alternative use of AI for content in the form of a virtual editor that can help polish a human's writing and make it more concise, professional, and in the correct style and tone.
One of the things that stands between a passable document and a great one is good editing, and that is exactly what an editor provides: elevating content to a higher quality. Given the modest price and the value a good editor provides, the timing for this kind of product couldn't be better.
Google DeepMind published a research paper that proposes a language model called RecurrentGemma that can match or exceed the performance of transformer-based models while being more memory efficient, offering the promise of large language model performance in resource-limited environments.
The research paper offers a brief overview:
“We introduce RecurrentGemma, an open language model which uses Google’s novel Griffin architecture. Griffin combines linear recurrences with local attention to achieve excellent performance on language. It has a fixed-sized state, which reduces memory use and enables efficient inference on long sequences. We provide a pre-trained model with 2B non-embedding parameters, and an instruction tuned variant. Both models achieve comparable performance to Gemma-2B despite being trained on fewer tokens.”
Connection To Gemma
Gemma is an open model that uses Google's top-tier Gemini technology but is lightweight and can run on laptops and mobile devices. Similar to Gemma, RecurrentGemma can also function in resource-limited environments. Other similarities between Gemma and RecurrentGemma are in the pre-training data, instruction tuning, and RLHF (Reinforcement Learning From Human Feedback). RLHF is a technique for training generative AI models using human feedback.
Griffin Architecture
The new model is based on a hybrid model called Griffin that was announced a few months ago. Griffin is called a “hybrid” model because it uses two kinds of technologies: one that lets it efficiently handle long sequences of information and another that lets it focus on the most recent parts of the input. This allows it to process “significantly” more data (increased throughput) in the same time span as transformer-based models while also decreasing wait time (latency).
The Griffin research paper proposed two models, one called Hawk and the other named Griffin. The Griffin research paper explains why it’s a breakthrough:
“…we empirically validate the inference-time advantages of Hawk and Griffin and observe reduced latency and significantly increased throughput compared to our Transformer baselines. Lastly, Hawk and Griffin exhibit the ability to extrapolate on longer sequences than they have been trained on and are capable of efficiently learning to copy and retrieve data over long horizons. These findings strongly suggest that our proposed models offer a powerful and efficient alternative to Transformers with global attention.”
The difference between Griffin and RecurrentGemma is in one modification related to how the model processes input data (input embeddings).
Breakthroughs
The research paper states that RecurrentGemma provides similar or better performance than the more conventional Gemma-2B transformer model (which was trained on 3 trillion tokens versus 2 trillion for RecurrentGemma). This is part of the reason the research paper is titled “Moving Past Transformers,” because it shows a way to achieve this level of performance without the high resource overhead of the transformer architecture.
Another win over transformer models is in the reduction in memory usage and faster processing times. The research paper explains:
“A key advantage of RecurrentGemma is that it has a significantly smaller state size than transformers on long sequences. Whereas Gemma’s KV cache grows proportional to sequence length, RecurrentGemma’s state is bounded, and does not increase on sequences longer than the local attention window size of 2k tokens. Consequently, whereas the longest sample that can be generated autoregressively by Gemma is limited by the memory available on the host, RecurrentGemma can generate sequences of arbitrary length.”
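To get a feel for what that means in practice, here is a rough, back-of-envelope sketch of the arithmetic. The layer, head, and dimension numbers are illustrative placeholders, not Gemma's published architecture; only the shape of the comparison – KV-cache memory growing linearly with sequence length versus a state bounded at the 2k-token window – reflects the paper's claim.

```python
# Illustrative comparison (placeholder architecture numbers, not Gemma's):
# transformer KV-cache memory grows with every generated token, while a
# bounded local-attention/recurrent state stops growing at the window size.

def kv_cache_bytes(seq_len, n_layers=18, n_kv_heads=1, head_dim=256, bytes_per_value=2):
    # Keys and values are cached per layer for every token in the sequence.
    return seq_len * n_layers * n_kv_heads * head_dim * 2 * bytes_per_value

def bounded_state_bytes(seq_len, window=2048, **kwargs):
    # The state never grows past the 2k-token local attention window.
    return kv_cache_bytes(min(seq_len, window), **kwargs)

for seq_len in (2_048, 16_384, 131_072):
    print(
        f"{seq_len:>7} tokens | "
        f"growing KV cache ~{kv_cache_bytes(seq_len) / 1e6:7.1f} MB | "
        f"bounded state ~{bounded_state_bytes(seq_len) / 1e6:5.1f} MB"
    )
```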
RecurrentGemma also beats the Gemma transformer model in throughput (the amount of data that can be processed; higher is better). The transformer model's throughput suffers at higher sequence lengths (an increase in the number of tokens or words), but that's not the case with RecurrentGemma, which is able to maintain a high throughput.
The research paper shows:
“In Figure 1a, we plot the throughput achieved when sampling from a prompt of 2k tokens for a range of generation lengths. The throughput calculates the maximum number of tokens we can sample per second on a single TPUv5e device.
…RecurrentGemma achieves higher throughput at all sequence lengths considered. The throughput achieved by RecurrentGemma does not reduce as the sequence length increases, while the throughput achieved by Gemma falls as the cache grows.”
Limitations Of RecurrentGemma
The research paper does show that this approach comes with its own limitation, where performance lags in comparison with traditional transformer models.
The researchers highlight a limitation in handling very long sequences, which is something transformer models are better able to handle.
According to the paper:
“Although RecurrentGemma models are highly efficient for shorter sequences, their performance can lag behind traditional transformer models like Gemma-2B when handling extremely long sequences that exceed the local attention window.”
What This Means For The Real World
The importance of this approach to language models is that it suggests there are other ways to improve the performance of language models while using fewer computational resources, on an architecture that is not a transformer model. It also shows that a non-transformer model can overcome one of the limitations of transformer models: cache sizes that tend to increase memory usage.
This could lead to applications of language models in the near future that can function in resource-limited environments.
Read the Google DeepMind research paper:
RecurrentGemma: Moving Past Transformers for Efficient Open Language Models (PDF)
Featured Image by Shutterstock/Photo For Everything
Brave announced their new privacy-focused AI search engine, Answer with AI, which works with Brave's own search index of billions of websites. Their current search engine already serves 10 billion search queries per year, which means Brave's AI-powered search engine is now one of the largest AI search engines online.
Many in the search marketing and ecommerce communities have expressed anxiety about the future of the web because of AI search engines. Brave’s AI search engine still shows links and most importantly it does not by default answer commercial or transactional queries with AI, which should be good news for SEOs and online businesses. Brave values the web ecosystem and will be monitoring website visit patterns.
Search Engine Journal spoke with Josep M. Pujol, Chief of Search at Brave, who answered questions about the search index, how it works with AI, and most importantly, what SEOs and business owners need to know in order to improve rankings.
Answer With AI Is Powered By Brave
Unlike other AI search solutions, Brave’s AI search engine is powered completely by its own search index of crawled and ranked websites. The entire underlying technology, from the search index to the Large Language Models (LLMs) and even the Retrieval Augmented Generation (RAG) technology is all developed by Brave. This is especially good from a standpoint of privacy and it also makes the Brave search results unique, further distinguishing it from other me-too search engine alternatives.
Search Technology
The search engine itself is all done in-house. According to Josep M. Pujol, Chief of Search at Brave:
“We have query-time access to all our indexes, more than 20 billion pages, which means we are extracting arbitrary information in real-time (schemas, tables, snippets, descriptions, etc.). Also, we go very granular on what data to use, from whole paragraphs or texts on a page to single sentences or rows in a table.
Given that we have an entire search engine at our disposal, the focus is not on retrieval, but selection and ranking. In addition to pages in our index, we do have access to the same information used to rank, such as scores, popularity, etc. This is vital to help select which sources are more relevant.”
Retrieval Augmented Generation (RAG)
The search engine works by combining a search index and large language models, with Retrieval Augmented Generation (RAG) technology in between that keeps the answers fresh and fact-based. I asked about RAG and Josep confirmed that's how it works.
He answered:
“You are correct that our new feature is using RAG. As a matter of fact, we’ve already been using this technique on our previous Summarizer feature released in March 2023. However, in this new feature, we are expanding both the quantity and quality of the data used in the content of the prompt.”
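For readers unfamiliar with the pattern, here is a minimal sketch of how generic RAG works. This is not Brave's actual pipeline; the search_index, llm, and document objects are hypothetical placeholders that stand in for whatever retrieval and generation components a given system uses.

```python
# A minimal sketch of the generic RAG pattern: retrieve, select, generate.
# The search_index and llm objects are hypothetical stand-ins, not Brave's APIs.

def answer_with_rag(query, search_index, llm, max_snippets=8):
    # 1. Retrieval: pull candidate documents from the search index.
    candidates = search_index.search(query, limit=100)

    # 2. Selection and ranking: keep only the most relevant snippets, since
    #    the LLM prompt holds far less than the index can return.
    snippets = sorted(candidates, key=lambda doc: doc.relevance, reverse=True)[:max_snippets]

    # 3. Generation: the LLM answers using the selected snippets as context,
    #    which keeps responses fresh and grounded in indexed pages.
    context = "\n\n".join(doc.text for doc in snippets)
    prompt = (
        "Answer the question using only the sources below.\n\n"
        f"{context}\n\nQuestion: {query}"
    )
    return llm.generate(prompt)
```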
Large Language Models Used
I asked about the language models in use in the new AI search engine and how they’re deployed.
“Models are deployed on AWS p4 instances with VLLM.
We use a combination of Mixtral 8x7B and Mistral 7B as the main LLM model.
However, we also run multiple custom trained transformer models for auxiliary tasks such as semantic matching and question answering. Those models are much smaller due to strict latency requirements (10-20 ms).
Those auxiliary tasks are crucial for our feature, since those are the ones that do the selection of data that will end up being on the final LLM prompt; this data can be query-dependent snippets of text, schemas, tabular data, or internal structured data coming from our rich snippets. It is not a matter of being able to retrieve a lot of data, but to select the candidates to be added to the prompt context.
For instance, the query “presidents of france by party” processes 220KB of raw data, including 462 rows selected from 47 tables, 7 schemas. The prompt size is around 6500 tokens, and the final response is a mere 876 bytes.
In short, one could say that with “Answer with AI” we go from 20 billion pages to a few thousand tokens.”
How AI Works With Local Search Results
I next asked about how the new search engine will surface local search results. I asked Josep if he could share some scenarios and example queries where the AI answer engine will surface local businesses. For example, if I query for the best burgers in San Francisco, will the AI answer engine provide an answer for that and links to it? Will this be useful for people making business or vacation travel plans?
Josep answered:
“The Brave Search index has more than 1 billion location-based schemas, from which we can extract more than 100 million businesses and other points of interest.
Answer with AI is an umbrella term for Search + LLMs + multiple specialized machine learning models and services to retrieve, rank, clean, combine and represent information. We mention this because LLMs do not make all the decisions. As of now, we use them predominantly to synthesize unstructured and structured information, which happens in offline operations as well as in query-time ones.
Sometimes the end result feels very LLM-influenced (this is the case when we believe the answer to the user question is a single Point of Interest, e.g. “checkin faro cuisine”), and other times their work is more subtle (e.g. “best burgers sf”), generating a business description across different web references or consolidating a category for the business in a consistent taxonomy.”
Tips For Ranking Well
I next asked if using Schema.org structured data was useful for helping a site rank better in Brave and if he had any other tips for SEO and online businesses.
He answered:
“Definitely, we pay special attention to schema.org structured data when building the context of the LLM prompt. The best is to have structured data about their business (standard schemas from schema.org). The more comprehensive those schemas are, the more accurate the answer will be.
That said, our Answer with AI will be able to surface data about the business not in those schemas too, but it is always advisable to repeat information in different formats.
Some businesses only rely on aggregators (Yelp, Tripadvisor, Yellow Pages) for their business information. There are advantages to adding schemas to the business web site even if only for crawling bots.”
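As a concrete (and entirely invented) example of the kind of schema.org structured data he is describing, the snippet below builds a minimal Restaurant record in Python and prints the JSON-LD that would go into a page's script tag with type "application/ld+json".

```python
import json

# A minimal, invented schema.org record for a local business.
# Embedding the printed JSON-LD in the page gives crawlers the structured
# fields (name, address, cuisine, etc.) that Pujol says feed the LLM prompt.
local_business = {
    "@context": "https://schema.org",
    "@type": "Restaurant",
    "name": "Example Burger Bar",
    "url": "https://www.example.com",
    "telephone": "+1-415-555-0100",
    "servesCuisine": "Burgers",
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "123 Example St",
        "addressLocality": "San Francisco",
        "addressRegion": "CA",
        "postalCode": "94103",
    },
}

print(json.dumps(local_business, indent=2))
```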
Plans For AI Search In The Brave Browser
Brave shared that at some point in the near future they will integrate the new AI search functionality directly in the Brave Browser.
Josep explained:
“We plan to integrate the AI answer engine with Brave Leo (the AI assistant embedded in the Brave browser) very soon. Users will have the option to send the answer to Leo and continue the session there.”
Other Facts
Brave’s announcement also shared these facts about the new search engine:
“Brave Search’s generative answers are not just text. The deep integration between the index and model makes it possible for us to combine online, contextual, named entities enrichments (a process that adds more context to a person, place, or thing) as the answer is generated. This means that answers combine generative text with other media types, including informational cards and images.
The Brave Search answer engine can even combine data from the index and geo local results to provide rich information on points of interest. To date, the Brave Search index has more than 1 billion location-based schemas, from which we can extract more than 100 million businesses and other points of interest. These listings—larger than any public dataset—mean the answer engine can provide rich, instant results for points of interest all over the world.”
Google’s CEO Sundar Pichai recently discussed the future of search, affirming the importance of websites (good news for SEO). But how can that be if AI is supposed to make search engines obsolete (along with SEO)?
Search vs Chatbots vs Generative Search
There’s a lot of discussion about AI search but what’s consistently missing is a delineation of what is meant by that phrase.
There are three ways to think about what is being discussed:
Search Engines
Chatbots like Gemini or ChatGPT
Generative Search (chatbots stacked on top of a traditional search engine, like Perplexity.ai and Bing)
Traditional Search Is A Misnomer
The word misnomer means an inaccurate name, description or label that’s given to something. We still talk about traditional search, perhaps out of habit. The reality that must be acknowledged is that traditional search no longer exists. It’s a misnomer to refer to Google as traditional search.
Sundar Pichai made the point that Google has been using AI for years and we know this is true because of systems like RankBrain, SpamBrain, Helpful Content System (aka HCU) and the Reviews System. AI is involved at virtually every step of Google search from the backend to the frontend in the search results.
Google has described how its AI-based spam detection works:

“First, we have systems that can detect spam when we crawl pages or other content. …Some content detected as spam isn’t added to the index.
These systems also work for content we discover through sitemaps and Search Console. …We observed spammers hacking into vulnerable sites, pretending to be the owners of these sites, verifying themselves in the Search Console and using the tool to ask Google to crawl and index the many spammy pages they created. Using AI, we were able to pinpoint suspicious verifications and prevented spam URLs from getting into our index this way.”
AI is involved in the indexing process and all the way through to the ranking process and lastly in the search results themselves.
The most recent March 2024 update is described by Google as complex and is still not over in April 2024. I suspect that Google has transitioned to a more AI-friendly infrastructure in order to accommodate doing things like integrating the AI signals formerly associated with the HCU and the Reviews System straight into the core algorithm.
People are freaking out because the AI search of the future will summarize answers. Well, Google already does that in featured snippets and knowledge graph search results.
Let’s be real: traditional search no longer exists; it’s a misnomer. Google is more accurately described as an AI search engine, and this is important to acknowledge because, as you’ll shortly see, it directly relates to what Sundar Pichai means when he talks about what search will look like in ten years.
Blended Hybrid Search AKA Generative Search
What people currently call AI Search is also a misnomer. The more accurate label is Generative Search. Bing and Perplexity.ai are generative AI chatbots stacked on top of a search index, with something in the middle that coordinates between the two, generally referred to as Retrieval-Augmented Generation (RAG), a technology created in 2020 by Facebook AI researchers.
Chatbots
Chatbots are a lot of things including ChatGPT and Gemini. No need to belabor this point, right?
Search Vs Generative Search Vs Chatbots: Who Wins?
Generative search is an awkward mix of a chatbot and a search engine with a somewhat busy interface. It’s awkward because it wants to do your homework and tell you the phone number of the local restaurant but it’s mediocre at both. But even if generative search improves does anyone really want a search engine that can also write an essay? It’s almost a given that those awkwardly joined capabilities are going to drop off and it’ll eventually come to resemble what Google already is.
Chatbots and Search Engines
That leaves us with a near-future of chatbots and search engines. Sam Altman said that an AI chatbot search that shows advertising is dystopian.
Google is pursuing both strategies by tucking the Gemini AI chatbot into Android as an AI assistant that can make your phone calls, phone the local restaurant for you and offer suggestions for the best pizza in town. CEO Sundar Pichai is on record stating that the web is an important resource that they’d like to continue using.
But if the chatbot doesn’t show ads, that’s going to significantly cut into Google’s ad revenue. Nevertheless, the SEO industry is convinced that SEO is over because search engines are going to be replaced by AI.
It’s possible that Google at some point makes a lot of money from cloud services and SaaS products and it will be able to walk away from search-based advertising revenue if everyone migrates towards AI chatbots.
Query Deserves Advertising
But if there’s money in search advertising, why go through all the trouble to crawl the web, develop the technology and not monetize it? Who leaves money on the table? Not Google.
There’s a search engine algorithm called Query Deserves Freshness. The algorithm determines if a search query is trending or is newsworthy and will choose a webpage on the topic that is recently published, fresh.
Similarly, I believe at some point that chatbots are going to differentiate when a search query deserves ads and switch over to a search result.
Google’s CEO Pichai contradicts the SEO narrative of the decline and disappearance of search engines. Pichai says that the future of search includes websites because search needs the diversity of opinions inherent in the web. So where is this all leading?
Google Search already surfaces answers for non-money queries that are informational like the weather and currency conversions. There are no ads for those queries so Google is not losing anything by showing informational queries in a chatbot.
But for shopping and other transactional types of search queries, the best solution is Query Deserves Advertising.
If a user asks a shopping related search query there’s going to come a time where the chatbot will “helpfully” decide that the Query Deserves Advertising and switch over to the search engine inventory that also includes advertising.
That may explain why Google’s CEO sees a future where the web is not replaced by an AI but rather they coexist. So if you think about it, Query Deserves Advertising may be how search engines preserve their lucrative advertising business in the age of AI.
Query Deserves Search
An extension of this concept is to think about search queries – comparisons, user reviews, expert human reviews, news, medical, financial, and other queries that require human input – where that input will need to be surfaced. Those kinds of queries may also switch over to a search result. The results may not look like today’s search results, but they will still be search results.
People love reading reviews, news, gossip, and other human-generated topics, and that’s not going away. Insights matter. Personality matters.
Query Deserves SEO
So maybe the SEO knee jerk reaction that SEO is dead is premature. We’re still at the beginning of this and as long as there’s money to be made off of search there will still be a need for websites, search engines and SEO.
Google and Alphabet’s CEO Sundar Pichai sat down for a discussion on AI that inevitably focused on the future of search. He explained his vision of search and the role of websites, insisting that the only thing different is the technology.
These Are The Early Days
The interviewer asked if Sundar was caught by surprise by how fast AI has progressed recently. Google’s CEO made it clear that Google was indeed at the forefront of AI and that they have been creating the infrastructure for it since 2016. He also reminded the interviewer that the world is at the very beginning of the AI age and that there’s a lot more coming.
Sundar answered:
“…one of the main things I did as a CEO is to really pivot the company towards working on AI and I think that’ll serve us well for the next decade ahead.
For example now I look back and compute is the hot currency now. We built TPUs, we started really building them at scale in 2016 right, so we have definitely been thinking about this for a long time.
…we’ve always had a sense for uh the trajectory ahead and in many ways we’ve been preparing the company for that and so I think foundationally a lot of our R&D …a lot of it has gone into AI for a long time and so I feel incredibly well positioned for what’s coming.
We’re still in the very early days I think people will be surprised at the level of progress we’re going to see and I feel like we’ve scratched the tip of the iceberg.”
The Only Thing Different Is The Technology
Sundar was also asked about the future of search and what it would look like. There’s a lot of anxiety among publishers and search marketers that AI will replace search entirely and that websites will fall into decline, taking the SEO industry down with them.
So it may come as a relief that Google’s CEO anticipates a future in which people and websites continue playing an important role in search, just as they do today.
He starts by asserting that AI has been a part of search for many years and that the web ecosystem still plays a role in making search useful. He also underlines the point that ten blue links haven’t been a thing for 15 years (People Also Ask, videos, top news, carousels) and that Google has long given direct answers (featured snippets, etc.).
This is the question asked:
How are things going to evolve? Like how will people access information in 10 years?
Sundar answers that the only thing different is the technology:
“Look, I think it’s one of the common myths around that Google has been ten blue links for a long time. You know, when mobile came we knew Google search had to evolve a lot. We call it Featured Snippets, but for almost ten years now, you go to Google for many questions we kind of use AI to answer them right, we call it web answers internally.
And so, we’ve always answered questions where we can but we always felt when people come and look for information people in certain cases want answers but they also want the richness and the diversity of what’s out there in the world and it’s a good balance to be had and we’ve always, I think, struck that balance pretty well.
To me all that is different is now the technology by which you can answer is progressing, so we will continue doing that. But this evolution has been underway in search for a long long time.”
People Trust Search
Sundar observed that search has always evolved and despite it being different today than what it was like fifteen years ago, it’s still about surfacing information from the web.
He continued:
“Search used to be text and 10 blue links maybe 15 years ago but you know be it images, be it videos, be it finding answers for your questions, those are all changes you know …to to my earlier point people kind of shrug and …we’ve done all this in Google search for a long time and people like it, people engage with it, people trust it.
So to me, I view it as a more natural continuation, obviously with LLMs and AI. I think you have a more powerful tool to do that and so which is what we are putting in search, you know with Search Generative Experience and so we’ll continue evolving it in that direction too.”
Search Engines And The Web Go Together
He was next asked about the question of political and cultural biases in search engines, with the interviewer mentioning that Google’s output has been accused of reflecting the liberal biases of its employees. He was asked: how do you think about what answer to give to questions?
Sundar’s answer returned to referencing the value of information created by people as found on websites as the best source of answers. He said that even with Search Generative Experience, they still want to point users to websites.
This is how he explained it:
“Let’s talk about search for a second here, you’re asking a very important question. I think you know the the work we have done over many many years making sure, from a search standpoint, in search we try to reflect what’s out in the web. And we want to give trustworthy high quality information. We’ve had to navigate all of this for a long time.
I think we’ve always struck the balance, that’s what I’m saying, it’s not about giving an answer, there are certain times you give an answer, what’s the population of the United States, yes it’s an answerable question. There are times you want to surface the breadth of opinions out there on the web which is what search does and does it well.
Just because you’re saying we are summarizing it on top doesn’t mean we veer from those principles. The summary can still point you to the range of opinions out there right, and we do that today all the time.”
SGE Is Not A Chatbot Experience
This next part is very important because it emphasizes the word “search” in the phrase, Search Generative Experience in order to contrast that with talking to a chatbot.
There are a lot of articles predicting the decline of search traffic due to SGE, but there are many reasons why that’s not what’s happening, and Sundar explains that by differentiating the search experience from the chatbot experience. This is super important because it’s a point that’s lost on those whose knee-jerk reaction is that SGE is going to replace websites. According to Sundar, that’s not the case because search and chatbots are two different things.
His answer:
“And so I think that’s different from when you’re in a chatbot and I think that’s the more active area of research where sometimes it has its voice so how do you get those moments right and you know again for us I think it’s an area where we will be deeply committed to getting it right.
How do you do it in a way that which you represent the wide range of views that are held by people around the world and I think there are many aspects to it, the issues with AI models are not just at Google you see it across other models.”
AI Improves Search (Not Replaces It)
Near the end of the discussion, Sundar describes AI as a technology that improves current technologies (and not as something that replaces them). This too is an important point to consider when thinking about how AI will impact search and SEO.
His explanation of how AI improves but not necessarily replaces:
“…of course as a company you want to make sure you’re capitalizing on those innovations and building successful products, businesses, but I think we’ve long demonstrated that we can do it. The thing that excites me about AI is it’s the same underlying piece of technology for the first time in our history we have one leveraged piece of technology which can improve search, can improve YouTube, can improve Waymo and we put it all as cloud to our customers outside and so I feel good about that.”
Takeaways
There was a lot of important information in this interview that provides the most comprehensive picture of what the state of search will look like in the future.
Some of the important points:
AI is not new. It’s been a part of Google for many years now.
Google has provided answers and summaries for years
Websites are important to search
SGE is not a chatbot experience, it’s a search experience
The new oil isn’t data or attention. It’s words. The differentiator to build next-gen AI models is access to content when normalizing for computing power, storage, and energy.
But the web is already getting too small to satiate the hunger for new models.
Some executives and researchers say the industry’s need for high-quality text data could outstrip supply within two years, potentially slowing AI’s development.
Even fine-tuning doesn’t seem to work as well as simply building more powerful models. A Microsoft research case study shows that effective prompts can outperform a fine-tuned model by 27%.
We were wondering if the future will consist of many small, fine-tuned models or a few big, all-encompassing ones. It seems to be the latter.
There is no AI strategy without a data strategy.
Hungry for more high-quality content to develop the next generation of large language models (LLMs), model developers are starting to pay for natural content and reviving their efforts to label synthetic data.
For content creators of any kind, this new flow of money could carve the path to a new content monetization model that incentivizes quality and makes the web better.
Image Credit: Lyna ™
KYC: AI
If content is the new oil, social networks are oil rigs. Google invested $60 million a year in using Reddit content to train its models and surface Reddit answers at the top of search. Pennies, if you ask me.
YouTube CEO Neal Mohan recently sent a clear message to OpenAI and other model developers that training on YouTube is a no-go, defending the company’s massive oil reserves.
The New York Times, which is currently running a lawsuit against OpenAI, published an article stating that OpenAI developed Whisper to train models on YouTube transcripts, and Google uses content from all of its platforms, like Google Docs and Maps reviews, to train its AI models.
Generative AI data providers like Appen or Scale AI are recruiting (human) writers to create content for LLM model training.
Make no mistake, writers aren’t getting rich writing for AI.
For $25 to $50 per hour, writers perform tasks like ranking AI responses, writing short stories, and fact-checking.
Applicants must have a Ph.D. or master’s degree or be currently attending college. Data providers are clearly looking for experts and “good” writers. But the early signs are promising: Writing for AI could be monetizable.
Image Credit: Kevin Indig
Model developers look for good content in every corner of the web, and some are happy to sell it.
Content platforms like Photobucket sell photos for five cents to one dollar a piece. Short-form videos can get $2 to $4; longer films cost $100 to $300 per hour of footage.
With billions of photos, the company struck oil in its backyard. Which CEO can withstand such a temptation, especially as content monetization is getting harder and harder?
Publishers are getting squeezed from multiple sides:
Few are prepared for the death of third-party cookies.
Social networks send less traffic (Meta) or deteriorate in quality (X).
Most young people get news from TikTok.
SGE looms on the horizon.
Ironically, labeling AI content better might help LLM development because it’s easier to separate natural from synthetic content.
In that sense, it’s in the interest of LLM developers to label AI content so they can exclude it from training or use it the right way.
Labeling
Drilling for words to train LLMs is just one side of developing next-gen AI models. The other one is labeling. Model developers need labeling to avoid model collapse, and society needs it as a shield against fake news.
A new movement of AI labeling is rising despite OpenAI dropping watermarking due to low accuracy (26%). Instead of labeling content themselves, which seems futile, big tech (Google, YouTube, Meta, and TikTok) pushes users to label AI content with a carrot/stick approach.
Google uses a two-pronged approach to fight AI spam in search: prominently showing forums like Reddit, where content is most likely created by humans, and issuing penalties.
Google surfacing more content from forums in the SERPs is a way to counterbalance AI content. Verification is the ultimate AI watermarking. Even though Reddit can’t prevent humans from using AI to create posts or comments, the chances are lower because of two things Google Search doesn’t have: moderation and Karma.
Yes, Content Goblins have already taken aim at Reddit, but most of the 73 million daily active users provide useful answers. Content moderators punish spam with bans or even kicks. But the most powerful driver of quality on Reddit is Karma, “a user’s reputation score that reflects their community contributions.” Through simple upvotes or downvotes, users can gain authority and trustworthiness, two integral ingredients in Google’s quality systems.
Google recently clarified that it expects merchants not to remove AI metadata from images using the IPTC metadata protocol.
When an image has a tag like compositeSynthetic, Google might label it as “AI-generated” anywhere, not just in shopping. The punishment for removing AI metadata is unclear, but I imagine it like a link penalty.
IPTC is the same format Meta uses for Instagram, Facebook, and WhatsApp. Both companies add IPTC metatags to any content coming out of their own LLMs. The more AI tool makers follow the same guidelines to mark and tag AI content, the more reliably detection systems work.
Meta explains:

“When photorealistic images are created using our Meta AI feature, we do several things to make sure people know AI is involved, including putting visible markers that you can see on the images, and both invisible watermarks and metadata embedded within image files. Using both invisible watermarking and metadata in this way improves both the robustness of these invisible markers and helps other platforms identify them.”
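As a rough illustration of how such a label can be inspected, the sketch below shells out to ExifTool (assuming it is installed locally). The XMP-iptcExt DigitalSourceType tag name and the compositeSynthetic value reflect my reading of the IPTC extension schema mentioned above, so treat them as assumptions rather than documented requirements from Google or Meta.

```python
import json
import subprocess

# Read the IPTC/XMP digital source type from an image via ExifTool (assumed
# to be installed). The tag name and vocabulary value are assumptions based
# on the IPTC extension schema, e.g. "compositeSynthetic" for AI composites.
def read_digital_source_type(image_path: str):
    result = subprocess.run(
        ["exiftool", "-json", "-XMP-iptcExt:DigitalSourceType", image_path],
        capture_output=True,
        text=True,
        check=True,
    )
    metadata = json.loads(result.stdout)[0]
    return metadata.get("DigitalSourceType")  # None if the image carries no label

print(read_digital_source_type("generated-image.jpg"))
```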
The downsides of AI content are small when the content looks like AI. But when AI content looks real, we need labels.
While advertisers try to get away from the AI look, content platforms prefer it because it’s easy to recognize.
For commercial artists and advertisers, generative AI has the power to massively speed up the creative process and deliver personalized ads to customers on a large scale – something of a holy grail in the marketing world. But there’s a catch: Many images AI models generate feature cartoonish smoothness, telltale flaws, or both.
Consumers are already turning against “the AI look,” so much so that an uncanny and cinematic Super Bowl ad for Christian charity He Gets Us was accused of being born from AI –even though a photographer created its images.
YouTube started enforcing new guidelines for video creators that say realistic-looking AI content needs to be labeled.
Challenges posed by generative AI have been an ongoing area of focus for YouTube, but we know AI introduces new risks that bad actors may try to exploit during an election. AI can be used to generate content that has the potential to mislead viewers – particularly if they’re unaware that the video has been altered or is synthetically created. To better address this concern and inform viewers when the content they’re watching is altered or synthetic, we’ll start to introduce the following updates:
Creator Disclosure: Creators will be required to disclose when they’ve created altered or synthetic content that’s realistic, including using AI tools. This will include election content.
Labeling: We’ll label realistic altered or synthetic election content that doesn’t violate our policies, to clearly indicate for viewers that some of the content was altered or synthetic. For elections, this label will be displayed in both the video player and the video description, and will surface regardless of the creator, political viewpoints, or language.
The biggest imminent fear is fake AI content that could influence the 2024 U.S. presidential election.
No platform wants to be the Facebook of 2016, which saw lasting reputational damage that impacted its stock price.
Chinese and Russian state actors have already experimented with fake AI news and tried to meddle with the Taiwanese and upcoming U.S. elections.
Now that OpenAI is close to releasing Sora, which creates hyperrealistic videos from prompts, it’s not a far jump to imagine how AI videos can cause problems without strict labeling. The situation is tough to get under control. Google Books already features books that were clearly written with or by ChatGPT.
Image Credit: Kevin Indig
Takeaway
Labels, whether mental or visual, influence our decisions. They annotate the world for us and have the power to create or destroy trust. Like category heuristics in shopping, labels simplify our decision-making and information filtering.
Lastly, the idea of category heuristics – the numbers customers focus on to simplify decision-making, like megapixels for cameras – offers a path to optimizing for user behavior. An ecommerce store selling cameras, for example, should optimize its product cards to visually prioritize category heuristics. Granted, you first need to gain an understanding of the heuristics in your categories, and they might vary based on the product you sell. I guess that’s what it takes to be successful in SEO these days.
Soon, labels will tell us when content is written by AI or not. In a public survey of 23,000 respondents, Meta found that 82% of people want labels on AI content. Whether common standards and punishments work remains to be seen, but the urgency is there.
There is also an opportunity here: Labels could shine a spotlight on human writers and make their content more valuable, depending on how good AI content becomes.
On top, writing for AI could be another way to monetize content. While current hourly rates don’t make anyone rich, model training adds new value to content. Content platforms could find new revenue streams.
Web content has become extremely commercialized, but AI licensing could incentivize writers to create good content again and untie themselves from affiliate or advertising income.
Sometimes, the contrast makes value visible. Maybe AI can make the web better after all.
A student and researcher who leaks hidden Android features discovered a setting deep within the Android root files that enables Google Gemini directly from Google Search in a way that resembles Apple iOS. The discovery raises questions about why that setting is there and whether it could be connected to a general rollout of AI in search rumored to be happening in May 2024.
Gemini: What SEO Could Be Up Against
There are only rumors that some form of AI search will be rolled out. But if Google rolls out Gemini access as a standard feature then the following gives an idea of what the search community would have to look forward to.
Gemini is Google’s most powerful AI model that contains advanced training, technology and features that in many ways go far beyond existing models.
For example, Gemini is the first AI model to be natively trained to be multimodal. Multimodal means the ability to work with images, text, video, and audio and pull knowledge from each of the different forms of media. All previous AI models were trained to be multimodal with separate components that were then joined together. According to Google, the old way of training for multimodality didn’t work well for complex reasoning tasks. Gemini, however, is pre-trained with multimodality, which enables it to have complex reasoning abilities that exceed those of all previous models.
Another example of the advanced capabilities of Gemini is the unprecedented scale of the context window. A context window is the amount of data a language model can consider simultaneously in order to make a decision. The context window is one measure of how powerful the language model is. Context windows are measured in “tokens,” which represent the smallest units of information.
Comparison Of Context Windows
ChatGPT has a maximum context window of 32k tokens.
GPT-4 Turbo has a context window of 128k tokens.
Gemini 1.5 Pro has a context window of one million tokens.
To put that into perspective, Gemini’s context window allows it to process the entire text of the three Lord of the Rings books or ten hours of video and answer questions about that content. In comparison, OpenAI’s largest context window of 128k tokens can hold the 198-page Robinson Crusoe or approximately 1,600 tweets.
Internal Google research has shown that its technology enables context windows of up to 10 million tokens.
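To get a feel for what these numbers mean in practice, here is a minimal sketch that counts how many tokens a document consumes using OpenAI’s tiktoken library. Gemini uses its own tokenizer, so treat the result as a rough approximation, and the file path is only a placeholder.

```python
# Rough token-count check: will this document fit in a given context window?
# Uses OpenAI's tiktoken library (pip install tiktoken). Gemini tokenizes
# differently, so the count is only an approximation for comparison purposes.
import tiktoken


def count_tokens(text: str, encoding_name: str = "cl100k_base") -> int:
    """Return the number of tokens the text occupies under the given encoding."""
    encoding = tiktoken.get_encoding(encoding_name)
    return len(encoding.encode(text))


if __name__ == "__main__":
    # Placeholder path: point this at any long text file you want to measure.
    with open("lord_of_the_rings.txt", encoding="utf-8") as f:
        document = f.read()

    tokens = count_tokens(document)
    for model, window in [("ChatGPT (32k)", 32_000),
                          ("GPT-4 Turbo (128k)", 128_000),
                          ("Gemini 1.5 Pro (1M)", 1_000_000)]:
        fits = "fits" if tokens <= window else "does not fit"
        print(f"{model}: {tokens:,} tokens {fits} in a {window:,}-token window")
```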
Leaked Functionality Resembles iOS Implementation
What was discovered is that Android contains a way to access Gemini AI directly from the search bar in the Google app, in the same way it’s available on Apple mobile devices.
The official directions for the Apple device mirror the functionality that the researcher discovered hidden in Android.
“On iPhones, you can chat with Gemini in the Google app. With a tap of the Gemini tab, unlock a whole new way to learn, create images and get help while you’re on the go. Interact with it through text, voice, images, and your camera to get help in new ways.”
The researcher who leaked the Gemini functionality discovered it hidden within Android. Enabling it causes a toggle to appear in the Google search bar that lets users swipe to access Gemini AI directly, exactly the same way as on iOS.
Enabling this functionality requires rooting an Android phone, which means gaining access to the operating system’s files at the most fundamental level.
According to the person who leaked the information, one of the requirements for the toggle is that Gemini should already be enabled as the mobile assistant. An app called GMS Flags must also be installed in order to obtain the ability to toggle Google app features on and off.
The requirements are:
“Required things –
Rooted devices running Android 12+
Google App latest beta version from Play Store or Apkmirror
GMS Flags app installed with root permission granted. (GitHub)
Gemini should be available for you already in your Google app.”
Screenshot Of New Search Toggle
Screenshot: the new toggle button in the Google search bar, highlighted with a red arrow.
Screenshot Of Gemini Activated In Google Search
The person who uncovered this functionality tweeted:
“Google app for Android to soon get toggle to switch between Gemini and Search [just like on iOS]”
There have been rumors that Google is set to announce the official rollout of Google Search Generative Experience at the May 2024 I/O conference where Google regularly announces new features coming to search (among other announcements).
Eli Schwartz recently posted on LinkedIn about the rumored SGE rollout:
“That date did not come from Google PR; however, as of last week, that is the current planned launch date internally. Of course, the timeline could still change, given that it’s still 53 days away. Throughout the last year, multiple launch dates have been missed.
…Also, it’s important to elaborate on what exactly “launch” means.
Right now, the only way to see SGE, unless you’re in the beta experiment, is if you’re opted into the labs.
Launching means that they’ll show SGE to people who have not opted in, but the scale of that could vary widely.”
It’s unknown if this hidden toggle is a placeholder for a future version of the Google search app or if it’s something that enables the rollout of SGE at a future date.
However, the hidden toggle does offer a possible clue for those curious about how Google may roll out an AI-based front end to search and whether this setting connects to that functionality in some way.
In January, Google laid off hundreds of workers as it shifted its investments and focus to AI development. The tech giant is not alone; brands like UPS and Duolingo, to name a few, are doing the same thing.
Is this a new trend, or is it something to be really concerned about?
Let’s explore how AI is unlikely to replace SEO specialists completely, but it will certainly transform how we work.
A Closer Look At How AI Is Transforming The Field Of SEO
Before AI went mainstream, much SEO work was manual, and certain tasks took considerable time to perform.
For example, optimizing a landing page could take thirty minutes to a couple of hours, depending on your experience and skill level.
Producing a content strategy took a good amount of time (i.e., a week or more), depending on the site, competition, search engine results pages (SERPs), etc. But now, with AI, SEO pros can do things more quickly and efficiently.
Here’s how AI can help us become more efficient, while also acknowledging its limitations. A humanized approach that incorporates AI where appropriate is a win-win.
Enhancement Of Tools To Drive Better Efficiency
AI has definitely enhanced some of the tools we use to perform our jobs, making tasks like keyword research, competitor analysis, and content optimization more efficient and effective.
AI algorithms can process copious amounts of data faster than humans, providing insights that can inform our SEO strategies.
For example, AI tools can help SEO specialists discover new keyword opportunities, analyze the performance of their content, and identify gaps and areas for improvement more quickly and easily than before.
AI tools can also automate some tedious and repetitive tasks that SEO specialists perform, such as generating titles and metadata, checking for broken links, optimizing images, finding the semantic relationships between keywords, identifying search trends, and predicting user behavior.
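As one illustration, here is a minimal sketch of automating one of those repetitive tasks, checking a list of URLs for broken links, using Python’s requests library. The URLs are placeholders, and a production crawler would also need rate limiting, retries, and robots.txt handling.

```python
# Minimal broken-link check: request each URL and flag non-2xx/3xx responses.
# Placeholder URLs; a production crawler would add rate limiting, retries,
# and respect for robots.txt.
import requests

URLS = [
    "https://example.com/",
    "https://example.com/old-blog-post",  # hypothetical page that may 404
]


def check_links(urls: list[str]) -> list[tuple[str, str]]:
    """Return (url, status) pairs, where status is the HTTP code or an error name."""
    results = []
    for url in urls:
        try:
            response = requests.head(url, allow_redirects=True, timeout=10)
            results.append((url, str(response.status_code)))
        except requests.RequestException as exc:
            results.append((url, f"error: {exc.__class__.__name__}"))
    return results


if __name__ == "__main__":
    for url, status in check_links(URLS):
        flag = "OK" if status.startswith(("2", "3")) else "BROKEN"
        print(f"{flag:6} {status:>6}  {url}")
```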
Content Creation And Optimization
One of the biggest benefits I have seen with AI is that it is particularly good at ideating content topics and even helping to draft content.
However, human oversight is crucial to ensure the content remains high-quality, accurate, and relevant to users while adhering to brand voice and E-E-A-T principles.
AI tools can help SEO specialists generate content ideas based on user intent, search trends, and competitor analysis. They can also help provide suggestions for headlines, subheadings, images, content briefs, and links.
Humans must still create and review content to avoid potential legal and ethical issues, negative PR outcomes, and factual inaccuracies. With the March update, Google took aim at “scaled content abuse” and applied manual actions to many websites producing a large amount of AI content without human input.
SEO and content editors still need to review, edit, and approve any output from generative AI tools to ensure that it meets the expectations and needs of their target audience.
You can’t just take the content from your AI platform without making it useful, relevant, and factual, and hope it will rank, because it probably won’t, especially for competitive phrases.
Changing The SEO Landscape
With the rise of AI, and with AI powering Google’s Search Generative Experience (SGE), SEO could go through one of the biggest changes the industry has ever seen.
As search engines increasingly use AI to refine their algorithms, SEO specialists need to adapt their strategies. AI can help them stay ahead of these changes by predicting trends and identifying new optimization opportunities, such as SGE snippets.
For example, AI tools can help SEO specialists not only monitor and analyze the impact of algorithm updates, but also provide recommendations for adjusting SEO tactics accordingly. They can also help leverage new features and formats that search engines introduce, such as SGE featured snippets.
By leveraging AI tools, SEO specialists can optimize content for these new formats, increasing their chances of ranking higher and attracting more qualified traffic to their clients and their own sites. This success hinges on interpreting the data and putting together a winning SEO strategy.
Human Insight And Creativity
Despite the advancements in AI, human insight and creativity remain essential. Understanding audience needs, crafting compelling messages, and strategic thinking are areas where humans excel and are critical in SEO.
AI tools can provide data and insights but cannot replace the human ability to interpret and apply them.
SEO specialists still need to use their judgment and experience to decide which SEO strategies and tactics are best suited for their goals and context.
They also need to use their creativity and storytelling skills to create content that engages and persuades their audience and builds trust and loyalty.
AI tools cannot replicate the human emotion and connection vital for a successful SEO strategy.
Ethical Considerations And Best Practices
AI tools must be used responsibly and in accordance with search engine guidelines. SEO specialists play a key role in ensuring the ethical use of AI and adherence to best practices to avoid penalties.
As SEO professionals, we need to be aware of the potential risks and challenges of using AI tools, such as data privacy, bias, and quality issues. We also must ensure that the data we use and the content we generate with AI tools are accurate, relevant, and trustworthy.
AI’s Enhancements And Boundaries In SEO
AI has certainly made it easier and more efficient to complete SEO tasks, such as on-page optimization and coding, which frees up some of our time to work on strategic growth opportunities.
These advancements are not perfect and do have some limitations, including:
Dependence on training data. AI is trained on pre-existing information and data, and it cannot innovate beyond what it has already seen.
A lack of human experience and wisdom. AI cannot match the nuanced understanding and contextual insight that experienced SEO professionals bring.
Requirement for direct inputs. AI’s effectiveness is contingent on the quality of the inputs it receives, and it can struggle with subtle strategy shifts that we humans can easily navigate.
Wrapping Up
AI will continue to become an invaluable tool for SEO specialists, but it won’t replace the need for human expertise, creativity, and strategic thinking.
The role of SEO specialists will evolve, with a greater emphasis on managing and interpreting AI-generated data and insights – and less on manual and repetitive tasks that the machines can now do with human oversight.
SEO specialists who actively learn and embrace AI with a human-centric approach to refine their skill sets will gain a competitive edge and a brighter future in the SEO industry.