OpenAI Releases ChatGPT o1, ‘World’s Smartest Language Model’ via @sejournal, @martinibuster

Today OpenAI rolled out what Sam Altman calls the world’s smartest language model, plus a brand-new Pro tier that comes with unlimited usage and a higher level of computing resources.

OpenAI ChatGPT o1 Model

Sam Altman announced on X (formerly Twitter) that their new AI model is now live and available in ChatGPT right now and will be arriving to the API soon.

He tweeted:

“o1, the smartest model in the world. smarter, faster, and more features (eg multimodality) than o1-preview. live in chatgpt now, coming to api soon.

chatgpt pro. $200/month. unlimited usage and even-smarter mode for using o1. more benefits to come!”

Screenshot Of ChatGPT o1 Model Availability

ChatGPT Pro Mode $200/Month

ChatGPT Pro Mode is a new tier that has more “thinking power” than the standard version of o1, which increases its reliability. Answers in Pro mode take longer to generate, displaying a progress bar and triggering an in-app notification if the user navigates to a different conversation.

OpenAI describes the new ChatGPT Pro Mode:

“ChatGPT Pro provides access to a version of our most intelligent model that thinks longer for the most reliable responses. In evaluations from external expert testers, o1 pro mode produces more reliably accurate and comprehensive responses, especially in areas like data science, programming, and case law analysis.

Compared to both o1 and o1-preview, o1 pro mode performs better on challenging ML benchmarks across math, science, and coding.”

The new tier is not a price increase from the regular plan, which is called Plus. It’s an entirely new plan called Pro.

OpenAI’s new o1 Pro plan provides unlimited access to its new o1 model, along with o1-mini, GPT-4o, and Advanced Voice. It also includes o1 Pro Mode, which has access to increased computational power to generate more refined and insightful responses to complex queries.

Read more about OpenAI’s new Pro plan and o1 model:

Introducing ChatGPT Pro

Featured Image by Shutterstock/One Artist

ChatGPT Search Shows 76.5% Error Rate In Attribution Study via @sejournal, @MattGSouthern

OpenAI’s ChatGPT Search is struggling to accurately cite news publishers, according to a study by Columbia University’s Tow Center for Digital Journalism.

The report found frequent misquotes and incorrect attributions, raising concerns among publishers about brand visibility and control over their content.

Additionally, the findings challenge OpenAI’s commitment to responsible AI development in journalism.

Background On ChatGPT Search

OpenAI launched ChatGPT Search last month, claiming it collaborated extensively with the news industry and incorporated publisher feedback.

This contrasts with the original 2022 rollout of ChatGPT, where publishers discovered their content had been used to train the AI models without notice or consent.

Now, OpenAI allows publishers to specify via the robots.txt file whether they want to be included in ChatGPT Search results.

However, the Tow Center’s findings suggest publishers face the risk of misattribution and misrepresentation regardless of their participation choice.

Accuracy Issues

The Tow Center evaluated ChatGPT Search’s ability to identify sources of quotes from 20 publications.

Key findings include:

  • Of 200 queries, 153 responses were incorrect.
  • The AI rarely acknowledged its mistakes.
  • Phrases like “possibly” were used in only seven responses.

ChatGPT often prioritized pleasing users over accuracy, which could mislead readers and harm publisher reputations.

Additionally, researchers found ChatGPT Search is inconsistent when asked the same question multiple times, likely due to the randomness baked into its language model.

Citing Copied & Syndicated Content

Researchers found that ChatGPT Search sometimes cites copied or syndicated articles instead of original sources.

This is likely due to publisher restrictions or system limitations.

For example, when asked for a quote from a New York Times article (currently involved in a lawsuit against OpenAI and blocking its crawlers), ChatGPT linked to an unauthorized version on another site.

Even with MIT Technology Review, which allows OpenAI’s crawlers, the chatbot cited a syndicated copy rather than the original.

The Tow Center found that all publishers risk misrepresentation by ChatGPT Search:

  • Enabling crawlers doesn’t guarantee visibility.
  • Blocking crawlers doesn’t prevent content from showing up.

These issues raise concerns about OpenAI’s content filtering and its approach to journalism, which may push people away from original publishers.

OpenAI’s Response

OpenAI responded to the Tow Center’s findings by stating that it supports publishers through clear attribution and helps users discover content with summaries, quotes, and links.

An OpenAI spokesperson stated:

“We support publishers and creators by helping 250M weekly ChatGPT users discover quality content through summaries, quotes, clear links, and attribution. We’ve collaborated with partners to improve in-line citation accuracy and respect publisher preferences, including enabling how they appear in search by managing OAI-SearchBot in their robots.txt. We’ll keep enhancing search results.”

While the company has worked to improve citation accuracy, OpenAI says it’s difficult to address specific misattribution issues.

OpenAI remains committed to improving its search product.

Looking Ahead

If OpenAI wants to collaborate with the news industry, it should ensure publisher content is represented accurately in ChatGPT Search.

Publishers currently have limited power and are closely watching legal cases against OpenAI. Outcomes could impact content usage rights and give publishers more control.

As generative search products like ChatGPT change how people engage with news, OpenAI must demonstrate a commitment to responsible journalism to earn user trust.


Featured Image: Robert Way/Shutterstock

Coca-Cola’s AI Holiday Campaign Fails To Engage Viewers Emotionally via @sejournal, @gregjarboe

Decision-makers at brands and agencies know that the new AI-generated holiday ads from Coca-Cola have attracted a lot of criticism.

Critics have described the three new AI versions of the classic “Holidays Are Coming” campaign as “a soulless and creepy, dystopian nightmare” and “the biggest branding blunder of the year,” with others saying the AI campaign “destroyed the spirit of Christmas” and “earns Coca-Cola a lump of coal.”

Strong words. But has Manuel “Manolo” Arroyo, the executive vice president and global chief marketing officer for the company, just made a career-damaging move?

In DAIVID’s global testing of 90 Christmas ads, none of Coke’s new AI-generated holiday campaigns made the top 30 most effective holiday campaigns of 2024.

Watch the new AI-generated holiday ads, which were created by three different ad agencies, and form your own opinion.

Secret Santa

Secret Level created “Coca-Cola – Secret Santa (AI-Generated Christmas Ad 2024).”

Holidays Are Coming

Silverside created “Coca Cola – Holidays Are Coming.”

Unexpected Santa

Wildcard created “Coca-Cola – Unexpected Santa (AI-Generated Christmas Ad 2024).”

Holidays Are Coming 2020

While you’re reviewing these new versions, you should also watch the version that was uploaded to Coca-Cola Great Britain & Ireland’s YouTube channel back in 2020.

How Do Coke’s New AI Versions Compare To The Classic 2020 Ad?

What do you notice? What do you wonder?

Attention

All the new AI versions generated above-average attention from the start.

However, the classic version, which starts with a boy ringing a bell, captures more attention than any of the AI versions, which mostly start with shots of snowy landscapes.

People will generally attract more attention than images of trees and lakes.

Prevalence Of Intense Emotions

According to testing by DAIVID, none of the AI ads generate the same levels of intense positive emotions as the 2020 version, and all of them are below the industry average.

The 2020 version generates almost twice as much warmth as the norm, while the AI versions are level with or only slightly above it.

The AI version that generated the most warmth was still 38% less likely to make people feel warmth than the 2020 version.

The AI versions were less relatable and less – for want of a better word – real.

Brand Recall

All of the new AI versions predictably scored above the industry average for correct brand recall.

This is not surprising, considering that people know the ad well, and the brand is present throughout and integral to the storyline (Coke Trucks).

The classic scores higher than the AI versions, though. This, again, is possibly due to the familiarity of the ad, but also the fact that the famous “Holidays Are Coming” track kicks in much quicker.

Next Step Intents

One emotion for which the AI versions consistently scored higher than the 2020 ad is craving. All are around two to three times higher than average.

This is probably due to the close-ups of someone opening a cold bottle of Coke, which wasn’t included in the 2020 version.

What Was The Most Effective AI Version?

Ian Forrester of DAIVID reported:

“The AI versions of Coke’s classic ‘Holidays Are Coming’ campaign were strong for attention in the first second and brand recall, but were let down by their evocation of intense positive emotions, which were all below the industry norm.

The difference between the AI and the original was most stark in their evocation of warmth, a mainstay of Christmas advertising. The original evoked intense warmth among 33.0% of viewers, whereas the AI versions were significantly below this.

So, while the AI is producing images which on the face of it seem cute and heart-warming, the human viewer to some degree discerns their synthetic nature, which detracts from their impact.”

How Can Brands Avoid AI Negative Backlash?

After analyzing the data published by DAIVID, I reached out directly and spoke to their Chief Growth Officer, Barney Worfolk-Smith:

GJ: Why does AI have such a negative perception?

BWS: It’s not surprising that the use of generative AI, especially jazzing up familiar Christmas traditions like Coke’s truck, garners some negative opinions.

As the introduction of generative AI into processes is nascent and messy at best, none of us really know exactly how it will play out.

So, some in the advertising community who feel a sense of ominous threat will instantly adopt a negative stance. I don’t blame them, but the reality is, the toothpaste is out of the tube, so we should all have a hand on the wheel of a human-AI hybrid Christmas Coke truck to have a stake in the future.

GJ: Can brands navigate carefully to avoid backlash?

BWS: Generative AI is present – or at least coming down the chimney – in almost all aspects of advertising. It’s actually incumbent upon brands to try bits of it out.

Sure, it’s going to be bumpy, but the backlashes will frequently be confined to the advertising community.

As a result, as long as they’re doing measured introductory human AI experiments and not dismissing the agency of record, I think they’ll avoid a hit on the share price.

GJ: Why was the original video such a classic?

BWS: The original was a glorious confluence: strong, familiar emotions, which Coca-Cola evokes generally, the shared history of Santa and Coca-Cola’s colors, and a palpable, relatable sense of anticipation that even the “Grinchiest” of us feel in the run-up to Christmas.

GJ: Why has AI failed to replicate the success of the first campaign?

BWS: At DAIVID, we understand the importance emotions play in advertising effectiveness – and the AI versions all garnered below-average U.S. positive emotional responses.

Without a doubt, the uncanny valley plays a part here, especially with an advert that is so recognizable to so many of us.

GJ: What must marketers do when using AI in video or images?

BWS: Marketers need to take their eyes off the spreadsheet and on to the creative process.

Of course, AI can drive efficiencies, but it can also open up new avenues of creativity, and that will happen when creatives are empowered to use AI, not be threatened with it.

Embrace AI Cautiously In Holiday Ads

Holiday ads are notoriously tricky to navigate: striking the right sentiment is hard, and even the best intentions often miss the mark.

Feelings of warmth and nostalgia are at the heart of the festive season. Perhaps AI just can’t replicate the nuance of human emotion – or more likely, humans don’t like the idea of AI trying to replicate that.

Coca-Cola’s new ads emphasize the challenge brands face in cultivating emotional authenticity with their audience as AI becomes more integrated into advertising campaigns.

It reminds us to embrace AI cautiously while upholding the human elements that underpin marketing campaigns – holiday ads, in particular.


Methodology

DAIVID used its AI-powered platform that predicts the emotions an ad will generate, and its likely impact on brand and business metrics – enabling advertisers to measure the effectiveness of their ad campaigns at scale.

They tested 90 Christmas ads for 39 different emotions. The strength of emotions people feel is ranked from 1-10, with 8-10 considered “intense.” Data for the chart was compiled at 7:00 AM on November 15, 2024. 



Featured Image: Evgeny Karandaev/Shutterstock

GraphRAG Update Improves AI Search Results via @sejournal, @martinibuster

Microsoft announced an update to GraphRAG that improves AI search engines’ ability to provide specific and comprehensive answers while using fewer resources. The update speeds up LLM processing and increases accuracy.

The Difference Between RAG And GraphRAG

RAG (Retrieval Augmented Generation) combines a large language model (LLM) with a search index (or database) to generate responses to search queries. The search index grounds the language model with fresh and relevant data. This reduces the possibility of an AI search engine providing outdated or hallucinated answers.
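
As a rough illustration, here is a minimal sketch of that retrieve-then-generate loop in Python. The search_index retriever is hypothetical, and the model name and prompts are illustrative placeholders, not OpenAI’s or Microsoft’s actual setup:

    # Minimal RAG sketch: retrieve grounding documents, then generate.
    # search_index.query() is a hypothetical retriever; the model name
    # and prompts are illustrative placeholders.
    from openai import OpenAI

    client = OpenAI()

    def answer_with_rag(question: str, search_index) -> str:
        # 1. Retrieval: pull the top matching documents for the query.
        docs = search_index.query(question, top_k=5)
        context = "\n\n".join(doc.text for doc in docs)

        # 2. Generation: ground the LLM's answer in the retrieved text.
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=[
                {"role": "system",
                 "content": "Answer using only the provided context."},
                {"role": "user",
                 "content": f"Context:\n{context}\n\nQuestion: {question}"},
            ],
        )
        return response.choices[0].message.content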

GraphRAG improves on RAG by using a knowledge graph created from a search index to then generate summaries referred to as community reports.

GraphRAG Uses A Two-Step Process:

Step 1: Indexing Engine
The indexing engine segments the search index into thematic communities formed around related topics. These communities are connected by entities (e.g., people, places, or concepts) and the relationships between them, forming a hierarchical knowledge graph. The LLM then creates a summary for each community, referred to as a Community Report, with each level of the hierarchy representing a summarization.

There’s a misconception that GraphRAG simply uses existing knowledge graphs. That leaves out the most important part: GraphRAG creates knowledge graphs from unstructured data, like web pages, during the Indexing Engine step. This process of transforming raw data into structured knowledge is what sets GraphRAG apart from RAG, which retrieves and summarizes information without building a hierarchical graph.
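
To make those steps concrete, here is a toy sketch of the indexing flow; every helper here is hypothetical, standing in for the extraction, clustering, and summarization stages the announcement describes:

    # Toy sketch of the GraphRAG indexing step (all helpers hypothetical):
    # unstructured documents -> entity/relationship graph -> community
    # hierarchy -> one LLM-written report per community.
    def build_community_reports(documents, llm):
        graph = extract_entities_and_relationships(documents)  # LLM extraction
        hierarchy = detect_communities(graph)  # hierarchical clustering
        reports = {}
        for community in hierarchy.all_communities():
            # Each level of the hierarchy gets its own summary.
            reports[community.id] = llm.summarize(community.members())
        return hierarchy, reports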

Step 2: Query Step
In the second step, GraphRAG uses the knowledge graph it created to provide context to the LLM so that it can answer a question more accurately.

Microsoft explains that Retrieval Augmented Generation (RAG) struggles to retrieve information that’s based on a topic because it only looks at semantic relationships.

GraphRAG outperforms RAG by first transforming all documents in its search index into a knowledge graph that hierarchically organizes topics and subtopics (themes) into increasingly specific layers. While RAG relies on semantic relationships to find answers, GraphRAG uses thematic similarity, enabling it to locate answers even when semantically related keywords are absent in the document.

This is how the original GraphRAG announcement explains it:

“Baseline RAG struggles with queries that require aggregation of information across the dataset to compose an answer. Queries such as “What are the top 5 themes in the data?” perform terribly because baseline RAG relies on a vector search of semantically similar text content within the dataset. There is nothing in the query to direct it to the correct information.

However, with GraphRAG we can answer such questions, because the structure of the LLM-generated knowledge graph tells us about the structure (and thus themes) of the dataset as a whole. This allows the private dataset to be organized into meaningful semantic clusters that are pre-summarized. The LLM uses these clusters to summarize these themes when responding to a user query.”

Update To GraphRAG

To recap, GraphRAG creates a knowledge graph from the search index. A “community” refers to a group of related segments or documents clustered based on topical similarity, and a “community report” is the summary generated by the LLM for each community.

The original version of GraphRAG was inefficient because it processed all community reports, including irrelevant lower-level summaries, regardless of their relevance to the search query. Microsoft describes this as a “static” approach since it lacks dynamic filtering.

The updated GraphRAG introduces “dynamic community selection,” which evaluates the relevance of each community report. Irrelevant reports and their sub-communities are removed, improving efficiency and precision by focusing only on relevant information.

Microsoft explains:

“Here, we introduce dynamic community selection to the global search algorithm, which leverages the knowledge graph structure of the indexed dataset. Starting from the root of the knowledge graph, we use an LLM to rate how relevant a community report is in answering the user question. If the report is deemed irrelevant, we simply remove it and its nodes (or sub-communities) from the search process. On the other hand, if the report is deemed relevant, we then traverse down its child nodes and repeat the operation. Finally, only relevant reports are passed to the map-reduce operation to generate the response to the user.”
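
Microsoft’s description maps naturally onto a recursive traversal of the community hierarchy. Here is a minimal sketch, assuming hypothetical rate_relevance, report, and children accessors:

    # Sketch of dynamic community selection (helper names hypothetical):
    # rate each report from the root down, prune irrelevant branches, and
    # keep only relevant reports for the final map-reduce step.
    def select_relevant_reports(community, question, llm):
        relevant = []
        # The LLM rates whether this community's report helps answer the question.
        if not llm.rate_relevance(community.report, question):
            return relevant  # prune this report and all of its sub-communities
        relevant.append(community.report)
        for child in community.children:
            relevant.extend(select_relevant_reports(child, question, llm))
        return relevant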

Takeaways: Results Of Updated GraphRAG

Microsoft tested the new version of GraphRAG and concluded that it resulted in a 77% reduction in computational costs, specifically the token cost when processed by the LLM. Tokens are the basic units of text that are processed by LLMs. The improved GraphRAG is able to use a smaller LLM, further reducing costs without compromising the quality of the results.

The positive impacts on search results quality are:

  • Dynamic search provides responses that contain more specific information.
  • Responses make more references to source material, which improves the credibility of the responses.
  • Results are more comprehensive and specific to the user’s query, which helps avoid offering too much information.

Dynamic community selection in GraphRAG improves search results quality by generating responses that are more specific, relevant, and supported by source material.

Read Microsoft’s announcement:

GraphRAG: Improving global search via dynamic community selection

Featured Image by Shutterstock/N Universe

Microsoft’s AI SEO Tips: New Guidance For AI Search Optimization via @sejournal, @MattGSouthern

Microsoft has provided guidance on how to optimize content for AI-powered search engines.

This advice is timely now that OpenAI has launched ChatGPT Search, which uses Bing’s search index.

Understanding user intent is everything in this new era of search, Microsoft says:

“In the past, digital marketing strategies often relied heavily on demographic data and broad customer segments. But in this era of generative AI, the focus now shifts from who the customer is to what they are looking for—in real-time.”

Microsoft explains several ways websites can optimize content for AI-powered search.

AI SEO Recommendations

Intent-Based Content

Content should address the underlying purpose of user queries, Microsoft says:

“Focus on the intent behind the search query rather than just the keywords themselves. For example, if based on your keyword research, you find that users are searching for “how to choose eco-friendly coffee makers,” provide detailed, step-by-step guides rather than just general information.”

Natural Language Processing (NLP)

Websites should leverage NLP techniques to align content with how AI systems process and understand language.

Microsoft states:

“Generative engines, such as Bing Generative Search, deliver content to searchers by understanding and generating human language through Natural Language Processing (NLP). By analyzing vast amounts of text data to learn language patterns, context, and semantics, they’re able to provide relevant and accurate responses to user queries.”

Additionally, Microsoft emphasized the following sentence in italics:

“Leveraging these same NLP strategies in creating your content can optimize it to rank higher, increase its relevance, and enhance its authority, ultimately boosting its visibility and effectiveness.”

Strategic Keyword Implementation

To improve your website and landing pages for AI search engines, Microsoft recommends these keyword strategies:

  • Long-tail keywords for specific user interests
  • Conversational phrases matching natural speech patterns
  • Semantic keywords providing contextual relevance
  • Question-based keywords addressing common user queries

Freshness

Microsoft encourages keeping content updated and suggests using the IndexNow protocol to quickly notify search engines about website changes.

This helps maintain search rankings and ensures AI systems have the latest information.

Microsoft states:

“While it can be tempting to set it and forget it, AI systems depend on the latest, freshest information to determine the most relevant content to display to searchers. Regularly updating your content not only helps maintain your rankings but also keeps your audience engaged with current and valuable information. This practice can significantly influence how AI systems perceive and rank your website.”
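
Acting on the IndexNow suggestion can be as simple as a single HTTP request, per the public IndexNow spec. In this sketch, the key and URLs are placeholders, and the matching key file must be hosted on your site:

    # Submit changed URLs via the IndexNow API (key and URLs are placeholders).
    import requests

    payload = {
        "host": "www.example.com",
        "key": "your-indexnow-key",  # matching key file must be hosted on the site
        "urlList": ["https://www.example.com/updated-page"],
    }
    resp = requests.post("https://api.indexnow.org/indexnow", json=payload)
    print(resp.status_code)  # 200/202 indicates the submission was accepted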

Why This Matters

ChatGPT Search now uses Bing’s index, making these optimization strategies vital for websites seeking better visibility in AI-powered searches.

While this can help you create more optimized content, Microsoft acknowledges there’s no “secret sauce” for AI search systems.

How To Get Indexed In ChatGPT Search

Refer to our article on ChatGPT search indexing to ensure your content is indexed in ChatGPT’s real-time search engine.



Featured Image: jomel alos/Shutterstock

ChatGPT Search May Have A Shot At Google via @sejournal, @Kevin_Indig

ChatGPT Search (CGS) is a landmark launch in the shift from traditional to AI Search.

Now, OpenAI competes with Google (Search) head-on. Note the subtle elbow hit between the lines in the announcement:

“Getting useful answers on the web can take a lot of effort. It often requires multiple searches and digging through links to find quality sources and the right information for you.”

The positioning is clear: ChatGPT Search is a way to get a straight answer without digging through cluttered search results or browsing websites.

CGS, which is directly integrated with ChatGPT instead of a standalone search engine, decides whether a query benefits from web results or not, and you can rerun queries through other models like o1 preview to compare the answers:

“ChatGPT will choose to search the web based on what you ask, or you can manually choose to search by clicking the web search icon.”

It keeps the context of your search going in a conversation interface:

“Go deeper with follow-up questions, and ChatGPT will consider the full context of your chat to get a better answer for you.”

ChatGPT Search’s interface features prominent links to sources (Image Credit: Kevin Indig)

OpenAI has a strategic advantage, as I explained in Search GPT:

The Information reports that OpenAI loses $5 billion a year in expenses. Just capturing 3% of Google’s $175 billion Search business would allow OpenAI to recoup expenses.

OpenAI has a strategic advantage over Google: Search GPT can provide a very different, maybe less noisy, user experience than Google because it’s not reliant on ad revenue. In any decision regarding Search, Google needs to take ads into account.

CGS marks the entry into a new paradigm where traditional search engines like Google or Bing compete with AI chatbots.

AI chatbots solve the same problems for users as search engines, but with lower friction. The launch also marks a critical event that should lead you to evaluate your strategy.

Companies face a choice to invest and “be early” to AI Search or ignore the noise and stay the course. What makes this decision hard:

  1. Divided opinions about ChatGPT’s chance to take significant market share from Google.
  2. Rapidly changing mechanisms of AI Search platforms.
  3. Confusion about what to do.

The first search engines didn’t represent the model (Google) that eventually won.

In the same vein, the AI Search experience we’re seeing today might be completely different in a few years. However, there is little doubt that search is fundamentally changing.

As a result, my recommendation is to invest in AI Search. It is not capital-intensive (yet), but the upside to finding a playbook is high.

If CGS grabs significant Google market share, you’re in a good position. If it fails, no harm is done.

Collision Course

Based on recent traffic trends, ChatGPT could catch up to Google in 2 years. (Image Credit: Kevin Indig)

In the chart above, I extrapolated ChatGPT’s and Google’s total traffic over the next two years if the trend from the last six months remains constant.

This chart will probably outrage or scare you, but the chance that events unfold exactly as depicted in this chart is low.

The reason I bring it up here is that many structural changes start slowly, in line with the saying “first gradually, then suddenly.”

It took Google about three to four years to beat Yahoo, Altavista, and Lycos. Given that new technology gets to critical mass ever faster, I’m not surprised ChatGPT could do it faster (in theory).

ChatGPT’s traffic has already passed the No. 3 search engine, Bing (YouTube is second).

When you look at comments and posts on social media, more and more people report using ChatGPT instead of Google for various purposes, but that could be availability bias.

Image Credit: Kevin Indig

One point a lot of people miss when looking at the traffic comparison between ChatGPT and Bing is that they’re not the same, and yet this is a fair comparison.

ChatGPT is more than a search engine. People use it for all sorts of things. But that’s exactly the point: a search engine that looks like Google never stood a chance to compete with Google or Bing.

CGS is something new, and that’s why it stands a chance. So, when you see chatgpt.com passing bing.com, the critical argument is not that both do different things but that they’re used to accomplish the same goal.

After all, search is just a way to solve problems or achieve goals, not to search for the sake of searching.

To clarify, I don’t think Google or Alphabet as a company is at risk of dying. I do think CGS has a chance to capture significant market share, and too many people underestimate how fast this can go.

Referral Traffic Skyrockets

ChatGPT’s outgoing referral traffic is skyrocketing (Image Credit: Kevin Indig)

AI Search marks a new paradigm where users get a direct answer without having to browse websites. So, how should companies think about pivoting their strategy?

Here’s what I’m telling my clients when they ask me whether they should pivot their SEO roadmap: For now, no. Reserve 10 to 20% of capacity to establish visibility in AI Search and for experimentation.

Look for signal: If you’re hesitant to invest more in AI Search right now, at least monitor traffic to and from ChatGPT. Base your decision on how long ChatGPT can keep its current traffic trend up.

Establishing visibility: This referral dashboard from Flow Agency is great for monitoring referral traffic.

With a few tweaks, you can monitor conversions in GA4 as well. You should also monitor site crawls from LLMs and your performance on Bing.

Then, experiment with content tweaks to improve your AI Search visibility. Keep investing in traditional SEO because it forms the basis of AI Search and answers.

Place a bet: The big question in this is whether you’re willing to take a bet or play it safe.

Being a first mover in SEO had massive benefits, as the incumbents tend to stay incumbents, mainly thanks to strong backlink profiles, robust user signals, and brand familiarity.

For now, ChatGPT uses Bing search results to ground and weigh answers, which means sites with strong visibility on Bing also have a high chance of being very visible in CGS.

However, there is a chance that using Search for RAG (grounding) is just a jumping-off point until AI Search platforms have gathered enough of their own data (queries and user behavior).

Early in this transition period, not much changes. Content that ranks well in traditional search engines, specifically Bing, gets a higher weighting in CGS, which means traditional SEO has a big impact on visibility in AI Search.

AI Chatbot referral traffic is skyrocketing, and ChatGPT’s new search capability could accelerate that growth even more.

Outgoing referral traffic from chatgpt.com increased massively in August and September, according to Similarweb.

Image Credit: Kevin Indig

Noticeable call-outs:

  • YouTube’s referral traffic increased from 0.17% in July to 3.9% in September.
  • Bing grew from 0% in April to 1.8% in September.
  • Amazon grew from 0% in July to 1.1% in September.

If referral traffic keeps growing at the same rate, it will get interesting in the next six to 12 months. It’s not just the volume but also the quality of traffic.

People use longer and more complex questions when they engage with AI answers, according to Sundar Pichai. Length is a way to be more specific.

Longer questions allow search engines, LLMs, and marketers to better understand and serve users what they want.

Based on conversations and observations, referral traffic from AI chatbots isn’t higher quality than search traffic in every case, but it is in most.

Looking Forward

I’m leaving you with two interesting questions:

1. Is it a coincidence that ChatGPT Search came out three days after Apple Intelligence launched publicly?

Apple launched Apple Intelligence, which uses ChatGPT in certain situations:

“Apple is integrating ChatGPT access into experiences within iOS 18, iPadOS 18, and macOS Sequoia, allowing users to access its expertise — as well as its image- and document-understanding capabilities — without needing to jump between tools. Siri can tap into ChatGPT’s expertise when helpful.

Users are asked before any questions are sent to ChatGPT, along with any documents or photos, and Siri then presents the answer directly.

Additionally, ChatGPT will be available in Apple’s systemwide Writing Tools, which help users generate content for anything they are writing about. With Compose, users can also access ChatGPT image tools to generate images in a wide variety of styles to complement what they are writing.”

We also know how valuable Google’s exclusive search deal with Apple is.

From Monopoly:

Apple’s impact on Google Search is massive. The court documents reveal that 28% of Google searches (US) come from Safari and make up 56% of search volume. Consider that Apple sees 10 billion searches per week across all of its devices, with 8 billion happening on Safari and 2 billion from Siri and Spotlight.

“Google receives only 7.6% of all queries on Apple devices through user-downloaded Chrome” and “10% of its searches on Apple devices through the Google Search App (GSA).” Google would take a big hit without the exclusive agreement with Apple.

Since Search is part of ChatGPT, any API request could trigger the new Search feature.

As a result, ChatGPT has a direct line to searches and actions on Apple devices whenever Apple Intelligence uses ChatGPT. Is that integration the new version of Google’s deal with Apple?

I speculated that OpenAI could work on a browser in Search GPT:

If the main benefit of Search GPT for OpenAI is a revenue stream and access to more user data, the next logical step for OpenAI is to build an (AI-powered) browser.

Browser data is incredibly valuable for understanding user behavior, personalization and LLM training. Best of all, it’s app-agnostic, so OpenAI could learn from users even when they use Perplexity or Google.

We’ve seen the power of browser data in the Google lawsuit, where it turned out Google relied on Chrome data all along for ranking. The only layer that’s more powerful is the operating system and device layer.

OpenAI seems to be very aware of the importance of being the default when we look at how hard it pushes its Chrome extension, which changes the default browser search engine to ChatGPT.

2. As it’s likely that more users don’t browse the web but get answers from ChatGPT, Gemini, Perplexity, etc. directly, will the open web become a place primarily for bots instead of humans? And how would that change the purpose and look of websites?


1 Introducing ChatGPT search

2 Introducing Apple Intelligence, the personal intelligence system that puts powerful generative models at the core of iPhone, iPad, and Mac


Featured Image: Paulo Bobita/Search Engine Journal

ChatGPT Search Indexing: Essential Steps For Websites via @sejournal, @MattGSouthern

As the availability of ChatGPT Search expands, understanding its indexing mechanics will be vital for digital visibility.

While Bing’s index plays a key role, OpenAI’s system surfaces content using its own crawlers and attribution methods.

Here is a breakdown of the technical requirements for ensuring your website is indexed correctly.

Technical Framework

ChatGPT Search combines Bing’s search index with OpenAI’s proprietary technology.

According to OpenAI’s technical documentation, the platform utilizes a fine-tuned version of GPT-4o, enhanced with synthetic data generation techniques and integration with their o1-preview system.

The platform employs three distinct crawlers, each serving different purposes.

The OAI-SearchBot serves as the primary crawler for search functionality, while ChatGPT-User handles real-time user requests and enables direct interaction with external applications.

The third crawler, GPTBot, manages AI model training and can be blocked without affecting search visibility.

Implementation

Proper indexing begins with robots.txt configuration.

Your website’s robots.txt should specifically allow OAI-SearchBot while maintaining separate permissions for different OpenAI crawlers.
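
A minimal robots.txt along those lines might look like the following. Which crawlers you allow is a policy decision, and the GPTBot rule is optional:

    # Allow OpenAI's search crawler (controls visibility in ChatGPT Search)
    User-agent: OAI-SearchBot
    Allow: /

    # Optionally block the training crawler without affecting search visibility
    User-agent: GPTBot
    Disallow: /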

In addition to this basic configuration, websites must ensure proper indexing by Bing and maintain a clear site architecture.

It’s worth noting that allowing OAI-SearchBot doesn’t automatically mean the content will be used for AI training.

It can take approximately 24 hours for OpenAI’s systems to adjust to new crawling directives after a site’s robots.txt update.

Content Attribution

ChatGPT Search includes several key features for content publishers:

  • Source Attribution: All referenced content includes proper citation
  • Source Sidebar: Provides reference links for verification
  • Multiple Citation Opportunities: A single query can generate multiple source citations
  • Locations: Searches for specific locations will return an interactive map, as shown below.
Image Credit: OpenAI

Additional Considerations

Recent testing has revealed several important factors:

  • Content freshness affects visibility
  • Pages behind paywalls can still be cited
  • URLs returning 404 errors may still appear in citations
  • Multiple pages from the same domain can be referenced in a single response

Recommendations

Indexing in ChatGPT requires ongoing attention to technical health, including regular verification of the robots.txt file and crawler access.

Publishers should prioritize maintaining factual accuracy and up-to-date information while implementing a clear content structure.

This ensures that pages remain accessible across traditional search engines and AI-powered platforms, helping websites achieve broader visibility.


Featured Image: designkida/Shutterstock

OpenAI Reddit AMA And SEO For ChatGPT Search via @sejournal, @martinibuster

CEO Sam Altman and OpenAI executives held a Reddit AMA to answer questions, including those about ChatGPT Search, providing an inside look at how it works. Their answers offer insights into what SEO may look like in the immediate future.

The people from OpenAI answering the questions:

  • Sam Altman, CEO
  • Kevin Weil, Chief Product Officer
  • Mark Chen, SVP of Research
  • Srinivas Narayanan, VP Engineering

Why ChatGPT Search Is Important

ChatGPT Search is not a standalone search engine; it’s an AI chatbot with search. That means it doesn’t compete with Google as a search engine so much as replace it with something people already use for work and play, which now has additional utility as an assistant in daily life and search.

Another advantage of ChatGPT Search is that it doesn’t show advertising, nor does it follow users around the Internet. Users already trust ChatGPT with personal and business information, so it already has goodwill with users.

What makes ChatGPT Search a threat to Google is that users are already familiar with ChatGPT and have good feelings about it. Because it’s already in use, there is no habit of searching with Google to break.

Sam Altman On Why ChatGPT Search Is Better

In the Ask Me Anything (AMA) session on Reddit, a Redditor asked OpenAI’s CEO what value ChatGPT Search offers over other search engines.

The person asked:

“My question is about the value ChatGPT Search offers compared to popular search engines. What are the unique advantages or key differentiators of ChatGPT Search that would make it worthwhile for a typical search engine user to choose it?”

Sam Altman answered:

“For many queries, I find it to be a way faster/easier way to get the information I’m looking for. I think we’ll see this especially for queries that require more complex research. I also look forward to a future where a search query can dynamically render a custom web page in response!”

That bit about a “custom web page” is something to look out for because it hints at personalization based on what a user is searching for.

Complex Queries Are ChatGPT’s Advantage

Altman’s response about ChatGPT Search’s handling of complex queries calls attention to an advantage over Google. ChatGPT users are accustomed to using natural language, whereas Google users habitually use keyword searches. Keyword searches put Google at a disadvantage because those queries are harder to understand, which is why Google displays People Also Ask features in Search.

Natural language queries are the way users interact with ChatGPT, and that is an advantage for ChatGPT Search.

Grounding For Better Answers

The next question was about OpenAI’s progress on preventing ChatGPT from making things up (aka hallucinations) and about how it’s going to incorporate fresh data into the index.

Both problems are generally approached with a technique called Retrieval-Augmented Generation (RAG), which selects data from an up-to-date source like a search index or a knowledge graph and then provides it to the LLM-based chatbot to summarize and use as the basis for an answer.

This is the question:

“Are hallucinations going to be a permanent feature? Why is it that even o1-preview, when approaching the end of a “thought” hallucinates more and more?

How will you handle old data (even 2-years old) that is now no longer “true”? Continuously train models or some sort of garbage collection? It’s a big issue in the truthfulness aspect.”

The answer was given by Mark Chen, SVP of Research:

“We’re putting a lot of focus on decreasing hallucinations, but it’s a fundamentally hard problem – our models learn from human-written text, and humans sometimes confidently declare things they aren’t sure about.”

Mark Chen continued his answer by saying that they are getting better by the use of “grounding” which is something that Retrieval-Augmented Generation (RAG) helps large language models with. Chen also reveals that they believe that using Reinforcement Learning (RL) may help models stop hallucinating.

Reinforcement Learning (RL) is a way to teach a machine with experience, rewarding it when it’s correct and withholding the reward when it’s not, thus reinforcing good answers. The machine “learns” by making choices that maximize rewards. In the context of hallucinations, a reward could be a score or signal that indicates the answer is factual (it could also be provided by human feedback scores).
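
To make that concrete, here is a toy sketch of such a reward loop. This is not OpenAI’s actual training setup; generate_candidate() and fact_check() are hypothetical stand-ins for sampling an answer and programmatically verifying it:

    # Toy illustration of a factuality reward signal (not OpenAI's setup).
    # generate_candidate() and fact_check() are hypothetical stand-ins.
    scores = {}  # running reward estimate per answer

    def update_score(answer, reward, lr=0.1):
        old = scores.get(answer, 0.0)
        scores[answer] = old + lr * (reward - old)  # move toward observed reward

    for _ in range(1000):
        answer = generate_candidate()  # sample an answer from the model
        reward = 1.0 if fact_check(answer) else 0.0  # programmatic factuality check
        update_score(answer, reward)  # factual answers accumulate reward over time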

Mark Chen continued his response:

“Our models are improving at citing, which grounds their answers in trusted sources, and we also believe that RL will help with hallucinations as well – when we can programmatically check whether models hallucinate, we can reward it for not doing so.”

Does ChatGPT Search Use Bing?

The next question is about what search data does ChatGPT Search use.

The question asked:

“Is ChatGPT Search still using Bing as the search engine behind scenes?”

The answer was provided by Srinivas Narayanan, VP Engineering at OpenAI:

“We use a set of services and Bing is an important one.”

That’s an interesting answer because it’s commonly assumed that Bing is the only search engine behind ChatGPT Search. The answer indicates that ChatGPT Search uses multiple “services” and that Bing is the most important one. What other services might ChatGPT use? That’s an open question.

What Does OpenAI Say About SEO For ChatGPT Search?

Someone asked the important question about how to optimize content for ChatGPT Search in order to improve rankings. The question was answered by Kevin Weil, who said that they were still figuring it out, which could mean that they don’t know or that they’re still figuring out what to say about optimization.

Kevin Weil, Chief Product Officer, responded:

“This is a great question—the product just launched today so there’s a lot to figure out still about where search will be similar and where it will be different in an AI world. Would love any feedback you have!”

Takeaways – SEO For ChatGPT Search

Chief Product Officer Kevin Weil is right: these are still the early days of their search product, and much can still change. The OpenAI Reddit AMA offers first hints at what SEO is growing into.

Other insights:

  • Bing is the main service ChatGPT Search uses but there are other services it uses as well. That makes Bing an important search engine to rank in.
  • ChatGPT users are accustomed to natural language interactions and may use ChatGPT Search during the course of their workday.
  • OpenAI may use Reinforcement Learning at some point to get a better handle on hallucinations.
  • Personalization may be arriving at some point in the future in the form of a dynamically rendered web page.

Beyond those takeaways is the consideration that OpenAI is not directly competing against Google with a standalone search engine, it has created a completely different experience for searching the web.

Featured Image by Shutterstock/Vitor Miranda

Streamlining PPC Workflows With AI: How Efficiency Meets Effectiveness via @sejournal, @brookeosmundson

In the fast-paced world of PPC advertising, marketers are constantly seeking ways to streamline their workflows and improve performance.

Managing PPC campaigns efficiently requires a delicate balancing act of multiple tasks:

  • Analyzing data.
  • Optimizing bid strategies.
  • Testing creatives.
  • Reporting performance.
  • And so much more.

While AI and machine learning have been around in PPC for years, a new wave of AI tools for streamlining productivity and workflows has made its way into the PPC scene.

Whether it’s automating repetitive tasks, enhancing audience targeting, or analyzing vast datasets, AI tools are reshaping how PPC professionals work.

Who doesn’t want to save time doing repetitive, busy work tasks?

In this article, we’ll explore several unconventional ways AI tools can help PPC marketers save time, increase efficiency, and make smarter decisions.

Using AI To Automate Data Interpretation And Trend Insights

PPC campaigns can generate enormous amounts of data that need to be consistently analyzed and interpreted.

AI tools outside of the standard Google and Microsoft Ads platforms can help streamline this process by helping with tasks like:

  • Quickly summarizing key trends.
  • Looking for patterns in performance data.
  • Identifying data anomalies for further analysis.

These insights can enable marketers to move from data to action faster.

Using AI Tools For Trend Identification And Insights

If you’d rather not manually sift through reports to identify changes in performance metrics, you can feed campaign data into ChatGPT (or similar AI tools) to receive summaries that highlight performance trends.

For example, they can help identify seasonal changes in performance or pinpoint potential issues, such as a sudden dip in conversion rate.

Say you run 20 different campaigns in Google Ads and start to see a significant drop in conversion rates from the platform. It can be daunting to immediately pinpoint the cause of the issue.

By processing raw performance data from your campaigns, these AI tools can quickly analyze the data and provide insight into not only where the problems may lie, but also why performance has shifted, like:

  • Ad fatigue.
  • Increased competition.
  • A shift in consumer behavior.

Using AI tools in this capacity helps marketers cut down on analysis time while helping to identify core issues faster, allowing for quicker optimization.

This automation saves hours of manual work, enabling you to focus on more strategic decision-making instead of spending time analyzing large datasets.
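
As a rough sketch of that workflow, the snippet below flags week-over-week conversion-rate drops with pandas before the summary is handed to an LLM. The CSV export and column names are hypothetical:

    # Flag week-over-week conversion-rate drops before asking an LLM "why."
    # The CSV export and column names here are hypothetical.
    import pandas as pd

    df = pd.read_csv("campaign_performance.csv")  # campaign, week, clicks, conversions
    df["conv_rate"] = df["conversions"] / df["clicks"]

    weekly = df.pivot_table(index="week", columns="campaign", values="conv_rate")
    change = weekly.pct_change().iloc[-1]  # latest week vs. the prior week

    # Surface campaigns whose conversion rate fell more than 20%.
    anomalies = change[change < -0.20].sort_values()
    print(anomalies)  # paste this summary into ChatGPT for a "why" analysis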

Enhancing Competitor Analysis And Strategy Development

Keeping up with competitors is crucial in the PPC landscape, but the task at hand can be time-consuming and complex.

AI tools simplify this process by providing insights into competitors’ strategies, allowing you to stay one step ahead.

There are plenty of tools to help drive competitor insights, whether in the Google Ads platform, third-party tools, or AI tools.

If you’re looking to take the analysis a step further, you can input reports from other competitive analysis tools into ChatGPT (or a similar tool) to receive a quick summary that highlights a competitor’s recent actions.

For example, this could include information like:

  • Shifts in bidding strategies.
  • Introduction of new ad copies.
  • Keywords being targeted.

Based on this data, the AI tools can suggest ways to adjust your own campaigns or suggest counter-strategies to stay competitive.

By automating competitor analysis tasks, you can gain valuable insights faster, which allows for quicker, more informed decision-making and strategic actions.

Simplifying Multi-Account And Cross-Platform Reporting

Managing campaigns across multiple platforms – whether it’s Google Ads, Microsoft Ads, Meta, or others – means compiling huge data sets from different sources.

Trying to put together a compelling, holistic story about your marketing campaigns can take up a lot of time as you navigate from platform to platform.

This is where the power of AI tools can come in to help aggregate reports and create cohesive summaries.

Streamlining Cross-Platform Reporting

Multi-channel reporting is often a daunting task, especially when managing accounts across Google, Microsoft, and social platforms.

By inputting performance data from these platforms into ChatGPT, marketers can receive a single, unified report that summarizes key performance indicators (KPIs) across channels.

For example, say you manage several campaigns across Google Ads, Microsoft Ads, and Meta Ads.

Instead of switching between dashboards and manually pulling data, you can input the performance metrics from each platform into your AI tool of choice.

The tool can summarize the top-performing platforms, highlight underperforming campaigns, and suggest where to reallocate budgets to maximize ROI.

AI’s ability to consolidate multi-channel data helps reduce reporting time, enabling marketers to spend more time optimizing campaigns and less time on administrative tasks.
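
Here is a minimal sketch of that consolidation step with pandas; the file names and columns are hypothetical placeholders for your actual platform exports:

    # Combine per-platform exports into one KPI summary (file and column
    # names are hypothetical; adjust them to your actual exports).
    import pandas as pd

    frames = []
    for platform, path in [("Google Ads", "google.csv"),
                           ("Microsoft Ads", "microsoft.csv"),
                           ("Meta Ads", "meta.csv")]:
        df = pd.read_csv(path)  # expects spend, conversions, revenue columns
        df["platform"] = platform
        frames.append(df)

    combined = pd.concat(frames, ignore_index=True)
    summary = combined.groupby("platform").agg(
        spend=("spend", "sum"),
        conversions=("conversions", "sum"),
        revenue=("revenue", "sum"),
    )
    summary["roas"] = summary["revenue"] / summary["spend"]
    print(summary.sort_values("roas", ascending=False))  # ready to paste into an LLM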

Keyword Research And Expansion With AI

Keyword research is at the core of every PPC strategy, and expanding keyword lists can be labor-intensive.

AI tools can make the process more efficient by identifying relevant keywords, negative keywords, and keyword variations that are often missed in traditional tools.

While tools like the Google Keyword Planner are great at providing keyword recommendations, AI tools can take it a step further.

They can generate items like long-tail keyword variations and help identify opportunities for new targeting strategies.

Additionally, they can analyze an existing keyword list and suggest related keywords that reflect user intent or emerging trends.

For example, say you manage PPC campaigns for an ecommerce retailer. You input a list of current top-performing keywords with your latest KPI performance data into your AI tool of choice.

From there, the tool can generate suggestions for new long-tail keywords that may have lower volume, but higher intent to purchase.

Additionally, you can ask the tool to suggest negative keywords to eliminate irrelevant traffic, which improves both relevance and cost efficiency.

To really kick this into high gear, you can then ask the tool to format these new keywords and negative keywords into a format that allows you to upload them into Google Ads Editor, saving you hours of manual work adding each one individually.
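
For illustration, a short script along those lines could write the suggestions to a CSV for bulk import. The column headers and keywords below are illustrative, so verify them against Ads Editor’s current import template:

    # Write AI-suggested keywords to a CSV for bulk upload (headers and
    # rows are illustrative; verify against Ads Editor's import template).
    import csv

    keywords = [
        ("Brand Campaign", "Ad Group 1", "affordable home gym setup", "Phrase"),
        ("Brand Campaign", "Ad Group 1", "compact workout equipment", "Exact"),
    ]

    with open("keyword_upload.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["Campaign", "Ad Group", "Keyword", "Match Type"])
        writer.writerows(keywords)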

Using AI tools beyond the ad platforms can help marketers discover new opportunities faster, ensuring more comprehensive targeting with minimal manual effort.

AI-Assisted Testing And Creative Optimization

There’s no debate that A/B testing is critical to campaign optimization, but interpreting results and making decisions about the next steps is where most people fall flat.

Using AI tools to streamline this process can help you analyze test data and suggest optimizations based on performance.

Say you want to test two different versions of a headline in a PPC campaign. You can upload your test performance data into an AI tool for analysis.

Not only will it summarize which headline performed better, but it can also go a step further and help answer why one headline outperformed the other.

By providing insights into which elements contributed to success, it can save you time in the long run and help keep those driving factors top of mind for the next test.

AI For PPC Budget Allocation And Forecasting

Effective budget management is essential for optimizing PPC performance.

The ad platforms are great at automating tasks like changing daily budgets based on scripts, but what about strategic budget allocation decisions?

Using AI tools to assist budget allocation across campaigns or platforms by forecasting potential outcomes based on past performance data can streamline the process of deciding where to invest – and when.

For example, a retail client has an upcoming holiday sale and they want to know if they can expect a higher return than last year’s sale.

Inputting last year’s campaign performance into AI tools like ChatGPT can help analyze performance, while also taking into consideration current market trends.

The output could suggest how much of the budget should be allocated to high-performing keywords or certain product categories.

It can also provide a forecast of expected returns based on historical data, current CPC trends, and consumer behavior trends to help you make informed budget decisions ahead of time.
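
A simple projection along those lines might look like the sketch below, where every number is a hypothetical placeholder:

    # Project this year's sale revenue from last year's results plus trend
    # adjustments (all numbers are hypothetical placeholders).
    last_year_spend = 50_000.0
    last_year_revenue = 200_000.0
    cpc_inflation = 0.12  # e.g., CPCs trending 12% higher year over year
    demand_trend = 1.08   # e.g., category searches up 8%

    base_roas = last_year_revenue / last_year_spend  # 4.0
    adjusted_roas = base_roas * demand_trend / (1 + cpc_inflation)

    planned_spend = 60_000.0
    print(f"Forecast revenue: ${planned_spend * adjusted_roas:,.0f}")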

AI-driven budget forecasting helps ensure that resources are allocated to the right areas, reducing wasted spend and improving overall campaign performance.

Automating Market Trend Exploration And Forecasting

Market trends can shift quickly, and staying ahead of these changes is key to successful PPC campaigns.

AI tools can analyze search trends, consumer behavior, and historical campaign data to predict future shifts in demand and help marketers prepare.

For instance, AI tools can identify trends in consumer searches in real time, helping you adjust your campaign strategies proactively.

For example, you manage Google Ads campaigns for a fitness brand, and you’re noticing a seasonal uptick in searches for [home workout equipment].

By using AI tools to analyze Google Trends data, you can forecast how that demand will continue to rise or fall in the coming months, and even if certain geographical areas are driving the high demand.

This allows you to adjust bids based on location, increase overall budgets if necessary to help capture demand, and create relevant ad copy that speaks directly to the emerging trend.

Conclusion

AI is revolutionizing PPC workflows, allowing marketers to work smarter, not harder.

Whether you’re leveraging Google Ads’ AI capabilities, like Gemini’s conversational ad creation or integrating third-party tools for deeper insights, AI is becoming indispensable in managing and optimizing PPC campaigns.

From automating bid management and audience targeting to optimizing ad creatives and providing actionable insights, AI offers opportunities to boost efficiency without sacrificing effectiveness.

As AI tools continue to evolve, those who embrace these technologies will find themselves better equipped to deliver superior results, whether managing in-house campaigns or serving clients.

By integrating both Google’s AI features and powerful third-party tools, you can unlock new levels of performance, save time on manual tasks, and focus on strategy and innovation.



Featured Image: 3rdtimeluckystudio/Shutterstock

Meta Takes Step To Replace Google Index In AI Search via @sejournal, @martinibuster

Meta is reportedly developing a search engine index for its AI chatbot to reduce reliance on Google for AI-generated summaries of current events. Meta AI appears to be evolving to the next stage of becoming a fully independent AI search engine.

Meta-ExternalAgent

Meta has been crawling the Internet since at least this past summer with a user agent called Meta-ExternalAgent. There have been multiple reports in various forums about excessive amounts of crawling, with one person on Hacker News reporting 50,000 hits from the bot. A post in the WebmasterWorld bot crawling forum notes that although the documentation for Meta-ExternalAgent says it respects robots.txt, that wouldn’t have made a difference because the bot never visited the file.

It may be that the bot wasn’t fully ready earlier this year and that its poor behavior has since settled down.

The purpose of the bot is to gather content for search summaries and, according to the reports, to reduce reliance on Google and Bing for search results.

Is This A Challenge To Google?

It may be that this is indeed the prelude to a challenge to Google (and other search engines) in AI search. The information available at this time supports the idea that this is about creating a search index to complement Meta AI. As reported in The Verge, Meta is crawling sites for search summaries to be used within the Meta AI chatbot:

“The search engine would reportedly provide AI-generated search summaries of current events within the Meta AI chatbot.”

The Meta AI chatbot looks like a search engine and it’s clear that it’s still using Google’s search index.

For example, a search at Meta AI about the recent game four of the World Series showed a summary with an accurate answer that included a link to Google.

Screenshot Of Meta AI With Link To Google Search

Here’s a close up showing the link to Google search results and a link to the sources:

Screenshot Of Close-Up Of Meta AI Results

Clicking on the View Sources button spawns a popup with links to Google Search.

Screenshot Of Meta AI View Sources Pop-Up

Read the original reports:

A report was posted in The Verge, based on another report published by The Information.


Featured Image by Shutterstock/Skorzewiak