OpenAI Brings GPT-4o Back For Paid ChatGPT Users via @sejournal, @MattGSouthern

OpenAI has restored GPT-4o to the ChatGPT model picker for paid accounts and says it will give advance notice before removing models in the future.

The company made the change after pushback over GPT-5’s rollout and confirmed it alongside new speed controls for GPT-5 that let you choose Auto, Fast, or Thinking.

What’s New

GPT-4o Returns

If you are on a paid plan, GPT-4o now appears in the model picker by default.

You can also reveal additional options in Settings by turning on Show additional models, which exposes legacy models such as o3, o4-mini, and GPT-4.1 on Plus and Team, and adds GPT-4.5 on Pro.

This addresses the concern that model choices disappeared without warning during the initial GPT-5 launch.

New GPT-5 Modes

OpenAI’s mode picker lets you trade response time for reasoning depth.

CEO Sam Altman states:

“You can now choose between ‘Auto’, ‘Fast’, and ‘Thinking’ for GPT-5. Most users will want Auto, but the additional control will be useful for some people.”

Higher Capacity Thinking Mode

For heavier tasks, GPT-5 Thinking supports up to 3,000 messages per week and a 196k-token context window.

After you hit the weekly cap, chats can continue with GPT-5 Thinking mini, and OpenAI notes limits may change over time.

This helps when you are reviewing long reports, technical documents, or many content assets in one session.

Personality Updates

OpenAI says it’s working on GPT-5’s default tone to feel “warmer than the current personality but not as annoying (to most users) as GPT-4o.”

The company acknowledges the need for more per-user personality controls.

How To Use

To access the extra models: Open ChatGPT, go to Settings, then General, and enable Show additional models.

That toggle reveals the legacy models and Thinking mini alongside GPT-5. GPT-4o is already in the picker for paid users.

Looking Ahead

OpenAI promises more notice around model availability while giving you clearer controls over speed and depth.

In practice, try Fast for quick checks, keep Auto for routine chats, and use Thinking where accuracy and multi-step reasoning matter most.

If your workflows depended on 4o’s feel, bringing it back reduces disruption while OpenAI tunes GPT-5’s personality and customization.


Featured Image: Adha Ghazali/Shutterstock

YouTube Lets Creators Pick Exact CTAs In Promote Website Ads via @sejournal, @MattGSouthern

YouTube has updated its Promote feature, giving you more control over campaigns designed to drive website traffic.

When you set a campaign goal of “more website visits,” you can now choose a specific call to action, such as “Book now,” “Get quote,” or “Contact us.”

The change was announced during YouTube’s weekly news update for creators.

More Targeted Campaign Goals

Previously, Promote campaigns for website traffic used broader objectives. Now, you can define a more granular outcome that better matches your business goals.

For example, a consulting service might pair its campaign with a “Get quote” button, while an event organizer could use “Book now.”

By letting you choose the intended action, YouTube is making it easier to connect video promotion with measurable results.

How YouTube Promote Works

Promote is YouTube’s built-in ad creation tool, available directly in YouTube Studio.

It allows you to run ads for Shorts and videos without going through the Google Ads interface. You can create campaigns to:

  • Gain more subscribers
  • Increase video views
  • Drive visits to your website

Campaign creation and management happen entirely within YouTube Studio’s Promotions tab, keeping the process straightforward for creators who may not have experience with traditional advertising platforms.

Why This Matters

For creators promoting services, products, or events, the ability to align ads with a specific action could improve return on investment and make performance tracking easier.

Marketing teams managing YouTube channels for clients can now link spend to clear outcomes, strengthening the case for campaign value.

Looking Ahead

This update is part of YouTube’s push to give creators accessible yet more powerful monetization and promotion tools.

For marketers, it creates another measurable step in the customer journey, offering insight into how video campaigns contribute to broader marketing goals.


Featured Image: Roman Samborskyi/Shutterstock

From Ranking to Reasoning: Philosophies Driving GEO Brand Presence Tools via @sejournal, @Dixon_Jones

Since the turn of the Millennium, marketers have mastered the science of search engine optimization.

We learned the “rules” of ranking, the art of the backlink, and the rhythm of the algorithm. But, the ground has shifted to generative engine optimization (GEO).

The era of the 10 blue links is giving way to the age of the single, synthesized answer, delivered by large language models (LLMs) that act as conversational partners.

The new challenge isn’t about ranking; it’s about reasoning. How do we ensure our brand is not just mentioned, but accurately understood and favorably represented by the ghost in the machine?

This question has ignited a new arms race, spawning a diverse ecosystem of tools built on different philosophies. Even the words used to describe these tools are part of the battle: “GEO,” “GSE,” “AIO,” “AISEO,” or just more “SEO.” The list of abbreviations continues to grow.

But, behind the tools, different philosophies and approaches are emerging. Understanding these philosophies is the first step toward moving from a reactive monitoring posture to a proactive strategy of influence.

School Of Thought 1: The Evolution Of Eavesdropping – Prompt-Based Visibility Monitoring

The most intuitive approach for many SEO professionals is an evolution of what we already know: tracking.

This category of tools essentially “eavesdrops” on LLMs by systematically testing them with a high volume of prompts to see what they say.

This school has three main branches:

The Vibe Coders

These days, it is not hard to build a program that simply runs a prompt and stores the answer, and there are myriad weekend keyboard warriors with such offerings.

For some, this may be all you need, but the concern would be that these tools do not have a defensible offering. If everyone can do it, how do you stop everyone from building their own?

The VC Funded Mention Trackers

Tools like Peec.ai, TryProfound, and many more focus on measuring a brand’s “share of voice” within AI conversations.

They track how often a brand is cited in response to specific queries, often providing a percentage-based visibility score against competitors.

TryProfound adds another layer by analyzing hundreds of millions of user-AI interactions, attempting to map the questions people are asking, not just the answers they receive.

This approach provides valuable data on brand awareness and presence in real-world use cases.

The Incumbents’ Pivot

The major players in SEO – Semrush, Ahrefs, seoClarity, Conductor – are rapidly augmenting their existing platforms. They are integrating AI tracking into their familiar, keyword-centric dashboards.

With features like Ahrefs’ Brand Radar or Semrush’s AI Toolkit, they allow marketers to track their brand’s visibility or mentions for their target keywords, but now within environments like Google’s AI Overviews, ChatGPT, or Perplexity.

This is a logical and powerful extension of their current offerings, allowing teams to manage SEO and what many are calling generative engine optimization (GEO) from a single hub.

The core value here is observational. It answers the question, “Are we being talked about?” However, it’s less effective at answering “Why?” or “How do we change the conversation?”

I have also done some maths on how many prompt responses a database might need for its prompt volume to be statistically useful, and (with the help of Claude) came up with a requirement of 1 to 5 billion.

This, if achievable, will certainly have cost implications that are already reflected in the offerings.

School Of Thought 2: Shaping The Digital Soul – Foundational Knowledge Analysis

A more radical approach posits that tracking outputs is like trying to predict the weather by looking out the window. To truly have an effect, you must understand the underlying atmospheric systems.

This philosophy isn’t concerned with the output of any single prompt, but with the LLM’s foundational, internal “knowledge” about a brand and its relationship to the wider world.

GEO tools in this category, most notably Waikay.io and, increasingly, Conductor, operate on this deeper level. They work to map the LLM’s understanding of entities and concepts.

As an expert in Waikay’s methodology, I can detail the process, which provides the “clear bridge” from analysis to action:

1. It Starts With A Topic, Not A Keyword

The analysis begins with a broad business concept, such as “Cloud storage for enterprise” or “Sustainable luxury travel.”

2. Mapping The Knowledge Graph

Waikay uses its own proprietary Knowledge Graph and Named Entity Recognition (NER) algorithms to first understand the universe of entities related to that topic.

What are the key features, competing brands, influential people, and core concepts that define this space?

3. Auditing The LLM’s Brain

Using controlled API calls, it then queries the LLM to discover not just what it says, but what it knows.

Does the LLM associate your brand with the most important features of that topic? Does it understand your position relative to competitors? Does it harbor factual inaccuracies or confuse your brand with another?

4. Generating An Action Plan

The output isn’t a dashboard of mentions; it’s a strategic roadmap.

For example, the analysis might reveal: “The LLM understands our competitor’s brand is for ‘enterprise clients,’ but sees our brand as ‘for small business,’ which is incorrect.”

The “clear bridge” is the resulting strategy: to develop and promote content (press releases, technical documentation, case studies) that explicitly and authoritatively forges the entity association between your brand and “enterprise clients.”

This approach aims to permanently upgrade the LLM’s core knowledge, making positive and accurate brand representation a natural outcome across a near-infinite number of future prompts, rather than just the ones being tracked.

The Intellectual Divide: Nuances And Necessary Critiques

A non-biased view requires acknowledging the trade-offs. Neither approach is a silver bullet.

The Prompt-Based method, for all its data, is inherently reactive. It can feel like playing a game of “whack-a-mole,” where you’re constantly chasing the outputs of a system whose internal logic remains a mystery.

The sheer scale of possible prompts means you can never truly have a complete picture.

Conversely, the Foundational approach is not without its own valid critiques:

  • The Black Box Problem: Where proprietary data is not public, the accuracy and methodology are not easily open to third-party scrutiny. Clients must trust that the tool’s definition of a topic’s entity-space is correct and comprehensive.
  • The “Clean Room” Conundrum: This approach primarily uses APIs for its analysis. This has the significant advantage of removing the personalization biases that a logged-in user experiences, providing a look at the LLM’s “base” knowledge. However, it can also be a weakness. It may lose focus on the specific context of a target audience, whose conversational history and user data can and do lead to different, highly personalized AI outputs.

Conclusion: The Journey From Monitoring To Mastery

The emergence of these generative engine optimization tools signals a critical maturation in our industry.

We are moving beyond the simple question of “Did the AI mention us?” to the far more sophisticated and strategic question of “Does the AI understand us?”

Choosing a tool is less important than understanding the philosophy you’re buying into.

A reactive, monitoring strategy may be sufficient for some, but a proactive strategy of shaping the LLM’s core knowledge is where the durable competitive advantage will be forged.

The ultimate goal is not merely to track your brand’s reflection in the AI’s output, but to become an indispensable part of the AI’s digital soul.



Featured Image: Rawpixel.com/Shutterstock

AI Search Changes Everything – Is Your Organization Built To Compete? via @sejournal, @billhunt

Search has changed. Have you?

Search is no longer about keywords and rankings. It’s about relevance, synthesis, and structured understanding.

In the AI-powered era of Google Overviews, ChatGPT-style assistants, and concept-level rankings, traditional SEO tactics fall short.

Content alone won’t carry you. If your organization isn’t structurally and strategically aligned to compete in this new paradigm, you’re invisible even if you’re technically “ranking.”

This article builds on the foundation laid in my earlier article, “From Building Inspector To Commissioning Authority,” where I argued that SEO must shift from reactive inspection to proactive orchestration.

It also builds upon my exploration of the real forces reshaping search, including the rise of Delphic Costs, where brands are extracted from the customer journey without attribution, and the organizational imperative to treat visibility as everyone’s responsibility, not just a marketing key performance indicator (KPI).

And increasingly, it’s not just about your monetization. It’s about the platform.

The Three Shifts Reshaping Search

1. Google AI Overviews: The Answer Layer Supersedes The SERP

Google is bypassing traditional listings with AI-generated answers. These overviews synthesize facts, concepts, and summaries across multiple sources.

Your content may power the answer, but without attribution, brand visibility, or clicks. In this model, being the source is no longer enough; being the credited authority is the new battle.

2. Generative Assistants: New Gatekeepers To Discovery

Tools like ChatGPT, Perplexity, and Gemini collapse the search journey into a single query/answer exchange. They prioritize clarity, conceptual alignment, and structured authority.

They don’t care about the quantity of backlinks; they care about structured understanding. Organizations relying on domain authority or legacy SEO tactics are being leapfrogged by competitors who embrace AI-readable content.

3. Concept-Based Ranking: From Keywords To Entities And Context

Ranking is no longer determined by exact-match phrases. It’s determined by how well your content reflects and reinforces the concepts, entities, and context behind a query.

AI systems think in knowledge graphs, not spreadsheets. They interpret meaning through structured data, relationships between entities, and contextual signals.

These three shifts mean that success now depends on how well your organization can make its expertise machine-readable and contextually integrated into AI ecosystems.

A New Era Of Monetization And Data Harvesting

Search platforms have evolved from organizing information to owning outcomes. Their mission is no longer to guide users to your site; it’s to keep users inside their ecosystem.

The more they can answer in place, the more behavioral data they collect, and the more control they retain over monetization.

Today, your content competes not just with other brands but with the platforms themselves. They’re generating “synthetic content” derived from your data – packaged, summarized, and monetized within their interfaces.

As Dotdash Meredith CEO Neil Vogel put it: “We were in the business of arbitrage. We’d buy traffic for a dollar, monetize it for two. That game is over. We’re now in the business of high-quality content that platforms want to reward.”

Behavioral consequence: If your content can’t be reused, monetized, or trained against, it’s less likely to be shown.

Strategic move: Make your content AI-friendly, API-ready, and citation-worthy. Retain ownership of your core value. Structured licensing, schema, and source attribution matter more than ever.

This isn’t just about visibility. It’s about defensibility.

The Strategic Risks

Enterprises that treat search visibility as a content problem – not a structural one – are walking blind into four key risks:

  • Disintermediation: You lose traffic, attribution, and control when AI systems summarize your insights without directing users to you. In an AI-mediated search world, your value can be extracted while your brand is excluded.
  • Market Dilution: Nimbler competitors who better align with AI content requirements will surface more often, even if they have less experience or credibility. This creates a reverse trust dynamic: newcomers gain exposure by leveraging the machine’s strengths, while legacy players lose visibility.
  • Performance Blind Spots: Traditional KPIs no longer capture the real picture. Traffic may appear stable while influence and presence erode behind the scenes. Executive dashboards often miss this erosion because they’re still tuned to clicks, not concept penetration or AI inclusion.
  • Delphic Costs: This, as defined by Andrei Broder and Preston McAfee, refers to the expenses incurred when AI systems extract your expertise without attribution or downstream benefits, resulting in brand invisibility despite active contributions. Being referenced but not represented becomes a strategic liability.

Are You Built To Compete?

Here’s a five-pillar diagnostic framework to assess your organization’s readiness for AI search:

1. Content Structure

  • Do you use schema markup to define your content’s meaning?
  • Are headings, tables, lists, and semantic formats prioritized?
  • Is your content chunked in ways AI systems can easily digest?
  • Are your most authoritative explanations embedded in the page in clear, concise, answer-ready form?
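The schema markup question above can be made concrete with a short sketch. This is a minimal, hypothetical example that builds a Schema.org `Article` object as JSON-LD using only the standard library; the property names are standard Schema.org vocabulary, while the values are placeholders, not taken from any real page.

```python
import json

def article_jsonld(headline: str, author: str, publisher: str) -> str:
    """Build a minimal Schema.org Article object as a JSON-LD string.

    The keys (@context, @type, headline, author, publisher) are
    standard Schema.org vocabulary; the values here are placeholders.
    """
    data = {
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": headline,
        "author": {"@type": "Person", "name": author},
        "publisher": {"@type": "Organization", "name": publisher},
    }
    return json.dumps(data, indent=2)

# The resulting string would typically be embedded in the page
# inside a <script type="application/ld+json"> tag.
snippet = article_jsonld(
    "AI Search Changes Everything",
    "Bill Hunt",
    "Search Engine Journal",
)
print(snippet)
```

Even a fragment this small gives AI systems an unambiguous statement of what the page is, who wrote it, and who published it, rather than leaving those facts to be inferred from layout.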

2. Relevance Engineering

  • Do you map queries to concepts and entities?
  • Is your content designed for entity resolution, not just keyword targeting?
  • Are you actively managing topic clusters and knowledge structures?
  • Have you audited your internal linking and content silos to support knowledge graph connectivity?
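The internal-linking audit in the last bullet can start with something as simple as a reachability check. The sketch below is a hypothetical illustration, assuming you already have a map of each page’s internal links: it finds “orphan” pages that no crawl path reaches, which are the weak points in knowledge-graph connectivity.

```python
from collections import deque

def find_orphans(links: dict[str, list[str]], root: str) -> set[str]:
    """Return pages unreachable from `root` by following internal links.

    `links` maps each page URL to the pages it links to. Unreachable
    ("orphan") pages are invisible to crawlers that discover content
    by following links.
    """
    reachable = {root}
    queue = deque([root])
    while queue:  # breadth-first traversal of the link graph
        page = queue.popleft()
        for target in links.get(page, []):
            if target not in reachable:
                reachable.add(target)
                queue.append(target)
    return set(links) - reachable

# Hypothetical site map: /pricing links out but is never linked to.
site = {
    "/": ["/features", "/blog"],
    "/features": ["/"],
    "/blog": ["/features"],
    "/pricing": ["/"],
}
print(find_orphans(site, "/"))  # {'/pricing'}
```

In practice the link map would come from a crawl export, but the principle is the same: pages that can’t be reached from the homepage can’t contribute to the entity structure a crawler builds.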

3. Organizational Design (Shared Accountability)

  • Who owns “findability” in your organization?
  • Are SEO, content, product, and dev teams aligned around structured visibility?
  • Is there a commissioning authority that ensures strategy alignment from the start?
  • Do product launches and campaign rollouts include a visibility readiness review?
  • Are digital visibility goals embedded in executive and departmental KPIs?

In one example, a SaaS company I advised implemented monthly “findability sprints,” where product, dev, and content teams worked together to align schema, internal linking, and entity structure.

The result? A 30% improvement in AI-assisted surfacing – without publishing a single new page.

4. AI Feedback Loops

  • Are you tracking where and how your content appears in AI Overviews or assistants?
  • Do you have visibility into lost attribution or uncredited brand mentions?
  • Are you using tools or processes to monitor AI surface presence?
  • Have you incorporated AI visibility into your reporting cadence and strategic reviews?

5. Modern KPIs

  • Do your dashboards still prioritize traffic volume over influence?
  • Are you measuring presence in AI systems as part of performance?
  • Do your teams know what “visibility” actually means in an AI-dominant world?
  • Are your KPIs evolving to include citations, surface presence, and non-click influence metrics?

The Executive Mandate: From Visibility Theater To Strategic Alignment

Organizations must reframe search visibility as digital infrastructure, not a content marketing afterthought.

Just as commissioning authorities ensure a building functions as designed, your digital teams must be empowered to ensure your knowledge is discoverable, credited, and competitively positioned.

AI-readiness isn’t about writing more content. It’s about aligning people, process, and technology to match how AI systems access and deliver value. You can’t fix this with marketing alone. It requires a leadership-driven transformation.

Here’s how to begin:

  1. Reframe SEO as Visibility Engineering: Treat it as a cross-functional discipline involving semantics, structure, and systems design.
  2. Appoint a Findability or Answers Leader: This role connects the dots across content, code, schema, and reporting to ensure you are found and answering the market’s questions.
  3. Modernize Metrics: Track AI visibility, entity alignment, and concept-level performance – not just blue links.
  4. Run an AI Exposure Audit: Understand where you’re showing up, how you’re credited, and most critically, where and why you’re not. Just ask the AI system, and it will tell you exactly why you were not referenced.
  5. Reward Structural Alignment: Incentivize teams not just for publishing volume, but for findability performance. Celebrate contributions to visibility the same way you celebrate brand reach or campaign success. Make visibility a cross-team metric.

Final Thought: You Can’t Win If You’re Not Represented

AI is now the front end of discovery. If you’re not structured to be surfaced, cited, and trusted by machines, you’re losing silently.

You won’t fix this with a few blog posts or backlinks.

You fix it by building an organization designed to compete in the era of machine-mediated relevance.

This is your commissioning moment – not just to inspect the site after it’s built, but to orchestrate the blueprint from the start.

Welcome to the new search. Let’s build for it.



Featured Image: Master1305/Shutterstock

Earn 1,000+ Links & Boost Your SEO Visibility [Webinar] via @sejournal, @lorenbaker

Build the Authority You Need for AI-Driven Visibility

Struggling to get backlinks, even when your content is solid? 

You’re not alone. With Google’s AI Overviews and generative search dominating the results, traditional link-building tactics just don’t cut it anymore.

It’s time to earn the trust that boosts your brand’s visibility across Google, ChatGPT, and AI search engines.

Join Kevin Rowe, Founder & Head of Digital PR Strategy at PureLinq, on August 27, 2025, for an exclusive webinar. Learn the exact strategies Kevin’s team used to earn 1,000+ links and how you can replicate them without needing a massive budget or PR team.

What You’ll Learn:

  • How to identify media trends where your expertise is in demand.
  • The step-by-step process to create studies that can earn links on autopilot.
  • How to craft a story angle journalists will want to share.

Why This Webinar is Essential:

Earned links and citations are now key to staying visible in AI search results. This session will provide you with a proven, actionable playbook for boosting your SEO visibility and building the authority you need to succeed in this new era.

Register today to get the playbook for link-building success. Can’t attend live? Don’t worry, sign up anyway, and we’ll send you the full recording.

Is GEO the Same as SEO?

“Generative engine optimization” refers to tactics for increasing visibility in and traffic from AI answers. “Answer engine optimization” is synonymous with GEO, as are references to large language models, such as “LLM optimization.”

Whatever the name, optimizing for generative AI is different from traditional search engines. The distinction lies in the underlying technology:

  • LLM platforms don’t always perform a search to produce answers. The platforms use training data, which doesn’t typically have sources or URLs. It’s a knowledge base for accessing answers without necessarily knowing the origin.
  • Unlike search engines, LLMs don’t have an index or a cache of URLs. When they search, LLMs use external search engines, likely Google for ChatGPT.
  • After searching, AI crawlers go to the page, read it, and pull answers from it. AI crawlers are much less advanced than those of search engines and, accordingly, cannot render a page as easily as a Google crawler.

GEO-specific tactics include:

  • A brand in AI training data has long-term exposure in answers, but appearing in that data requires an approach beyond SEO. The keys are concise, relevant, problem-solving content on-site, and off-site exposure in reviews, forums, and other reputable mentions.
  • Being indexed by Google is more or less essential for AI answers, to a point. Additional optimization steps include (i) ensuring the site is accessible and crawlable by AI bots, (ii) structuring content to enable AI to pull answers easily, and (iii) optimizing for prompts, common needs, and, yes, keywords.
  • Keywords remain critical (and evolving) for both GEO and SEO, although GEO also “fans out” to answer likely follow-up prompts.
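Step (i) above, making the site accessible and crawlable by AI bots, often comes down to robots.txt. The fragment below is a sketch: the user-agent tokens shown (GPTBot for OpenAI, PerplexityBot, and CCBot for Common Crawl) are the names those crawlers publish, but confirm the current tokens in each platform’s documentation before relying on them.

```
# Permit AI crawlers to fetch the site (verify current
# user-agent tokens against each platform's docs)
User-agent: GPTBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: CCBot
Allow: /
```

The inverse also holds: publishers who do not want their content in training data can use `Disallow: /` for the same tokens, which is the trade-off between GEO exposure and content control discussed earlier.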

SEO

Reliance on Google varies by the genAI platform. ChatGPT, again, taps Google’s index. AI Overviews mostly summarize top-ranking URLs for the initial and fan-out queries. Higher rankings in organic search will likely directly elevate visibility in AI Overviews and Perplexity.

Google Search remains the most powerful discovery and visibility engine. And a brand that ranks high in Google is typically prominent, which drives visibility in AI Answers. As such, organic rankings also drive AI indirectly, through brand signals.

GEO

Thus GEO and SEO overlap. Pages that rank highly in organic search results will almost certainly end up in training data with elevated chances of appearing in AI answers.

Yet for training data, AI platforms continuously crawl sites with their own, limited bots and those of third-party providers, such as Common Crawl.

Hence AI platforms crawl pages via two paths: from links in organic search results and independently with their own (or outsourced) bots.

GEO kicks in when the bots reach a page. AI crawlers are far less sophisticated than Google’s, so GEO requires concise, relevant page content that is accessible without JavaScript and that succinctly states a need and then answers it directly.

Structured data markup, such as from Schema.org, likely helps, too.

In short, a GEO-ready page has a clear purpose and clear answers, easily crawled.