Google Brings Gemini 3 To Search’s AI Mode via @sejournal, @MattGSouthern

Google has integrated Gemini 3 into Search’s AI Mode. This marks the first time Google has shipped a Gemini model to Search on its release date.

Google AI Pro and Ultra subscribers in the U.S. can access Gemini 3 Pro by selecting “Thinking” from the model dropdown in AI Mode.

Robby Stein, VP and GM of Google Search, wrote on X:

“Gemini 3, our most intelligent model, is landing in Google Search today – starting with AI Mode. Excited that this is the first time we’re shipping a new Gemini model in Search on day one.”

Google plans to expand Gemini 3 in AI Mode to all U.S. users soon, with higher usage limits for Pro and Ultra subscribers.

What’s New

Search Updates

Google describes Gemini 3 as a model with state-of-the-art reasoning and deep multimodal understanding.

In the context of Search, it’s designed to explain advanced concepts, work through complex questions, and support interactive visuals that run directly inside AI Mode responses.

With Gemini 3 in place, Google says AI Mode has effectively re-architected what a “helpful response” looks like.

Stein explains:

“Gemini 3 is also making Search smarter by re-architecting what a helpful response looks like. With new generative UI capabilities, Gemini 3 in AI Mode can now dynamically create the overall response layout when it responds to your query – completely on the fly.”

Instead of only returning a block of text, AI Mode can design a response layout tailored to your query. That includes deciding when to surface images, tables, or other structured elements so the answer is clearer and easier to work with.

In the coming weeks, Google will add automatic model selection. Stein continues:

“Search will intelligently route tough questions in AI Mode and AI Overviews to our frontier model, while continuing to use faster models for simpler tasks.”

Enhanced Query Fan-Out

Gemini 3 upgrades Google’s query fan-out technique.

According to Stein, Search can now issue more related searches in parallel and better interpret what you’re trying to do.

A potential benefit, Stein adds, is that Google may find content it previously missed:

“It now performs more and much smarter searches because Gemini 3 better understands you. That means Search can now surface even more relevant web content for your specific question.”

Generative UI

Gemini 3 in AI Mode introduces generative UI features that build dynamic visual layouts around your query.

The model analyzes your question and constructs a custom response using visual elements such as images, tables, and grids. When an interactive tool would help, Gemini 3 can generate a small app in real time and embed it directly in the answer.

Examples from Google’s announcement include:

  • An interactive physics simulation for exploring the three-body problem
  • A custom mortgage loan calculator that lets you compare different options and estimate long-term savings

All of these responses include prominent links to high-quality content across the web so you can click through to source material.


Why This Matters

Gemini 3 changes how your content is discovered and used in AI Mode. With deeper query fan-out, Google can access more pages per question, which might influence which sites are cited or linked during long, complex searches.

The updated layouts and interactive features change how links appear on your screen. On-page tools, explainers, and visualizations could now compete directly with Google’s own interface.

As Gemini 3 becomes available to more people, it will be important to watch how your content is shown or referenced in AI responses, in addition to traditional search rankings.

Looking Ahead

Google says it will continue refining these updates based on feedback as more people try the new tools. Automatic model selection is set to arrive in the coming weeks for Google AI Pro and Ultra subscribers in the U.S., with broader U.S. access to Gemini 3 in AI Mode planned but not yet scheduled.

Selling AI Search Strategies To Leadership Is About Risk via @sejournal, @Kevin_Indig


AI search visibility isn’t “too risky” for executives to buy into. Selling AI search strategies to leadership is about risk.

Image Credit: Kevin Indig

A Deloitte survey of more than 2,700 leaders reveals that getting buy-in for an AI search strategy isn’t about innovation, but risk.

SEO teams keep failing to sell AI search strategies for one reason: They’re pitching deterministic ROI in a probabilistic environment.

The old way: Rankings → traffic → revenue. But that event chain doesn’t exist in AI systems.

LLMs don’t rank. They synthesize. And Google’s AI Overviews and AI Mode don’t “send traffic.” They answer.

Yet most teams still walk into a leadership meeting with a deck built on a decaying model. Then, executives say no – not because AI search “doesn’t work,” but because the pitch asks them to fund an outcome nobody can guarantee.

In AI search, you cannot sell certainty. You can only sell controlled learning.

1. You Can’t Sell AI Search With A Deterministic ROI Model

Everyone keeps asking the wrong question: “How do I prove my AI search strategy will work so leadership will fund it?” You can’t; there’s no traffic chain you can model. Randomness is baked directly into the outputs.

You’re forcing leadership to evaluate your AI search strategy with a framework that’s already decaying. Confusion about AI search vs. traditional SEO metrics and forecasting is blocking you from buy-in. When SEO teams try to sell an AI search strategy to leadership, they often encounter several structural problems:

  1. Lack of clear attribution and ROI: Where you see opportunity, leadership sees vague outcomes and deprioritizes investment. Traffic and conversions from AI Overviews, ChatGPT, or Perplexity are hard to track.
  2. Misalignment with core business metrics: It’s harder to tie results to revenue, CAC, or pipeline – especially in B2B.
  3. AI search feels too experimental: Early investments feel like bets, not strategy. Leadership may see this as a distraction from “real” SEO or growth work.
  4. No owned surfaces to leverage: Many brands aren’t mentioned in AI answers at all. SEO teams are selling a strategy that has no current baseline.
  5. Confusion between SEO and AI search strategy: Leadership doesn’t understand the distinction between optimizing for classic Google Search vs. LLMs vs. AI Overviews. Clear differentiation is needed to secure a new budget and attention.
  6. Lack of content or technical readiness: The site lacks the structured content, brand authority, or documentation to appear in AI-generated results.

2. Pitch AI Search Strategy As Risk Mitigation, Not Opportunity

Executives don’t buy performance in ambiguous environments. They buy decision quality. And the decision they need you to make is simple: Should your brand invest in AI-driven discovery before competitors lock in the advantage – or not?

Image Credit: Kevin Indig

AI search is still an ambiguous environment. That’s why your winning strategy pitch should be structured for fast, disciplined learning with pre-set kill criteria instead of forecasting traffic → revenue. Traditionally, SEO teams pitch outcomes (traffic, conversions), but leadership needs to buy learning infrastructure (testing systems, measurement frameworks, kill criteria) for AI search.

Leadership thinks you’re asking for “more SEO budget” when you’re actually asking them to buy an option on a new distribution channel.

Everyone treats the pitch as “convince them it will work” when it should be “convince them the cost of not knowing is higher than the cost of finding out.” Executives don’t need certainty about impact – they need certainty that you’ll produce a decision with their money.

Making stakes crystal clear:

Your Point of View + Consequences = Stakes. Leaders need to know what happens if they don’t act.

Image Credit: Kevin Indig

The cost of passing on an AI search strategy can be simple and brutal:

  1. Competitors who invest early in AI search visibility will build entity authority and brand presence.
  2. Organic traffic stagnates and will drop over time while cost-per-click rises.
  3. AI Overviews and AI Mode outputs will replace queries your brand used to win in Google.
  4. Your influence on the next discovery channel will be decided without you.

AI search strategy builds brand authority, third-party mentions, entity relationships, content depth, pattern recognition, and trust signals in LLMs. These signals compound. They also freeze into the training data of future models.

If you aren’t shaping that footprint now, the model will rely on whatever scraps already exist and on whatever your competitors are feeding it.

3. Sell Controlled Experiments – Small, Reversible, And Time-Boxed

You’re asking for resources to discover the truth before the market makes the decision for you. This approach collapses resistance because it removes the fear of sunk cost and turns ambiguity into manageable, reversible steps.

A winning AI search strategy proposal sounds like:

  • “We’ll run x tests over 12 months.”
  • “Budget: ≤0.3% of marketing spend.”
  • “Three-stage gates with Go/No-Go decisions.”
  • “Scenario ranges instead of false-precision forecasts.”
  • “We stop if leading indicators don’t move by Q3.”

45% of executives rely more on instinct than facts. Balance your data with a compelling narrative – focus on outcomes and stakes, not technical details.

I covered how to build a pitch deck and strategic narrative in “How To Explain The Value Of SEO To Executives,” but here, focus on selling learning as a deliverable in the current AI search landscape.

When you present to leaders, remember that they focus on only three things: money (revenue, profit, cost), market (market share, time-to-market), and exposure (retention, risk). Structure every pitch around these.

The SCQA framework (Minto Pyramid) guides you:

  • Situation: Set the context.
  • Complication: Explain the problem.
  • Question: What should we do?
  • Answer: Your recommendation.

This is the McKinsey approach – and executives expect it.


Featured Image: Paulo Bobita/Search Engine Journal

Cloudflare Outage Triggers 5xx Spikes: What It Means For SEO via @sejournal, @MattGSouthern

A Cloudflare incident is returning 5xx responses for many sites and apps that sit behind its network, which means users and crawlers may be running into the same errors.

From an SEO point of view, this kind of outage often looks worse than it is. Short bursts of 5xx errors usually affect crawl behavior before they touch long-term rankings, but there are some details worth paying attention to.

What You’re Likely Seeing

Sites that rely on Cloudflare as a CDN or reverse proxy may currently be serving generic “500 internal server error” pages or failing to load at all. In practice, everything in that family of responses is treated as a server error.

If Googlebot happens to crawl while the incident is ongoing, it will record the same 5xx responses that users see. You may not notice anything inside Search Console immediately, but over the next few days you could see a spike in server errors, a dip in crawl activity, or both.

Keep in mind that Search Console data is rarely real-time and often lags by roughly 48 hours. A flat line in GSC today could mean the report hasn’t caught up yet. If you need to confirm that Googlebot is encountering errors right now, you will need to check your raw server access logs.
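
If you need a quick way to do that check, the following is a minimal sketch, assuming a combined-format access log at a hypothetical path and treating any request whose user agent contains “Googlebot” as a crawler hit (a rough triage filter, not verified crawler identification). Adjust the path and field positions for your own server:

```python
# Rough sketch: count 5xx responses served to Googlebot in an access log.
# Assumes a common/combined log format where the status code is the 9th
# whitespace-separated field; the log path is a placeholder.
import re
from collections import Counter

LOG_PATH = "/var/log/nginx/access.log"  # assumed path; change to your log

googlebot_5xx = Counter()

with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line in log:
        if "Googlebot" not in line:
            continue
        fields = line.split()
        if len(fields) < 9:
            continue  # malformed or unexpected log line
        status = fields[8]
        if re.fullmatch(r"5\d\d", status):
            googlebot_5xx[status] += 1

for status, count in sorted(googlebot_5xx.items()):
    print(f"{status}: {count} Googlebot requests")
```

If the counts cluster around the outage window and drop back to zero afterward, that lines up with a short, contained incident rather than an ongoing origin problem.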

This can feel like a ranking emergency. It helps to understand how Google has described its handling of temporary server problems in the past, and what Google representatives are saying today.

How Google Handles Short 5xx Spikes

Google groups 5xx responses as signs that a server is overloaded or unavailable. According to Google’s Search Central documentation on HTTP status codes, 5xx and 429 errors prompt crawlers to temporarily slow down, and URLs that continue to return server errors can eventually be dropped from the index if the issue remains unresolved.

Google’s “How To Deal With Planned Site Downtime” blog post gives similar guidance for maintenance windows, recommending a 503 status code for temporary downtime and noting that long-lasting 503 responses can be treated as a sign that content is no longer available.
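
As a minimal illustration of that advice (not Google’s or any CDN’s implementation, and the flag and retry window below are assumptions), a small Python WSGI app can return a 503 with a Retry-After header while a maintenance switch is on:

```python
# Minimal sketch: serve 503 with Retry-After during a planned window.
# MAINTENANCE and RETRY_AFTER_SECONDS are illustrative placeholders.
from wsgiref.simple_server import make_server

MAINTENANCE = True          # flip to False when the window ends
RETRY_AFTER_SECONDS = 1800  # hint to crawlers when to try again

def app(environ, start_response):
    if MAINTENANCE:
        start_response("503 Service Unavailable", [
            ("Content-Type", "text/plain; charset=utf-8"),
            ("Retry-After", str(RETRY_AFTER_SECONDS)),
        ])
        return [b"Down for planned maintenance. Please retry later.\n"]
    start_response("200 OK", [("Content-Type", "text/plain; charset=utf-8")])
    return [b"Site is up.\n"]

if __name__ == "__main__":
    make_server("", 8000, app).serve_forever()
```

The point is the status code and header, not the framework: the same behavior can be configured at the CDN or web server level.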

In a recent Bluesky post, Google Search Advocate John Mueller reinforced the same message in plainer language. Mueller wrote:

“Yeah. 5xx = Google crawling slows down, but it’ll ramp back up.”

He added:

“If it stays at 5xx for multiple days, then things may start to drop out, but even then, those will pop back in fairly quickly.”

Taken together, the documentation and Mueller’s comments draw a fairly clear line.

Short downtime is usually not a major ranking problem. Already indexed pages tend to stay in the index for a while, even if they briefly return errors. When availability returns to normal, crawling ramps back up and search results generally settle.

The picture changes when server errors become a pattern. If Googlebot sees 5xx responses for an extended period, it can start treating URLs as effectively gone. At that point, pages may drop from the index until crawlers see stable, successful responses again, and recovery can take longer.

The practical takeaway is that a one-off infrastructure incident is mostly a crawl and reliability concern. Lasting SEO issues tend to appear when errors linger well beyond the initial outage window.


Analytics & PPC Reporting Gaps

For many sites, Cloudflare sits in front of more than just HTML pages. Consent banners, tag managers, and third-party scripts used for analytics and advertising may all depend on services that run through Cloudflare.

If your consent management platform or tag manager was slow or unavailable during the outage, that can show up later as gaps in GA4 and ad platform reporting. Consent events may not have fired, tags may have timed out, and some sessions or conversions may not have been recorded at all.

When you review performance, you might see a short cliff in GA4 traffic, a drop in reported conversions in Google Ads or other platforms, or both. In many cases, that will reflect missing data rather than a real collapse in demand.

It’s safer to annotate today’s incident in your analytics and media reports and treat it as a tracking gap before you start reacting with bid changes or budget shifts based on a few hours of noisy numbers.

What To Do If You Were Hit

If you believe you’re affected by today’s outage, start by confirming that the problem is really tied to Cloudflare and not to your origin server or application code. Check your own uptime monitoring and any status messages from Cloudflare or your host so you know where to direct engineering effort.

Next, record the timing. Note when you first saw 5xx errors and when things returned to normal. Adding an annotation in your analytics, Search Console, and media reporting makes it much easier to explain any traffic or conversion drops when you review performance later.

Over the coming days, keep an eye on the Crawl Stats Report and index coverage in Search Console, along with your own server logs. You’re looking for confirmation that crawl activity returns to its usual pattern once the incident is over, and that server error rates drop back to baseline. If the graphs settle, you can treat the outage as a contained event.

If, instead, you continue to see elevated 5xx responses after Cloudflare reports the issue as resolved, it’s safer to treat the situation as a site-specific problem.

What you generally do not need to do is change content, internal linking, or on-page SEO purely in response to a short Cloudflare outage. Restoring stability is the priority.

Finally, resist the urge to hit ‘Validate Fix’ in Search Console the moment the site comes back online. If you trigger validation while the connection is still intermittent, the check will fail, and you will have to wait for the cycle to reset. It is safer to wait until the status page says ‘Resolved’ for a full 24 hours before validating.

Why This Matters

Incidents like this one are a reminder that search visibility is tied to reliability as much as relevance. When a provider in the middle of your stack has trouble, it can quickly look like a sudden drop, even when the root cause is outside your site.

Knowing how Google handles temporary 5xx spikes and how they influence analytics and PPC reports can help you communicate better with your clients and stakeholders. It allows you to set realistic expectations and recognize when an outage has persisted long enough to warrant serious attention.

Looking Ahead

Once Cloudflare closes out its investigation, the main thing to watch is whether your crawl, error, and conversion metrics return to normal. If they do, this morning’s 5xx spike is likely to be a footnote in your reporting rather than a turning point in your organic or paid performance.

B2B Content Marketing Has Changed: Principles Of Good Strategy

This edited excerpt is from B2B Content Marketing Strategy by Devin Bramhall, ©2025, and is reproduced and adapted with permission from Kogan Page Ltd.

Modern content strategy is no longer about being a brand megaphone, shouting messages across digital space.

Modern content strategy that works is a blended approach designed to create community around shared experiences, build lasting relationships, and establish genuine trust and influence. It’s about leaning into individuality within niche communities by creating content that resonates with individuals and small groups rather than trying to appeal to the masses.

And it’s definitely not a pursuit of ubiquity, in the ways brands used to do it by creating a dominant presence on every platform and community space.

Instead, it’s about taking fewer actions to accomplish more. Playing a supporting role in the community sometimes by elevating others. It’s about building relationships that motivate action rather than force it. Mostly, it’s about creating frameworks and principles to guide and evaluate your decisions so you can develop your own “playbook” that works for your company and community.

Principles Of Good Content Marketing Strategy

Content marketing exists to serve business goals by solving customer pain points. It accomplishes this through education and relationship-building:

Education attracts potential buyers and influencers by providing immediate value in the form of short-term solutions (awareness and affinity).

Establishing trust allows your brand to become an ongoing part of your community’s lives by speaking their language, empathizing with their challenges, and solving their problems (nurture and engage).

Relationship formation creates alignment between external promises and internal experiences – the product delivers on the expectations set by content (convert, grow LTV, and upsell).

The goal is to help first and sell second – at which point customers often feel they reached decisions independently. They become eager to invest in both the product and the relationship. This is how content marketing works organically based on human behavior.

It’s also the stuff you already know.

Content marketing teams guided by the following principles consistently achieve superior results.

Create Unique Advantage

No other company exists with your exact combination of product, people, and resources. Your first job as a marketer is to identify what you already have that can be leveraged for growth.

This could be your founder’s network, your CMO’s substantial LinkedIn following that overlaps with your target buyers, or a product feature that solves a previously unaddressed problem. It might be an upcoming conference where your CEO is speaking to 300 decision-makers who gather only once per year.

Other advantages might include:

  • Budget, software, and technological resources.
  • Existing audiences, email lists, or content archives.
  • Market position (whether as an established leader or disruptive newcomer).
  • Opportunistic events like funding announcements or key hires.
  • Your own unique talents, experiences, and connections.

The goal is to create a content strategy that:

  1. Competitors can’t easily duplicate because they lack your specific advantages.
  2. Generates exponential impact by leveraging opportunistic events, efficient execution, and activities that serve multiple outcomes simultaneously.
  3. Is scalable with repeatable elements that compound over time and can expand with relative ease.

A prime example comes from Gong, the revenue intelligence platform. While competitors focused on standard SaaS marketing playbooks, Gong leveraged their unique advantage: Access to millions of sales conversations and the data patterns within them. By sharing insights from this proprietary data, they created content no competitor could replicate, establishing themselves as the definitive source of sales intelligence while simultaneously demonstrating their product’s value.

Serve Outcomes It Can Logically Impact (Better Than Other Approaches)

Strategy that serves business goals does need to be measured to ensure it’s serving those outcomes, and ideally, how well it achieves them. Yes, I’m talking about ROI.

The benefit of having clearly defined, quantifiable, time-based outcomes is twofold:

  • It helps you narrow down tactics.
  • It gives you a target to “bump up against” to extract learnings for continuous improvement.

This principle forces you to evaluate each potential marketing activity against a simple standard: Is this the best way to reach the business outcome we want, or are we doing it because it’s the way we’ve always done it?

Can Be Executed With Existing Resources

A strategy is only as good as your ability to execute it.

Your plan is only strategic if you factor in all constraints, including budget and resources. If you come up with a “brilliant” idea that you know is unlikely to be funded, then it’s not brilliant in the context in which you want to apply it.

So, if you come up with something that could really move the needle and you want to get funding for it, come up with an MVP and call it a test. Once you’ve shown impact and dazzled the purse-holders, then it’ll be easier to get budget to expand and do more. So start by getting buy-in on only those resources you need to execute a bare minimum version that demonstrates enough impact to justify additional investment.

One approach that has worked for me (though it’s not a silver bullet) is to treat it like a sales activity. All I need is enough of the right kind of information that whoever I’m pitching to will:

  • Understand without a complex explanation.
  • See a type of business impact they recognize as valuable.
  • Not care too much about it (i.e., the investment is negligible to them).

Your best-case scenario at this stage is not enthusiasm; it’s disinterest. You want them to feel like saying yes is an errand, almost like it’s a waste of their time.

This requires keeping a ton of details to yourself – especially the ones your leadership will question. Also useful: make it feel familiar and demonstrate that you listened to them by pointing out areas where you intentionally factored in something they wanted or advised. Think of it like landing page copy. Your “conversion” is a yes, so what details and messaging will get you that conversion?

This doesn’t mean your strategy can’t be ambitious. Rather, it means being realistic about what you can sustain long enough to see results.

Serves Outcomes It Can Logically Impact (Better Than Other Activities)

It doesn’t matter what size your marketing team is – at some point, you’ll be tasked with showing impact beyond what seems possible with your current resources. This is where strategic thinking becomes essential.

Content marketing strategy plays a crucial role in driving business results. What sets a strategy apart from a simple plan is its ability to serve as a unified and thoughtful response to a significant challenge, as emphasized by Richard Rumelt in his book “Good Strategy, Bad Strategy.”

A plan is simply a list of activities you know you can accomplish, like running errands in a particular order to minimize time. Strategy, by contrast, is using the resources you have to show impact that decision-makers will recognize, reminding them over and over in different ways about that impact, and then using that as leverage to get the budget to do what you wanted to in the first place.

This doesn’t mean your strategy can’t be ambitious. Rather, it means being realistic about what you can sustain long enough to see results that you can use to do more later.

Grounded In Facts, Not Best Practices

Choose channels, tactics, and messages based on YOUR customers, not on what others are doing or what industry best practices dictate.

At one point, nothing we currently do in marketing existed. SEO, for example, was once considered a growth hack. It wasn’t in the content marketing lexicon, let alone on any list of best practices. Someone discovered that appearing first when people searched for specific solutions could provide a unique advantage for their company.

This principle requires you to reason from your specific facts:

  • How do YOUR customers make purchase decisions?
  • What channels do THEY genuinely use for discovery and research?
  • What unique circumstances does YOUR company face?

What might appear as constraints – limited budget, market position, team size – can often become advantages if you approach them with curiosity and objectivity.

Designed To Have Exponential Impact

Most “strategies” content marketers present are just action plans that itemize tactics they will execute over a period of time to hit a goal.

Create content, distribute, convert people, measure results, repeat.

But think about how content marketing itself came to exist. It was all about leverage. Take SEO, for example. It was essentially a “free” way to get more people to visit your site without paying for ads. And for a while, it was an ROI multiplier, meaning that the amount of investment required to execute was minuscule compared to the long-term impact it would have over time. That’s a strategic ratio.

Now, SEO is part of the B2B marketing modus operandi. The ratio is more incremental, so it’s not really a strategic activity anymore; it’s more of a table-stakes tactic.

The opportunity for marketers now is to come up with a scalable way to transform bespoke interactions between people from the company and community across multiple mediums into ROI for the company that they can sustain. This means designing your strategy such that some activities serve more than one purpose or outcome, as well as having “self-sustaining” elements (i.e., automations, workflows, etc.) built in.

To read the full book, SEJ readers have an exclusive 25% discount code and free shipping to the US and UK. Use promo code “SEJ25” at koganpage.com.



Featured Image: Anton Vierietin/Shutterstock

The Knowns And Unknowns Of Structured Data Attribution via @sejournal, @marthavanberkel

As marketers, we love a great funnel. It provides clarity on how our strategies are working. We have conversion rates and can track the customer journey from discovery through conversion. But in today’s AI-first world, our funnel has gone dark.

We can’t yet fully measure visibility in AI experiences like ChatGPT or Perplexity. While emerging tools offer partial insights, their data isn’t comprehensive or consistently reliable. Traditional metrics like impressions and clicks still don’t tell the whole story in these spaces, leaving marketers facing a new kind of measurement gap.

To help bring clarity, let’s look at what we know and don’t know about measuring the value of structured data (also known as schema markup). By understanding both sides, we can focus on what’s measurable and controllable today, and where the opportunities lie as AI changes how customers discover and engage with our brands.

Why Most ‘AI Visibility’ Data Isn’t Real

AI has created a hunger for metrics. Marketers, desperate to quantify what’s happening at the top of the funnel, are turning to a wave of new tools. Many of these platforms are creating novel measurements, such as “brand authority on AI platforms,” that aren’t grounded in representative data.

For example, some tools are trying to measure “AI prompts” by treating short keyword phrases as if they were equivalent to consumer queries in ChatGPT or Perplexity. But this approach is misleading. Consumers are writing longer, context-rich prompts that go far beyond what keyword-based metrics suggest. These prompts are nuanced, conversational, and highly personalized – nothing like traditional long-tail queries.

These synthetic metrics offer false comfort. They distract from what’s actually measurable and controllable. The fact is, ChatGPT, Perplexity, and even Google’s AI Overviews aren’t providing us with clear and comprehensive visibility data.

So, what can we measure that truly impacts visibility? Structured data.

What Is AI Search Visibility?

Before diving into metrics, it’s worth defining “AI search visibility.” In traditional SEO, visibility meant appearing on page one of search results or earning clicks. In an AI-driven world, visibility means being understood, trusted, and referenced by both search engines and AI systems. Structured data plays a role in this evolution. It helps define, connect, and clarify your brand’s digital entities so that search engines and AI systems can understand them.

The Knowns: What We Can Measure With Confidence For Structured Data

Let’s talk about what is known and measurable today with regard to structured data.

Increased Click-Through Rates From Rich Results

From data in our quarterly business reviews, we see that when structured data on a page qualifies the content for a rich result, enterprise brands consistently see an increase in click-through rates. Google currently supports more than 30 types of rich results, which continue to appear in organic search.

For example, from our internal data, in Q3 2025, one enterprise brand in the home appliances industry saw click-through rates on product pages increase by 300% when a rich result was awarded. Rich results continue to provide both visibility and conversion gains from organic search.

Example of a product rich result on Google’s search engine results page (Screenshot by author, November 2025)

Increased Non-Branded Clicks From Robust Entity Linking

It’s important to distinguish between basic schema markup and robust schema markup with entity linking that results in a knowledge graph. Schema markup describes what’s on a page. Entity linking connects those things to other well-defined entities across your site and the web, creating relationships that define meaning and context.

An entity is a unique and distinguishable thing or concept, such as a person, product, or service. Entity linking defines how those entities relate to one another, either through external authoritative sources like Wikidata and Google’s knowledge graph or your own internal content knowledge graph.

For example, imagine a page about a physician. The schema markup would describe the physician. Robust, semantic markup would also connect to Wikidata and Google’s knowledge graph to define their specialty, while linking to the hospital and medical services they provide.

Image from author, November 2025
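
To make the distinction concrete, here is a hedged sketch of what entity-linked markup for that physician example might look like, emitted as JSON-LD from Python. Every URL, identifier, and property choice is an illustrative assumption (including the Wikidata reference), not a prescription for a real site:

```python
# Illustrative sketch only: JSON-LD for a physician page with entity links.
# All @id, sameAs, and URL values are made-up examples to show the pattern.
import json

physician = {
    "@context": "https://schema.org",
    "@type": "Physician",
    "@id": "https://www.example-hospital.com/doctors/jane-doe#physician",
    "name": "Dr. Jane Doe",
    # Entity linking: define the specialty by pointing to an external,
    # authoritative entity (verify the actual Wikidata ID before use).
    "medicalSpecialty": {
        "@type": "MedicalSpecialty",
        "name": "Cardiology",
        "sameAs": "https://www.wikidata.org/wiki/Q10379",  # illustrative ID
    },
    # Relationship to another entity defined elsewhere on the same site.
    "memberOf": {
        "@type": "Hospital",
        "@id": "https://www.example-hospital.com/#organization",
        "name": "Example Hospital",
    },
    # A service the physician provides, connected as its own entity.
    "availableService": {
        "@type": "MedicalProcedure",
        "name": "Echocardiogram",
    },
}

print(json.dumps(physician, indent=2))
```

Basic markup would stop at the name and type; the sameAs, memberOf, and availableService connections are what turn isolated page descriptions into a knowledge graph.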

AIO Visibility

Traditional SEO metrics can’t yet measure AI experiences directly, but some platforms can identify instances when a brand is mentioned in an AI Overview (AIO) result.

Research from a BrightEdge report found that adopting entity-based SEO practices supports stronger AI visibility. The report noted:

“AI prioritizes content from known, trusted entities. Stop optimizing for fragmented keywords and start building comprehensive topic authority. Our data shows that authoritative content is three times more likely to be cited in AI responses than narrowly focused pages.”

The Unknowns: What We Can’t Yet Measure

While we can measure the impact of entities in schema markup through existing SEO metrics, we don’t yet have direct visibility into how these elements influence large language model (LLM) performance.

How LLMs Are Using Schema Markup

Visibility starts with understanding – and understanding starts with structured data.

Evidence for this is growing. In Microsoft’s Oct. 8, 2025 blog post, “Optimizing Your Content for Inclusion in AI Search Answers (Microsoft Advertising),” Krishna Madhaven, Principal Product Manager for Microsoft Bing, wrote:

“For marketers, the challenge is making sure their content is easy to understand and structured in a way that AI systems can use.”

He added:

“Schema is a type of code that helps search engines and AI systems understand your content.”

Similarly, Google’s article, “Top ways to ensure your content performs well in Google’s AI experiences on Search,” reinforces that “structured data is useful for sharing information about your content in a machine-readable way.”

Why are Google and Microsoft both emphasizing structured data? One reason may be cost and efficiency. Structured data helps build knowledge graphs, which serve as the foundation for more accurate, explainable, and trustworthy AI. Research has shown that knowledge graphs can reduce hallucinations and improve performance in LLMs.

While schema markup itself isn’t typically ingested directly to train LLMs, the retrieval phase in retrieval-augmented generation (RAG) systems plays a crucial role in how LLMs respond to queries. In recent work, Microsoft’s GraphRAG system generates a knowledge graph (via entity and relation extraction) from textual data and leverages that graph in its retrieval pipeline. In their experiments, GraphRAG often outperforms a baseline RAG approach, especially for tasks requiring multi-hop reasoning or grounding across disparate entities.

This helps explain why companies like Google and Microsoft are encouraging enterprise brands to invest in structured data – it’s the connective tissue that helps AI systems retrieve accurate, contextual information.

Beyond Page-Level SEO: Building Knowledge Graphs

There’s an important distinction between optimizing a single page for SEO and building a knowledge graph that connects your entire enterprise’s content. In a recent interview, Robby Stein, VP of Product at Google, noted that AI queries can involve dozens of subqueries behind the scenes (known as query fan-out). This suggests a level of complexity that demands a more holistic approach.

To succeed in this environment, brands must move beyond optimizing pages and instead build knowledge graphs, or rather, a data layer that represents the full context of their business.

The Semantic Web Vision, Realized

What’s really exciting is that the vision for the semantic web is here. As Tim Berners-Lee, Ora Lassila, and James Hendler wrote in “The Semantic Web” (Scientific American, 2001):

“The Semantic Web will enable machines to comprehend semantic documents and data, and enable software agents roaming from page to page to execute sophisticated tasks for users.”

We’re seeing this unfold today, with transactions and queries happening directly within AI systems like ChatGPT. Microsoft is already preparing for the next stage, often called the “agentic web.” In November 2024, RV Guha – creator of Schema.org and now at Microsoft – announced an open project called NLWeb. The goal of NLWeb is to be “the fastest and easiest way to effectively turn your website into an AI app, allowing users to query the contents of the site by directly using natural language, just like with an AI assistant or Copilot.”

In a recent conversation I had with Guha, he shared that NLWeb’s vision is to be the endpoint for agents to interact with websites. NLWeb will use structured data to do this:

“NLWeb leverages semi-structured formats like Schema.org…to create natural language interfaces usable by both humans and AI agents.”

Turning The Dark Funnel Into An Intelligent One

Just as we lack real metrics for measuring brand performance in ChatGPT and Perplexity, we also don’t yet have full metrics for schema markup’s role in AI visibility. But we do have clear, consistent signals from Google and Microsoft that their AI experiences do, in part, use structured data to understand content.

The future of marketing belongs to brands that are both understood and trusted by machines. Structured data is one factor towards making that happen.



Featured Image: Roman Samborskyi/Shutterstock

Google On Generic Top Level Domains For SEO via @sejournal, @martinibuster

Google’s John Mueller answered a question about whether a generic Top Level Domain (gTLD) with a keyword in it offered any SEO advantage. His answer was in the context of a specific keyword TLD, but the topic involves broader questions about how Google evaluates TLDs in general.

Generic Top Level Domains (gTLDs)

gTLDs are domains that have a theme that relates to a topic or a purpose. The most commonly known ones are .com (generally used for commercial purposes) and .org (typically used for non-profit organizations).

The availability of unique keyword-based gTLDs exploded in 2013. Now there are hundreds of gTLDs with which a website can brand itself and stand out.

Is There SEO Value In gTLDs?

The person asking the question on Reddit wanted to know if there’s an SEO value to registering a .music gTLD. The regular .com version of the domain name they wanted was not available but the .music version was.

The question they asked was:

“Noticed .music domains available and curious if it is relevant, growing etc or does the industry not care about it whatsoever? Is it worth reserving yours anyways just so someone else can’t have it, in case it becomes a thing?”

Are gTLDs Useful For SEO Purposes?

Google’s John Mueller limited his response to whether gTLDs offered SEO value, and his answer was no.

He answered:

“There’s absolutely no SEO advantage from using a .music domain.”

The funny thing about SEO is that Google’s standard of relevance is based on humans while SEOs think of relevance in terms of what Google thinks is relevant.

This sets up a huge disconnect between SEOs on one side who are creating websites that are keyword optimized for Google while Google itself is analyzing billions of user behavior signals because it’s optimizing search results for humans.

Optimizing For Humans With gTLDs

The thing about SEO is that it’s search engine optimization. When venturing out on the web it’s easy to forget that every website must be optimized for humans, too. Aside from spammy TLDs which can be problematic for SEO, the choice of a TLD isn’t important for SEO but it could be important for Human Optimization.

Optimizing for humans is a good idea because human interactions with search engines and websites generate signals that Google uses at scale to better understand what users mean by their queries and what kinds of sites they expect to see for those queries. Some user-generated signals, like searching by brand name, can tell Google that a particular brand is popular and is associated with a particular service, product, or keyword phrase (read about Google’s patent on branded search).

Circling back to optimizing for humans, if a particular gTLD is something that humans may associate with a brand, product, or service then there is something there that can be useful for making a site attractive to users.

I have experimented in the past with various gTLDs and found that I was able to build links more easily to .org domains than to the .com or .net versions. That’s an example of how a gTLD can be optimized for humans and lead to success.

I discovered that overtly commercial affiliate sites on .org domains ranked and converted well. They didn’t rank because they were .org, though. The sites were top-ranked because humans responded well to the sites I created with that gTLD. It was easier to build links to them, for example. I have no doubt that people trusted my affiliate sites a little more because they were created on .org domains.

Optimizing for humans is conversion optimization. It’s super important.

Optimizing For Humans With Keyword-Based gTLDs

I haven’t played around with keyword gTLDs but I suspect that what I experienced with .org domains could happen with a keyword-based gTLD because a meaningful gTLD may communicate positive feelings or relevance to humans. You can call it branding but I think that the word “branding” is too abstract. I prefer the phrase optimizing for humans because in the end that’s what branding is really about.

So maybe it’s time we ditched bla,bla,bla-ing about branding and started talking about optimizing for humans. If that person had considered the question from the perspective of human optimization they may have been able to answer the question themselves.

When SEOs talk about relevance it seems like they’re generally referring to how relevant something is to Google. Relevance to Google is what was top of mind to the person asking the question about the .music gTLD and it might be why you’re reading this article.

Heck, relevance to search engines is what all that “entity” optimization hand waving is all about, right? Focusing on being relevant to search engines is a limited way to chase after success. For example, I cracked the code with the .org domains by focusing on humans.

At a certain point, if you’re trying to be successful online, it may be useful to take a step back and start thinking more about how relevant the content, colors, and gTLDs are to humans and you might discover that being relevant to humans makes it easier to be relevant to search engines.

Featured Image by Shutterstock/Kues

From Listings to Loyalty: The New Role of Local Search in Customer Experience

Ask yourself the following:

  • Do you reply to reviews?
  • Do you engage?
  • Do you make the interaction feel personal?
  • Do you follow through on your promises?
  • Do you keep information consistent across every platform?
  • Do you share fresh updates (ex: photos, posts, or promotions) that show you’re active?
  • Do you provide transparent details like pricing, wait times, or insurance accepted?

If you answered no to any of the aforementioned, it’s time to switch to a brand experience mentality. That shift shows up clearly in the data. Six in ten people say they at least sometimes click on Google’s AI-generated overviews, which means discovery is no longer only about traditional rankings. It’s about whether your brand shows up well when search engines pull together information in context.

Reputation follows the same logic. In Rio SEO’s latest study, three out of four consumers said they read at least four reviews before deciding where to go. And it’s not just the rating itself. Many put just as much weight on whether a business responds; silence feels like neglect, while engagement signals you’re listening.

The clock has also sped up. Nearly six in ten customers now expect a reply within 24 hours, a sharp jump from last year. For many, that means a same-day response is the expectation. Fast, human replies aren’t a nice touch anymore; they’re the baseline.

The major search platforms reinforce this reality. Google’s local pack favors businesses that post fresh photos, keep details up to date, and engage with reviews (and not just negative reviews but positive ones too). Apple Maps is becoming harder to ignore as well; Rio SEO’s research reveals that about a third of consumers now use it frequently. With Siri, Safari, and iPhones all pulling from Apple Business Connect as the default, accurate profiles there can tip the balance just as much as on Google.

Put it all together, and the picture is clear: search visibility and customer experience are already intertwined. The brands thriving in 2025 treat local search as part of a unified Brand Experience strategy and Rio SEO helps brands stay visible, responsive, and trusted wherever customers are searching.

The BX Advantage: Connecting Signals to Action

Every brand gathers signals. Search clicks, review scores, survey feedback; it all piles up. The trouble is most of it never makes it past a slide deck. Customers don’t feel or see the difference.

That’s where Brand Experience (BX) comes in. BX connects visibility and reputation with actionable insights, so signals don’t just sit in a dashboard.

At Rio SEO, we put BX into motion. Our Local Experience solutions help brands connect discovery with delivery and turn what customers see in search into what they feel in real life. It’s the bridge between data and experience, helping enterprise marketers identify patterns, respond faster, and build trust at every location.

The goal isn’t to watch the numbers. It’s to quickly identify and make changes customers notice, such as faster check-ins, smoother booking, and clearer answers in search; all of which amount to better experiences and outcomes, for customers and employees alike.

Technology helps make this possible. AI platforms now tie search data, reviews, and feedback into one view. With predictive analytics layered in, teams can see trouble before it shows up at the front desk or checkout line. And with Google’s AI Overviews and Bing’s Copilot changing how people discover businesses, brands that prepare for those formats now will have an edge when others are still catching up.

Industry context shapes how this plays out. A retailer might connect “near me” searches to what’s actually on the shelf that week. A bank has to prove reliability every time someone checks a branch profile. A hospital needs to make sure that when a patient searches for “urgent care,” the hours, insurance info, and provider reviews are accurate that very day. Different settings, same principle: close the gap between what people see online and what they experience in real life.

And this isn’t just about dashboards. The real win comes from acting quickly on what the signals show. Think about two retailers with dipping review scores. One shrugs and logs it. The other digs deeper, notices the complaints all mention stockouts in one region, and shifts supply within days. Customers stay loyal because the brand responded, not because it had a prettier chart.

That’s the difference BX is designed to create. Reports tell you what already happened. Acting on those signals shapes what happens next.

The New Mandate for Marketing Leaders

In the experience economy, BX isn’t abstract; it’s actionable. And Rio SEO gives brands the tools, data, and automation to operationalize it, turning every search, review, and update into a moment that builds loyalty and long-term growth.

Today’s marketing leaders aren’t being judged on traffic spikes anymore. What matters now is whether customers stick around, how much value they bring over time, and what it costs to serve them. That shift changes everything about the role of local search and puts Brand Experience (BX) at the center of the conversation.

When search is treated as a checklist—hours updated, pin fixed, job done—brands miss the bigger opportunity. Worse, they give ground to competitors who recognize that discovery is experience, and experience drives revenue.

BX gives CMOs and marketing leaders a framework for connecting visibility, reputation, and responsiveness. It bridges the gap between what people see in search and what they experience when they engage. And that’s where Rio SEO delivers real advantage: by giving brands the unified data, automation, and insights to make BX tangible in every market, every listing, and every moment.

You can see the difference in how leaders approach it across divergent industries:

  • Retail: Linking “near me” searches directly to in-stock inventory so shoppers know what’s available before they walk in.
  • Restaurants: Connecting menu updates and “order online” links directly to local search profiles, so when a customer searches “Thai takeout near me,” they see real-time specials, accurate hours, and an easy path to order.
  • Financial Services: Displaying verified first-party reviews on branch profiles to boost credibility and reassure customers choosing where to bank.

Image by Rio SEO, Nov 2025

The common thread is dependability. Local search is no longer about being visible once. It’s about proving, again and again, that your brand can be trusted in the small but decisive moments when customers are making up their minds. BX provides the vision; Rio SEO provides the infrastructure to bring it to life: connecting discovery with loyalty in a world where customers expect precision, empathy, and instant answers.

The Strategic Case for Local Search

The business case for local search doesn’t sit on the margins anymore. It ties directly to growth, trust, and efficiency. Within a Brand Experience (BX) framework, it links customer intent with measurable business outcomes, and Rio SEO gives brands the precision tools to manage that connection at scale.

Revenue Starts Here

Local search is full of high-intent signals: someone taps “call now,” asks for directions, or books an appointment. These signals mark crucial moments that can lead to sales, often within hours. In fact, most local searchers buy within 48 hours: three-quarters of restaurant seekers and nearly two-thirds of retail shoppers. That urgency makes consistency and accessibility non-negotiable.

Trust is Built in the Details

Reviews have become a kind of reputation currency, and customers spend it carefully. Three out of four people read at least four reviews before making a choice. If the basics are wrong—a missing phone number, the wrong hours—trust evaporates. More than half of consumers say they won’t visit a business if the listing details are off. Rio SEO’s centralized platform keeps data clean and consistent, ensuring that every profile communicates reliability, the foundation of trust in BX.

Efficiency That Pays for Itself

Every time insights from search and feedback flow back into operations, friction disappears before it gets expensive. Accurate listings mean fewer misrouted calls. Quick review responses calm frustration before it snowballs. Clear online paths reduce the burden on service teams.

In healthcare, that can mean shorter call center queues. In financial services, fewer “where do I start?” calls during onboarding. For retailers, avoiding wasted trips when hours are wrong keeps customers coming back instead of leaving disappointed. Each fix trims cost-to-serve while strengthening trust—a rare double win. Rio SEO automates these workflows, saving teams time while enhancing experience quality.

Your Edge Over the Competition

Too many organizations still keep SEO and CX in separate lanes. BX unites them and Rio SEO operationalizes that unity. The ones who bring those signals together see patterns earlier, act faster, and pull ahead of rivals who are still optimizing for clicks instead of experiences.

The Power of Brand Experience

BX blends rigorous data with customer-centric urgency. It gives leaders a way to not only show up in search but to be chosen, trusted, and remembered.

Winning the Experience Economy Starts in Local Search

Search no longer waits for a typed query. With AI Overviews, predictive results, and personalized recommendations, it increasingly anticipates what people want and surfaces the businesses most likely to deliver.

That shift raises the bar. In this new environment, local search isn’t a maintenance task but rather the front line of Brand Experience (BX). Accuracy, responsiveness, and reputation aren’t side jobs anymore; they’re the signals that decide who gets noticed, who gets trusted, and who gets passed over.

The companies setting the pace already treat local presence as a growth engine, not a maintenance task. They link discovery with delivery, reviews with real replies, and feedback with action. Competitors who don’t will find themselves playing catch-up in an economy where expectations reset every day.

The message is clear: customers don’t separate search from experience, and neither can you. Local search is now where growth, trust, and efficiency intersect. Handle it as a checklist, and you’ll fall behind. Treat it as a lever for Brand Experience, and you’ll define the standard others have to meet.

That’s where Rio SEO makes the difference. We help enterprise brands connect the dots between visibility, data, and experience, empowering marketers to act on signals faster, measure impact clearly, and deliver consistency at scale. With Rio SEO, brands don’t just show up in search; they stand out, stay accurate, and turn visibility into measurable growth.

Image by Rio SEO, Nov 2025

Ready to lead in the era of AI-driven discovery?

Partner with Rio SEO to transform your local presence into a connected, data-powered experience that builds trust, drives action, and earns loyalty at every location, on every platform, every day.

Learn more about Rio SEO’s Local Experience solutions today.

What is the chance your plane will be hit by space debris?

MIT Technology Review Explains: Let our writers untangle the complex, messy world of technology to help you understand what’s coming next.

In mid-October, a mysterious object cracked the windshield of a packed Boeing 737 cruising at 36,000 feet above Utah, forcing the pilots into an emergency landing. The internet was suddenly buzzing with the prospect that the plane had been hit by a piece of space debris. We still don’t know exactly what hit the plane—likely a remnant of a weather balloon—but it turns out the speculation online wasn’t that far-fetched.

That’s because while the risk of flights being hit by space junk is still small, it is, in fact, growing. 

About three pieces of old space equipment—used rockets and defunct satellites—fall into Earth’s atmosphere every day, according to estimates by the European Space Agency. By the mid-2030s, there may be dozens. The increase is linked to the growth in the number of satellites in orbit. Currently, around 12,900 active satellites circle the planet. In a decade, there may be 100,000 of them, according to analyst estimates.

To minimize the risk of orbital collisions, operators guide old satellites to burn up in Earth’s atmosphere. But the physics of that reentry process are not well understood, and we don’t know how much material burns up and how much reaches the ground.

“The number of such landfall events is increasing,” says Richard Ocaya, a professor of physics at the University of the Free State in South Africa and a coauthor of a recent paper on space debris risk. “We expect it may be increasing exponentially in the next few years.”

So far, space debris hasn’t injured anybody—in the air or on the ground. But multiple close calls have been reported in recent years. In March last year, a 0.7-kilogram chunk of metal pierced the roof of a house in Florida. The object was later confirmed to be a remnant of a battery pallet tossed out from the International Space Station. When the strike occurred, the homeowner’s 19-year-old son was resting in a next-door room.

And in February this year, a 1.5-meter-long fragment of SpaceX’s Falcon 9 rocket crashed down near a warehouse outside Poland’s fifth-largest city, Poznan. Another piece was found in a nearby forest. A month later, a 2.5-kilogram piece of a Starlink satellite dropped on a farm in the Canadian province of Saskatchewan. Other incidents have been reported in Australia and Africa. And many more may be going completely unnoticed. 

“If you were to find a bunch of burnt electronics in a forest somewhere, your first thought is not that it came from a spaceship,” says James Beck, the director of the UK-based space engineering research firm Belstead Research. He warns that we don’t fully understand the risk of space debris strikes and that it might be much higher than satellite operators want us to believe. 

For example, SpaceX, the owner of the currently largest mega-constellation, Starlink, claims that its satellites are “designed for demise” and completely burn up when they spiral from orbit and fall through the atmosphere.

But Beck, who has performed multiple wind tunnel tests using satellite mock-ups to mimic atmospheric forces, says the results of such experiments raise doubts. Some satellite components are made of durable materials such as titanium and special alloy composites that don’t melt even at the extremely high temperatures that arise during a hypersonic atmospheric descent. 

“We have done some work for some small-satellite manufacturers and basically, their major problem is that the tanks get down,” Beck says. “For larger satellites, around 800 kilos, we would expect maybe two or three objects to land.” 

It can be challenging to quantify how much of a danger space debris poses. The International Civil Aviation Organization (ICAO) told MIT Technology Review that “the rapid growth in satellite deployments presents a novel challenge” for aviation safety, one that “cannot be quantified with the same precision as more established hazards.” 

But the Federal Aviation Administration has calculated some preliminary numbers on the risk to flights: In a 2023 analysis, the agency estimated that by 2035, the risk that a plane somewhere in the world will suffer a disastrous space debris strike in a given year will be around 7 in 10,000. Such a collision would either destroy the aircraft immediately or lead to a rapid loss of air pressure, threatening the lives of all on board.

The casualty risk to humans on the ground will be much higher. Aaron Boley, an associate professor of astronomy and a space debris researcher at the University of British Columbia, Canada, says that if megaconstellation satellites “don’t demise entirely,” the risk of a single human death or injury caused by a space debris strike on the ground could reach around 10% per year by 2035. That would mean a better than even chance that someone on Earth would be hit by space junk about every decade. In its report, the FAA put the chances even higher under similar assumptions, estimating that “one person on the planet would be expected to be injured or killed every two years.”
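The “about every decade” framing follows from simple probability: if there is roughly a 10% chance of at least one ground casualty in any given year, the chance of at least one over ten years is 1 − 0.9^10, or about 65%. The short sketch below simply reproduces that arithmetic; the 10% figure is Boley’s estimate, and treating years as independent is an assumption made for illustration.

```python
# Back-of-the-envelope check of the "better than even chance every decade" claim.
# Assumes (for illustration only) a constant 10% annual probability and independent years.
annual_risk = 0.10
years = 10

# Probability of at least one ground casualty over the period.
p_at_least_one = 1 - (1 - annual_risk) ** years
print(f"Chance of at least one casualty in {years} years: {p_at_least_one:.0%}")  # ~65%
```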

Experts are starting to think about how they might incorporate space debris into their air safety processes. The German space situational awareness company Okapi Orbits, for example, in cooperation with the German Aerospace Center and the European Organization for the Safety of Air Navigation (Eurocontrol), is exploring ways to adapt air traffic control systems so that pilots and air traffic controllers can receive timely and accurate alerts about space debris threats.

But predicting the path of space debris is challenging too. In recent years, advances in AI have helped improve predictions of space objects’ trajectories in the vacuum of space, potentially reducing the risk of orbital collisions. But so far, these algorithms can’t properly account for the effects of the gradually thickening atmosphere that space junk encounters during reentry. Radar and telescope observations can help, but the exact impact location becomes clear only at very short notice.

“Even with high-fidelity models, there’s so many variables at play that having a very accurate reentry location is difficult,” says Njord Eggen, a data analyst at Okapi Orbits. Space debris goes around the planet every hour and a half when in low Earth orbit, he notes, “so even if you have uncertainties on the order of 10 minutes, that’s going to have drastic consequences when it comes to the location where it could impact.”
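Eggen’s point is easy to put in rough numbers: an object in low Earth orbit travels at about 7.8 kilometers per second, so a 10-minute uncertainty in reentry time translates into thousands of kilometers of uncertainty along the ground track. The sketch below is an illustrative calculation, not Okapi Orbits’ prediction method; the orbital speed is a typical low-Earth-orbit value.

```python
# Illustrative only: how a 10-minute reentry-time uncertainty maps to ground-track distance.
# Uses a typical low-Earth-orbit speed; this is not Okapi Orbits' prediction model.
leo_speed_km_s = 7.8           # approximate orbital speed in low Earth orbit
uncertainty_minutes = 10

distance_km = leo_speed_km_s * uncertainty_minutes * 60
print(f"A {uncertainty_minutes}-minute timing error spans ~{distance_km:,.0f} km of ground track")
# ~4,680 km -- roughly the width of the continental United States
```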

For aviation companies, the problem is not just a potential strike, as catastrophic as that would be. To avoid accidents, authorities are likely to temporarily close the airspace in at-risk regions, which creates delays and costs money. Boley and his colleagues published a paper earlier this year estimating that busy airspace regions such as northern Europe or the northeastern United States already face about a 26% yearly chance of experiencing at least one disruption due to the reentry of a major piece of space debris. By the time all planned constellations are fully deployed, airspace closures due to space debris hazards may become nearly as common as those due to bad weather.

Because current reentry predictions are unreliable, many of these closures may end up being unnecessary.

For example, when a 21-metric-ton Chinese Long March mega-rocket was falling to Earth in 2022, predictions suggested its debris could scatter across Spain and parts of France. In the end, the rocket crashed into the Pacific Ocean. But the 30-minute closure of southern European airspace delayed and diverted hundreds of flights.

In the meantime, international regulators are urging satellite operators and launch providers to deorbit large satellites and rocket bodies in a controlled way, when possible, by carefully guiding them into remote parts of the ocean using residual fuel. 

The European Space Agency estimates that only about half the rocket bodies reentering the atmosphere do so in a controlled way. 

Moreover, around 2,300 old and no-longer-controllable rocket bodies still linger in orbit, slowly spiraling toward Earth with no mechanisms for operators to safely guide them into the ocean.

“There’s enough material up there that even if we change our practices, we will still have all those rocket bodies eventually reenter,” Boley says. “Although the probability of space debris hitting an aircraft is small, the probability that the debris will spread and fall over busy airspace is not small. That’s actually quite likely.”

The Download: the risk of falling space debris, and how to debunk a conspiracy theory

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

What is the chance your plane will be hit by space debris?

The risk of flights being hit by space junk is still small, but it’s growing.

About three pieces of old space equipment—used rockets and defunct satellites—fall into Earth’s atmosphere every day, according to estimates by the European Space Agency. By the mid-2030s, there may be dozens thanks to the rise of megaconstellations in orbit.

So far, space debris hasn’t injured anybody—in the air or on the ground. But multiple close calls have been reported in recent years.

But some estimates have the risk of a single human death or injury caused by a space debris strike on the ground at around 10% per year by 2035. That would mean a better than even chance that someone on Earth would be hit by space junk about every decade. Find out more.

—Tereza Pultarova

This story is part of MIT Technology Review Explains: our series untangling the complex, messy world of technology to help you understand what’s coming next. You can read the rest of the series here.

Chatbots are surprisingly effective at debunking conspiracy theories

—Thomas Costello, Gordon Pennycook & David Rand

Many people believe that you can’t talk conspiracists out of their beliefs. 

But that’s not necessarily true. Our research shows that many conspiracy believers do respond to evidence and arguments—information that is now easy to deliver in the form of a tailored conversation with an AI chatbot.

This is good news, given the outsize role that unfounded conspiracy theories play in today’s political landscape. So while there are widespread and legitimate concerns that generative AI is a potent tool for spreading disinformation, our work shows that it can also be part of the solution. Read the full story.

This story is part of MIT Technology Review’s series “The New Conspiracy Age,” on how the present boom in conspiracy theories is reshaping science and technology. Check out the rest of the series here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 China is quietly expanding its remote nuclear test site
In the wake of Donald Trump announcing America’s intentions to revive similar tests. (WP $)
+ A White House memo has accused Alibaba of supporting Chinese operations. (FT $)

2 Jeff Bezos is becoming co-CEO of a new AI startup
Project Prometheus will focus on AI for manufacturing in computing, aerospace, and vehicles. (NYT $)

3 AI-powered toys are holding inappropriate conversations with children 
Including where to find dangerous objects such as pills and knives. (The Register)
+ Chatbots are unreliable and unpredictable, whether embedded in toys or not. (Futurism)
+ AI toys are all the rage in China—and now they’re appearing on shelves in the US too. (MIT Technology Review)

4 Big Tech is warming to the idea of data centers in space
They come with a lot less red tape than their Earth-bound counterparts. (WSJ $)
+ There are a huge number of data centers mired in the planning stage. (WSJ $)
+ Should we be moving data centers to space? (MIT Technology Review)

5 The mafia is recruiting via TikTok
Some bosses are even using the platform to control gangs from behind bars. (Economist $)

6 How to resist AI in your workplace
Like most things in life, there’s power in numbers. (Vox)

7 How China’s EV fleet could become a giant battery network
If economic troubles don’t get in the way, that is. (Rest of World)
+ EV sales are on the rise in South America. (Reuters)
+ China’s energy dominance in three charts. (MIT Technology Review)

8 Inside the unstoppable rise of the domestic internet
Control-hungry nations are following China’s lead in building closed platforms. (NY Mag $)
+ Can we repair the internet? (MIT Technology Review)

9 Search traffic? What search traffic?
These media startups have found a way to thrive without Google. (Insider $)
+ AI means the end of internet search as we’ve known it. (MIT Technology Review)

10 Paul McCartney has released a silent track to protest AI’s creep into music
That’ll show them! (The Guardian)
+ AI is coming for music, too. (MIT Technology Review)

Quote of the day

“All the parental controls in the world will not protect your kids from themselves.”

—Samantha Broxton, a parenting coach and consultant, tells the Washington Post why educating children about the risks of using technology is the best way to help them protect themselves.

One more thing

Inside the controversial tree farms powering Apple’s carbon neutral goal

Apple, like its peers, is planting vast forests of eucalyptus trees in Brazil to try to offset its climate emissions, striking some of the largest-ever deals for carbon credits in the process.

The tech behemoth is betting that planting millions of eucalyptus trees in Brazil will be the path to a greener future. Some ecologists and local residents are far less sure.

The big question is: Can Latin America’s eucalyptus be a scalable climate solution? Read the full story.

—Gregory Barber

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Shepard Fairey’s retrospective show in LA looks very cool.
+ Check out these fascinating scientific breakthroughs that have been making waves over the past 25 years.
+ Good news—sweet little puffins are making a comeback in Ireland.
+ Maybe we should all be getting into Nordic walking.

The State of AI: How war will be changed forever

Welcome back to The State of AI, a new collaboration between the Financial Times and MIT Technology Review. Every Monday, writers from both publications debate one aspect of the generative AI revolution reshaping global power.

In this conversation, Helen Warrell, FT investigations reporter and former defense and security editor, and James O’Donnell, MIT Technology Review’s senior AI reporter, consider the ethical quandaries and financial incentives around AI’s use by the military.

Helen Warrell, FT investigations reporter 

It is July 2027, and China is on the brink of invading Taiwan. Autonomous drones with AI targeting capabilities are primed to overpower the island’s air defenses as a series of crippling AI-generated cyberattacks cut off energy supplies and key communications. In the meantime, a vast disinformation campaign enacted by an AI-powered pro-Chinese meme farm spreads across global social media, deadening the outcry at Beijing’s act of aggression.

Scenarios such as this have brought dystopian horror to the debate about the use of AI in warfare. Military commanders hope for a digitally enhanced force that is faster and more accurate than human-directed combat. But there are fears that as AI assumes an increasingly central role, these same commanders will lose control of a conflict that escalates too quickly and lacks ethical or legal oversight. Henry Kissinger, the former US secretary of state, spent his final years warning about the coming catastrophe of AI-driven warfare.

Grasping and mitigating these risks is the military priority—some would say the “Oppenheimer moment”—of our age. One emerging consensus in the West is that decisions around the deployment of nuclear weapons should not be outsourced to AI. UN secretary-general António Guterres has gone further, calling for an outright ban on fully autonomous lethal weapons systems. It is essential that regulation keep pace with evolving technology. But in the sci-fi-fueled excitement, it is easy to lose track of what is actually possible. As researchers at Harvard’s Belfer Center point out, AI optimists often underestimate the challenges of fielding fully autonomous weapon systems. It is entirely possible that the capabilities of AI in combat are being overhyped.

Anthony King, director of the Strategy and Security Institute at the University of Exeter and a key proponent of this argument, suggests that rather than replacing humans, AI will be used to improve military insight. Even if the character of war is changing and remote technology is refining weapon systems, he insists, “the complete automation of war itself is simply an illusion.”

Of the three current military use cases of AI, none involves full autonomy. It is being developed for planning and logistics; for cyber warfare (in sabotage, espionage, hacking, and information operations); and—most controversially—for weapons targeting, an application already in use on the battlefields of Ukraine and Gaza. Kyiv’s troops use AI software to direct drones able to evade Russian jammers as they close in on sensitive sites. The Israel Defense Forces have developed an AI-assisted decision support system known as Lavender, which has helped identify around 37,000 potential human targets within Gaza.

Image: Helen Warrell and James O’Donnell. FT/MIT Technology Review | Adobe Stock

There is clearly a danger that the Lavender database replicates the biases of the data it is trained on. But military personnel carry biases too. One Israeli intelligence officer who used Lavender claimed to have more faith in the fairness of a “statistical mechanism” than that of a grieving soldier.

Tech optimists designing AI weapons even deny that specific new controls are needed to govern their capabilities. Keith Dear, a former UK military officer who now runs the strategic forecasting company Cassi AI, says existing laws are more than sufficient: “You make sure there’s nothing in the training data that might cause the system to go rogue … when you are confident you deploy it—and you, the human commander, are responsible for anything they might do that goes wrong.”

It is an intriguing thought that some of the fear and shock about the use of AI in war may come from those who are unfamiliar with brutal but realistic military norms. What do you think, James? Is some opposition to AI in warfare less about the use of autonomous systems and more an argument against war itself?

James O’Donnell replies:

Hi Helen, 

One thing I’ve noticed is that there’s been a drastic shift in AI companies’ attitudes toward military applications of their products. At the beginning of 2024, OpenAI unambiguously forbade the use of its tools for warfare, but by the end of the year, it had signed an agreement with Anduril to help it take down drones on the battlefield.

This step—not a fully autonomous weapon, to be sure, but very much a battlefield application of AI—marked a drastic change in how much tech companies could publicly link themselves with defense. 

What happened along the way? For one thing, it’s the hype. We’re told AI will not just bring superintelligence and scientific discovery but also make warfare sharper, more accurate and calculated, less prone to human fallibility. I spoke with US Marines, for example, who, while patrolling the South Pacific, tested a type of AI advertised as being able to analyze foreign intelligence faster than a human could.

Secondly, money talks. OpenAI and others need to start recouping some of the unimaginable amounts of cash they’re spending on training and running these models. Few have deeper pockets than the Pentagon, and Europe’s defense chiefs seem keen to splash the cash too. Meanwhile, the amount of venture capital funding for defense tech this year has already doubled the total for all of 2024, as VCs hope to cash in on militaries’ newfound willingness to buy from startups.

I do think the opposition to AI warfare falls into a few camps, one of which simply rejects the idea that more precise targeting (if it’s actually more precise at all) will mean fewer casualties rather than just more war. Consider the first era of drone warfare in Afghanistan. As drone strikes became cheaper to implement, can we really say it reduced carnage? Instead, did it merely enable more destruction per dollar?

But the second camp of criticism (and now I’m finally getting to your question) comes from people who are well versed in the realities of war but have very specific complaints about the technology’s fundamental limitations. Missy Cummings, for example, is a former fighter pilot for the US Navy who is now a professor of engineering and computer science at George Mason University. She has been outspoken in her belief that large language models, specifically, are prone to make huge mistakes in military settings.

The typical response to this complaint is that AI’s outputs are human-checked. But if an AI model relies on thousands of inputs for its conclusion, can that conclusion really be checked by one person?

Tech companies are making extraordinarily big promises about what AI can do in these high-stakes applications, all while pressure to implement them is sky high. For me, this means it’s time for more skepticism, not less. 

Helen responds:

Hi James, 

We should definitely continue to question the safety of AI warfare systems and the oversight to which they’re subjected—and hold political leaders to account in this area. I am suggesting that we also apply some skepticism to what you rightly describe as the “extraordinarily big promises” made by some companies about what AI might be able to achieve on the battlefield. 

There will be both opportunities and hazards in what the military is being offered by a relatively nascent (though booming) defense tech scene. The danger is that in the speed and secrecy of an arms race in AI weapons, these emerging capabilities may not receive the scrutiny and debate they desperately need.

Further reading:

Michael C. Horowitz, director of Perry World House at the University of Pennsylvania, explains the need for responsibility in the development of military AI systems in this FT op-ed.

The FT’s tech podcast asks what Israel’s defense tech ecosystem can tell us about the future of warfare.

This MIT Technology Review story analyzes how OpenAI completed its pivot to allowing its technology on the battlefield.

MIT Technology Review also uncovered how US soldiers are using generative AI to help scour thousands of pieces of open-source intelligence.