The New Optimization Stack: Where SEO Meets AI Retrieval via @sejournal, @DuaneForrester

Search isn’t ending. It’s evolving.

Across the industry, the systems powering discovery are diverging. Traditional search runs on algorithms designed to crawl, index, and rank the web. AI-driven systems like Perplexity, Gemini, and ChatGPT interpret it through models that retrieve, reason, and respond. That quiet shift (from ranking pages to reasoning with content) is what’s breaking the optimization stack apart.

What we’ve built over the last 20 years still matters: clean architecture, internal linking, crawlable content, structured data. That’s the foundation. But the layers above it are now forming their own gravity. Retrieval engines, reasoning models, and AI answer systems are interpreting information differently, each through its own set of learned weights and contextual rules.

Think of it like moving from high school to university. You don’t skip ahead. You build on what you’ve already learned. The fundamentals (crawlability, schema, speed) still count. They just don’t get you the whole grade anymore. The next level of visibility happens higher up the stack, where AI systems decide what to retrieve, how to reason about it, and whether to include you in their final response. That’s where the real shift is happening.

Traditional search isn’t falling off a cliff, but if you’re only optimizing for blue links, you’re missing where discovery is expanding. We’re in a hybrid era now, where old signals and new systems overlap. Visibility isn’t just about being found; it’s about being understood by the models that decide what gets surfaced.

This is the start of the next chapter in optimization, and it’s not really a revolution. It’s more of a progression. The web we built for humans is being reinterpreted for machines, and that means the work is changing. Slowly, but unmistakably.

Image Credit: Duane Forrester

Algorithms Vs. Models: Why This Shift Matters

Traditional search was built on algorithms: rule-based, linear systems that move step by step through logic or math until they reach a defined answer. You can think of them like a formula: Start at A, process through B, solve for X. Each input follows a predictable path, and if you run the same inputs again, you’ll get the same result. That’s how PageRank, crawl scheduling, and ranking formulas worked. Deterministic and measurable.

AI-driven discovery runs on models, which operate very differently. A model isn’t executing one equation; it’s balancing thousands or millions of weights across a multi-dimensional space. Each weight reflects the strength of a learned relationship between pieces of data. When a model “answers” something, it isn’t solving a single equation; it’s navigating a spatial landscape of probabilities to find the most likely outcome.

You can think of algorithms as linear problem-solving (moving from start to finish along a fixed path) while models perform spatial problem-solving, exploring many paths simultaneously. That’s why models don’t always produce identical results on repeated runs. Their reasoning is probabilistic, not deterministic.
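That determinism is easy to see in code. Below is a minimal, illustrative sketch of PageRank-style power iteration on an invented three-page link graph; run it twice and you get identical scores, which is exactly the property probabilistic models give up.

```python
# A toy deterministic ranking algorithm: PageRank via power iteration.
# The link graph is invented for illustration.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        # Every page keeps a baseline share, plus damped shares from inlinks.
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                continue
            share = damping * rank[page] / len(outlinks)
            for target in outlinks:
                new_rank[target] += share
        rank = new_rank
    return rank

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
scores = pagerank(graph)
print(scores)  # identical on every run: same inputs, same path, same result
```

Page "c" ends up with the highest score because it receives links from both "a" and "b"; the point is not the math itself but that the output is fully reproducible.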

The trade-offs are real:

  • Algorithms are transparent, explainable, and reproducible, but rigid.
  • Models are flexible, adaptive, and creative, but opaque and prone to drift.

An algorithm decides what to rank. A model decides what to mean.

It’s also important to note that models are built on layers of algorithms, but once trained, their behavior becomes emergent. They infer rather than execute. That’s the fundamental leap and why optimization itself now spans multiple systems.

Algorithms governed a single ranking system. Models now govern multiple interpretation systems (retrieval, reasoning, and response), each trained differently, each deciding relevance in its own way.

So, when someone says, “the AI changed its algorithm,” they’re missing the real story. It didn’t tweak a formula. It evolved its internal understanding of the world.

Layer One: Crawl And Index, Still The Gatekeeper

You’re still in high school, and doing the work well still matters. The foundations of crawlability and indexing haven’t gone away. They’re the prerequisites for everything that comes next.

According to Google, search happens in three stages: crawling, indexing, and serving. If a page isn’t reachable or indexable, it never even enters the system.

That means your URL structure, internal links, robots.txt, site speed, and structured data still count. One SEO guide defines it this way: “Crawlability is when search bots discover web pages. Indexing is when search engines analyze and store the information collected during the crawling process.”

Get these mechanics right and you’re eligible for visibility, but eligibility isn’t the same as discovery at scale. The rest of the stack is where differentiation happens.

If you treat the fundamentals as optional or skip them for shiny AI-optimization tactics, you’re building on sand. The university of AI Discovery still expects you to have the high school diploma. Audit your site’s crawl access, index status, and canonical signals. Confirm that bots can reach your pages, that no-index traps aren’t blocking important content, and that your structured data is readable.
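A crawl-access audit can start very simply. The sketch below, using only the Python standard library, checks sample URLs against a robots.txt policy; the rules and URLs are invented for illustration, and a real audit would fetch your live robots.txt and check index status too.

```python
# Minimal crawlability check: are these URLs blocked for a crawler?
# robots.txt content and URLs are placeholders.

from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: *
Disallow: /cart/
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

urls = [
    "https://example.com/products/blue-mug",
    "https://example.com/cart/checkout",
]
for url in urls:
    allowed = parser.can_fetch("Googlebot", url)
    print(url, "->", "crawlable" if allowed else "blocked")
```

The same idea scales up: feed in your sitemap URLs and flag anything important that comes back blocked.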

Only once the base layer is solid should you lean into the next phases of vector retrieval, reasoning, and response-level optimization. Otherwise, you’re optimizing blind.

Layer Two: Vector And Retrieval, Where Meaning Lives

Now you’ve graduated high school and you’re entering university. The rules are different. You’re no longer optimizing just for keywords or links. You’re optimizing for meaning, context, and machine-readable embeddings.

Vector search underpins this layer. It uses numeric representations of content so retrieval models can match items by semantic similarity, not just keyword overlap. Microsoft’s overview of vector search describes it as “a way to search using the meaning of data instead of exact terms.”

Modern retrieval research from Anthropic shows that by combining contextual embeddings and contextual BM25, the top-20-chunk retrieval failure rate dropped by approximately 49% (5.7% → 2.9%) when compared to traditional methods.

For SEOs, this means treating content as data chunks. Break long-form content into modular, well-defined segments with clear context and intent. Each chunk should represent one coherent idea or answerable entity. Structure your content so retrieval systems can embed and compare it efficiently.
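Here is one simple way to picture "content as data chunks." This sketch splits text into short sentence windows before embedding; real pipelines usually chunk by headings or token counts, so the two-sentence window is purely an assumption for illustration.

```python
# Illustrative chunker: split long-form text into small, self-contained
# passages that a retrieval system could embed and compare.

import re

def chunk_text(text, sentences_per_chunk=2):
    # Split on sentence-ending punctuation followed by whitespace.
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]
    return [
        " ".join(sentences[i:i + sentences_per_chunk])
        for i in range(0, len(sentences), sentences_per_chunk)
    ]

article = (
    "Vector search matches content by meaning. Embeddings map text to numbers. "
    "Similar ideas land near each other. Retrieval pulls the closest chunks."
)
for chunk in chunk_text(article):
    print(chunk)
```

Each emitted chunk should stand on its own, which is the editorial discipline the paragraph above describes: one coherent idea per segment.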

Retrieval isn’t about being on page one anymore; it’s about being in the candidate set for reasoning. The modern stack relies on hybrid retrieval (BM25 + embeddings + reciprocal rank fusion), so your goal is to ensure the model can connect your chunks across both text relevance and meaning proximity.
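The fusion step mentioned above can be sketched in a few lines. Reciprocal rank fusion (RRF) merges a keyword-style ranking with an embedding-similarity ranking by rewarding documents that appear near the top of either list; the two input rankings here are invented for illustration.

```python
# Reciprocal rank fusion: merge several ranked lists into one.
# Each list contributes 1 / (k + rank) per document; k=60 is the
# commonly used constant from the original RRF formulation.

def rrf(rankings, k=60):
    """rankings: list of doc-id lists, each in best-first order."""
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

bm25_ranking = ["chunk-a", "chunk-b", "chunk-c"]    # text-relevance order
vector_ranking = ["chunk-b", "chunk-d", "chunk-a"]  # meaning-proximity order

fused = rrf([bm25_ranking, vector_ranking])
print(fused)  # chunks ranked well by both lists rise to the top
```

"chunk-b" wins because it places highly in both lists, which is the practical takeaway: your content needs to connect on text relevance and meaning proximity at the same time.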

You’re now building for discovery across retrieval systems, not just crawlers.

Layer Three: Reasoning, Where Authority Is Assigned

At university, you’re not memorizing facts anymore; you’re interpreting them. At this layer, retrieval has already happened, and a reasoning model decides what to do with what it found.

Reasoning models assess coherence, validity, relevance, and trust. Authority here means the machine can reason with your content and treat it as evidence. It’s not enough to have a page; you need a page a model can validate, cite, and incorporate.

That means verifiable claims, clean metadata, clear attribution, and consistent citations. You’re designing for machine trust. The model isn’t just reading your English; it’s reading your structure, your cross-references, your schema, and your consistency as proof signals.

Optimization at this layer is still developing, but the direction is clear. Get ahead by asking: How will a reasoning engine verify me? What signals am I sending to affirm I’m reliable?

Layer Four: Response, Where Visibility Becomes Attribution

Now you’re in senior year. What you’re judged on isn’t just what you know; it’s what you’re credited for. The response layer is where a model builds an answer and decides which sources to name, cite, or paraphrase.

In traditional SEO, you aimed to appear in results. In this layer, you aim to be the source of the answer. But you might not get the visible click. Your content may power an AI’s response without being cited.

Visibility now means inclusion in answer sets, not just ranking position. Influence means participation in the reasoning chain.

To win here, design your content for machine attribution. Use schema types that align with entities, reinforce author identity, and provide explicit citations. Data-rich, evidence-backed content gives models context they can reference and reuse.

You’re moving from rank me to use me. The shift: from page position to answer participation.

Layer Five: Reinforcement, The Feedback Loop That Teaches The Stack

University doesn’t stop at exams. You keep producing work, getting feedback, improving. The AI stack behaves the same way: Each layer feeds the next. Retrieval systems learn from user selections. Reasoning models update through reinforcement learning from human feedback (RLHF). Response systems evolve based on engagement and satisfaction signals.

In SEO terms, this is the new off-page optimization. Metrics like how often a chunk is retrieved, included in an answer, or upvoted inside an assistant feed back into visibility. That’s behavioral reinforcement.

Optimize for that loop. Make your content reusable, designed for engagement, and structured for recontextualization. The models learn from what performs. If you’re passive, you’ll vanish.

The Strategic Reframe

You’re not just optimizing a website anymore; you’re optimizing a stack. And you’re in a hybrid moment. The old system still works; the new one is growing. You don’t abandon one for the other. You build for both.

Here’s your checklist:

  • Ensure crawl access, index status, and site health.
  • Modularize content and optimize for retrieval.
  • Structure for reasoning: schema, attribution, trust.
  • Design for response: participation, reuse, modularity.
  • Track feedback loops: retrieval counts, answer inclusion, engagement inside AI systems.

Think of this as your syllabus for the advanced course. You’ve done the high school work. Now you’re preparing for the university level. You might not know the full curriculum yet, but you know the discipline matters.

Forget the headlines declaring SEO over. It’s not ending, it’s advancing. The smart ones won’t panic; they’ll prepare. Visibility is changing shape, and you’re in the group defining what comes next.

You’ve got this.


This post was originally published on Duane Forrester Decodes.


Featured Image: SvetaZi/Shutterstock

Last-minute Black Friday SEO prepping for ecommerce stores

Black Friday is three weeks away, so it’s time to finalize the last adjustments. Here’s what to focus on now, based on two Yoast Black Friday coffee chats with our own principal SEOs, Carolyn Shelby and Alex Moss. Alex states, “Black Friday isn’t one day anymore, but a season. If you’re not visible to AI now, you won’t be in the results when shoppers ask for recommendations.”


1. Stop breaking things (seriously)

  • No major technical changes. Switching platforms, payment processors, or themes? Wait until January. Focus on optimizing what you have
  • Code freeze starts now. If it’s not broken, don’t fix it. Test changes in a staging environment first
  • Exception: Installing Yoast SEO or WooCommerce SEO add-ons is a low-risk activity. Do it if needed

Pro tip: If you must update plugins, test on staging and avoid updates one week before Black Friday.

2. Fix these right now (or regret it later)

Fraud attacks are ramping up

Fraudsters test stolen credit cards by buying cheap items (<$5). Signs you’re being targeted:

  • Sudden spike in orders for your lowest-priced item
  • High failure rates (declined payments)
  • Orders from VPNs/rotating IPs

How to fight back:

  • Raise your minimum price. Bundle items to push totals over $5 (e.g., “Buy 2 stickers, get free shipping”)
  • Add friction (carefully):
    • Enable CAPTCHA on checkout
    • Turn on Stripe Radar (if using Stripe) or velocity checks (limits orders per IP)
    • Avoid disabling guest checkout, as this will hurt conversions
  • Contact your payment processor. Say: “I’m seeing fraudulent test orders. Here’s the pattern, please help me block them.”
  • Block high-risk countries (if you don’t ship there). Use Cloudflare’s WAF (Web Application Firewall) to filter traffic.

Warning: Fulfilling fraudulent orders costs you product + shipping + time. Verify payments before shipping.

Language and search alignment

  • AI/LLMs (ChatGPT, Gemini) can’t “see” hidden text. If it’s behind tabs/toggles/accordions, they’ll miss it. Move critical info (FAQs, specs, reviews) to visible text.
  • Avoid “clever” product names. Example: A dress colored “Pristine” won’t show up in searches for “ivory dress”.
    • Fix: Add generic terms in parentheses:
      • Wrong: “Pristine Midi Dress”
      • Right: “Pristine (Ivory) Midi Dress”
  • Test your products with AI: Ask ChatGPT:
    “Find me [your product] in [color/size/price range].” If it misses your product, your descriptions need work.

Reviews are trust signals (for humans and AI)

  • Encourage detailed reviews. Generic “I love it!” won’t help.
    • Ask customers: “How do you use this product? What problem does it solve?”
    • Example: “These hiking shoes fit my wide feet—finally no blisters!”
  • Leverage brand reviews. If you sell multiple products, get reviews for your brand (e.g., via G2 or Trustpilot). LLMs pull these when answering questions like “What’s the best brand for X?”
  • Last-resort tactic: Ask friends/family to leave honest reviews. (No fake ones, because Google penalizes that.)

Pro tip: Utilize Yoast SEO’s FAQ schema for reviews and Q&As. However, please keep FAQs visible; avoid hiding them in toggles.
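For reference, FAQ schema of the kind such tools generate is FAQPage JSON-LD. The sketch below shows the shape of that markup; the questions, answers, and policy details are invented placeholders, and the matching Q&A text should also appear visibly on the page.

```python
# Illustrative FAQPage JSON-LD; all questions and answers are placeholders.

import json

faq_markup = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "What is your return policy?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "Returns are accepted within 30 days of delivery.",
            },
        },
        {
            "@type": "Question",
            "name": "When will Black Friday orders ship?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "Orders placed during Black Friday week ship within 3 business days.",
            },
        },
    ],
}

print(json.dumps(faq_markup, indent=2))
```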

3. Optimize for AI and search (quick wins)

Product pages: Lead with the good stuff

  • First 100 words matter most. AI/LLMs and users skim, so put key details up top, such as price, shipping info, and bundling options
  • Plain and concise language wins over clever marketing.
    • Example:
      • Original: “Experience luxury with our artisanal ceramic mug.”
      • Optimized: “14oz ceramic mug. Dishwasher-safe, holds heat for 2 hours.”
  • Add videos. Show the product in use (e.g., flipping through a planner, wearing a dress). Yoast SEO Premium includes video SEO tools. Please use them
  • Focus on your “underdog” products. These aren’t your top three bestsellers; they’re the items further down your sales list. They may not sell as much, but they often carry higher profit margins, making them worth promoting.
  • How to optimize them:
    • Use Google Search Console to identify:
      • Products with steady sales and high profitability (promote these in bundles or via email).
      • Products that could benefit from topic clustering (group related queries to uncover hidden opportunities).
    • Give them a boost by:
      • Bundling them with bestsellers (e.g., “Buy our top-selling coffee maker, get 20% off these premium beans”).
      • Upselling or cross-selling (e.g., “Customers who bought this also loved…”).

Use email and social to seed the AI

  • Send a Black Friday teaser email this week. Include:
    • Your brand name + product names (helps AI recall you later)
    • Clear discounts (e.g., “20% off all espresso makers—no code needed”)
    • Links to product pages (not just the homepage)
  • Why? ChatGPT/Gemini now scans emails (if users connect their Gmail). If someone asks, “Where can I buy X?”, the AI may suggest your brand because it saw your email
  • Social posts: 80% useful, 20% fun. Example:
    • Wrong: [Image of pizza with caption: “Ooooh”]
    • Right: “Our Chicago deep-dish pizza—now 15% off for Black Friday! [Link] #DeepDishDeals”

Remove friction from checkout

  • Audit your checkout flow. Ask:
    • Do you need a phone number? (Many users abandon carts here.)
    • Is shipping info clear upfront? (e.g., “Free shipping on orders over $50”)
    • Can users save their cart for later?
  • Test with dummy orders. Use Shopify/WooCommerce’s test credit card numbers to simulate purchases

4. Last-minute hacks (do these soon)

  • Create a Black Friday landing page: Centralizes promotions for AI and users. Use a PLP (Product Landing Page) with text like “Gifts under $50 for sports-loving dads”. Link to it from emails/social.
  • Update Google Shopping feed: Fix errors (missing SKUs, sizes) now. Log in to Merchant Center and check for warnings.
  • Add FAQ schema: Helps AI answer questions like “What’s the return policy?” Use Yoast SEO’s FAQ block (visible text only!).
  • Check inventory: Avoid selling out of bestsellers. Reorder now, because shipping delays are expected to spike in November.
  • Set up a backup payment processor: Fraud attacks can freeze your account. Add Stripe (even if inactive) as a backup to PayPal.

5. What not to do before Black Friday

Don’t wait until the last minute to launch promotions or make critical changes. Big brands start their Black Friday campaigns in early November. If you hold off until Thanksgiving week, you’ll miss the early shoppers and the AI “training window.” LLMs prioritize brands they’ve seen mentioned in emails, social posts, or searches before the holiday rush.

Avoid hiding key details behind tabs, accordions, or images. AI tools like ChatGPT and Gemini often skip hidden text when scraping product pages, and users tend to overlook shipping costs and return policies as well. Never ignore Fake Friday (the Friday before Black Friday), the unofficial kickoff when bargain hunters start browsing. Run a pre-sale or teaser discount to capture this traffic before competitors do.

Steer clear of overcomplicating bundles or discounts. A “Buy 5 random items, get a mystery gift” deal might sound creative, but it confuses shoppers and dilutes profits. Instead, pair high-margin items with slower sellers (e.g., “Buy a camera, get 50% off a memory card”).

Don’t assume your payment processor can handle fraud spikes. If you’re suddenly hit with stolen card tests (look for a surge in cheap, failed orders), your account could get flagged or frozen. Set up Stripe Radar or PayPal’s fraud filters now—and have a backup processor ready.

Finally, never neglect mobile checkout testing. If your “Add to Cart” button is hard to tap or forms don’t autofill on phones, you’ll lose impulse buyers. Test on a slow 3G connection to simulate real-world frustration.

Your Black Friday success starts now

The countdown is on. Black Friday will be here before you know it. But here is the good news. You still have time to make a real impact. Whether it is tightening up your product descriptions, safeguarding against fraud, or making sure your site is AI-friendly, every small tweak you make now can translate into bigger sales when the shopping frenzy hits.

If you are feeling overwhelmed, remember this. You do not have to do it all alone. Tools like Yoast SEO Premium and WooCommerce SEO can help you optimize your product pages, structure your content, and add schema markup so your products are more visible to both AI and search engines. It is like having an SEO expert in your corner, guiding you through the chaos so you can focus on what really matters: selling more and stressing less.

So take a deep breath, tackle one task at a time, and trust that you have got this. Here is to your most successful Black Friday yet. Now go get those sales. And if you need a little extra help, you know where to find us.

Buy WooCommerce SEO now!

Unlock powerful features and much more for your online store with Yoast WooCommerce SEO!

Deploying Agentic AI For SEO: A Playbook For Technology Leaders via @sejournal, @TaylorDanRW

Search is moving from queries typed into a box to conversations held with systems that understand intent, context, and outcomes. People no longer look for pages. They look for solutions, guidance, and confidence that they are making the right choice.

Agentic AI pushes this shift further. Instead of waiting for instructions, agents act on goals. They discover information, compare options, trigger workflows, and adjust based on feedback. For digital leaders, this means visibility is no longer only a ranking problem. It becomes a problem of influence inside AI systems.

SEO now touches product, data, knowledge management, and experience design. This playbook explains how to prepare for that shift, build capability, and lead change.

Search Is Becoming AI-Mediated

AI systems have become the layer between users and the web. They read content on behalf of users, make selections instead of requiring users to browse, and influence decisions in ways that search pages once did.

This shift changes how people interact with information. Users now ask broader, more complex questions, expecting systems to understand nuance and intent. The traditional act of navigating through links is giving way to direct answers and immediate actions.

Content can no longer be designed solely for human readers. It must also be structured in ways that AI systems can interpret accurately and confidently. In this environment, trust and evidence carry more weight than keywords or search optimization tactics.

Winning in search today means becoming part of the models that shape decisions, not just appearing in the results.

What Agentic AI Means For SEO And Digital

Agentic AI is changing how people discover and choose brands. Discovery now depends on how well models learn from your content, the paths users take on your site, and the external signals that establish credibility. These systems decide when your brand is relevant, based on what they understand and trust.

During evaluation, AI compares your product, price, quality, reviews, and suitability for a given user against other options. It looks for proof, tests claims, and weighs real signals over marketing language.

When supporting decisions, AI doesn’t just provide information. It actively guides users toward what it considers the best fit. Your brand might be brought forward or quietly passed over, depending on how well it matches user needs.

In this landscape, SEO is no longer just about publishing content. It’s about shaping how AI systems perceive your brand and when they choose to recommend it.

New Operating Model For SEO

The future of search brings marketing, product, and data teams into a shared effort. Success depends on how well these areas work together to shape how AI systems perceive and present your brand.

The key is building structured knowledge that AI can easily process and apply. Instead of designing for clicks and views, focus on creating journeys that help users complete tasks through the systems guiding them. It’s also critical to train these systems with the right brand messages, supported by clear evidence and consistent proof points.

Ongoing visibility requires monitoring how models reference your brand, how they rank it, and how they reason about its relevance. This means continuously refining the signals you send, improving your content, updating product data, and reinforcing trust in every interaction.

The goal remains clear and hasn’t really changed from our technical goals for SEO. Make it easy for AI agents to understand, trust, and ultimately recommend your brand.

Maturity Model

  • Level 0, Manual SEO: Basic optimization and manual workflows. Key indicators: keyword focus, isolated content execution, minimal data alignment.
  • Level 1, Assisted SEO: AI supports research and content creation. Key indicators: AI-assisted briefs, content suggestions, faster execution, manual oversight.
  • Level 2, Integrated AI workflows: Core SEO tasks automated and structured. Key indicators: content pipelines, structured data adoption, automated QA, analytics integration.
  • Level 3, Agent-driven operations: Agents monitor, trigger, and refine SEO. Key indicators: automated reporting, performance triggers, self-adjusting content modules.
  • Level 4, Autonomous acquisition systems: Self-improving systems tied to revenue. Key indicators: continuous testing, adaptive journeys, revenue-linked triggers, real-time optimization.

The goal is not automation alone. It is intelligence and improvement at scale.

Technical And Data Foundations

To prepare for agentic SEO, organizations need more than traditional content systems built for publishing. They need strong foundations that help AI systems understand, evaluate, and act with confidence.

This starts with clarity, which means crafting messaging that is consistent, accurate, and easy for machines to interpret. Structure is also essential, requiring content, data, and signals to be organized in ways that align with how AI systems process and reason through information.

Key components of this are:

  • Structured data that turns content into machine‑readable knowledge.
  • Knowledge graphs that explain relationships between products, categories, and needs.
  • Taxonomy and naming standards to ensure consistency across pages, feeds, and assets.
  • APIs and automation for publishing and optimization, so agents can trigger updates.
  • Clean product and service data, including specifications, pricing, and availability.
  • Evaluation systems to audit AI outputs and detect hallucinations or misalignment.
  • Identity and trust signals, including reviews, authority, certifications, and product proof.

This calls for a shift from simply building web pages to creating a well-organized information architecture. The goal is to structure information in a way that AI systems can easily navigate, understand, and apply.

In practice, this means bringing together product data, content metadata, and customer intent into a single, connected system. It involves defining the key entities your business represents, such as products or services, and mapping how they relate to what users are trying to accomplish. Content feeds and structured data should reflect the actual state of the business rather than just marketing language.

Equally important is creating feedback loops that show how AI systems interpret and reference your brand. These insights help you see where your content is being used, how it is being understood, and whether it is guiding users toward your brand. With this information, you can keep refining what you share to improve how systems recognize and recommend you.

Instead of asking, “How do we rank for this query?” leaders will ask, “How do systems understand us, trust us, and act on our information?”

KPI And Measurement Model

Traditional key performance indicators still hold value, but they no longer capture the full picture. Rankings and session metrics continue to provide insight, yet they now exist within a broader framework shaped by how AI systems retrieve, interpret, and act on information. Ranking reports will sit alongside AI retrieval dashboards, and session counts will be evaluated alongside metrics focused on task completion and user outcomes.

In my opinion, you should also be looking to monitor:

  • Share of voice in AI assistants.
  • Retrieval and inclusion rate in AI answers.
  • Brand alignment and brand safety in model outputs.
  • Presence in multi‑step reasoning chains.
  • Task completion and conversion paths from AI systems.
  • Cost per automated workflow and cost per agent‑driven action.
  • Model education, data freshness, and trust scores.
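One of these metrics can be made concrete with a small sketch. "Inclusion rate" here means the share of sampled AI answers that mention your brand; the answer log and brand names below are invented, and real monitoring would sample assistant responses at scale rather than a handful of strings.

```python
# Illustrative "inclusion rate" metric over a sample of AI answers.
# The sampled answers and brand names are placeholders.

def inclusion_rate(answers, brand):
    """Fraction of answers that mention the brand (case-insensitive)."""
    if not answers:
        return 0.0
    cited = sum(1 for a in answers if brand.lower() in a.lower())
    return cited / len(answers)

sampled_answers = [
    "Top picks include Acme and Globex for this use case.",
    "Most reviewers recommend Globex.",
    "Acme's model scores well on durability.",
    "Consider Initech for budget options.",
]

print(f"Acme inclusion rate: {inclusion_rate(sampled_answers, 'Acme'):.0%}")
```

Tracked over time, a metric like this gives the retrieval-and-inclusion trend line the bullet list above calls for.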

As measurement evolves, the focus moves from tracking visitor numbers to understanding how AI systems shape decisions. To navigate this shift, leaders should design metrics that reflect influence within these systems. Visibility will measure whether the brand is appearing in AI-generated responses and assistant-led interactions.

Accuracy will assess whether the brand is being represented correctly and safely across touchpoints. Trust will reflect whether AI systems choose your content and signals over others when making recommendations. Action will capture whether AI-driven experiences result in tangible outcomes like leads, bookings, or purchases. Efficiency will show whether AI agents are reducing manual effort, improving speed, and delivering better user experiences.

Success will no longer be defined by visibility alone but by a brand’s ability to perform across discovery, decision support, and operational impact.

Talent And Capability Model

Agentic SEO is not a standalone skill set; it draws from a mix of disciplines that span marketing, data, and product. Success in this space requires a collaborative approach, where expertise is integrated rather than siloed.

Future-facing teams bring together SEO and content strategy, data and automation engineering, product and user experience thinking, as well as governance and prompt development. Legal and compliance awareness also play a critical role, ensuring that outputs remain responsible and aligned with brand and regulatory standards.

These teams operate in cross-functional pods, organized around delivering customer outcomes rather than managing individual channels. This structure allows them to move faster, adapt to change, and create more cohesive experiences across AI-driven platforms.

Modern SEO teams include several key roles. The SEO strategist focuses on how AI systems search, retrieve, and rank content. The data engineer manages the integrity of structured content, metadata, and live data feeds. The automation specialist builds the workflows and agents that connect information to user actions. The AI evaluator audits model outputs to ensure accuracy, brand alignment, and safety. The product partner bridges SEO efforts with real user journeys, making sure that discovery leads to meaningful interaction and conversion.

As this approach matures, teams will spend less time producing content manually and more time designing the systems, signals, and experiences that guide AI behavior and improve how users discover and engage with the brand.

The First 90 Days

Days 1 To 30: Foundation And Alignment

  • Audit content, data, and search performance.
  • Map where AI already touches customer journeys.
  • Identify gaps in structure, trust signals, and data quality.
  • Set goals for AI visibility and agent‑driven workflows.

Days 31 To 60: Build And Test Pilots

  • Launch structured data and knowledge base improvements.
  • Test AI‑assisted content and QA pipelines.
  • Introduce early agent monitoring for SEO signals.
  • Create evaluation benchmarks for AI accuracy and brand safety.

Days 61 To 90: Scale And Govern

  • Deploy automation in high‑impact workflows.
  • Formalize model governance and feedback loops.
  • Train cross‑functional teams on AI‑ready processes.
  • Build dashboards for AI visibility, trust, and conversion.

Future Outlook

Search will not disappear. It will merge into tasks, journeys, and decisions across devices and interfaces. Brands that train AI systems, structure knowledge, and build agent‑ready operations will lead.

The winners will not be those who automate content. They will be those who help users and systems make better decisions at speed and scale.


Featured Image: Collagery/Shutterstock

OpenAI’s Sam Altman Raises Possibility Of Ads On ChatGPT via @sejournal, @martinibuster

OpenAI’s CEO Sam Altman sat for an interview where he explained that his vision for the future of ChatGPT is as a trusted assistant that’s user-aligned, saying that booking hotels is not going to be the way to monetize “the world’s smartest model.” He pointed to Google as an example of what he doesn’t want ChatGPT to become: a service that accepts advertising dollars to place the worst choice above the best choice. He then followed up to express openness to advertising.

User-Aligned Monetization Model

Altman contrasted OpenAI’s revenue approach with the ad-driven incentives of Google. He explained that Google’s Search and advertising ecosystem depends on Google’s search results “doing badly for the user,” because ranking decisions are partly tied to maximizing advertising income.

The interviewer related that he and his wife took a trip to Europe, booking multiple hotels and finding restaurants with help from ChatGPT, and at no point did any kind of kickback or advertising fee go back to OpenAI. That led him to tell his wife that ChatGPT “didn’t get a dime from this… this just seems wrong….” He was getting so much value from ChatGPT, and ChatGPT wasn’t getting anything back.

Altman answered that users trust ChatGPT and that’s why so many people pay for it.

He explained:

“I think if ChatGPT finds you the… To zoom out even before the answer, one of the unusual things we noticed a while ago, and this was when it was a worse problem, ChatGPT would consistently be reported as a user’s most trusted technology product from a big tech company. We don’t really think of ourselves as a big tech company, but I guess we are now. That’s very odd on the surface, because AI is the thing that hallucinates, AI is the thing with all the errors, and that was much more of a problem. And there’s a question of why.

Ads on a Google search are dependent on Google doing badly. If it was giving you the best answer, there’d be no reason ever to buy an ad above it. So you’re like, that thing’s not quite aligned with me.

ChatGPT, maybe it gives you the best answer, maybe it doesn’t, but you’re paying it, or hopefully are paying it, and it’s at least trying to give you the best answer. And that has led to people having a deep and pretty trusting relationship with ChatGPT. You ask ChatGPT for the best hotel, not Google or something else.”

Altman’s response used the interviewer’s experience as an example of a paradigm change in user trust in technology. He contrasted ChatGPT’s model, where users directly pay for answers, with Google’s ad-based model that profits from imperfect results. His point is that ChatGPT’s business model aligns more closely with users’ interests, earning a sense of trust and reliability rather than making their users feel exploited by an advertising system. This is why users perceive ChatGPT as more trustworthy, even though ChatGPT is known to hallucinate.

Altman Is Open To Transaction Fees

Altman was strongly against accepting advertising money in exchange for showing a hotel above what ChatGPT would naturally show. He said that he would be open to accepting a transaction fee should a user book that hotel through ChatGPT because that has no influence on what ChatGPT recommends, thus preserving a user’s trust.

He shared how this would work:

“If ChatGPT were accepting payment to put a worse hotel above a better hotel, that’s probably catastrophic for your relationship with ChatGPT. On the other hand, if ChatGPT shows you its best hotel, whatever that is, and then if you book it with one click, takes the same cut that it would take from any other hotel, and there’s nothing that influenced it, but there’s some sort of transaction fee, I think that’s probably okay. And with our recent commerce thing, that’s the spirit of what we’re trying to do. We’ll do that for travel at some point.”

I think a takeaway here is that Altman believes the advertising model that the Internet has been built on over the past thirty-plus years can subvert user trust and lead to a poor user experience. He feels that a transaction fee model is less likely to impact the quality of the service that users are paying for and that it will maintain the feeling of trust that people have in ChatGPT.

But later on in the interview, as you’ll see, Altman surprises the interviewer with his comment about the possibility of advertisements on ChatGPT.

How OpenAI Will Monetize Itself

When pressed about how OpenAI will monetize itself, Altman responded that he expects the future of commerce will have lower margins and that he doesn’t expect to fully fund OpenAI by booking hotels but by doing exceptional things like curing diseases.

Altman explained his vision:

“So one thing I believe in general related to this is that margins are going to go dramatically down on most goods and services, including things like hotel bookings. I’m happy about that. I think there’s like a lot of taxes that just suck for the economy and getting those down should be great all around. But I think that most companies like OpenAI will make more money at a lower margin.

…I think the way to monetize the world’s smartest model is certainly not hotel booking. …I want to discover new science and figure out a way to monetize that. You can only do that with the smartest model.

There is a question of, should, many people have asked, should OpenAI do ChatGPT at all? Why don’t you just go build AGI? Why don’t you go discover a cure for every disease, nuclear fusion, cheap rockets, the whole thing, and just license that technology? And it is not an unfair question because I believe that is the stuff that we will do that will be most important and make the most money eventually.

…Maybe some people will only ever book hotels and not do anything else, but a lot of people will figure out they can do more and more stuff and create new companies and ideas and art and whatever.

So maybe ChatGPT and hotel booking and whatever else is not the best way we can make money. In fact, I’m certain it’s not. I do think it’s a very important thing to do for the world, and I’m happy for OpenAI to do some things that are not the economic maxing thing.”

Advertisements May Be Coming To ChatGPT

At around the 18-minute mark, the interviewer asked Altman about advertising on ChatGPT, and Altman acknowledged that there may be a form of advertising but was vague about what it would look like.

He explained:

“Again, there’s a kind of ad that I think would be really bad, like the one we talked about.

There are kinds of ads that I think would be very good or pretty good to do. I expect it’s something we’ll try at some point. I do not think it is our biggest revenue opportunity.”

The interviewer asked:

“What will the ad look like on the page?”

Altman responded:

“I have no idea. You asked like a question about productivity earlier. I’m really good about not doing the things I don’t want to do.”

Takeaway

Sam Altman suggests an interesting way forward for monetizing Internet services: one based on earning user trust and finding revenue models that don’t betray it.

Watch the interview starting at about the 16-minute mark:

Featured image/Screenshot from interview

Why AI Content All Sounds the Same & How SEO Pros Can Fix It via @sejournal, @mktbrew

This post was sponsored by Market Brew. The opinions expressed in this article are the sponsor’s own.

If your AI-generated articles don’t rank but sound fine, you’re not alone.

AI has made it effortless to produce content, but not to stand out in SERPs.

Across nearly every industry, brands are using generative AI tools like ChatGPT, Perplexity, Claude, and more to scale content production, only to discover that, to search engines, everything sounds the same.

But this guide will help you build E-E-A-T-friendly, AI-Overview-worthy content that boosts your visibility while giving you more control over your rankings.

Why Does All AI-Generated Content Sound The Same?

Most generative AI models write from the same training data, producing statistically “average” answers to predictable prompts.

The result is fluent, on-topic copy that is seen as interchangeable from one brand to the next.

To most readers, it may feel novel.

To search engines, your AI content may look redundant.

Algorithms can now detect when pages express the same ideas with minor wording differences. Those pages compete for the same meaning, and only one tends to win.

The challenge for SEOs isn’t writing faster; it’s writing differently.

That starts with understanding why search engines can tell the difference even when humans can’t.

How Do Search Engines & Answer Engines See My Content?

Here’s what Google actually sees when it looks at your page:

Search engines no longer evaluate content by surface keywords; they map meaning.

Modern ranking systems translate your content into embeddings: numerical vectors that capture its meaning.

When two pages share nearly identical embeddings, the algorithm treats them as duplicates of meaning, similar to duplicate content.

That’s why AI-generated content blends together. The vocabulary may change, but the structure and message remain the same.
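To make the idea concrete, here is a minimal sketch of how cosine similarity between page embeddings can flag near-duplicate meaning. The vectors below are toy, hand-made values, not output from a real embedding model, and the threshold is illustrative, not a published search-engine figure:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 4-dimensional "embeddings" for three pages.
# Pages A and B say the same thing in different words; page C covers a different angle.
page_a = [0.92, 0.31, 0.11, 0.05]
page_b = [0.90, 0.33, 0.14, 0.07]
page_c = [0.20, 0.15, 0.88, 0.41]

DUPLICATE_THRESHOLD = 0.95  # illustrative cutoff only

print(cosine_similarity(page_a, page_b) > DUPLICATE_THRESHOLD)  # True: duplicate meaning
print(cosine_similarity(page_a, page_c) > DUPLICATE_THRESHOLD)  # False: distinct
```

In a real workflow the vectors would come from an embedding model, but the comparison logic is the same: pages whose vectors point in nearly the same direction compete for the same meaning, no matter how the wording differs.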

What Do Answer Engines Look For On Web Pages?

Beyond words, engines analyze the entire ecosystem of a page: heading hierarchy, schema markup, internal link placement, and content rhythm.

These structural cues help determine whether content is contextually distinct or just another derivative variant.

To stand out, SEOs have to shape the context that guides the model before it writes.

That’s where the Inspiration Stage comes in.

How To Teach AI To Write Like Your Brand, Not The Internet

Before you generate another article, feed the AI your brand’s DNA.

Language models can complete sentences, but can’t represent your brand, structure, or positioning unless you teach them.

Advanced teams solve this through context engineering, defining who the AI is writing for and how that content should behave in search.

The Inspiration Stage should combine three elements that together create brand-unique outputs.

Step 1 – Create A Brand Bible: Define Who You Are

The first step is identity.

A Brand Bible translates your company’s tone, values, and vocabulary into structured guidance the AI can reference. It tells the model how to express authority, empathy, or playfulness. And just as important, what NOT to say.

Without it, every post sounds like a tech press release.

With it, you get language that feels recognizably yours, even when produced at scale.

“The Brand Bible isn’t decoration: it’s a defensive wall against generic AI sameness.”

A great example: Market Brew’s Brand Bible Wizard

Step 2 – Create A Template URL: Structure How You Write

Great writing still needs great scaffolding.

By supplying a Template URL, a page whose structure already performs well, you give the model a layout to emulate: heading hierarchy, schema markup, internal link positions, and content rhythm.

Adding a Template Influence parameter can help the AI decide how closely to follow that structure. Lower settings would encourage creative variation; higher settings would preserve proven formatting for consistency across hundreds of pages.

Templates essentially become repeatable frameworks for ranking success.

An example of how to apply a template URL

Step 3 – Reverse-Engineer Your Competitor Fan-Out Prompts: Know the Landscape

Context also means competition. When you are creating AI content, it needs to be optimized for a series of keywords and prompts.

Fan-out prompts map the broader semantic territory around a keyword or topic: the network of related questions, entities, and themes that appear across the SERP.

In addition, fan-out prompts should be reverse-engineered from top competitors in that SERP.

Feeding this intelligence into the AI ensures your content strategically expands its coverage, something LLM-based search engines are hungry for.

“It’s not copying competitors, it’s reverse-engineering the structure of authority.”

Together, these three inputs create a contextual blueprint that transforms AI from a text generator into a brand and industry-aware author.

Market Brew’s implementation of reverse engineering fan-out prompts

How To Incorporate Human-Touch Into AI Content

If your AI tool spits out finished drafts with no checkpoints, you’ve lost control over what counts as high-quality content.

That’s a problem for teams who need to verify accuracy, tone, or compliance.

Breaking generation into transparent stages solves this.

Incorporate checkpoints where humans can review, edit, or re-queue the content at each stage:

  • Research.
  • Outline.
  • Draft.
  • Refinement.

Metrics for readability, link balance, and brand tone become visible in real time.

This “human-in-the-loop” design keeps creative control where it belongs.
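As a rough sketch of the checkpoint flow above (the function and decision strings here are hypothetical, not any real tool’s API), a human-in-the-loop pipeline might look like this:

```python
# Hypothetical human-in-the-loop content pipeline; the review callback is illustrative.
STAGES = ["research", "outline", "draft", "refinement"]

def run_pipeline(topic, review):
    """Run each stage, pausing for a human decision before moving on.

    review(stage, artifact) returns "approve", "requeue", or "edit:<replacement text>".
    """
    artifact = topic
    for stage in STAGES:
        artifact = f"{stage} for: {artifact}"
        decision = review(stage, artifact)
        while decision == "requeue":          # human asks for a regenerate of this stage
            artifact = f"{stage} for: {topic}"
            decision = review(stage, artifact)
        if decision.startswith("edit:"):      # human overrides the stage output directly
            artifact = decision[len("edit:"):]
    return artifact

# Auto-approve every stage for demonstration.
result = run_pipeline("solar panels", lambda stage, art: "approve")
print(result)  # "refinement for: draft for: outline for: research for: solar panels"
```

The point is structural: every stage yields an artifact a human can approve, edit, or send back before the next stage consumes it, rather than the model producing a finished draft in one opaque pass.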

Instead of replacing editors, AI becomes their analytical assistant: showing how each change affects the structure beneath the words.

“The best AI systems don’t replace editors, they give them x-ray vision into every step of the process.”

How To Build Content The Way Search Engines Read It

Modern SEO focuses on predictive quality signals: indicators that content is likely to perform before it ever ranks.

These include:

  • Semantic alignment: how closely the page’s embeddings match target intent clusters.
  • Structural integrity: whether headings, schema, and links follow proven ranking frameworks.
  • Brand consistency and clarity: tone and terminology that match the brand bible without losing readability.

Tracking these signals during creation turns optimization into a real-time discipline.

Teams can refine strategy based on measurable structure, not just traffic graphs weeks later.

That’s the essence of predictive SEO: understanding success before the SERP reflects it.

The Easy Way To Create High-Visibility Content For Modern SERPs

Top SEO teams are already using the Content Booster approach.

Market Brew’s Content Booster is one such example.

It embeds AI writing directly within a search engine simulation, using the same mechanics that evaluate pages to guide creation.

Writers begin by loading their Brand Bible, selecting a Template URL, and enabling reverse-engineered fan-out prompts.

Next, you define the internal and external linking strategy, which uses the search engine model’s link scoring system and its entity-based text classifier to place the most valuable links possible.

This is bolstered by a “friends/foes” section that lets writers define quoting and linking opportunities for friendly sites, along with “foe” sites where external linking should be avoided.

The Content Booster then produces and evaluates content through a seven-stage pipeline, each stage driven by thousands of AI agents.

| Stage | Function | What You Get |
| --- | --- | --- |
| 0. Brand Bible | Upload your brand assets and site; Market Brew learns your tone, voice, and banned terms. | Every piece written in your unique brand style. |
| 1. Opportunity & Strategy | Define your target keyword or prompt, tone, audience, and linking strategy. | A strategic blueprint tied to real search intent. |
| 2. Brief & Structure | Creates an SEO-optimized outline using semantic clusters and entity graphs. | Perfectly structured brief ready for generation. |
| 3. Draft Generation | AI produces content constrained by embeddings and brand parameters. | A first draft aligned with ranking behavior, not just text patterns. |
| 4. Optimization & Alignment | Uses cosine similarity and Market Brew’s ranking model to score each section. | Data-driven tuning for maximum topical alignment. |
| 5. Internal Linking & Entity Enrichment | Adds schema markup, entity tags, and smart internal links. | Optimized crawl flow and contextual authority. |
| 6. Quality & Compliance | Checks grammar, plagiarism, accessibility, and brand voice. | Ready-to-publish content that meets editorial and SEO standards. |

Editors can inspect or refine content at any stage, ensuring human direction without losing automation.

Instead of waiting months to measure results, teams see predictive metrics (fan-out coverage, audience/persona compliance, semantic similarity, link distribution, embedding clusters, and more) the moment a draft is generated.

This isn’t about outsourcing creativity.

It’s about giving SEO professionals the same visibility and control that search engineers already have.

Your Next Steps

If you teach your AI to think like your best strategist, sameness stops being a problem.

Every brand now has access to the same linguistic engine; the only differentiator is context.

The future of SEO belongs to those who blend human creativity with algorithmic understanding, who teach their models to think like search engines while sounding unmistakably human.

By anchoring AI in brand, structure, and competition, and by measuring predictive quality instead of reactive outcomes, SEOs can finally close the gap between what we publish and what algorithms reward.

“The era of AI sameness is already here. The brands that thrive will be the ones that teach their AI to sound human and think like a search engine.”

Ready to see how predictive SEO works in action?

Explore the free trial of Market Brew’s Light Brew system — where you can model how search engines interpret your content and test AI writing workflows before publishing.


Image Credits

Featured Image: Image by Market Brew. Used with permission.

From vibe coding to context engineering: 2025 in software development

This year, we’ve seen a real-time experiment playing out across the technology industry, one in which AI’s software engineering capabilities have been put to the test against human technologists. And although 2025 may have started with AI looking strong, the transition from vibe coding to what’s being termed context engineering shows that while the work of human developers is evolving, they nevertheless remain absolutely critical.

This is captured in the latest volume of the “Thoughtworks Technology Radar,” a report on the technologies used by our teams on projects with clients. In it, we see the emergence of techniques and tooling designed to help teams better tackle the problem of managing context when working with LLMs and AI agents. 

Taken together, there’s a clear signal of the direction of travel in software engineering and even AI more broadly. After years of the industry assuming progress in AI is all about scale and speed, we’re starting to see that what matters is the ability to handle context effectively.

Vibes, antipatterns, and new innovations 

In February 2025, Andrej Karpathy coined the term vibe coding. It took the industry by storm. It certainly sparked debate at Thoughtworks; many of us were skeptical. On an April episode of our technology podcast, we talked about our concerns and were cautious about how vibe coding might evolve.

Unsurprisingly, given the implied imprecision of vibe-based coding, antipatterns have been proliferating. We’ve once again noted complacency with AI-generated code in the latest volume of the Technology Radar, for instance, but it’s also worth pointing out that early ventures into vibe coding exposed a degree of complacency about what AI models can actually handle — users demanded more and prompts grew larger, but model reliability started to falter.

Experimenting with generative AI 

This is one of the drivers behind increasing interest in context engineering. Working with coding assistants like Claude Code and Augment Code, we’re well aware of its importance. Providing necessary context—or knowledge priming—is crucial. It ensures outputs are more consistent and reliable, which ultimately leads to better software that needs less rework — reducing rewrites and potentially driving productivity.

With effective preparation, we’ve seen good results using generative AI to understand legacy codebases. Indeed, done effectively with the appropriate context, it can even help when we don’t have full access to the source code.

It’s important to remember that context isn’t just about more data and more detail. This is one of the lessons we’ve taken from using generative AI for forward engineering. It might sound counterintuitive, but in this scenario, we’ve found AI to be more effective when it’s further abstracted from the underlying system — or, in other words, further removed from the specifics of the legacy code. This is because the solution space becomes much wider, allowing us to better leverage the generative and creative capabilities of the AI models we use.

Context is critical in the agentic era

The backdrop of changes that have happened over recent months is the growth of agents and agentic systems — both as products organizations want to develop and as technology they want to leverage. This has forced the industry to properly reckon with context and move away from a purely vibes-based approach.

Indeed, far from simply getting on with tasks they’ve been programmed to do, agents require significant human intervention to ensure they are equipped to respond to complex and dynamic contexts. 

There are a number of context-related technologies aimed at tackling this challenge, including agents.md, Context7, and Mem0. But it’s also a question of approach. For instance, we’ve found success with anchoring coding agents to a reference application — essentially providing agents with a contextual ground truth. We’re also experimenting with using teams of coding agents; while this might sound like it increases complexity, it actually removes some of the burden of having to give a single agent all the dense layers of context it needs to do its job successfully.

Toward consensus

Hopefully the space will mature as practices and standards embed. It would be remiss to not mention the significance of the Model Context Protocol, which has emerged as the go-to protocol for connecting LLMs or agentic AI to sources of context. Relatedly, the agent2agent (A2A) protocol leads the way with standardizing how agents interact with one another. 

It remains to be seen whether these standards win out. But in any case, it’s important to consider the day-to-day practices that allow us, as software engineers and technologists, to collaborate effectively even when dealing with highly complex and dynamic systems. Sure, AI needs context, but so do we. Techniques like curated shared instructions for software teams may not sound like the hottest innovation on the planet, but they can be remarkably powerful for helping teams work together.

There’s perhaps also a conversation to be had about what these changes mean for agile software development. Spec-driven development is one idea that appears to have some traction, but there are still questions about how we remain adaptable and flexible while also building robust contextual foundations and ground truths for AI systems.

Software engineers can solve the context challenge

Clearly, 2025 has been a huge year in the evolution of software engineering as a practice. There’s a lot the industry needs to monitor closely, but it’s also an exciting time. And while fears about AI job automation may remain, the fact the conversation has moved from questions of speed and scale to context puts software engineers right at the heart of things. 

Once again, it will be down to them to experiment, collaborate, and learn — the future depends on it.

This content was produced by Thoughtworks. It was not written by MIT Technology Review’s editorial staff.

The Download: the solar geoengineering race, and future gazing with The Simpsons

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Why the for-profit race into solar geoengineering is bad for science and public trust

—David Keith is a professor of geophysical sciences at the University of Chicago, and Daniele Visioni is an assistant professor of earth and atmospheric sciences at Cornell University

Last week, an American-Israeli company that claims it’s developed proprietary technology to cool the planet announced it had raised $60 million, by far the largest known venture capital round to date for a solar geoengineering startup.

The company, Stardust, says the funding will enable it to develop a system that could be deployed by the start of the next decade, according to Heatmap, which broke the story.

As scientists who have worked on the science of solar geoengineering for decades, we have grown increasingly concerned about emerging efforts to start and fund private companies to deploy technologies that could alter the climate of the planet. We also strongly dispute some of the technical claims that certain companies have made about their offerings. Read the full story.

This story is part of Heat Exchange, MIT Technology Review’s guest opinion series offering expert commentary on legal, political and regulatory issues related to climate change and clean energy. You can read the rest of the series here.

Can “The Simpsons” really predict the future?

According to internet listicles, the animated sitcom The Simpsons has predicted the future anywhere from 17 to 55 times.

The show foresaw Donald Trump becoming US President a full 17 years before the real estate mogul was inaugurated as the 45th leader of the United States. Earlier, in 1993, an episode of the show featured the “Osaka flu,” which some felt was eerily prescient of the coronavirus pandemic. And—somehow!—Simpsons writers just knew that the US Olympic curling team would beat Sweden eight whole years before they did it.

Al Jean has worked on The Simpsons on and off since 1989; he is the cartoon’s longest-serving showrunner. Here, he reflects on the conspiracy theories that have sprung from these apparent prophecies. Read the full story.

—Amelia Tait

This story is part of MIT Technology Review’s series “The New Conspiracy Age,” about how the present boom in conspiracy theories is reshaping science and technology.

MIT Technology Review Narrated: Therapists are secretly using ChatGPT. Clients are triggered.

Declan would never have found out his therapist was using ChatGPT had it not been for a technical mishap where his therapist began inadvertently sharing his screen.

For the rest of the session, Declan was privy to a real-time stream of ChatGPT analysis rippling across his therapist’s screen. The therapist was taking what Declan was saying, putting it into ChatGPT, and then parroting its answers.

But Declan is not alone. In fact, a growing number of people are reporting receiving AI-generated communiqués from their therapists. Clients’ trust and privacy are being abandoned in the process.

This is our latest story to be turned into a MIT Technology Review Narrated podcast, which we’re publishing each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Amazon is suing Perplexity over its Comet AI agent
It alleges Perplexity is committing computer fraud by not disclosing when Comet is shopping on a human’s behalf. (Bloomberg $)
+ In turn, Perplexity has accused Amazon of bullying. (CNBC)

2 Trump has nominated the billionaire entrepreneur Jared Isaacman to lead NASA
Five months after he withdrew Isaacman’s nomination for the same job. (WP $)
+ It was around the same time Elon Musk left the US government. (WSJ $)

3 Homeland Security has released an app for police forces to scan people’s faces 
Mobile Fortify uses facial recognition to identify whether someone’s been given a deportation order. (404 Media)
+ Another effort to track ICE raids was just taken offline. (MIT Technology Review)

4 Scientific journals are being swamped with AI-written letters
Researchers are sifting through their inbox trying to work out what to believe. (NYT $)
+ ArXiv is no longer accepting certain papers for fear they’ve been written by AI. (404 Media)

5 The AI boom has proved a major windfall for equipment makers 
Makers of small turbines and fuel cells, rejoice. (WSJ $)

6 Chronic kidney disease may be the first chronic illness linked to climate change
Experts have linked a surge in the disease to hotter temperatures. (Undark)
+ The quest to find out how our bodies react to extreme temperatures. (MIT Technology Review)

7 Brazil is proposing a fund to protect tropical forests
It would pay countries not to fell their trees. (NYT $)

8 New York has voted for a citywide digital map
It’ll officially represent the five boroughs for the first time. (Fast Company $)

9 The internet could be at risk of catastrophic collapse
Meet the people preparing for that exact eventuality. (New Scientist $)

10 A Chinese spacecraft may have been hit by space junk
Three astronauts have been forced to remain on the Tiangong space station while the damage is investigated. (Ars Technica)

Quote of the day

“I am not sure how I earned the trust of so many, but I will do everything I can to live up to those expectations.”

—Jared Isaacman, Donald Trump’s renominated pick to lead NASA, doesn’t appear entirely sure of his own ability to lead the agency, Ars Technica reports.

One more thing

Is the digital dollar dead?

In 2020, digital currencies were one of the hottest topics in town. China was well on its way to launching its own central bank digital currency, or CBDC, and many other countries launched CBDC research projects, including the US.

How things change. Years later, the digital dollar—even though it doesn’t exist—has become political red meat, as some politicians label it a dystopian tool for surveillance. And late last year, the Boston Fed quietly stopped working on its CBDC project. So is the dream of the digital dollar dead? Read the full story.

—Mike Orcutt

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ The world’s oldest air has been unleashed, after six million years under ice.
+ How to stop sweating the small stuff and try to be happy in this mad world.
+ Happy Bonfire Night to our British readers! 🎆🎇
+ The spirit of Halloween is still with us: the scariest music ever recorded.

A new ion-based quantum computer makes error correction simpler

The US- and UK-based company Quantinuum today unveiled Helios, its third-generation quantum computer, which includes expanded computing power and error correction capability. 

Like all other existing quantum computers, Helios is not powerful enough to execute the industry’s dream money-making algorithms, such as those that would be useful for materials discovery or financial modeling. But Quantinuum’s machines, which use individual ions as qubits, could be easier to scale up than quantum computers that use superconducting circuits as qubits, such as Google’s and IBM’s.

“Helios is an important proof point in our road map about how we’ll scale to larger physical systems,” says Jennifer Strabley, vice president at Quantinuum, which formed in 2021 from the merger of Honeywell Quantum Solutions and Cambridge Quantum. Honeywell remains Quantinuum’s majority owner.

Located at Quantinuum’s facility in Colorado, Helios comprises a myriad of components, including mirrors, lasers, and optical fiber. Its core is a thumbnail-size chip containing the barium ions that serve as the qubits, which perform the actual computing. Helios computes with 98 barium ions at a time; its predecessor, H2, used 56 ytterbium qubits. The barium ions are an upgrade, as they have proven easier to control than ytterbium. These components all sit within a chamber that is cooled to about 15 Kelvin (-432.67 ℉), on top of an optical table. Users can access the computer by logging in remotely over the cloud.

Helios encodes information in the ions’ quantum states, which can represent not only 0s and 1s, like the bits in classical computing, but probabilistic combinations of both, known as superpositions. A hallmark of quantum computing, these superposition states are akin to the state of a coin flipping in the air—neither heads nor tails, but some probability of both. 

Quantum computing exploits the unique mathematics of quantum-mechanical objects like ions to perform computations. Proponents of the technology believe this should enable commercially useful applications, such as highly accurate chemistry simulations for the development of batteries or better optimization algorithms for logistics and finance. 

In the last decade, researchers at companies and academic institutions worldwide have incrementally developed the technology with billions of dollars of private and public funding. Still, quantum computing is in an awkward teenage phase. It’s unclear when it will bring profitable applications. Of late, developers have focused on scaling up the machines. 

A key challenge to making a more powerful quantum computer is implementing error correction. Like all computers, quantum computers occasionally make mistakes. Classical computers correct these errors by storing information redundantly. Owing to quirks of quantum mechanics, quantum computers can’t do this and require special correction techniques. 

Quantum error correction involves storing a single unit of information in multiple qubits rather than in a single qubit. The exact methods vary depending on the specific hardware of the quantum computer, with some machines requiring more qubits per unit of information than others. The industry refers to an error-corrected unit of quantum information as a “logical qubit.” Helios needs two ions, or “physical qubits,” to create one logical qubit.

This is fewer physical qubits than needed in recent quantum computers made of superconducting circuits. In 2024, Google used 105 physical qubits to create a logical qubit. This year, IBM used 12 physical qubits per single logical qubit, and Amazon Web Services used nine physical qubits to produce a single logical qubit. All three companies use variations of superconducting circuits as qubits.
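To make the comparison concrete, a rough sketch of what these encoding ratios imply: given a fixed budget of physical qubits, the number of logical qubits you can form is at most the budget divided by the ratio. The ratios below come from the article; the shared budget of 98 (Helios's ion count) is my own yardstick, and the calculation ignores ancilla and routing overhead, so treat the results as illustrative ceilings only.

```python
# Physical-to-logical encoding ratios cited in the article.
ratios = {
    "Quantinuum Helios (trapped ions)": 2,
    "Google 2024 (superconducting)": 105,
    "IBM 2025 (superconducting)": 12,
    "AWS 2025 (superconducting)": 9,
}

BUDGET = 98  # Helios's physical qubit count, used here as a common yardstick

for name, per_logical in ratios.items():
    # Integer division: a naive upper bound on logical qubits per budget.
    print(f"{name}: at most {BUDGET // per_logical} logical qubits from {BUDGET} physical")
```

The point of the sketch is the disparity: at a 2:1 ratio, a 98-ion machine can in principle encode dozens of logical qubits, while a 105:1 ratio cannot yield even one from the same budget.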

Helios is noteworthy for its qubits’ precision, says Rajibul Islam, a physicist at the University of Waterloo in Canada, who is not affiliated with Quantinuum. The computer’s qubit error rates are low to begin with, which means it doesn’t need to devote as much of its hardware to error correction. Quantinuum ran pairs of qubits through entangling operations and found that they behaved as expected 99.921% of the time. “To the best of my knowledge, no other platform is at this level,” says Islam.

This advantage comes from a design property of ions. Unlike superconducting circuits, which are affixed to the surface of a quantum computing chip, ions on Quantinuum’s Helios chip can be shuffled around. Because the ions can move, they can interact with every other ion in the computer, a capacity known as “all-to-all connectivity.” This connectivity allows for error correction approaches that use fewer physical qubits. In contrast, superconducting qubits can only interact with their direct neighbors, so a computation between two non-adjacent qubits requires several intermediate steps involving the qubits in between. “It’s becoming increasingly more apparent how important all-to-all-connectivity is for these high-performing systems,” says Strabley.

Still, it’s not clear what type of qubit will win in the long run. Each type has design benefits that could ultimately make it easier to scale. Ions (which are used by the US-based startup IonQ as well as Quantinuum) offer an advantage because they produce relatively few errors, says Islam: “Even with fewer physical qubits, you can do more.” However, it’s easier to manufacture superconducting qubits. And qubits made of neutral atoms, such as the quantum computers built by the Boston-based startup QuEra, are “easier to trap” than ions, he says. 

Besides increasing the number of qubits on its chip, another notable achievement for Quantinuum is that it demonstrated error correction “on the fly,” says David Hayes, the company’s director of computational theory and design. That’s a new capability for its machines. Nvidia GPUs were used to identify errors in the qubits in parallel. Hayes thinks that GPUs are more effective for error correction than the FPGA chips also used in the industry.

Quantinuum has used its computers to investigate the basic physics of magnetism and superconductivity. Earlier this year, it reported simulating a magnet on H2, Helios’s predecessor, with the claim that it “rivals the best classical approaches in expanding our understanding of magnetism.” Along with announcing the introduction of Helios, the company has used the machine to simulate the behavior of electrons in a high-temperature superconductor.

“These aren’t contrived problems,” says Hayes. “These are problems that the Department of Energy, for example, is very interested in.”

Quantinuum plans to build another version of Helios in its facility in Minnesota. It has already begun to build a prototype for a fourth-generation computer, Sol, which it plans to deliver in 2027, with 192 physical qubits. Then, in 2029, the company hopes to release Apollo, which it says will have thousands of physical qubits and should be “fully fault tolerant,” or able to implement error correction at a large scale.

Merge Google Ads Campaigns for Better ROAS

Google Ads’ AI-driven optimization works best with abundant data. A single entity with 10,000 impressions is better than 10 entities with 1,000 impressions each, whether that entity is a campaign, ad group, keyword, or ad copy.

Thus I always recommend that clients condense their campaigns (or ad groups, keywords, ad copy). But which to condense, and how, is not typically obvious. Common questions include:

• What campaigns should I condense?
• How do I know when a campaign has enough data, or too little?
• Which keywords need their own ad groups, regardless of performance?

AI Optimization

There is no set threshold for condensing campaigns. For instance, should 10 ad groups condense to five, two, or one? The answer is subjective, resting on experience and intuition.

It could depend on the keyword theme. For years, aligning keywords with ad copy drove performance. An advertiser created separate ad groups for, say, gym bags, hiking bags, and running bags.

Searching Google for “gym bags” produced gym-bag-specific ad copy and, presumably, linked to a page dedicated to gym bags. Before smart bidding and AI, this breakout was common. Each ad group received relatively less traffic and conversions due to manual bidding.

Fast forward to 2025, and a manual setup won’t collect enough data for meaningful AI optimizations. A better option is to condense the three ad groups into one for “athletic bags.” The ad copy might be less aligned to each keyword, but Google’s Responsive Search Ads and keyword-level URLs will show the ad combination most likely to convert. Grouping keywords by theme rather than product type yields more data.

Condensing Campaigns

Consider the hypothetical laptop-computer campaign below with eight ad groups and 90 days of performance data.

| Ad Group | Clicks | Cost | Conversions | Revenue | Cost/Conversion | ROAS |
| --- | --- | --- | --- | --- | --- | --- |
| Laptop Bags | 55 | $455 | 10 | $1,245 | $45.50 | 174% |
| Laptop Covers | 35 | $333 | 8 | $600 | $41.63 | 80% |
| Laptop Accessories | 30 | $300 | 9 | $400 | $33.33 | 33% |
| Laptop Stands | 5 | $65 | 1 | $20 | $65.00 | -69% |
| Laptop Cases | 15 | $44 | 2 | $50 | $22.00 | 14% |
| Laptop Sleeves | 20 | $55 | 1 | $20 | $55.00 | -64% |
| Laptop Docking Stations | 3 | $40 | 0 | $0 | $0.00 | -100% |
| Laptop Mounts | 8 | $50 | 0 | $0 | $0.00 | -100% |

The only ad group with at least 50 clicks and 10 conversions is “Laptop Bags.” Three of the ad groups haven’t generated more than 10 clicks, and half the groups have seen just one conversion or none. I would condense the campaign into three ad groups, using the best-performing groups as the anchors.

To start, the “Laptop Bags” ad group has the most conversions and the highest ROAS. I won’t change this group, as that could disrupt performance. I’ll pause the “Covers,” “Cases,” and “Sleeves” groups and move their keywords into the “Laptop Bags” anchor ad group. Those three, to me, share a common theme of “laptop protection.”

Similarly, I’ll use “Laptop Stands” as an anchor group combined with keywords from “Docking Stations” and “Mounts.”

Lastly, I’ll leave “Laptop Accessories” as is in the final group.

Each of those decisions is subjective; other practitioners could have combined differently.
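The threshold logic behind these decisions can be sketched in a few lines. This is not a Google tool or an official threshold; the 50-click / 10-conversion rule of thumb is the one used in the walkthrough above, and the ROAS formula follows the table, which computes (revenue - cost) / cost.

```python
# Flag ad groups that lack enough data to anchor a condensed campaign,
# using the article's informal 50-click / 10-conversion rule of thumb.
ad_groups = [
    # (name, clicks, cost, conversions, revenue)
    ("Laptop Bags", 55, 455, 10, 1245),
    ("Laptop Covers", 35, 333, 8, 600),
    ("Laptop Accessories", 30, 300, 9, 400),
    ("Laptop Stands", 5, 65, 1, 20),
    ("Laptop Cases", 15, 44, 2, 50),
    ("Laptop Sleeves", 20, 55, 1, 20),
    ("Laptop Docking Stations", 3, 40, 0, 0),
    ("Laptop Mounts", 8, 50, 0, 0),
]

MIN_CLICKS, MIN_CONVERSIONS = 50, 10

for name, clicks, cost, conv, revenue in ad_groups:
    cpa = cost / conv if conv else 0.0          # cost per conversion
    roas = (revenue - cost) / cost * 100        # ROAS as used in the table
    anchor = clicks >= MIN_CLICKS and conv >= MIN_CONVERSIONS
    status = "anchor candidate" if anchor else "condense into an anchor"
    print(f"{name}: CPA ${cpa:.2f}, ROAS {roas:.0f}%, {status}")
```

Run against the table above, only “Laptop Bags” clears both thresholds, which is why it becomes an anchor. The script only identifies data-starved groups; deciding which anchor each one merges into is the subjective, theme-based step.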

Monitor Performance

I always monitor condensed campaigns closely. The performance typically improves, or at least stays consistent, although it might decline temporarily. If the decline persists, I’ll revert to the legacy structure for further analysis.

Perplexity Bets $400M On Snapchat To Scale AI Search Adoption via @sejournal, @MattGSouthern

Perplexity will pay Snap $400 million to integrate its AI answer engine into Snapchat’s chat interface, with rollout starting next year.

• Perplexity will pay Snap $400 million over one year to integrate its AI answer engine into Snapchat.
• Snap calls this its first large-scale integration of an external AI partner directly in the app.
• Perplexity handles 150+ million questions weekly, so the integration meaningfully expands distribution.