A detailed guide to optimizing ecommerce product variations for SEO and conversions

Product variations are more than just an ecommerce feature. They give your customers choices, whether it’s size, color, style, or material, while helping your store stand out in competitive search results. When optimized correctly, product variations do more than display available options. They improve the customer experience by making shopping easier. At the same time, they boost conversions by catering to diverse needs and support your SEO strategy by targeting more keywords.

This guide will explain the best practices for product variations and show you how to optimize them for search engines and customers so your ecommerce site can grow in traffic, rankings, and sales.

What are product variations in ecommerce?

Product variations, or product variants, are different versions of the same product designed to give customers options. These variations can be based on attributes like size, color, material, style, or capacity. Instead of creating multiple product listings, variations group all options under a single product, making it easier for customers to browse and purchase.

For example, when you search for an iPhone on Amazon, you’ll see options for different colors and storage capacities, all available on a single page. This setup lets customers explore multiple choices without leaving the main product page.

Example of product variants

Managing product variations depends on the platform you use:

  • In WooCommerce, product variations are created by defining attributes such as size or color and then assigning values to those attributes. Store owners can upload unique images, set prices, and adjust stock for each variation

    Read more: Variable Products Documentation – WooCommerce

  • In Shopify, variations are managed under the ‘Variants’ section of a product. You can add options like size, color, or material, and then assign values. Each variant can have its own price, SKU, and image, making it simple to customize how the variations appear in your store


    Read more: Shopify Help Center – Adding variants

Why do product variations matter for customers?

Okay, now let’s see why you should use product variants rather than uploading each option as a completely separate product. Think of it this way: customers don’t want to scroll through endless listings just to compare a black t-shirt with a white one or a 64GB phone with a 128GB version. Variations keep everything in one place, making shopping smoother and more intuitive.

Here’s why product variations are so important for your customers:

  • Improved shopping experience: Variants reduce unnecessary clicks and allow customers to compare options side by side within a single product page. This saves time and makes decision-making easier
  • Higher conversions and lower bounce rates: When customers find their preferred size, color, or feature right away, they are more likely to complete a purchase instead of leaving your store
  • Reduced purchase anxiety: Variants ensure customers do not feel limited by stock. Seeing multiple choices available decreases the chance of cart abandonment
  • Personalization and satisfaction: Offering customers options empowers them to choose a product that feels tailor-made for them, improving overall satisfaction
  • Indirect SEO benefits: A better shopping experience often leads to longer session durations, fewer bounces, and more engagement. These signals may indirectly support stronger SEO performance, as they align with positive user experience metrics

How do product variations support your ecommerce SEO strategies?

Product variations are not just about creating a better shopping experience; they also bring direct ecommerce SEO benefits that can help your store attract more qualified traffic. When optimized correctly, variants can make your product pages richer, more discoverable, and more engaging.

Increase in keyword targeting

Variants allow you to target a wider range of long-tail keywords that reflect real customer search behavior. For example, instead of only competing for ‘men’s wallet,’ you can rank for ‘men’s black leather wallet’ or ‘slim men’s brown wallet.’ These specific keywords usually carry higher purchase intent and face less competition.

Levi’s product page for jeans uses long-tail keywords in the product description for keyword targeting

Richer content for search engines and AI engines

Each variation allows you to add unique attributes, descriptions, and specifications. This creates a more detailed and content-rich product page that search engines and AI-driven engines (like ChatGPT or Google’s AI Overviews) value when surfacing answers and shaping brand perception.

ChatGPT showing product options for a t-shirt

Improved user engagement and longer sessions

A well-structured page that clearly displays variations keeps users from bouncing to competitor sites when they don’t immediately find their preferred option. Instead, they spend more time exploring, comparing, and interacting with your store, which indirectly supports SEO through stronger engagement signals.

Better structured data for enhanced search results

When product variants are properly marked up with structured data, search engines can display rich snippets that include price ranges, availability, color options, and reviews. This not only makes your listings stand out but also boosts click-through rates (CTRs) from search results.

Yoast SEO’s Structured data feature describes your product content as a single interconnected schema graph that search engines can easily understand. This helps them interpret your product variations more accurately and increases your chances of getting rich results, from product details to FAQs.
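To make this concrete, here’s a minimal sketch in Python of the kind of variant markup search engines can consume, using schema.org’s ProductGroup with hasVariant (the product name, SKUs, prices, and URLs below are hypothetical placeholders, not output from any particular plugin):

```python
import json

# Illustrative JSON-LD for one product with two color variants.
# All names, SKUs, prices, and URLs are made-up placeholders.
product_group = {
    "@context": "https://schema.org",
    "@type": "ProductGroup",
    "name": "Classic Cotton T-Shirt",
    "productGroupID": "TSHIRT-CLASSIC",
    "variesBy": ["https://schema.org/color"],
    "hasVariant": [
        {
            "@type": "Product",
            "sku": "TSHIRT-CLASSIC-BLK",
            "color": "Black",
            "image": "https://example.com/img/tshirt-black.webp",
            "offers": {
                "@type": "Offer",
                "price": "19.99",
                "priceCurrency": "USD",
                "availability": "https://schema.org/InStock",
            },
        },
        {
            "@type": "Product",
            "sku": "TSHIRT-CLASSIC-WHT",
            "color": "White",
            "image": "https://example.com/img/tshirt-white.webp",
            "offers": {
                "@type": "Offer",
                "price": "19.99",
                "priceCurrency": "USD",
                "availability": "https://schema.org/InStock",
            },
        },
    ],
}

# Serialize for embedding in a <script type="application/ld+json"> tag.
print(json.dumps(product_group, indent=2))
```

Note how each variant carries its own SKU, image, and offer, which is what lets search engines surface the correct price and availability for a specific color in rich results.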

In short, optimized product variants make your product pages more keyword-diverse, content-rich, and engaging while also improving how your store is presented in search results and generative AI chat replies.

Blueprint for optimizing your product variations

Here’s the part you’ve been waiting for: how to optimize your product variations for SEO, conversions, and user experience. In this section, we’ll cover the right technical implementation, smart SEO tactics, and the common mistakes you’ll want to avoid.

Technical implementation of product variations

Getting the technical setup right is the foundation for optimizing your product variations for both ecommerce SEO and user experience. Poor implementation can lead to crawl inefficiencies, duplicate content, and a confusing buyer journey.

Here’s how to approach it effectively:

Handling variations in URLs

One of the biggest decisions you’ll make is how to structure URLs for your product variations:

  • Parameters (e.g., ?color=red&size=12): Good for filtering and faceted navigation, but they can create crawl bloat if not managed properly. Since Google retired Search Console’s URL Parameters tool in 2022, rely on canonical tags (and robots rules where appropriate) to consolidate signals
  • Separate pages for each variation (e.g., /red-dress-size-12): This can be useful when specific variations have significant search demand (like ‘iPhone 15 Pro Max 512GB Blue’). However, it requires careful duplication management and unique, optimized content for each page
  • Single product page with dropdowns or swatches: The most common approach for ecommerce stores, as it consolidates SEO signals into one canonical page while providing users with all available variations in one place

Takeaway: Use a hybrid approach. Keep a single master product page, but only create dedicated variation URLs for high-demand search queries (with unique descriptions, images, and structured data).

Note: only create dedicated variation URLs if you can add unique value (content and images); otherwise, you risk duplication.

Internal linking best practices

Internal linking is crucial in helping search engines understand the relationships between your main product page and its variations.

  • Always link back to the parent product page from any variation-specific pages
  • Ensure your category pages link to the main product page, not every single variation (to avoid diluting link equity and wasting crawl budget)
  • Use descriptive anchor text when linking internally, e.g., ‘men’s black leather wallet’ rather than just ‘wallet’

The Internal linking suggestions feature in Yoast SEO Premium is a real time-saver. As you write, it recommends relevant pages and posts so you can easily connect variations, parent products, and related content. This not only strengthens your site structure and boosts SEO but also ensures visitors enjoy a seamless browsing experience.

A smarter analysis in Yoast SEO Premium

Yoast SEO Premium has a smart content analysis that helps you take your content to the next level!

Takeaway: Build a clean hierarchy where category pages → main product pages → variations, ensuring both users and crawlers can navigate easily.

Managing faceted navigation and filters

Filters (like size, color, brand, or price) enhance user experience but can create SEO challenges if every filter combination generates a new crawlable URL.

  • Use noindex for low-value filter pages (like ‘price under $20’ if it doesn’t add SEO value)
  • Block irrelevant filter parameters in robots.txt to prevent crawl bloat
  • For valuable filters (e.g., ‘red running shoes’), allow them to be indexed and optimize the content
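As an illustration, robots.txt rules for keeping crawlers out of low-value filter combinations might look like this (the parameter names are hypothetical; use whatever your store’s filter URLs actually contain):

```
# Hypothetical example: keep crawlers out of low-value filter parameters
User-agent: *
Disallow: /*?*price_max=
Disallow: /*?*sort=
# Valuable filters like /running-shoes/?color=red are left crawlable
```

Keep in mind that robots.txt and noindex do different jobs: a page blocked in robots.txt can’t be crawled, so its noindex tag will never be read. Use noindex for pages you want crawled but dropped from the index, and Disallow for pages you don’t want crawled at all.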

Takeaway: Conduct a filter audit in Google Search Console. Identify which filtered URLs actually drive impressions and clicks, and only allow those to be indexable.

Media content optimization for ecommerce product variations

When it comes to product variations, visuals and supporting media play a critical role in both SEO and conversions. Shoppers often make purchase decisions based on how well they can visualize a specific variation. In fact, 75% of online shoppers rely on product images when making purchasing decisions.

Also read: Image SEO: Optimizing images for search engines

Here’s how you can optimize media content for ecommerce product variations:

Use unique images for each variation

Avoid using the same generic image across all variations. Display each color, size, material, or feature with its own high-quality image set. For example, if you sell a t-shirt in six colors, show each color separately to help customers make confident choices.

Unique product images for each variant

Leverage 360° views and videos

Showcase variations with interactive media like 360° spins or short product videos. For example, a ‘black leather recliner’ video demonstrates texture and function more effectively than a static image, leading to higher engagement and conversions.

Use videos and 360-degree media to portray your products

Optimize alt text, file names, and metadata

Every image should have descriptive, keyword-rich alt text that specifies the variation. Instead of writing ‘red shoe,’ use ‘women’s red running shoe size 8.’ File names (e.g., womens-red-running-shoe-size8.webp) and captions should also reinforce the variation for better indexing.
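Putting those conventions together, a variant image tag might look like this (the file path and product details are illustrative):

```html
<!-- Descriptive file name + variation-specific alt text -->
<img
  src="/images/womens-red-running-shoe-size8.webp"
  alt="Women's red running shoe, size 8, side view"
  width="800" height="800" loading="lazy">
```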

Implement structured data for media

Use the Product schema to explicitly define images and videos for each variation. Including structured data ensures that Google and AI-driven engines like ChatGPT can clearly interpret your variation visuals and display them in rich results or AI summaries.

For instance, assigning images to specific SKUs (via image markup) makes it easier for search engines to show the correct variation in shopping results.

SEO tips for product variations

Optimizing product variations for SEO requires more than attractive visuals and solid descriptions. You need to apply some proven SEO techniques to ensure search engines correctly interpret your product pages and users get the best possible experience.

Here are a few key practices every ecommerce store owner should follow:

Use canonical tags to avoid duplicate content issues

Product variations often generate multiple URLs, which can cause duplicate content problems. Canonical tags help solve this by pointing to the primary version of a page, consolidating ranking signals, and avoiding internal competition.

Yoast simplifies this process by automatically inserting canonical URL tags on your product pages. This ensures search engines know which version to prioritize, prevents diluted link equity, and even consolidates social shares under the original page. For store owners, this means less technical overhead and stronger, cleaner rankings.
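For instance, a parameterized variant URL can point back to the master product page with a single tag in the page head (the URLs are illustrative; Yoast outputs this tag for you automatically):

```html
<!-- On /red-dress/?size=12, consolidate ranking signals to the main page -->
<link rel="canonical" href="https://example.com/shop/red-dress/">
```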

Apply global product identifiers (GTIN, MPN, ISBN) where relevant

Global product identifiers like GTINs, MPNs, and ISBNs act as unique fingerprints for your products. They help Google and other search engines correctly match your items in their vast index, which improves the accuracy of search listings and reduces confusion with similar products. They also add credibility, since customers can cross-check these identifiers before purchase.

With Yoast WooCommerce SEO, adding these identifiers becomes much easier. The plugin reminds you to fill in missing SKUs, GTINs, or EANs for each product variation and automatically outputs them in structured data. This not only helps your products qualify for rich results but also ensures that no variant is left incomplete from an SEO standpoint.

Buy WooCommerce SEO now!

Unlock powerful features and much more for your online store with Yoast WooCommerce SEO!

Regularly audit Google Search Console data to track performance

Google Search Console is a goldmine for understanding how product variations are performing. By monitoring which variant pages are driving impressions, clicks, and conversions, you can refine your SEO strategy.

For example, if certain variants attract little traffic but consume crawl budget, it might be better to consolidate them under canonical tags.

Regular audits also help you detect indexing issues, thin content problems, or underperforming structured data. This keeps your product catalogue lean, crawl-efficient, and focused on driving meaningful organic traffic.

Also read: How to check the performance of rich results in Google Search Console

Common product variation ecommerce errors to avoid

Even if you’ve implemented the right technical setup, added structured data, and optimized your media content, a few small mistakes can undo all that effort. To make sure your product variations support SEO and conversions instead of hurting them, here are some common pitfalls to avoid:

  • Duplicate content: Creating separate standalone pages for each variation (like size or color) without consolidation leads to content duplication. This confuses search engines and dilutes rankings across multiple weak pages
  • Poor user experience: If your variation options are hidden, unclear, or slow to load, users struggle to make choices. This friction reduces conversions and increases bounce rates
  • Incorrect structured data: Applying schema inaccurately can cause search engines to display the wrong product details in search results, damaging credibility and visibility
  • Thin content: Not providing unique descriptions, images, or metadata for each variation leaves the page with little value. Search engines tend to down-rank such content, reducing discoverability
  • Crawl bloat: Generating too many low-value variation URLs (like separate pages for every minor option) wastes crawl budget and prevents high-priority pages from being indexed efficiently. Additionally, it could dilute internal link equity

By keeping these errors in check, you’ll ensure your product variation strategy strengthens your SEO and user experience instead of working against them.

Ready to unfold all variations?

Product variations are not just small details hidden in your catalogue. They play a major role in how both search engines and shoppers experience your store. When done right, they prevent duplicate content issues, improve crawl efficiency, deliver richer search results, and create a seamless journey for your customers.

The key is to treat product variations as part of your overall SEO strategy, not as an afterthought. Every unique image, structured snippet, and clear variation option makes your store more visible, more reliable, and more profitable.

This is where Yoast SEO becomes a game-changer. With automatic structured data, smart handling of canonical URLs, and advanced content optimization tools, Yoast helps you get product variations right the first time.

AI Search Sends Users to 404 Pages Nearly 3X More Than Google via @sejournal, @MattGSouthern

New research examining 16 million URLs aligns with Google’s predictions that hallucinated links will become an issue across AI platforms.

An Ahrefs study shows that AI assistants send users to broken web pages nearly three times more often than Google Search.

The data arrives six months after Google’s John Mueller raised awareness about this issue.

ChatGPT Leads In URL Hallucination Rates

ChatGPT creates the most fake URLs among all AI assistants tested. The study found that 1% of the ChatGPT URLs people clicked led to 404 pages. Google’s rate is just 0.15%.

The problem gets worse when looking at all URLs ChatGPT mentions, not just clicked ones. Here, 2.38% lead to error pages. Compare this to Google’s top search results, where only 0.84% are broken links.

Claude came in second with 0.58% broken links for clicked URLs. Copilot had 0.34%, Perplexity 0.31%, and Gemini 0.21%. Mistral had the best rate at 0.12%, but it also sends the least traffic to websites.

Why Does This Happen?

The research found two main reasons why AI creates fake links.

First, some URLs used to exist but don’t anymore. When AI relies on old information instead of searching the web in real-time, it might suggest pages that have been deleted or moved.

Second, AI sometimes invents URLs that sound right but never existed.

Ryan Law from Ahrefs shared examples from their own site. AI assistants created fake URLs like “/blog/internal-links/” and “/blog/newsletter/” because these sound like pages Ahrefs might have. But they don’t actually exist.

Limited Impact on Overall Traffic

The problem may seem significant, but most websites won’t notice much impact. AI assistants only bring in about 0.25% of website traffic. Google, by comparison, drives 39.35% of traffic.

This means fake URLs affect a tiny portion of an already small traffic source. Still, the issue might grow as more people use AI for research and information.

The study also found that 74% of new web pages contain AI-generated content. When this content includes fake links, web crawlers might index them, spreading the problem further.

Mueller’s Prediction Proves Accurate

These findings match what Google’s John Mueller predicted in March. He forecasted a “slight uptick of these hallucinated links being clicked” over the next 6-12 months.

Mueller suggested focusing on better 404 pages rather than chasing accidental traffic.

His advice to collect data before making big changes looks smart now, given the small traffic impact Ahrefs found.

Mueller also predicted the problem would fade as AI services improve how they handle URLs. Time will tell if he’s right about this, too.

Looking Forward

For now, most websites should focus on two things. Create helpful 404 pages for users who hit broken links. Then, set up redirects only for fake URLs that get meaningful traffic.
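For example, a redirect for a hallucinated URL could look like this in nginx (the paths echo the hypothetical Ahrefs examples above; the target page is an assumed nearest real equivalent, and the same idea works in Apache or at the CDN):

```nginx
# Hypothetical example: 301-redirect an AI-hallucinated URL that
# actually receives traffic to the nearest real page.
location = /blog/internal-links/ {
    return 301 /blog/internal-links-guide/;
}
```

Only add rules like this for fake URLs that show up with meaningful volume in your logs; for the long tail, a helpful 404 page is enough.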

This allows you to handle the problem without overreacting to what remains a minor issue for most sites.

Let’s Look Inside An Answer Engine And See How GenAI Picks Winners via @sejournal, @DuaneForrester

Ask a question in ChatGPT, Perplexity, Gemini, or Copilot, and the answer appears in seconds. It feels effortless. But under the hood, there’s no magic. There’s a fight happening.

This is the part of the pipeline where your content is in a knife fight with every other candidate. Every passage in the index wants to be the one the model selects.

For SEOs, this is a new battleground. Traditional SEO was about ranking on a page of results. Now, the contest happens inside an answer selection system. And if you want visibility, you need to understand how that system works.

Image Credit: Duane Forrester

The Answer Selection Stage

This isn’t crawling, indexing, or embedding in a vector database. That part is done before the query ever happens. Answer selection kicks in after a user asks a question. The system already has content chunked, embedded, and stored. What it needs to do is find candidate passages, score them, and decide which ones to pass into the model for generation.

Every modern AI search pipeline uses the same three stages (across four steps): retrieval, re-ranking, and clarity checks. Each stage matters. Each carries weight. And while every platform has its own recipe (the weighting assigned at each step/stage), the research gives us enough visibility to sketch a realistic starting point. To basically build our own model to at least partially replicate what’s going on.

The Builder’s Baseline

If you were building your own LLM-based search system, you’d have to tell it how much each stage counts. That means assigning normalized weights that sum to one.

A defensible, research-informed starting stack might look like this:

  • Lexical retrieval (keywords, BM25): 0.4.
  • Semantic retrieval (embeddings, meaning): 0.4.
  • Re-ranking (cross-encoder scoring): 0.15.
  • Clarity and structural boosts: 0.05.
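To make the dials concrete, here’s a toy Python sketch of how such a stack could combine normalized stage scores into one answer-selection score (the weights follow the baseline above; the per-passage scores are made-up illustrations, not any platform’s real values):

```python
# Toy model of the four-dial answer-selection stack.
# Weights use the research-informed baseline; they sum to 1.0.
WEIGHTS = {
    "lexical": 0.40,   # BM25 / keyword match
    "semantic": 0.40,  # embedding similarity
    "rerank": 0.15,    # cross-encoder score on the short list
    "clarity": 0.05,   # answer-first structure boost
}

def answer_score(scores: dict) -> float:
    """Weighted sum of normalized (0-1) stage scores for one passage."""
    return sum(WEIGHTS[stage] * scores.get(stage, 0.0) for stage in WEIGHTS)

# A clean, answer-first help page vs. a long narrative blog post.
answer_first_page = {"lexical": 0.9, "semantic": 0.85, "rerank": 0.9, "clarity": 1.0}
narrative_post = {"lexical": 0.6, "semantic": 0.7, "rerank": 0.3, "clarity": 0.2}

print(round(answer_score(answer_first_page), 3))
print(round(answer_score(narrative_post), 3))
```

Notice that even with identical lexical overlap, burying the answer drags down the re-ranking and clarity dials, which is exactly the Zapier-versus-blog-post contrast discussed later.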

Every major AI system has its own proprietary blend, but they’re all essentially brewing from the same core ingredients. What I’m showing you here is the average starting point for an enterprise search system, not exactly what ChatGPT, Perplexity, Claude, Copilot, or Gemini operate with. We’ll never know those weights.

Hybrid defaults across the industry back this up. Weaviate’s hybrid search alpha parameter defaults to 0.5, an equal balance between keyword matching and embeddings. Pinecone teaches the same default in its hybrid overview.

Re-ranking gets 0.15 because it only applies to the short list. Yet its impact is proven: “Passage Re-Ranking with BERT” showed major accuracy gains when BERT was layered on BM25 retrieval.

Clarity gets 0.05. It’s small, but real. A passage that leads with the answer, is dense with facts, and can be lifted whole, is more likely to win. That matches the findings from my own piece on semantic overlap vs. density.

At first glance, this might sound like “just SEO with different math.” It isn’t. Traditional SEO has always been guesswork inside a black box. We never really had access to the algorithms in a format that was close to their production versions. With LLM systems, we finally have something search never really gave us: access to all the research they’re built on. The dense retrieval papers, the hybrid fusion methods, the re-ranking models, they’re all public. That doesn’t mean we know exactly how ChatGPT or Gemini dials their knobs, or tunes their weights, but it does mean we can sketch a model of how they likely work much more easily.

From Weights To Visibility

So, what does this mean if you’re not building the machine but competing inside it?

Overlap gets you into the room, density makes you credible, lexical keeps you from being filtered out, and clarity makes you the winner.

That’s the logic of the answer selection stack.

Lexical retrieval is still 40% of the fight. If your content doesn’t contain the words people actually use, you don’t even enter the pool.

Semantic retrieval is another 40%. This is where embeddings capture meaning. A paragraph that ties related concepts together maps better than one that is thin and isolated. This is how your content gets picked up when users phrase queries in ways you didn’t anticipate.

Re-ranking is 15%. It’s where clarity and structure matter most. Passages that look like direct answers rise. Passages that bury the conclusion drop.

Clarity and structure are the tie-breaker. 5% might not sound like much, but in close fights, it decides who wins.

Two Examples

Zapier’s Help Content

Zapier’s documentation is famously clean and answer-first. A query like “How to connect Google Sheets to Slack” returns a ChatGPT answer that begins with the exact steps outlined because the content from Zapier provides the exact data needed. When you click through a ChatGPT resource link, the page you land on is not a blog post; it’s probably not even a help article. It’s the actual page that lets you accomplish the task you asked for.

  • Lexical? Strong. The words “Google Sheets” and “Slack” are right there.
  • Semantic? Strong. The passage clusters related terms like “integration,” “workflow,” and “trigger.”
  • Re-ranking? Strong. The steps lead with the answer.
  • Clarity? Very strong. Scannable, answer-first formatting.

In a 0.4 / 0.4 / 0.15 / 0.05 system, Zapier’s chunk scores across all dials. This is why their content often shows up in AI answers.

A Marketing Blog Post

Contrast that with a typical long marketing blog post about “team productivity hacks.” The post mentions Slack, Google Sheets, and integrations, but only after 700 words of story.

  • Lexical? Present, but buried.
  • Semantic? Decent, but scattered.
  • Re-ranking? Weak. The answer to “How do I connect Sheets to Slack?” is hidden in a paragraph halfway down.
  • Clarity? Weak. No liftable answer-first chunk.

Even though the content technically covers the topic, it struggles in this weighting model. The Zapier passage wins because it aligns with how the answer selection layer actually works.

Traditional search still guides the user to read, evaluate, and decide if the page they land on answers their need. AI answers are different. They don’t ask you to parse results. They map your intent directly to the task or answer and move you straight into “get it done” mode. You ask, “How to connect Google Sheets to Slack,” and you end up with a list of steps or a link to the page where the work is completed. You don’t really get a blog post explaining how someone did this during their lunch break, and it only took five minutes.

Volatility Across Platforms

There’s another major difference from traditional SEO. Search engines, despite algorithm changes, converged over time. Ask Google and Bing the same question, and you’ll often see similar results.

LLM platforms don’t converge, or at least haven’t so far. Ask the same question in Perplexity, Gemini, and ChatGPT, and you’ll often get three different answers. That volatility reflects how each system weights its dials. Gemini may emphasize citations. Perplexity may reward breadth of retrieval. ChatGPT may compress aggressively for conversational style. And we have data showing a wide gulf between a traditional engine and an LLM-powered answer platform: Brightedge’s data (62% disagreement on brand recommendations) and ProFound’s data (…AI modules and answer engines differ dramatically from search engines, with just 8-12% overlap in results) showcase this clearly.

For SEOs, this means optimization isn’t one-size-fits-all anymore. Your content might perform well in one system and poorly in another. That fragmentation is new, and you’ll need to find ways to address it as consumer behavior around using these platforms for answers shifts.

Why This Matters

In the old model, hundreds of ranking factors blurred together into a consensus “best effort.” In the new model, it’s like you’re dealing with four big dials, and every platform tunes them differently. In fairness, the complexity behind those dials is still pretty vast.

Ignore lexical overlap, and you lose part of that 40% of the vote. Write semantically thin content, and you can lose another 40%. Ramble or bury your answer, and you won’t win re-ranking. Pad with fluff and you miss the clarity boost.

The knife fight doesn’t happen on a SERP anymore. It happens inside the answer selection pipeline. And it’s highly unlikely those dials are static. You can bet they move in relation to many other factors, including each other’s relative positioning.

The Next Layer: Verification

Today, answer selection is the last gate before generation. But the next stage is already in view: verification.

Research shows how models can critique themselves and raise factuality. Self-RAG demonstrates retrieval, generation, and critique loops. SelfCheckGPT runs consistency checks across multiple generations. OpenAI is reported to be building a Universal Verifier for GPT-5. And, I wrote about this whole topic in a recent Substack article.

When verification layers mature, retrievability will only get you into the room. Verification will decide if you stay there.

Closing

This really isn’t regular SEO in disguise. It’s a shift. We can now more clearly see the gears turning because more of the research is public. We also see volatility because each platform spins those gears differently.

For SEOs, I think the takeaway is clear. Keep lexical overlap strong. Build semantic density into clusters. Lead with the answer. Make passages concise and liftable. And I do understand how much that sounds like traditional SEO guidance. I also understand how the platforms using the information differ so much from regular search engines. Those differences matter.

This is how you survive the knife fight inside AI. And soon, how you pass the verifier’s test once you’re there.

This post was originally published on Duane Forrester Decodes.


Featured Image: tete_escape/Shutterstock

Ask A PPC: How Do I Nail A PPC Job Interview For Google & Meta Ads? via @sejournal, @navahf

It is a wild job market right now, and if you’re applying for a PPC role, you’re probably feeling the pressure to stand out in interviews that are increasingly demanding and often unclear in their expectations.

Whether you’re interviewing for a specialist, manager, or hybrid media role, one thing is certain: You need to be ready to demonstrate platform expertise, strategic thinking, and the ability to connect performance with business outcomes.

One reader put it this way:

“I’m preparing for a performance marketing job, specifically in PPC, and I want to focus on Google and Meta ads. Have you any advice that would help me with interview preparation for these roles?”

This question is particularly timely because it doesn’t just ask about one platform. It is looking for dual fluency in Google and Meta, which represent paid search and paid social. That nuance matters.

Stopping there, however, is a mistake. Savvy employers will appreciate an applicant who can speak to Microsoft Ads, TikTok, LinkedIn, Pinterest, Reddit, and emerging platforms, even if those channels are not in scope right now. That breadth of perspective signals that you’re not just a button-pusher; you’re a strategist.

Below is a breakdown of the three core areas most interviewers will evaluate: Paid Search, Paid Social, and General Marketing and Culture Fit.

Paid Search Interview Prep (Google, Microsoft, Etc.)

Modern paid search, especially within Google, demands more than keyword-level tactics. You need to understand how campaigns serve business objectives.

Expect strategy questions like, “X business has Y budget and Z goals – what kind of campaign would you run and why?” Strong candidates will be able to discuss budgeting frameworks, auction mechanics, audience segmentation, and creative message mapping.

You will likely be asked about reporting. Expect to reference tools like Looker Studio, Google Analytics 4, Power BI, Adobe, or Triple Whale. Even speaking confidently about one tool while showing awareness of others can be impressive.

Mention tools like Microsoft Clarity when discussing conversion rate optimization. Behavioral analytics insights reinforce that you understand the full user journey and do not treat campaigns as isolated events.

One frequently asked question involves account structure. You might be asked, “Why would you structure a campaign/account this way?” Never cite “best practices” or default methods as your rationale. Interviewers want reasoning rooted in context, goals, and a test-and-learn approach.

Stay current on innovations. Be ready to speak about features such as Performance Max, audience expansion tools, or any other platform updates that impact strategy. Share why you find them valuable and how you would explain their relevance to a client.

To stand out even further, draw comparisons between Google and Microsoft Ads, or highlight how Reddit and Amazon are bringing new energy to the paid search space.

Paid Social Interview Prep (Meta, TikTok, LinkedIn, Etc.)

Paid social requires creative fluency, audience empathy, and an understanding of privacy constraints. These platforms are less about exact keyword intent and more about relevance, scale, and emotional resonance.

Prepare to talk about platform-specific ad types and creative strategies. Discuss how you would use Facebook, Instagram, WhatsApp, and Threads, and how your tactics might differ on TikTok, LinkedIn, YouTube Shorts, or Reddit.

Understand how platforms organize their campaign hierarchies. For instance, Meta emphasizes the ad set level for budgeting and targeting, whereas Google does not. Create a reference sheet for yourself so you can confidently speak to the differences during interviews.

Expect questions around creative production and reporting. Interviewers may ask, “What would you do if the client is picky about creative but refuses to supply any?” or “How would you prove that your campaign delivered results if the client questions the attribution?” These are behavioral and strategic tests rolled into one.

Be prepared to explain your approach to budgeting. Paid social often involves very large or very small budgets, and employers want to hear how you allocate funds based on audience size, objective, and creative lifecycle.

Show an understanding of creative testing frameworks, including how you develop variations of hooks, visuals, or calls to action across placements and formats.

General Marketing And Culture Fit

Some parts of the interview will focus less on tactics and more on how you think and collaborate. These are just as important to prepare for.

Be ready to answer questions like, “Tell me about a campaign that worked – and one that didn’t.” Use those stories to demonstrate analytical thinking, cross-functional collaboration, and your ability to learn from both success and failure.

You will also likely get questions about how you communicate performance. You might be asked how you handle underperformance and how you keep stakeholders aligned and informed during those periods.

Come prepared with thoughtful questions of your own. Ask, “What’s driving the hiring for this role?” This can give insight into whether the role is tied to growth, turnover, or team restructuring. It also helps you gauge whether expectations are realistic.

Another useful question is, “What does success look like in this role?” This will tell you whether the role is tied to long-term strategic goals or short-term revenue. Follow that up with, “How will I be measured in the first six months versus the next two years?” This demonstrates that you are serious about growth and longevity.

Culture questions are also important. Asking, “Do people tend to hang out or do their own thing?” invites a conversation about the team dynamic, without feeling overly formal or forced.

Preparation Support

You do not need to prepare alone. Use AI tools like ChatGPT, Copilot, or Gemini to help you simulate interviews, organize your thoughts, or analyze job descriptions. Ask the AI to role-play as an interviewer and challenge you with platform-specific or scenario-based questions.

Use those tools to map out which metrics, frameworks, and features align with each platform. You want your prep to feel structured so you can walk into the interview with clarity and confidence.

Ultimately, interviews are not just an audition. They are a dialogue. Prepare thoroughly, think critically, and lead with the mindset of a strategist. That is how you stand out in a sea of applicants, and that is how you set yourself up for success.

If you have a PPC question you want answered in a future edition of Ask the PPC, send it in. Whether you’re prepping for interviews, troubleshooting performance issues, or pitching channel expansion, we are here to help.

More Resources:


Featured Image: Paulo Bobita/Search Engine Journal

Google Antitrust Case: AI Overviews Use FastSearch, Not Links via @sejournal, @martinibuster

A sharp-eyed search marketer discovered the reason why Google’s AI Overviews showed spammy web pages. The recent Memorandum Opinion in the Google antitrust case featured a passage that offers a clue as to why that happened and invites speculation about how it reflects Google’s move away from links as a prominent ranking factor.

Ryan Jones, founder of SERPrecon, called attention to a passage in the recent Memorandum Opinion that shows how Google grounds its Gemini models.

Grounding Generative AI Answers

The passage occurs in a section about grounding answers with search data. Ordinarily, it’s fair to assume that links play a role in ranking the web pages an AI model retrieves when it sends a search query to an internal search engine. So when someone asks Google’s AI Overviews a question, the system queries Google Search and then creates a summary from those search results.

But apparently, that’s not how it works at Google. Google has a separate algorithm that retrieves fewer web documents and does so at a faster rate.

The passage reads:

“To ground its Gemini models, Google uses a proprietary technology called FastSearch. Rem. Tr. at 3509:23–3511:4 (Reid). FastSearch is based on RankEmbed signals—a set of search ranking signals—and generates abbreviated, ranked web results that a model can use to produce a grounded response. Id. FastSearch delivers results more quickly than Search because it retrieves fewer documents, but the resulting quality is lower than Search’s fully ranked web results.”

Ryan Jones shared these insights:

“This is interesting and confirms both what many of us thought and what we were seeing in early tests. What does it mean? It means for grounding Google doesn’t use the same search algorithm. They need it to be faster but they also don’t care about as many signals. They just need text that backs up what they’re saying.

…There’s probably a bunch of spam and quality signals that don’t get computed for fastsearch either. That would explain how/why in early versions we saw some spammy sites and even penalized sites showing up in AI overviews.”

He goes on to share his opinion that links aren’t playing a role here because the grounding uses semantic relevance.

What Is FastSearch?

Elsewhere the Memorandum shares that FastSearch generates limited search results:

“FastSearch is a technology that rapidly generates limited organic search results for certain use cases, such as grounding of LLMs, and is derived primarily from the RankEmbed model.”

Now the question is, what’s the RankEmbed model?

The Memorandum explains that RankEmbed is a deep-learning model. In simple terms, a deep-learning model identifies patterns in massive datasets and can, for example, identify semantic meanings and relationships. It does not understand anything in the same way that a human does; it is essentially identifying patterns and correlations.

The Memorandum has a passage that explains:

“At the other end of the spectrum are innovative deep-learning models, which are machine-learning models that discern complex patterns in large datasets. …(Allan)

…Google has developed various “top-level” signals that are inputs to producing the final score for a web page. Id. at 2793:5–2794:9 (Allan) (discussing RDXD-20.018). Among Google’s top-level signals are those measuring a web page’s quality and popularity. Id.; RDX0041 at -001.

Signals developed through deep-learning models, like RankEmbed, also are among Google’s top-level signals.”

User-Side Data

RankEmbed uses “user-side” data. The Memorandum, in a section about the kind of data Google should provide to competitors, describes RankEmbed (which FastSearch is based on) in this manner:

“User-side Data used to train, build, or operate the RankEmbed model(s); “

Elsewhere it shares:

“RankEmbed and its later iteration RankEmbedBERT are ranking models that rely on two main sources of data: _____% of 70 days of search logs plus scores generated by human raters and used by Google to measure the quality of organic search results.”

Then:

“The RankEmbed model itself is an AI-based, deep-learning system that has strong natural-language understanding. This allows the model to more efficiently identify the best documents to retrieve, even if a query lacks certain terms. PXR0171 at -086 (“Embedding based retrieval is effective at semantic matching of docs and queries”);

…RankEmbed is trained on 1/100th of the data used to train earlier ranking models yet provides higher quality search results.

…RankEmbed particularly helped Google improve its answers to long-tail queries.

…Among the underlying training data is information about the query, including the salient terms that Google has derived from the query, and the resultant web pages.

…The data underlying RankEmbed models is a combination of click-and-query data and scoring of web pages by human raters.

…RankEmbedBERT needs to be retrained to reflect fresh data…”

A New Perspective On AI Search

Is it true that links do not play a role in selecting web pages for AI Overviews? Google’s FastSearch prioritizes speed. Ryan Jones theorizes that it could mean Google uses multiple indexes, with one specific to FastSearch made up of sites that tend to get visits. That may be a reflection of the RankEmbed part of FastSearch, which is said to be a combination of “click-and-query data” and human rater data.

Regarding human rater data, with billions or trillions of pages in an index, it would be impossible for raters to manually rate more than a tiny fraction. So it follows that the human rater data is used to provide quality-labeled examples for training. Labeled data are examples that a model is trained on so that the patterns inherent to identifying a high-quality page or low-quality page can become more apparent.

Featured Image by Shutterstock/Cookie Studio

8 Generative Engine Optimization (GEO) Strategies For Boosting AI Visibility in 2025 via @sejournal, @samanyougarg

This post was sponsored by Writesonic. The opinions expressed in this article are the sponsor’s own.

AI search now makes the first decision.

When? Before a buyer hits your website.

If you’re not part of the AI answer, you’re not part of the deal. In fact, 89% of B2B buyers use AI platforms like ChatGPT for research.

Picture this:

  • A founder at a 12-person SaaS asks, “best CRM for a 10-person B2B startup.”
  • AI answer cites:
    a TechRadar roundup,
    a r/SaaS thread,
    a fresh comparison,
    Not you.
  • Your brand is missing.
  • They book demos with two rivals.
  • You never hear about it.

Here is why. AI search works on intent, not keywords.

It reads content, then grounds answers with sources. It leans on third-party citations, community threads, and trusted publications. It trusts what others say about you more than what you say about yourself.

Most Generative Engine Optimization (GEO) tools stop at the surface. They track mentions, list prompts you missed, and ship dashboards. They do not explain why you are invisible or what to fix. Brands get reports, not steps.

We went hands-on. We analyzed millions of conversations and ran controlled tests. The result is a practical playbook: eight strategies that explain the why, give a quick diagnostic, and end with actions you can ship this week.

Off-Page Authority Builders For AI Search Visibility

1. Find & Fix Your Citation Gaps

Citation gaps are the highest-leverage strategy most brands miss.

Translation: This is an easy win for you.

What Is A Citation Gap?

A citation gap is when AI platforms cite web pages that mention your competitors but not you. These cited pages become the sources AI uses to generate its answers.

Think of it like this:

  • When someone asks ChatGPT about CRMs, it pulls information from specific web pages to craft its response.
  • If those source pages mention your competitors but not you, AI recommends them instead of your brand.

Finding and fixing these gaps means getting your brand mentioned on the exact pages AI already trusts and cites as sources.

Why You Need Citations In Answer Engines

If you’re not cited in an answer engine, you are essentially invisible.

Let’s break this down.

TechRadar publishes “21 Best Collaboration Tools for Remote Teams” mentioning:

  • Asana.
  • Monday.
  • Notion.

When users ask ChatGPT about remote project management, AI cites this TechRadar article.

Your competitors appear in every response. You don’t.

How To Fix Citation Gaps

That TechRadar article gets cited for dozens of queries, including “best remote work tools,” “Monday alternatives,” “startup project management.”

Get mentioned in that article, and you appear in all those AI responses. One placement creates visibility across multiple search variations.

Contact the TechRadar author with genuine value, such as:

  • Exclusive data about remote productivity.
  • Unique use cases they missed.
  • Updated features that change the comparison.

The beauty? It’s completely scalable.

Quick Win:

  1. Identify 50 high-authority articles where competitors are mentioned but you’re not.
  2. Get into even 10 of them, and your AI visibility multiplies exponentially.

2. Engage In The Reddit & UGC Discussions That AI References

Image: Social platforms. Created by Writesonic, August 2025

AI trusts real user conversations over marketing content.

Reddit citations in AI overviews surged from 1.3% to 7.15% in just three months, a 450% increase. User-generated content now makes up 21.74% of all AI citations.

Why You Should Add Your Brand To Reddit & UGC Conversations

Put Reddit, Quora, LinkedIn Pulse, and industry forums together, and you’ve found where AI gets most of its trusted information.

If you show up as “trusted” information, your visibility increases.

How To Inject Your Brand Into AI-Sourced Conversations

Let’s say a Reddit thread titled “Best project management tool for a startup with 10 people?” gets cited whenever users ask about startup tools.

Since AI already cites these threads, a thoughtful contribution you add can be picked up in future AI answers.

Pro Tip #1: Don’t just promote your brand. Share genuine insights, such as:

  • Hidden costs.
  • Scaling challenges.
  • Migration tips.

Quick Win:

Find and join the discussions AI seems to trust:

  • Reddit threads with 50+ responses.
  • High-upvote Quora answers in your industry.
  • LinkedIn Pulse articles from recognized experts.
  • Active forum discussions with detailed experiences.

Pro Tip #2: Finding which articles get cited and which Reddit threads AI trusts takes forever manually. GEO platforms automate this discovery, showing you exactly which publications to pitch and which discussions to join.

On-Page Optimization For GEO

3. Study Which Topics Get Cited Most, Then Write Them

Something we’re discovering: when AI gives hundreds of citations for a topic, it’s not just citing one amazing article.

Instead, AI pulls from multiple sites covering that same topic.

If you haven’t written about that topic at all, you’re invisible while competitors win.

Consider Topic Clusters To Get Cited

Let’s say you’re performing a content gap analysis for GEO.

You notice these articles all getting 100+ AI citations:

  • “Best Project Management Software for Small Teams”
  • “Top 10 Project Management Tools for Startups”
  • “Project Management Software for Teams Under 20”

Different titles, same intent: small teams need project management software.

When users ask, “PM tool for my startup,” AI might cite 2-3 of these articles together for a comprehensive answer.

Ask “affordable project management,” and AI pulls different ones. The point is that these topics cluster around the same user need.

How To Outperform Competitors In AI-Generated Search Answers

Identify intent clusters for your topic and create one comprehensive piece on your own website so your own content gets cited.

In this example, we’d suggest writing “Best Project Management Software for Small Teams (Under 50 People).”

It should cover startups, SMBs, and budget considerations all in one authoritative guide.

Quick Win:

  • Find 20 high-citation topic clusters you’re missing.
  • Create comprehensive content for each cluster.
  • Study what makes the top versions work, such as structure, depth, and comparison tables.
  • Then make yours better with fresher data and broader coverage.

4. Update Content Regularly To Maintain AI Visibility

AI platforms heavily favor recent content.

Content from the past two to three months dominates AI citations, with freshness being a key ranking factor. If your content appears outdated, AI tends to overlook it in favor of newer alternatives.

Why You Should Keep Your Content Up To Date For GEO Visibility

Let’s say your “Email Marketing Best Practices” from 2023 used to get AI citations.

Now it’s losing to articles with 2025 data. AI sees the date and chooses fresher content every time.

How To Keep Your Content Fresh Enough To Be Cited In AIOs

Weekly refresh for top 10 pages:

  • Add two to three new statistics.
  • Include a recent case study.
  • Update “Last Modified” date prominently.
  • Add one new FAQ.
  • Change title to “(Updated August 2025)”.

Bi-weekly, on less important pages:

  • Replace outdated examples.
  • Update internal links.
  • Rewrite the weakest section.
  • Add seasonal relevance.

Pro Tip: Track your content’s AI visibility systematically. Certain advanced GEO tools alert you when pages lose citations, so you know exactly what to refresh and when.

5. Create “X vs Y” And “X vs Y vs Z” Comparison Pages

Users constantly ask AI to help them choose between options. AI platforms love comparison content. They even prompt users to compare features and create comparison tables.

Pages that deliver these structured comparisons dominate AI search results.

Common questions flooding AI platforms:

  • “Slack vs Microsoft Teams for remote work”
  • “HubSpot vs Salesforce for small business”
  • “Asana or Monday for creative agencies”

AI can’t answer these without citing detailed comparisons. Generic blog posts don’t work. Promotional content gets ignored.

Create comprehensive comparisons like: “Asana vs Monday vs ClickUp: Project Management for Creative Teams.”

How To Create Comparisons That Have High Visibility On SERPs

Use a content structure that wins:

  • Quick decision matrix upfront.
  • Pricing breakdown by team size.
  • Feature-by-feature comparison table.
  • Integrations.
  • Learning curve and onboarding time.
  • Best for: specific use cases.

Make it genuinely balanced:

  • Asana: “Overwhelming for teams under 5”
  • Monday: “Gets expensive with add-ons”
  • ClickUp: “Steep learning curve initially”

Include your product naturally in the comparison. Be honest about limitations while highlighting genuine advantages.

AI prefers citing fair comparisons over biased reviews. Include real limitations, actual pricing (not just “starting at”), and honest trade-offs. This builds trust that gets you cited repeatedly.

Technical GEO To Do Right Now

6. Fix Robots.txt Blocking AI Crawlers

Most websites accidentally block the very bots they want to attract. Like putting a “Do Not Enter” sign on your store while wondering why customers aren’t coming in.

ChatGPT uses three bots:

  • ChatGPT-User: Main bot serving actual queries (your money maker).
  • OAI-SearchBot: Activates when users click the search toggle.
  • GPTBot: Collects training data for future models.

Strategic decision: Publications worried about content theft might block GPTBot. Product companies should allow it, however, because you want future AI models trained on your content for long-term visibility.

Essential bots to allow:

  • Claude-Web (Anthropic).
  • PerplexityBot.
  • GoogleOther (Gemini).

Add to robots.txt:

User-agent: ChatGPT-User
Allow: /
User-agent: Claude-Web
Allow: /
User-agent: PerplexityBot
Allow: /

Verify it’s working: Check server logs for these user agents actively crawling your content. No crawl activity means no AI visibility.
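One quick way to run that check is to scan your access log for those user agents. Below is a minimal Python sketch; the log format shown (the common “combined” format), the file contents, and the sample lines are assumptions, so adapt the matching to your own server’s log layout.

```python
from collections import Counter

# AI crawler user agents discussed above; extend as new bots appear.
AI_BOTS = ["ChatGPT-User", "OAI-SearchBot", "GPTBot",
           "Claude-Web", "PerplexityBot", "GoogleOther"]

def count_ai_crawls(log_lines):
    """Count requests per AI bot by matching user-agent substrings."""
    hits = Counter()
    for line in log_lines:
        for bot in AI_BOTS:
            if bot in line:
                hits[bot] += 1
    return hits

# Two hypothetical lines in the common "combined" log format:
sample = [
    '1.2.3.4 - - [01/Aug/2025:10:00:00 +0000] "GET /pricing HTTP/1.1" 200 512 "-" "Mozilla/5.0 (compatible; GPTBot/1.0)"',
    '5.6.7.8 - - [01/Aug/2025:10:01:00 +0000] "GET /blog HTTP/1.1" 200 812 "-" "PerplexityBot/1.0"',
]
print(count_ai_crawls(sample))
```

In production you would feed this the lines of your real access log; zero counts across the board means AI crawlers are not reaching your content.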

7. Fix Broken Pages For AI Crawlers

Just like Google Search Console shows Googlebot errors, you need visibility for AI crawlers. But AI bots behave differently and can be aggressive.

Monitor AI bot-specific issues:

  • 404 errors on important pages.
  • 500 server errors during crawls.
  • Timeout issues when bots access content.

If your key product pages error when ChatGPT crawls them, you’ll never appear in AI responses.

Common problems:

  • AI crawlers triggering DDoS protection.
  • CDN security blocking legitimate bots.
  • Rate limiting preventing full crawls.

Fix: Whitelist AI bots in your CDN (Cloudflare, Fastly). Set up server-side tracking to differentiate AI crawlers from regular traffic. No errors = AI can cite you.
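As a sketch of that server-side tracking, the snippet below flags 4xx/5xx responses served to AI crawlers. It assumes the common “combined” access-log format and the bot names listed earlier; field positions and sample lines are assumptions to adapt to your stack.

```python
import re

AI_BOTS = ["ChatGPT-User", "OAI-SearchBot", "GPTBot", "Claude-Web", "PerplexityBot"]
STATUS_RE = re.compile(r'" (\d{3}) ')  # status code follows the quoted request line

def ai_crawl_errors(log_lines):
    """Return (bot, status, request) tuples for AI-bot requests that erred."""
    errors = []
    for line in log_lines:
        bot = next((b for b in AI_BOTS if b in line), None)
        if bot is None:
            continue  # not an AI crawler
        m = STATUS_RE.search(line)
        if m and m.group(1)[0] in "45":  # 4xx client or 5xx server errors
            errors.append((bot, m.group(1), line.split('"')[1]))
    return errors

# Hypothetical sample lines:
sample = [
    '9.9.9.9 - - [01/Aug/2025:10:02:00 +0000] "GET /old-page HTTP/1.1" 404 0 "-" "GPTBot/1.0"',
    '8.8.8.8 - - [01/Aug/2025:10:03:00 +0000] "GET /home HTTP/1.1" 200 100 "-" "Mozilla/5.0"',
]
print(ai_crawl_errors(sample))
```

Any tuple this returns is a page an AI bot tried and failed to read; fix those URLs first.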

8. Avoid JavaScript For Main Content

Most AI crawlers can’t execute JavaScript. If your content loads dynamically, you’re invisible to AI.

Quick test: Disable JavaScript in your browser. Visit key pages. Can you see the main content, product descriptions, and key information?

Blank page = AI sees nothing.
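The same test can be scripted: fetch the raw HTML without executing any JavaScript, which mirrors how most AI crawlers see the page, and check that key phrases appear. The URL and marker phrases below are placeholders, not a prescribed API.

```python
from urllib.request import Request, urlopen

def content_visible(html, markers):
    """True if every key phrase appears in the server-rendered HTML."""
    return all(marker in html for marker in markers)

def check_page(url, markers):
    # Plain HTTP fetch: no JavaScript runs, just like most AI crawlers.
    req = Request(url, headers={"User-Agent": "Mozilla/5.0"})
    html = urlopen(req, timeout=10).read().decode("utf-8", errors="replace")
    return content_visible(html, markers)

# e.g. check_page("https://example.com/product", ["Product name", "Pricing"])
```

A False result means the content only exists after client-side rendering, so a non-JavaScript crawler sees none of it.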

Solutions:

  • Server-side rendering (Next.js, Nuxt.js).
  • Static site generators (Gatsby, Hugo).
  • Progressive enhancement (core content works without JS).

Bottom line: If it needs JavaScript to display, AI can’t read it. Fix this or stay invisible.

Take Action Now

People ask ChatGPT, Claude, and Perplexity for recommendations every day. If you’re missing from those answers, you’re missing deals.

These eight strategies boil down to three moves: get mentioned where AI already looks (high-authority sites and Reddit threads), create content AI wants to cite (comparisons and fresh updates), and fix the technical blocks keeping AI out (robots.txt and JavaScript issues).

You can do all this manually. Track mentions in spreadsheets, find citation gaps by hand, and update content weekly. That works at a small scale, but it consumes time and, as you grow, requires a larger team.

Writesonic provides you with a GEO platform that goes beyond tracking to giving you precise actions to boost visibility – create new content, refresh existing pages, or reach out to sites that mention competitors but not you.

Plus, get real AI search volumes to prioritize high-impact prompts.


Image Credits

Featured Image: Image by Writesonic. Used with permission.

In-Post Image: Image by Writesonic. Used with permission.

Building the AI-enabled enterprise of the future

Artificial intelligence is fundamentally reshaping how the world operates. With its potential to automate repetitive tasks, analyze vast datasets, and augment human capabilities, the use of AI technologies is already driving changes across industries.

In health care and pharmaceuticals, machine learning and AI-powered tools are advancing disease diagnosis, reducing drug discovery timelines by as much as 50%, and heralding a new era of personalized medicine. In supply chain and logistics, AI models can help prevent or mitigate disruptions, allowing businesses to make informed decisions and enhance resilience amid geopolitical uncertainty. Across sectors, AI in research and development cycles may reduce time-to-market by 50% and lower costs in industries like automotive and aerospace by as much as 30%.

“This is one of those inflection points where I don’t think anybody really has a full view of the significance of the change this is going to have on not just companies but society as a whole,” says Patrick Milligan, chief information security officer at Ford, which is making AI an important part of its transformation efforts and expanding its use across company operations.

Given its game-changing potential—and the breakneck speed with which it is evolving—it is perhaps not surprising that companies are feeling the pressure to deploy AI as soon as possible: 98% say they feel an increased sense of urgency in the last year. And 85% believe they have less than 18 months to deploy an AI strategy or they will see negative business effects.

Companies that take a “wait and see” approach will fall behind, says Jeetu Patel, president and chief product officer at Cisco. “If you wait for too long, you risk becoming irrelevant,” he says. “I don’t worry about AI taking my job, but I definitely worry about another person that uses AI better than me or another company that uses AI better taking my job or making my company irrelevant.”

But despite the urgency, just 13% of companies globally say they are ready to leverage AI to its full potential. IT infrastructure is an increasing challenge as workloads grow ever larger. Two-thirds (68%) of organizations say their infrastructure is moderately ready at best to adopt and scale AI technologies.

Essential capabilities include adequate compute power to process complex AI models, optimized network performance across the organization and in data centers, and enhanced cybersecurity capabilities to detect and prevent sophisticated attacks. This must be combined with observability, which ensures the reliable and optimized performance of infrastructure, models, and the overall AI system by providing continuous monitoring and analysis of their behavior. Good quality, well-managed enterprise-wide data is also essential—after all, AI is only as good as the data it draws on. All of this must be supported by AI-focused company culture and talent development.

Download the report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written entirely by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

The connected customer

As brands compete for increasingly price conscious consumers, customer experience (CX) has become a decisive differentiator. Yet many struggle to deliver, constrained by outdated systems, fragmented data, and organizational silos that limit both agility and consistency.

The current wave of artificial intelligence, particularly agentic AI that can reason and act across workflows, offers a powerful opportunity to reshape service delivery. Organizations can now provide fast, personalized support at scale while improving workforce productivity and satisfaction. But realizing that potential requires more than isolated tools; it calls for a unified platform that connects people, data, and decisions across the service lifecycle. This report explores how leading organizations are navigating that shift, and what it takes to move from AI potential to CX impact.

Key findings include:

  • AI is transforming customer experience (CX). Customer service has evolved from the era of voice-based support through digital commerce and cloud to today’s AI revolution. Powered by large language models (LLMs) and a growing pool of data, AI can handle more diverse customer queries, produce highly personalized communication at scale, and help staff and senior management with decision support. Customers are also warming to AI-powered platforms as performance and reliability improve. Early adopters report improvements including more satisfied customers, more productive staff, and richer performance insights.
  • Legacy infrastructure and data fragmentation are hindering organizations from maximizing the value of AI. While customer service and IT departments are early adopters of AI, the broader organization, across industries, is often riddled with outdated infrastructure. This impedes the ability of autonomous AI tools to move freely across workflows and data repositories to deliver goal-based tasks. Creating a unified platform and orchestration architecture will be key to unlocking AI’s potential. The transition can be a catalyst for streamlining and rationalizing the business as a whole.
  • High-performing organizations use AI without losing the human touch. While consumers are warming to AI, rollout should include some discretion. Excessive personalization could make customers uncomfortable about their personal data, while engineered “empathy” from bots may be received as insincere. Organizations should not underestimate the unique value their workforce offers. Sophisticated adopters strike the right balance between human and machine capabilities. Their leaders are proactive in addressing job displacement worries through transparent communication, comprehensive training, and clear delineation between AI and human roles. The most effective organizations treat AI as a collaborative tool that enhances rather than replaces human connection and expertise.

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This content was researched, designed, and written entirely by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

The Download: sustainable architecture, and DeepSeek’s success

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Material Cultures looks to the past to build the future

Despite decades of green certifications, better material sourcing, and the use of more sustainable materials, the built environment is still responsible for a third of global emissions. According to a 2024 UN report, the building sector has fallen “significantly behind on progress” toward becoming more sustainable. Changing the way we erect and operate buildings remains key to tackling climate change.

London-based design and research nonprofit Material Cultures is exploring how tradition can be harnessed in new ways to repair the contemporary building system. As many other practitioners look to artificial intelligence and other high-tech approaches, Material Cultures is focusing on sustainability, and finding creative ways to turn local materials into new buildings. Read the full story.

—Patrick Sisson

This story is from our new print edition, which is all about the future of security. Subscribe here to catch future copies when they land.

MIT Technology Review Narrated: How a top Chinese AI model overcame US sanctions

Earlier this year, the AI community was abuzz over DeepSeek R1, a new open-source reasoning model. The model was developed by the Chinese AI startup DeepSeek, which claims that R1 matches or even surpasses OpenAI’s ChatGPT o1 on multiple key benchmarks but operates at a fraction of the cost.

DeepSeek’s success is even more remarkable given the constraints facing Chinese AI companies in the form of increasing US export controls on cutting-edge chips. Read the full story.

This is our latest story to be turned into an MIT Technology Review Narrated podcast, which we’re publishing each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Google won’t be forced to sell Chrome after all
A federal judge has instead ruled it has to share search data with its rivals. (Politico)
+ He also barred Google from making deals to make Chrome the default search engine on people’s phones. (The Register)
+ The company’s critics feel the ruling doesn’t go far enough. (The Verge)

2 OpenAI is adding emotional guardrails to ChatGPT
The new rules are designed to better protect teens and vulnerable people. (Axios)
+ Families of dead teenagers say AI companies aren’t doing enough. (FT $)
+ An AI chatbot told a user how to kill himself—but the company doesn’t want to “censor” it. (MIT Technology Review)

3 China’s military has shown off its robotic wolves
Alongside underwater torpedoes and hypersonic cruise missiles. (BBC)
+ Xi Jinping has pushed to modernize the world’s largest standing army. (CNN)
+ Phase two of military AI has arrived. (MIT Technology Review)

4 ICE has resumed working with a previously banned spyware vendor
Paragon Solutions’ software was found on the devices of journalists earlier this year. (WP $)
+ The tool can manipulate a phone’s recorder to become a covert listening device. (The Guardian)

5 An identical twin has been convicted of a crime based on DNA analysis 
It’s the first time the technology has been successfully used in the US, and it solves a 38-year-old cold case. (The Guardian)

6 People who understand AI the least are the most likely to use it 
Those with a better grasp of how AI works know more about its limitations. (WSJ $)
+ What is AI? (MIT Technology Review)

7 BMW is preparing to unveil a super-smart EV
Its new iX3 sport utility vehicle will have 20 times more computing power. (FT $)

8 Sick and lonely people are turning to AI “doctors”
Physicians are too busy to spend much time with patients. Chatbots are filling the void. (Rest of World)
+ AI companies have stopped warning you that their chatbots aren’t doctors. (MIT Technology Review)

9 Around 90% of life on Earth is still unknown
But shedding light on these mysterious organisms is essential to our future survival. (Vox)

10 Wax worms could help tackle our plastic pollution problem 🪱
The plastic-hungry pests can eat a polythene bag in a matter of hours. (Wired $)
+ Think that your plastic is being recycled? Think again. (MIT Technology Review)

Quote of the day

“It’s a nothingburger.”

—Gabriel Weinberg, chief executive of search engine DuckDuckGo, reacts to the judge’s decision in the Google Chrome monopoly case, as the New York Times reports.

One more thing

Why we can no longer afford to ignore the case for climate adaptation

Back in the 1990s, anyone suggesting that we’d need to adapt to climate change while also cutting emissions was met with suspicion. Most climate change researchers felt adaptation studies would distract from the vital work of keeping pollution out of the atmosphere to begin with.

Despite this hostile environment, a handful of experts were already sowing the seeds for a new field of research called “climate change adaptation”: study and policy on how the world could prepare for and adapt to the new disasters and dangers brought forth on a warming planet. Today, their research is more important than ever. Read the full story.

—Madeline Ostrander

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ How to have a happier life, even when you’re living through bleak times (maybe skip the raisins on ice cream, though.)
+ If you’re loving Alien: Earth right now, why not dive back into the tremendously terrifying Alien: Isolation game?
+ The first freaky images of the second part of zombie flick 28 Years Later have landed.
+ Anthony Gormley, you will always be cool.

A Dozen Good Reads for Better Decisions

From back-to-school through the winter holidays, the busy retail season is also a time to forecast sales, set budgets, and plan for the coming year. Here are 12 new and time-tested books to help make informed choices.

Could Should Might Don’t: How We Think About the Future


by Nick Foster

Thinking seriously about the future is a must for those who hope to shape it. This just-released book guides readers in going beyond the usual “lazy certainties and fearful fantasies” to imagine and create what comes next.

Distancing: How Great Leaders Reframe to Make Better Decisions


by L. David Marquet and Michael A. Gillespie

Asserting that we are our own biggest obstacle to making wiser decisions, the authors, a former U.S. Navy captain and a professor of psychology, provide practical self-coaching methods for changing perspectives.

The Missing Billionaires: A Guide to Better Financial Decisions


by Victor Haghani and James White

There could be many more billionaires today if wealthy families of the past had made wiser investment and spending decisions. This Economist best book of the year in 2023 outlines a framework for optimal investing drawn from the authors’ extensive finance experience.

Start, Stay, or Leave: The Art of Decision-Making


by Trey Gowdy

Fox News host and former congressman Trey Gowdy shares with humor and practical advice the hard-earned lessons from great (and lousy) decisions that have shaped his life.

Probably Overthinking It


by Allen B. Downey

Statistics are everywhere, and so is the tendency to misinterpret them, with potentially disastrous consequences. Downey explains common statistical pitfalls, using copious illustrations, colorful storytelling, and clear prose.

Collective Illusions: Why We Make Bad Decisions


by Todd Rose

A feeling of belonging is a deep human need, but the desire to fit in can warp our perceptions and lead to decisions against our own best interest. Learn how to find clarity and authenticity from this national bestseller, named Amazon’s Best Book of the Year in Business, Leadership, and Science in 2022.

Radical Uncertainty: Decision-Making Beyond the Numbers


by John Kay and Mervyn King

Some risks are easily quantified, but many cannot be understood from data alone. Two of Britain’s foremost economists explain strategies for resilience in facing the unknowable.

The Big Picture: How to Visualize Data to Make Better Decisions Faster


by Steve Wexler

Understanding analytics is a crucial business skill, but graphics can both enlighten and mislead. Wexler, who has taught and consulted for dozens of prominent organizations, distills his expertise into what one reviewer calls an “invaluable tool” for seeing patterns in data.

Farsighted: How We Make the Decisions That Matter the Most


by Steven Johnson

A prolific bestselling author and television and podcast host reveals the powerful methods used by expert decision-makers to make once-in-a-lifetime choices.

Risk Savvy: How to Make Good Decisions


by Gerd Gigerenzer

Gigerenzer, who directs the Max Planck Institute for Human Development in Berlin and is an expert on risk, argues that expert analyses are often flawed or misinterpreted. He advocates going with the gut in the face of uncertainty. Readers hail it as both wise and easy to read.

Left Brain, Right Stuff: How Leaders Make Winning Decisions


by Phil Rosenzweig

For business leaders and entrepreneurs, decision-making in the real world entails not just thoughtful analysis but following it with strategic action. Reviewers say Rosenzweig “delivers an invaluable framework for making good and timely decisions,” and laud his “fascinating storytelling.”

Predictably Irrational, Revised and Expanded Edition


by Dan Ariely

One of the most influential books in behavioral economics, Ariely’s groundbreaking bestseller uses compelling, real-world examples to demonstrate how people consistently make the same predictable mistakes, and how we can avoid these damaging patterns to make more rational decisions.