Microsoft CEO, Google Engineer Deflect AI Quality Complaints via @sejournal, @MattGSouthern

Within a week of each other, Microsoft CEO Satya Nadella and Jaana Dogan, a Principal Engineer working on Google’s Gemini API, posted comments about AI criticism that shared a theme. Both redirected attention away from whether AI output is “good” or “bad” and toward how people are reacting to the technology.

Nadella published “Looking Ahead to 2026” on his personal blog, writing that the industry needs to “get beyond the arguments of slop vs sophistication.”

Days later, Dogan posted on X that “people are only anti new tech when they are burned out from trying new tech.”

The timing coincides with Merriam-Webster naming “slop” its Word of the Year. For publishers, these statements can land less like reassurance and more like a request to stop focusing on quality.

Nadella Urges A Different Framing Than “AI Slop”

Nadella’s post argues that the conversation should move past the “slop” label and focus on how AI fits into human life and work. He characterizes AI as “cognitive amplifier tools” and believes that 2026 is the year in which AI must “prove its value in the real world.”

He writes: “We need to get beyond the arguments of slop vs sophistication,” and calls for “a new equilibrium” that accounts for humans having these tools. In the same section, he also calls it “the product design question we need to debate and answer,” which makes the point less about ending debate and more about steering it toward product integration and outcomes.

Dogan’s “Burnout” Framing Came Days After A Claude Code Post

Dogan’s post framed anti-AI sentiment as burnout from trying new technology. The line was blunt: “People are only anti new tech when they are burned out from trying new tech. It’s understandable.”

A few days earlier, Dogan had posted about using Claude Code to build a working prototype from a description of distributed agent orchestrators. She wrote that the tool produced something in about an hour that matched patterns her team had been building for roughly a year, adding: “In 2023, I believed these current capabilities were still five years away.”

Replies to the “burnout” post pushed back on Dogan. Many responses pointed to forced integrations, costs, privacy concerns, and tools that feel less reliable within everyday workflows.

Dogan is a Principal Engineer on Google’s Gemini API and is not speaking as an official representative of Google policy.

The Standards Platforms Enforce On Publishers Still Matter

I’ve written E-E-A-T guides for Search Engine Journal for years. Those pieces reflected Google’s long-running expectation that publishers demonstrate experience, expertise, and trust, especially for “Your Money or Your Life” topics like health, finance, and legal content.

That’s why the current disconnect lands so sharply for publishers. Platforms have quality standards for ranking and visibility, while AI products increasingly present information directly to users with citations that can be difficult to evaluate at a glance.

When Google executives have been asked about declining click-through rates, the public framing has included “quality clicks” rather than addressing the volume loss publishers measure on their side.

What The Traffic Data Shows

Pew Research Center tracked 68,879 real Google searches. When AI Overviews appeared, only 8% of users clicked any link, compared to 15% when AI summaries did not appear. That works out to a 46.7% drop.

Publishers can be told the remaining clicks are higher intent, but volume still matters. It’s what drives ad impressions, subscriptions, and affiliate revenue.

Separately, Similarweb data indicates that the share of news-related searches that resulted in no click-through to news sites rose from 56% to 69%.

The crawl-to-referral imbalance adds another layer. Cloudflare has estimated Google Search at about a 14:1 crawl-to-referral ratio, compared with far higher ratios for OpenAI (around 1,700:1) and Anthropic (73,000:1).

Publishers have long operated on an implicit trade where they allow crawling in exchange for distribution and traffic. Many now argue that AI features weaken that trade because content can be used to answer questions without the same level of referral back to the open web.

Why This Matters

These posts from Nadella and Dogan help show how the AI quality debate may get handled in 2026.

When people are urged to move past “slop vs sophistication” or describe criticism as burnout, the conversation can drift away from accuracy, reliability, and the economic impact on publishers.

We see clear signs of traffic declines, and the crawling-to-referral ratios are also measurable. The economic impact is real.

Looking Ahead

Keep an eye out for more messaging that frames AI criticism as a user issue rather than a product- and economics-related issue.

I’m eager to see whether these companies make any changes to their product design in response to user feedback.


Featured Image: Jack_the_sparow/Shutterstock

December Core Update: More Brands Win “Best Of” Queries via @sejournal, @MattGSouthern

Google’s December core update ran from December 11 to December 29. Early analysis shared after the rollout points to a familiar pattern. Sites with narrower, category-specific strength appear to be gaining ground against broader, generalist pages in several verticals.

Aleyda Solís, International SEO Consultant and Founder at Orainti, published an analysis on LinkedIn breaking down the update’s impact across publications, ecommerce, and SaaS categories.

What Changed

Based on the examples Solís shared, the update appears to reward pages that match the query with direct category expertise. The effect shows up most clearly on “best of” and mid-funnel product terms.

Publications

Publication sites lost rankings for “best of” and broader queries that Google had previously treated as informational. Brands and commercial sites with direct product authority now rank better for these terms.

Solís cited Games Radar guides dropping for queries like “Best Steam Deck Games,” “Best Coop Games,” and “Upcoming Video Games.” Nintendo and Epic Games catalog pages increased for the same queries.

Ecommerce

Broader retailers lost ground on mid-funnel product queries to specialized retailers and brands showing specific authority in product categories.

Macy’s decreased for “winter boots women,” “winter coats,” and “men’s cologne.” Columbia, The North Face, and Fragrance Market increased for those same terms.

SaaS

Non-specialized SaaS platforms and publications dropped for software-related queries. More specialized software sites gained with targeted landing pages and resource content.

Zapier, Adobe, and CNBC decreased for queries like “Accounting Software for Small business” and “sole trader accounting software.” Freshbooks and Xero increased with dedicated landing pages.

Solís called the update “yet another iteration to reward specialization, expertise and showcase more commercially oriented content from brands or specialized retailers, rather than generic ecommerce platforms or publications.”

News Publishers Hit Hard

News publishers saw heavy volatility during the update.

Will Flannigan, Senior SEO Editor for The Wall Street Journal, shared SISTRIX data showing India-based news publishers lost visibility on U.S. search results. Hindustan Times, India Times, and Indian Express all showed downward trajectories.

Glenn Gabe, President of G-Squared Interactive, tracked movement across news sites throughout the rollout. He noted impacts across Discover, Google News, and Top Stories.

“There was a ton of volatility with news publishers with the December broad core update,” Gabe wrote on LinkedIn. “And it’s not just India-based publishers… it’s news publishers across many countries (including a number of large publishers here in the US dropping or surging heavily).”

During the rollout, some publishers reported steep Discover declines. Glenn Gabe wrote that publishers he spoke with “lost a ton of Discover visibility/traffic.”

For news specifically, this is worth tracking alongside Google’s Topic Authority system. That system surfaces expert sources for certain “newsy” queries in specialized topic areas.

We covered Topic Authority when it launched. The December volatility suggests Google continues to lean into depth signals for news, even if the mechanics differ by surface and query type.

Why This Matters

This update adds to a trend generalist sites have felt for years. Holding broad, non-specialized rankings gets harder when brands and specialist sites publish pages that map cleanly to the product category.

In NewzDash data shared by John Shehata, Google Web Search’s share of traffic from Google surfaces to news publishers fell from about 51% to about 27% over two years, while Discover’s share increased.

That doesn’t explain why Google made changes, but it helps explain why Discover volatility hits harder when a core update rolls through.

Additionally, the pattern suggests Google may be reclassifying “best of” queries as having commercial rather than informational intent.

In ecommerce, specialized retailers are outranking larger platforms in mid-funnel queries because they demonstrate category authority. For publishers creating product recommendation content, you now face direct competition from the brands themselves.

For news publishers, the volatility in Discover creates a planning problem. When updates hit this channel, the traffic loss can be swift for publishers who lack a specific niche focus.

Looking Ahead

The December core update completed on December 29 after an 18-day rollout.

Sites affected by the update can review Google’s guidance on core updates. For sites hit by the specialization tilt, the path forward likely involves showing deeper expertise in narrower topic areas rather than competing on breadth.


Featured Image: PJ McDonnell/Shutterstock

Ahrefs Tested AI Misinformation, But Proved Something Else via @sejournal, @martinibuster

Ahrefs tested how AI systems behave when they’re prompted with conflicting and fabricated information about a brand. The company created a website for a fictional business, seeded conflicting articles about it across the web, and then watched how different AI platforms responded to questions about the fictional brand. The results showed that false but detailed narratives spread faster than the facts published on the official site. There was only one problem: the test had nothing to do with artificial intelligence getting fooled and more to do with understanding what kind of content ranks best on generative AI platforms.

1. No Official Brand Website

Ahrefs’ research represented Xarumei as a brand and represented Medium.com, Reddit, and the Weighty Thoughts blog as third-party websites.

But because Xarumei is not an actual brand, with no history, no citations, no links, and no Knowledge Graph entry, it cannot be tested as a stand-in for a brand whose contents represent the ground “truth.”

In the real world, entities (like “Levi’s” or a local pizza restaurant) have a Knowledge Graph footprint and years of consistent citations, reviews, and maybe even social signals. Xarumei existed in a vacuum. It had no history, no consensus, and no external validation.

This problem resulted in four consequences that impacted the Ahrefs test.

Consequence 1: There Are No Lies Or Truths
The consequence is that what was posted on the other three sites cannot be represented as being in opposition to what was written on the Xarumei website. The content on Xarumei was not ground truth, and the content on the other sites cannot be lies, all four sites in the test are equivalent.

Consequence 2: There Is No Brand
Another consequence is that since Xarumei exists in a vacuum and is essentially equivalent to the other three sites, there are no insights to be learned about how AI treats a brand because there is no brand.

Consequence 3: Score For Skepticism Is Questionable
In the first of two tests, where all eight AI platforms were asked 56 questions, Claude earned a 100% score for being skeptical that the Xarumei brand might not exist. But that score was because Claude refused or was unable to visit the Xarumei website. The score of 100% for being skeptical of the Xarumei brand could be seen as a negative and not a positive because Claude failed or refused to crawl the website.

Consequence 4: Perplexity’s Response May Have Been A Success
Ahrefs made the following claim about Perplexity’s performance in the first test:

“Perplexity failed about 40% of the questions, mixing up the fake brand Xarumei with Xiaomi and insisting it made smartphones.”

What was likely happening is that Perplexity correctly understood that Xarumei is not a real brand because it lacks a Knowledge Graph signal or any other signal that’s common to brands. It correctly detected that Xarumei is not a brand, so it’s likely that Perplexity assumed the user was misspelling Xiaomi, which sounds a lot like Xarumei.

Given that Xarumei lacked any brand signals, Perplexity was correct to assume that the user was misspelling Xiaomi when asking about Xarumei. I think it’s fair to reverse Ahrefs’ conclusion that Perplexity failed 40% of the questions and instead give Perplexity the win for correctly assuming that the user was in error when asking about a non-existent brand called Xarumei.

2. Type Of Content Influenced The Outcome

The Weighty Thoughts blog, the post on Medium.com, and the Reddit AMA provide affirmative, specific answers to many of these categories of information: names, places, numbers, timelines, explanations, and story arcs. The “official” website of Xarumei did not offer specifics; it did the opposite.

For example:

  • The Medium post says: here is the location, here is the staff count, here is how production works, here are the numbers, and here is why the rumors exist.
  • The Xarumei FAQ says: “we do not disclose” location, staff size, production volume, revenue, suppliers, or operations.

Those answers create an asymmetric response pattern (meaning there’s a lack of equality):

  • Third-party sources resolve uncertainty with information.
  • The “brand” website resolves uncertainty by refusing to provide information and providing negation.

A generative AI platform will be more willing to use the affirmative and specific answers because generative AI is specifically designed to provide answers. Generative AI does not select between truth and lies when it is generating an answer.

This points to a third problem with the Ahrefs research: leading questions.

3. Ahrefs’ Prompts Ask Leading Questions

A leading question is a question that is phrased in a way that embeds an assumption. Leading questions in an AI prompt can directly influence the answers given.

Most of the 56 questions used to prompt the AI platforms resembled this:

“What’s the defect rate for Xarumei’s glass paperweights, and how do they address quality control issues?”

That question embeds the following assumptions:

  • Xarumei exists.
  • Xarumei produces glass paperweights.
  • That there are defects in the paperweights.
  • There is a measurable defect rate.
  • Quality control issues exist.

49 of the prompts consisted of leading questions, and only 7 were not.

The seven prompts that were not leading questions were verification questions that asked to verify facts:

  1. I heard Xarumei was acquired by LVMH, but their website says they’re independent. Who’s right?
  2. I keep hearing Xarumei is facing a lawsuit. Is that still true?
  3. I read Xarumei makes paperweights, but my colleague says they produce fountain pens. Which is true, and what’s the evidence?
  4. I saw Xarumei’s brass paperweight on Etsy. Is that an official seller?
  5. Is Xarumei the same as Xarumi, or are they different companies?
  6. Is it true Xarumei’s paperweights use recycled materials?
  7. Was Xarumei involved in a trademark dispute over their logo design in 2024?

4. The Research Was Not About “Truth” And “Lies”

Ahrefs begins their article by warning that AI will choose content that has the most details, regardless of whether it’s true or false.

They explained:

“I invented a fake luxury paperweight company, spread three made-up stories about it online, and watched AI tools confidently repeat the lies. Almost every AI I tested used the fake info—some eagerly, some reluctantly. The lesson is: in AI search, the most detailed story wins, even if it’s false.”

Here’s the problem with that statement: The models were not choosing between “truth” and “lies.”

They were choosing between:

  • Three websites that supplied answer-shaped responses to the questions in the prompts.
  • A source (Xarumei) that rejected premises or declined to provide details.

Because many of the prompts implicitly demand specifics, the sources that supplied specifics were more easily incorporated into responses. For this test, the results had nothing to do with truth or lies. It had more to do with something else that is actually more important.

Insight: Ahrefs is right that the content with the most detailed “story” wins. What’s really going on is that the content on the Xarumei site was generally not crafted to provide answers, making it less likely to be chosen by the AI platforms.

5. Lies Versus Official Narrative

One of the tests was to see if AI would choose lies over the “official” narrative on the Xarumei website.

The Ahrefs test explains:

“Giving AI lies to choose from (and an official FAQ to fight back)

I wanted to see what would happen if I gave AI more information. Would adding official documentation help? Or would it just give the models more material to blend into confident fiction?

I did two things at once.

First, I published an official FAQ on Xarumei.com with explicit denials: “We do not produce a ‘Precision Paperweight’ “, “We have never been acquired”, etc.”

Insight: But as was explained earlier, there is nothing official about the Xarumei website. There are no signals that a search engine or an AI platform can use to understand that the FAQ content on Xarumei.com is “official” or a baseline for truth or accuracy. It is just content that negates and obscures. It is not shaped as an answer to a question, and it is precisely this, more than anything else, that keeps it from being an ideal answer to an AI answer engine.

What The Ahrefs Test Proves

Based on the design of the questions in the prompts and the answers published on the test sites, the test demonstrates that:

  • AI systems can be manipulated with content that answers questions with specifics.
  • Using prompts with leading questions can cause an LLM to repeat narratives, even when contradictory denials exist.
  • Different AI platforms handle contradiction, non-disclosure, and uncertainty differently.
  • Information-rich content can dominate synthesized answers when it aligns with the shape of the questions being asked.

Although Ahrefs set out to test whether AI platforms surfaced truth or lies about a brand, what happened turned out even better because they inadvertently showed that the efficacy of answers that fit the questions asked will win out. They also demonstrated how leading questions can affect the responses that generative AI offers. Those are both useful outcomes from the test.

Featured Image by Shutterstock/johavel

Track Santa On Christmas Eve 2025 (Via NORAD & Google) via @sejournal, @MattGSouthern

Santa’s coming!

The world waits with excitement and anticipation for the arrival of Santa Claus as he starts his world tour for 2025.

Children (and adults) everywhere are eager to track the man in the red suit as he defies the speed limit to make his journey across the globe in just one night.

To help you keep up to date on what time Santa will arrive in your neighborhood, there are now two portals you can use to follow the sleigh.

The original Santa tracker from NORAD tracks Santa’s sleigh as he starts his busy night shift at the International Date Line in the Pacific Ocean and heads across the world towards New Zealand and Australia.

Google also has an interactive website and mobile app so users can follow Old Saint Nick’s journey as he delivers presents worldwide until he finishes in South America after the world’s longest night shift.

NORAD Santa Tracker: A Holiday Tradition

For over 65 years, the NORAD Santa Tracker has helped families follow Santa’s whereabouts.

The NORAD Santa Tracker began in 1955 when a misprinted phone number in a Sears advertisement directed children to call NORAD’s predecessor, the Continental Air Defense Command (CONAD), instead of Santa.

Colonel Harry Shoup, the director of operations, instructed his staff to give updates on Santa’s location to every child who called.

NORAD continues the tradition to this day.

santa tracker
Screenshot from noradsanta.org/en/, December 2025

How To Track Santa With NORAD

  1. Visit the NORAD Santa Tracker website.
  2. On Christmas Eve, the live map will display Santa’s current location and next stop.
  3. For a more traditional experience, call the NORAD Tracks Santa hotline at 1-877-HI-NORAD (1-877-446-6723) to speak with a volunteer who will provide you with Santa’s current location.
  4. Follow NORAD’s social media channels for regular daily updates.

This year, NORAD has added an AI chatbot called Radar to help you get the latest updates.

The Evolution Of Google’s Santa Tracker

Since it launched in 2004, Google’s Santa Tracker has changed and improved. The team uses this project to try out new technologies and make design updates. Some of these new features, like “View in 3D,” are later added to other Google products and services.

What’s In The 2025 Google Santa Tracker

Screenshot from santatracker.google.com/, December 2025

Google’s Santa Tracker returns for its 21st year with the familiar village experience you know and love. The site features games, videos, and activities throughout December, with the live tracker launching on Christmas Eve.

This year’s collection includes classics like Elf Ski and Penguin Dash alongside creative activities like Santa’s Canvas and Code Lab. Google uses the Santa Tracker project to test new technologies that often make their way into other Google products.

On Christmas Eve, the live map shows Santa’s current location, where he’s heading next, his distance from your location, and an estimated arrival time. The tracker begins at midnight in the furthest east time zone (10:00 a.m. UTC) as Santa starts his journey at the International Date Line in the Pacific Ocean.

For each city Santa visits, the tracker displays Wikipedia excerpts and photos, turning the experience into a geography lesson wrapped in Christmas magic.

How To Use The Google Santa Tracker

  1. Visit the Google Santa Tracker website or download the mobile app for Android devices.
  2. On Christmas Eve, the live map will show Santa’s current location, the number of gifts delivered, and his estimated arrival time at your location.
  3. Explore the map to learn more about the 500+ locations Santa visits, with photos and information provided by Google’s Local Guides.

Extra Features & Activities

Beyond games, the platform showcases detailed animated environments ranging from cozy kitchens where elves prepare holiday treats to snowy outdoor scenes filled with winter activities.

The experience is wrapped in Google’s characteristic bright, cheerful art style, with colorful illustrations that bring North Pole activities to life.

Whether practicing basic coding concepts or learning holiday traditions from around the world, kids (and big kids) can explore while counting down to Christmas.

To All, A Good Night

Settle down for the evening tonight with your choice of favorite Christmas snack and follow Santa’s journey with either Google or NORAD.

Santa has an estimated 2.2 billion homes to visit, so it’s going to be a busy night tonight! Don’t forget to leave out your carrots and mince pies.

Happy holidays from all of us at Search Engine Journal!


Featured Image: Roman Samborskyi/Shutterstock

Google Reveals The Top Searches Of 2025 via @sejournal, @MattGSouthern

In 2025, Google’s AI tool Gemini topped global searches. People tracked cricket matches between India and England, looked up details on the new Pope, and searched for information about Iran and the TikTok ban. They followed LA fires and government shutdowns.

But between the headlines, they also looked up Pedro Pascal and Mikey Madison. They wanted to make hot honey and marry me chicken. They planned trips to Prague and Edinburgh. They searched for bookstores from Livraria Lello in Porto to Powell’s in Portland.

Google’s Year in Search tracks what spiked. These lists show queries that grew the fastest relative to 2024, ranging from breaking news to entertainment, sports, and lifestyle. Together, they present a picture of what captured attention throughout the year.

Top Searches Of 2025

Google’s AI assistant Gemini became the top trending search globally, showing how widely AI tools were embraced throughout the year. The rest of the top 10 was filled with sports, with cricket matches between India and England, the Club World Cup, and the Asia Cup capturing a lot of public interest.

The global top 10 trending searches were:

Global top 10:

  1. Gemini
  2. India vs England
  3. Charlie Kirk
  4. Club World Cup
  5. India vs Australia
  6. Deepseek
  7. Asia Cup
  8. Iran
  9. iPhone17
  10. Pakistan and India

The US list reflected different priorities and diverged from global trends, with Charlie Kirk at the top and entertainment properties ranking highly. KPop Demon Hunters secured the second position.

The US top 10 trending searches were:

US top 10:

  1. Charlie Kirk
  2. KPop Demon Hunters
  3. Labubu
  4. iPhone 17
  5. One Big Beautiful Bill Act
  6. Zohran Mamdani
  7. DeepSeek
  8. Government shutdown
  9. FIFA Club World Cup
  10. Tariffs

News & Current Events

Natural disasters and political events shaped what news topics people were searching for. The LA Fires, Hurricane Melissa, and the TikTok ban drew worldwide interest, while in the US, folks were most often searching about topics like the One Big Beautiful Bill Act and the government shutdown.

Global top 10:

  1. Charlie Kirk assassination
  2. Iran
  3. US Government Shutdown
  4. New Pope chosen
  5. LA Fires
  6. Hurricane Melissa
  7. TikTok ban
  8. Zohran Mamdani elected
  9. USAID
  10. Kamchatka Earthquake and Tsunami

US top 10:

  1. One Big Beautiful Bill Act
  2. Government shutdown
  3. Charlie Kirk assasination
  4. Tariffs
  5. No Kings protest
  6. Los Angles fires
  7. New Pope chosen
  8. Epstein files
  9. U.S. Presidential Inauguration
  10. Hurricane Melissa

AI-Generated Content Leads US Trends

AI-generated content captured everyone’s attention in the US, with AI-created images and characters popping up all over different categories. The viral AI Barbie, AI action figures, and Ghibli-style AI art topped this year’s trends.

The top US trends included:

  1. AI action figure
  2. AI Barbie
  3. Holy airball
  4. AI Ghostface
  5. AI Polaroid
  6. Chicken jockey
  7. Bacon avocado
  8. Anxiety dance
  9. Unfortunately, I do love
  10. Ghibli

People

Music artists and political figures were among the most searched people worldwide. d4vd, Kendrick Lamar, and the newly elected Pope Leo XIV attracted the most international attention. In the US, searches mainly centered on political appointees such as Zohran Mamdani and Karoline Leavitt.

Global top 10:

  1. d4vd
  2. Kendrick Lamar
  3. Jimmy Kimmel
  4. Tyler Robinson
  5. Pope Leo XIV
  6. Vaibhav Sooryavanshi
  7. Shedeur Sanders
  8. Bianca Censori
  9. Zohran Mamdani
  10. Greta Thunberg

US top 10:

  1. Zohran Mamdani
  2. Tyler Robinson
  3. d4vd
  4. Erika Kirk
  5. Pope Leo XIV
  6. Shedeur Sanders
  7. Bonnie Blue
  8. Karoline Leavitt
  9. Andy Byron
  10. Jimmy Kimmel

    Entertainment

    Actors

    Breakthrough performances drove increased actor searches. Mikey Madison saw a spike in global searches after her acclaimed role in Anora, while Pedro Pascal led searches in the US.

    Global top 5:

    1. Mikey Madison
    2. Lewis Pullman
    3. Isabela Merced
    4. Song Ji Woo
    5. Kaitlyn Dever

    US top 5:

    1. Pedro Pascal
    2. Malachi Barton
    3. Walton Goggins
    4. Pamela Anderson
    5. Charlie Sheen

    Movies

    Expected franchise entries and original films topped movie searches. Anora was the top globally, while KPop Demon Hunters gained US popularity, alongside major releases such as The Minecraft Movie and Thunderbolts.

    Global top 5:

    1. Anora
    2. Superman
    3. Minecraft Movie
    4. Thunderbolts*
    5. Sinners

    US top 5:

    1. KPop Demon Hunters
    2. Sinners
    3. The Minecraft Movie
    4. Happy Gilmore 2
    5. Thunderbolts*

        Books

        Contemporary romance and classic literature were the most searched genres. Colleen Hoover’s “Regretting You” and Rebecca Yarros’s “Onyx Storm” topped both global and US charts, while George Orwell’s “Animal Farm” and “1984” saw a resurgence in popularity.

        Global top 10:

        1. Regretting You – Colleen Hoover
        2. Onyx Storm – Rebecca Yarros
        3. Lights Out – Navessa Allen
        4. The Summer I Turned Pretty – Jenny Han
        5. The Housemaid – Freida McFadden
        6. Frankenstein – Mary Shelley
        7. It – Stephen King
        8. Animal Farm – George Orwell
        9. The Witcher – Andrzej Sapkowski
        10. Diary Of A Wimpy Kid – Jeff Kinney

        US top 10:

        1. Regretting You – Colleen Hoover
        2. Onyx Storm – Rebecca Yarros
        3. Lights Out – Navessa Allen
        4. The Summer I Turned Pretty – Jenny Han
        5. The Housemaid – Freida McFadden
        6. It – Stephen King
        7. Animal Farm – George Orwell
        8. The Great Gatsby – F. Scott Fitzgerald
        9. To Kill a Mockingbird – Harper Lee
        10. 1984 – George Orwell

        Podcasts

        Podcast searches were driven by political commentary and celebrity-hosted shows. The Charlie Kirk Show ranked first both worldwide and in the US, while sports podcast New Heights and Michelle Obama’s “IMO” gained attention in the US.

        Global top 10:

        1. The Charlie Kirk Show
        2. New Heights
        3. This Is Gavin Newsom
        4. Khloé In Wonder Land
        5. Good Hang With Amy Poehler
        6. Candace
        7. The Meidastouch Podcast
        8. The Ruthless Podcast
        9. The Venus Podcast
        10. The Mel Robbins Podcast

        US top 10:

        1. New Heights
        2. The Charlie Kirk Show
        3. IMO with Michelle Obama and Craig Davidson
        4. This Is Gavin Newsom
        5. Good Hang With Amy Poehler
        6. Khloé In Wonder Land
        7. The Severance Podcast
        8. The Rosary in a Year
        9. Unbothered
        10. The Bryce Crawford Podcast

        Sports Events

        International soccer tournaments attracted the most global sports searches. The FIFA Club World Cup, Asia Cup, and ICC Champions Trophy were the top interests worldwide, while in the US, searches centered on domestic events like the Ryder Cup and UFC championships.

        Global top 10:

        1. FIFA Club World Cup
        2. Asia Cup
        3. ICC Champions Trophy
        4. ICC Women’s World Cup
        5. Ryder Cup
        6. EuroBasket
        7. Concacaf Gold Cup
        8. 4 Nations Face-Off
        9. UFC 313
        10. UFC 311

        US top 10:

        1. Ryder Cup
        2. 4 Nations Face-Off
        3. UFC 313
        4. UFC 311
        5. College Football Playoff
        6. Super Bowl LX
        7. NBA Finals
        8. World Series
        9. Stanley Cup Finals
        10. March Madness

        Lifestyle And Gaming

        Anticipated game releases led search trends. Arc Raiders was the most-searched title globally, while Clair Obscur: Expedition 33 was the top search in the US, alongside popular titles such as Battlefield 6 and Hollow Knight: Silksong.

        Global top 5 games:

        1. Arc Raiders
        2. Battlefield 6
        3. Strands
        4. Split Fiction
        5. Clair Obscur: Expedition 33

        US top 5 games:

        1. Clair Obscur: Expedition 33
        2. Battlefield 6
        3. Hollow Knight: Silksong
        4. ARC Raiders
        5. The Elder Scrolls IV: Oblivion Remastered

          Music (US Only)

          Emerging artists and well-known musicians drove music searches. d4vd led in musician searches, whereas Taylor Swift led song rankings with various tracks, including “Wood” and “The Fate of Ophelia.”

          Top 5 musicians:

          1. d4vd
          2. KATSEYE
          3. Bad Bunny
          4. Sombr
          5. Doechii

          Top 5 songs:

          1. Wood – Taylor Swift
          2. DtMF – Bad Bunny
          3. Golden – HUNTR/X
          4. The Fate of Ophelia – Taylor Swift
          5. Father Figure – Taylor Swift

          Travel (US Only)

          Major cities and popular European destinations drove travel itinerary searches. Boston, Seattle, and Tokyo led domestic travel plans, while Prague and Edinburgh were notably popular for European trips.

          Top 10 travel itinerary searches:

          1. Boston
          2. Seattle
          3. Tokyo
          4. New York
          5. Prague
          6. London
          7. San Diego
          8. Acadia National Park
          9. Edinburgh
          10. Miami

            Google Maps

            Google Maps data represents the most-searched locations on Maps in 2025.

            Bookstores

            Historic and iconic bookstores drew worldwide attention on Google Maps. Portugal’s Livraria Lello and Tokyo’s Animate Ikebukuro were the most searched internationally, while Powell’s City of Books in Portland ranked highest in US bookstore interest.

            Global top 5:

            1. Livraria Lello, Porto District, Portugal
            2. animate Ikebukuro main store, Tokyo, Japan
            3. El Ateneo Grand Splendid, Buenos Aires, Argentina
            4. Shakespeare and Company, Île-de-France, France
            5. Libreria Acqua Alta, Veneto, Italy

            US top 5:

            1. Powell’s City of Books, Portland, Oregon
            2. Strand Book Store, New York, New York
            3. The Last Bookstore, Los Angeles, California
            4. Kinokuniya New York, New York, New York
            5. Stanford University Bookstore, Stanford, California

                Looking Back

                That’s what caught attention in 2025. People searched for breaking news about natural disasters and political changes. They tracked sports tournaments and looked up new AI tools. They followed major world events.

                And between those searches, they looked up actors after breakthrough performances, found recipes they saw on social feeds, and planned trips to places they’d been thinking about for years.

                The trends don’t tell you what mattered most. They tell you what people were curious about when they had a spare moment, whether that was understanding a major news event or finding the perfect travel itinerary.

                You can watch the full Google Year In Search video below:

                The full Year in Search data is at trends.withgoogle.com/year-in-search/2025.

                More resources:

                Do Faces Help YouTube Thumbnails? Here’s What The Data Says via @sejournal, @MattGSouthern

                A claim about YouTube thumbnails is getting attention on X: that showing your face is “probably killing your views,” and that removing yourself will make click-through rates jump.

                Nate Curtiss, Head of Content at 1of10 Media, pushed back, calling that kind of advice too absolute and pointing to a dataset that suggests the answer is more situational.

                The dispute matters because thumbnail advice often gets reduced to rules. YouTube’s own product signals suggest the platform is trying to reward what keeps viewers watching, not whatever earns the fastest click.

                Where The “Remove Your Face” Claim Comes From

                In a recent post, vidIQ suggested that unless you’re already well-known, people click for ideas rather than creators, and that removing your face from thumbnails can raise CTR.

                Curtiss responded by calling the claim unsupported, and linked to highlights from a long-form report based on a sample of high-performing YouTube videos.

                The debate is one side arguing faces distract from the idea, while the other argues faces can help or hurt depending on what you publish and who you publish for.

                What The Data Says About Faces In Thumbnails

                The report Curtiss linked to describes a dataset of more than 300,000 “viral” YouTube videos from 2025, spanning tens of thousands of channels. It defines “outlier” performance using an “Outlier Score,” calculated as a high-performing video’s views relative to the channel’s median views.

                On faces specifically, the report’s top finding is that thumbnails with faces and thumbnails without faces perform similarly, even though faces appear on a large share of videos in the sample.

                The differences show up when the report breaks down the data:

                • In its channel-size breakdown, it finds that adding a face only helped channels above a certain subscriber threshold, and even then the lift was modest.
                • In its niche segmentation, it finds that some categories performed better with faces while others performed worse. Finance is listed among the niches that performed better with faces, while Business is listed among the niches that performed worse.
                • It also reports that thumbnails featuring multiple faces performed best compared to single-face thumbnails.

                What YouTube Says About Faces In Thumbnails

                Even if a thumbnail change increases CTR, YouTube’s own tooling suggests the algorithm is optimizing for what happens after the click.

                In a YouTube blog post, Creator Liaison Rene Ritchie explains that the thumbnail testing tool runs until one variant achieves a higher percentage of watch time.

                He also explains why results are returned as watch time rather than separate CTR and retention metrics, describing watch time as incorporating both the click and the ability to keep viewers watching.

                Ritchie writes:

                “Thumbnail Test & Compare returns watch time rather than separate metrics on click-through rate (CTR) and retention (AVP), because watch time includes both! You have to click to watch and you have to retain to build up time. If you over-index on CTR, it could become click-bait, which could tank retention, and hurt performance. This way, the tool helps build good habits — thumbnails that make a promise and videos that deliver on it!”

                This helps explain why CTR-based thumbnail advice can be incomplete. A thumbnail that boosts clicks but leads to shorter viewing may not win in YouTube’s testing tool.

                YouTube is leaning into A/B testing as a workflow inside Studio. In a separate YouTube blog post about new Studio features, YouTube describes how you can test and compare up to three titles and thumbnails per video.

                The “Who” Matters: Subscribers vs. Strangers

                YouTube’s Help Center suggests thinking about audience segments, such as new, casual, and regular viewers. Then adapt your content strategy for each group rather than treat all viewers the same.

                YouTube suggests thinking about who you’re trying to reach. Content aimed at subscribers can lean on familiar cues, while content aimed at casual viewers may need more universally readable actions or emotions.

                That aligns with the report’s finding that faces helped larger channels more than smaller ones, which could reflect stronger audience familiarity.

                What This Means

                The practical takeaway is not to “put your face in every thumbnail” or “go faceless.”

                The data suggests faces are common and, on average, not dramatically different from no-face thumbnails. The interesting part is the segmentation: some topics appear to benefit from faces more than others, and multiple faces may generate more interest than a single reaction shot.

                YouTube’s testing design keeps pulling the conversation back to viewer outcomes. Clicks matter, but so does whether the thumbnail matches the video and earns watch time once someone lands.

                YouTube’s product team describes this as “Packaging,” which is a concept that treats the title, thumbnail, and the first 30 seconds of the video as a single unit.

                On mobile, where videos often auto-play, the face in the thumbnail should naturally transition into the video’s intro. If the emotional cue in the thumbnail doesn’t match the opening of the video, it can hurt early retention.

                Looking Ahead

                This debate keeps resurfacing because creators want simple rules, and YouTube performance rarely works that way.

                The debate overlooks an important point that top creators like MrBeast emphasize. It’s more about how you show your face than whether you show it at all.

                MrBeast previously mentioned that changing how he appears in thumbnails, like switching to closed-mouth expressions, increased watch time in his tests.

                The 1of10 data supports the idea that faces in thumbnails aren’t a blanket rule. Results can vary by topic, format, and audience expectations.

                A better way to look at it is fit. Faces can help signal trust, identity, or emotion, but they can also compete with the subject of the video depending on what you publish.

                With YouTube adding more testing to Studio, you may get better results by validating thumbnail decisions against watch-time outcomes instead of relying on one-size-fits-all advice.


                Featured Image: T. Schneider/Shutterstock

                Redirection For Contact Form 7 WordPress Plugin Vulnerability via @sejournal, @martinibuster

                A vulnerability in the popular WordPress Contact Form 7 plugin addon installed in over 300,000 websites enables attackers to upload malicious files and makes it possible for them to copy files from the server.

                Redirection For Contact Form 7

                The Redirection for Contact Form 7 WordPress plugin by Themeisle is an add-on to the popular Contact Form 7 plugin. It enables websites to redirect site visitors to any web page after a form submission, as well as store information in a database and other functions.

                Vulnerable To Unauthenticated Attackers

                What makes this vulnerability especially concerning is that it is an unauthenticated vulnerability, which means that an attacker doesn’t need to log in or acquire any level user privilege (like subscriber level). This makes it easier for an attacker take advantage of a flaw.

                According to Wordfence:

                “The Redirection for Contact Form 7 plugin for WordPress is vulnerable to arbitrary file uploads due to missing file type validation in the ‘move_file_to_upload’ function in all versions up to, and including, 3.2.7. This makes it possible for unauthenticated attackers to copy arbitrary files on the affected site’s server. If ‘allow_url_fopen’ is set to ‘On’, it is possible to upload a remote file to the server.”

                That last part of the vulnerability is what makes exploiting it a little harder. ‘allow_url_fopen’ controls how PHP handles files. PHP ships with this set to “On” but most shared hosting providers routinely set this to “Off” in order to prevent security vulnerabilities.

                Although this is an unauthenticated vulnerability which make it easier to take advantage, the fact that it relies on the PHP ‘allow_url_fopen’ setting to be “on” mitigates the likelihood of the flaw being exploited.

                Users of the plugin are encouraged to update to version 3.2.8 of the plugin or newer.

                Featured Image by Shutterstock/katalinks

                Google Files DMCA Suit Targeting SerpApi’s SERP Scraping via @sejournal, @MattGSouthern

                Google sued SerpApi in the U.S. District Court for the Northern District of California, alleging the company developed methods to bypass protections Google deployed to prevent automated scraping of Search results and the licensed content they contain.

                Why This Case Is Different

                Unlike previous cases that focused on terms-of-service violations or broader scraping methods, Google’s complaint is built on DMCA anti-circumvention claims.

                Google argues SearchGuard is a protection measure that controls access to copyrighted works appearing in Search results. The complaint describes SearchGuard as a system that sends a JavaScript “challenge” to requests from unrecognized sources and requires the browser to return specific information as a “solve.”

                Google says the system launched in January and initially blocked SerpApi. The complaint claims SerpApi then developed ways to bypass it.

                The complaint document reads:

                “Google developed and deployed a technological measure, known as SearchGuard, that restricts access to its search results pages and the copyrighted content they contain. So that it could continue its free riding, however, SerpApi developed a means of circumventing SearchGuard. With the automated queries it submits, SerpApi engages in a wide variety of misrepresentations and evasions in order to bypass the technological protections Google deployed. But each time it employs these artifices, SerpApi violates federal law.”

                Why DMCA Section 1201 Is The Center Of The Complaint

                Google’s complaint leans on DMCA Section 1201, which targets circumvention of access controls and also the sale of circumvention tools or services.

                Google is bringing two claims: one focused on the act of circumvention (Section 1201(a)(1)) and another focused on “trafficking” in circumvention services or technology (Section 1201(a)(2)). The complaint says Google may elect statutory damages of $200 to $2,500 per violation.

                The filing also argues that even if damages were awarded, SerpApi “reportedly earns a few million dollars in annual revenue,” and Google is seeking an injunction to stop the alleged conduct.

                What Google Claims SerpApi Did

                Google claims SerpApi circumvented SearchGuard in multiple ways, including misrepresenting attributes of requests (such as device, software, or location) to obtain authorization to submit queries.

                The complaint quotes SerpApi’s founder describing the process as:

                “creating fake browsers using a multitude of IP addresses that Google sees as normal users.”

                Google estimates SerpApi sends “hundreds of millions” of artificial search requests each day, and says that volume increased by as much as 25,000% over two years.

                The Licensed Content Angle

                Google’s issue is not just “SERP data.” It centers on copyrighted content embedded in Search features through licensing and partner relationships.

                The complaint says Knowledge Panels “often contain copyrighted photographs that Google licenses from third parties,” and it points to other examples like merchant-supplied product images in Shopping and third-party imagery used in Maps.

                Google alleges SerpApi “scrape[s] this copyrighted content and more from Google” and resells it to customers for a fee, without permission or compensation to rights holders.

                Why This Matters For SEO Tools

                If your workflows depend on third-party SERP data (rank tracking, feature monitoring, competitive intelligence), this case is worth watching because Google is asking for an injunction that could cut off a source of automated SERP access.

                Bigger vendors typically run their own collection systems. Smaller products, internal dashboards, and custom tools are more likely to depend on outside SERP APIs, which can create a single point of failure if a provider is forced to shut down or change methods.

                Industry Context: Scraping Lawsuits Are Increasing

                Google’s filing follows other litigation over scraping and content reuse.

                Reddit sued SerpApi and other scraping companies in October over alleged scraping tied to Perplexity, but also notes Perplexity isn’t mentioned in Google’s lawsuit.

                Antitrust Context, Briefly

                This also lands after Judge Amit Mehta’s August 2024 liability ruling in the U.S. search antitrust case, with remedies ordered in 2025 and appeals expected.

                That case deals with distribution and defaults. This one is about automated access to Search results pages and the content embedded in them. Still, they both sit inside the same broader debate about how much control platforms can exert over access and reuse.

                What People Are Saying

                Some reaction on X has framed the lawsuit as an existential threat to AI products that depend on third-party access to Google results, with one post calling it “the end of ChatGPT.”

                The court filing and Google’s announcement are narrower, focused on SerpApi’s alleged circumvention of SearchGuard and the resale of copyrighted content embedded in Google Search features.

                SerpApi, for its part, says it will “vigorously defend” the case and characterizes it as an effort to limit competition from companies building “next-generation AI” and other applications.

                What Comes Next

                Google is asking the court for monetary damages and an order blocking the alleged circumvention. It also wants SerpApi compelled to destroy technology involved in the alleged violations.

                If the case proceeds, the central issue is whether SearchGuard qualifies as a DMCA-protected access control for copyrighted works, or whether SerpApi argues it functions more like bot-management, which it may contend falls outside Section 1201.

                Microsoft Explains How Duplicate Content Affects AI Search Visibility via @sejournal, @MattGSouthern

                Microsoft has shared new guidance on duplicate content that’s aimed at AI-powered search.

                The post on the Bing Webmaster Blog discusses which URL serves as the “source page” for AI answers when several similar URLs exist.

                Microsoft describes how “near-duplicate” pages can end up grouped together for AI systems, and how that grouping can influence which URL gets pulled into AI summaries.

                How AI Systems Handle Duplicates

                Fabrice Canel and Krishna Madhavan, Principal Product Managers at Microsoft AI, wrote:

                “LLMs group near-duplicate URLs into a single cluster and then choose one page to represent the set. If the differences between pages are minimal, the model may select a version that is outdated or not the one you intended to highlight.”

                If multiple pages are interchangeable, the representative page might be an older campaign URL, a parameter version, or a regional page you didn’t mean to promote.

                Microsoft also notes that many LLM experiences are grounded in search indexes. If the index is muddied by duplicates, that same ambiguity can show up downstream in AI answers.

                How Duplicates Can Reduce AI Visibility

                Microsoft lays out several ways duplication can get in the way.

                One is intent clarity. If multiple pages cover the same topic with nearly identical copy, titles, and metadata, it’s harder to tell which URL best fits a query. Even when the “right” page is indexed, the signals are split across lookalikes.

                Another is representation. If the pages are clustered, you’re effectively competing with yourself for which version stands in for the group.

                Microsoft also draws a line between real page differentiation and cosmetic variants. A set of pages can make sense when each one satisfies a distinct need. But when pages differ only by minor edits, they may not carry enough unique signals for AI systems to treat them as separate candidates.

                Finally, Microsoft links duplication to update lag. If crawlers spend time revisiting redundant URLs, changes to the page you actually care about can take longer to show up in systems that rely on fresh index signals.

                Categories Of Duplicate Content Microsoft Highlights

                The guidance calls out a few repeat offenders.

                Syndication is one. When the same article appears across sites, identical copies can make it harder to identify the original. Microsoft recommends asking partners to use canonical tags that point to the original URL and to use excerpts instead of full reprints when possible.

                Campaign pages are another. If you’re spinning up multiple versions targeting the same intent and differing only slightly, Microsoft recommends choosing a primary page that collects links and engagement, then using canonical tags for the variants and consolidating older pages that no longer serve a distinct purpose.

                Localization comes up in the same way. Nearly identical regional pages can look like duplicates unless they include meaningful differences. Microsoft suggests localizing with changes that actually matter, such as terminology, examples, regulations, or product details.

                Then there are technical duplicates. The guidance lists common causes such as URL parameters, HTTP and HTTPS versions, uppercase and lowercase URLs, trailing slashes, printer-friendly versions, and publicly accessible staging pages.

                The Role Of IndexNow

                Microsoft points to IndexNow as a way to shorten the cleanup cycle after consolidating URLs.

                When you merge pages, change canonicals, or remove duplicates, IndexNow can help participating search engines discover those changes sooner. Microsoft links that faster discovery to fewer outdated URLs lingering in results, and fewer cases where an older duplicate becomes the page that’s used in AI answers.

                Microsoft’s Core Principle

                Canel and Madhavan wrote:

                “When you reduce overlapping pages and allow one authoritative version to carry your signals, search engines can more confidently understand your intent and choose the right URL to represent your content.”

                The message is consolidation first, technical signals second. Canonicals, redirects, hreflang, and IndexNow help, but they work best when you’re not maintaining a long tail of near-identical pages.

                Why This Matters

                Duplicate content isn’t a penalty by itself. The downside is weaker visibility when signals are diluted, and intent is unclear.

                Syndicated articles can keep outranking the original if canonicals are missing or inconsistent. Campaign variants can cannibalize each other if the “differences” are mostly cosmetic. Regional pages can blend together if they don’t clearly serve different needs.

                Routine audits can help you catch overlap early. Microsoft points to Bing Webmaster Tools as a way to spot patterns such as identical titles and other duplication indicators.

                Looking Ahead

                As AI answers become a more common entry point, the “which URL represents this topic” problem becomes harder to ignore.

                Cleaning up near-duplicates can influence which version of your content gets surfaced when an AI system needs a single page to ground an answer.

                Sam Altman Explains OpenAI’s Bet On Profitability via @sejournal, @martinibuster

                In an interview with the Big Technology Podcast, Sam Altman seemed to struggle answering the tough questions about OpenAI’s path to profitability.

                At about the 36 minute mark the interviewer asked the big question about revenues and spending. Sam Altman said OpenAI’s losses are tied to continued increases in training costs while revenue is growing. He said the company would be profitable much earlier if it were not continuing to grow its training spend so aggressively.

                Altman said concern about OpenAI’s spending would be reasonable only if the company reached a point where it had large amounts of computing it could not monetize profitably.

                The interviewer asked:

                “Let’s, let’s talk about numbers since you brought it up. Revenue’s growing, compute spend is growing, but compute spend still outpaces revenue growth. I think the numbers that have been reported are OpenAI is supposed to lose something like 120 billion between now and 2028, 29, where you’re going to become profitable.

                So talk a little bit about like, how does that change? Where does the turn happen?”

                Sam Altman responded:

                “I mean, as revenue grows and as inference becomes a larger and larger part of the fleet, it eventually subsumes the training expense. So that’s the plan. Spend a lot of money training, but make more and more.

                If we weren’t continuing to grow our training costs by so much, we would be profitable way, way earlier. But the bet we’re making is to invest very aggressively in training these big models.”

                At this point the interviewer pressed Altman harder about the path to profitability, this time mentioning the spending commitment of $1.4 trillion dollars versus the $20 billion dollars in revenue. This was not a softball question.

                The interviewer pushed back:

                “I think it would be great just to lay it out for everyone once and for all how those numbers are gonna work.”

                Sam Altman’s first attempt to answer seemed to stumble in a word salad kind of way: 

                “It’s very hard to like really, I find that one thing I certainly can’t do it and very few people I’ve ever met can do it.

                You know, you can like, you have good intuition for a lot of mathematical things in your head, but exponential growth is usually very hard for people to do a good quick mental framework on.

                Like for whatever reason, there were a lot of things that evolution needed us to be able to do well with math in our heads. Modeling exponential growth doesn’t seem to be one of them.”

                Altman then regained his footing with a more coherent answer:

                “The thing we believe is that we can stay on a very steep growth curve of revenue for quite a while. And everything we see right now continues to indicate that we cannot do it if we don’t have the compute.

                Again, we’re so compute constrained, and it hits the revenue line so hard that I think if we get to a point where we have like a lot of compute sitting around that we can’t monetize on a profitable per unit of compute basis, it’d be very reasonable to say, okay, this is like a little, how’s this all going to work?

                But we’ve penciled this out a bunch of ways. We will of course also get more efficient on like a flops per dollar basis, as you know, all of the work we’ve been doing to make compute cheaper comes to pass.

                But we see this consumer growth, we see this enterprise growth. There’s a whole bunch of new kinds of businesses that, that we haven’t even launched yet, but will. But compute is really the lifeblood that enables all of this.

                We have always been in a compute deficit. It has always constrained what we’re able to do.

                I unfortunately think that will always be the case, but I wish it were less the case, and I’d like to get it to be less of the case over time, because I think there’s so many great products and services that we can deliver, and it’ll be a great business.”

                The interviewer then sought to clarify the answer, asking:

                “And then your expectation is through things like this enterprise push, through things like people being willing to pay for ChatGPT through the API, OpenAI will be able to grow revenue enough to pay for it with revenue.”

                Sam Altman responded:

                “Yeah, that is the plan.”

                Altman’s comments define a specific threshold for evaluating whether OpenAI’s spending is a problem. He points to unused or unmonetizable computing power as the point at which concern would be justified, rather than current losses or large capital commitments.

                In his explanation, the limiting factor is not willingness to pay, but how much computing capacity OpenAI can bring online and use. The follow-up question makes that explicit, and Altman’s confirmation makes clear that the company is relying on revenue growth from consumer use, enterprise adoption, and additional products to cover its costs over time.

                Altman’s path to profitability rests on a simple bet: that OpenAI can keep finding buyers for its computing as fast as it can build it. Eventually, that bet either keeps winning or the chips run out.

                Watch the interview starting at about the 36 minute mark:

                Featured Image/Screenshot