Track Santa On Christmas Eve 2025 (Via NORAD & Google) via @sejournal, @MattGSouthern

Santa’s coming!

The world waits with excitement and anticipation for the arrival of Santa Claus as he starts his world tour for 2025.

Children (and adults) everywhere are eager to track the man in the red suit as he defies the speed limit to make his journey across the globe in just one night.

To help you keep up to date on what time Santa will arrive in your neighborhood, there are now two portals you can use to follow the sleigh.

The original Santa tracker from NORAD tracks Santa’s sleigh as he starts his busy night shift at the International Date Line in the Pacific Ocean and heads across the world towards New Zealand and Australia.

Google also has an interactive website and mobile app so users can follow Old Saint Nick’s journey as he delivers presents worldwide until he finishes in South America after the world’s longest night shift.

NORAD Santa Tracker: A Holiday Tradition

For 70 years, the NORAD Santa Tracker has helped families follow Santa’s whereabouts.

The NORAD Santa Tracker began in 1955 when a misprinted phone number in a Sears advertisement directed children to call NORAD’s predecessor, the Continental Air Defense Command (CONAD), instead of Santa.

Colonel Harry Shoup, the director of operations, instructed his staff to give updates on Santa’s location to every child who called.

NORAD continues the tradition to this day.

Screenshot from noradsanta.org/en/, December 2025

How To Track Santa With NORAD

  1. Visit the NORAD Santa Tracker website.
  2. On Christmas Eve, the live map will display Santa’s current location and next stop.
  3. For a more traditional experience, call the NORAD Tracks Santa hotline at 1-877-HI-NORAD (1-877-446-6723) to speak with a volunteer who will provide you with Santa’s current location.
  4. Follow NORAD’s social media channels for regular daily updates.

This year, NORAD has added an AI chatbot called Radar to help you get the latest updates.

The Evolution Of Google’s Santa Tracker

Since it launched in 2004, Google’s Santa Tracker has changed and improved. The team uses this project to try out new technologies and make design updates. Some of these new features, like “View in 3D,” are later added to other Google products and services.

What’s In The 2025 Google Santa Tracker

Screenshot from santatracker.google.com/, December 2025

Google’s Santa Tracker returns for its 21st year with the familiar village experience you know and love. The site features games, videos, and activities throughout December, with the live tracker launching on Christmas Eve.

This year’s collection includes classics like Elf Ski and Penguin Dash alongside creative activities like Santa’s Canvas and Code Lab.

On Christmas Eve, the live map shows Santa’s current location, where he’s heading next, his distance from your location, and an estimated arrival time. The tracker begins at midnight in the furthest east time zone (10:00 a.m. UTC) as Santa starts his journey at the International Date Line in the Pacific Ocean.
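That start time is simple offset arithmetic: midnight at UTC+14, the furthest-east time zone, falls at 10:00 UTC on the previous calendar day. A quick sketch, using fixed offsets for illustration (real time zone rules vary):

```python
from datetime import datetime, timezone, timedelta

# Midnight on Christmas Eve in the furthest-east time zone (UTC+14,
# just west of the International Date Line) works out to 10:00 UTC.
sleigh_launch = datetime(2025, 12, 24, 0, 0, tzinfo=timezone(timedelta(hours=14)))
start_utc = sleigh_launch.astimezone(timezone.utc)
print(start_utc.strftime("%H:%M UTC"))  # 10:00 UTC

# Convert to a viewer's zone, e.g. US Eastern in winter (UTC-5):
est = timezone(timedelta(hours=-5), name="EST")
print(start_utc.astimezone(est).strftime("%H:%M %Z"))  # 05:00 EST
```

So by the time East Coast viewers wake up on Christmas Eve, the sleigh has already been flying for hours.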

For each city Santa visits, the tracker displays Wikipedia excerpts and photos, turning the experience into a geography lesson wrapped in Christmas magic.

How To Use The Google Santa Tracker

  1. Visit the Google Santa Tracker website or download the mobile app for Android devices.
  2. On Christmas Eve, the live map will show Santa’s current location, the number of gifts delivered, and his estimated arrival time at your location.
  3. Explore the map to learn more about the 500+ locations Santa visits, with photos and information provided by Google’s Local Guides.

Extra Features & Activities

Beyond games, the platform showcases detailed animated environments ranging from cozy kitchens where elves prepare holiday treats to snowy outdoor scenes filled with winter activities.

The experience is wrapped in Google’s characteristic bright, cheerful art style, with colorful illustrations that bring North Pole activities to life.

Whether practicing basic coding concepts or learning holiday traditions from around the world, kids (and big kids) can explore while counting down to Christmas.

To All, A Good Night

Settle in for the evening with your favorite Christmas snack and follow Santa’s journey with either Google or NORAD.

Santa has an estimated 2.2 billion homes to visit, so it’s going to be a busy night! Don’t forget to leave out your carrots and mince pies.

Happy holidays from all of us at Search Engine Journal!


Featured Image: Roman Samborskyi/Shutterstock

Google Reveals The Top Searches Of 2025 via @sejournal, @MattGSouthern

In 2025, Google’s AI tool Gemini topped global searches. People tracked cricket matches between India and England, looked up details on the new Pope, and searched for information about Iran and the TikTok ban. They followed LA fires and government shutdowns.

But between the headlines, they also looked up Pedro Pascal and Mikey Madison. They wanted to make hot honey and marry me chicken. They planned trips to Prague and Edinburgh. They searched for bookstores from Livraria Lello in Porto to Powell’s in Portland.

Google’s Year in Search tracks what spiked. These lists show queries that grew the fastest relative to 2024, ranging from breaking news to entertainment, sports, and lifestyle. Together, they present a picture of what captured attention throughout the year.

Top Searches Of 2025

Google’s AI assistant Gemini became the top trending search globally, showing how widely AI tools were embraced throughout the year. Sports filled much of the rest of the top 10, with cricket matches between India and England, the Club World Cup, and the Asia Cup capturing broad public interest.

The global top 10 trending searches were:

  1. Gemini
  2. India vs England
  3. Charlie Kirk
  4. Club World Cup
  5. India vs Australia
  6. Deepseek
  7. Asia Cup
  8. Iran
  9. iPhone 17
  10. Pakistan and India

The US list reflected different priorities, with Charlie Kirk at the top and entertainment properties ranking highly. KPop Demon Hunters secured the second position.

The US top 10 trending searches were:

  1. Charlie Kirk
  2. KPop Demon Hunters
  3. Labubu
  4. iPhone 17
  5. One Big Beautiful Bill Act
  6. Zohran Mamdani
  7. DeepSeek
  8. Government shutdown
  9. FIFA Club World Cup
  10. Tariffs

News & Current Events

Natural disasters and political events shaped the news topics people searched for. The LA Fires, Hurricane Melissa, and the TikTok ban drew worldwide interest, while in the US, searches focused on topics like the One Big Beautiful Bill Act and the government shutdown.

Global top 10:

  1. Charlie Kirk assassination
  2. Iran
  3. US Government Shutdown
  4. New Pope chosen
  5. LA Fires
  6. Hurricane Melissa
  7. TikTok ban
  8. Zohran Mamdani elected
  9. USAID
  10. Kamchatka Earthquake and Tsunami

US top 10:

  1. One Big Beautiful Bill Act
  2. Government shutdown
  3. Charlie Kirk assassination
  4. Tariffs
  5. No Kings protest
  6. Los Angeles fires
  7. New Pope chosen
  8. Epstein files
  9. U.S. Presidential Inauguration
  10. Hurricane Melissa

AI-Generated Content Leads US Trends

AI-generated content captured attention in the US, with AI-created images and characters appearing across categories. The viral AI Barbie, AI action figures, and Ghibli-style AI art topped this year’s trends.

The top US trends included:

  1. AI action figure
  2. AI Barbie
  3. Holy airball
  4. AI Ghostface
  5. AI Polaroid
  6. Chicken jockey
  7. Bacon avocado
  8. Anxiety dance
  9. Unfortunately, I do love
  10. Ghibli

People

Music artists and political figures were among the most searched people worldwide. d4vd, Kendrick Lamar, and the newly elected Pope Leo XIV attracted the most international attention. In the US, searches centered on political figures such as Zohran Mamdani and Karoline Leavitt.

Global top 10:

  1. d4vd
  2. Kendrick Lamar
  3. Jimmy Kimmel
  4. Tyler Robinson
  5. Pope Leo XIV
  6. Vaibhav Sooryavanshi
  7. Shedeur Sanders
  8. Bianca Censori
  9. Zohran Mamdani
  10. Greta Thunberg

US top 10:

  1. Zohran Mamdani
  2. Tyler Robinson
  3. d4vd
  4. Erika Kirk
  5. Pope Leo XIV
  6. Shedeur Sanders
  7. Bonnie Blue
  8. Karoline Leavitt
  9. Andy Byron
  10. Jimmy Kimmel

Entertainment

Actors

Breakthrough performances drove increased actor searches. Mikey Madison saw a spike in global searches after her acclaimed role in Anora, while Pedro Pascal led searches in the US.

Global top 5:

  1. Mikey Madison
  2. Lewis Pullman
  3. Isabela Merced
  4. Song Ji Woo
  5. Kaitlyn Dever

US top 5:

  1. Pedro Pascal
  2. Malachi Barton
  3. Walton Goggins
  4. Pamela Anderson
  5. Charlie Sheen

Movies

Anticipated franchise entries and original films topped movie searches. Anora led globally, while KPop Demon Hunters gained US popularity alongside major releases such as The Minecraft Movie and Thunderbolts*.

Global top 5:

  1. Anora
  2. Superman
  3. Minecraft Movie
  4. Thunderbolts*
  5. Sinners

US top 5:

  1. KPop Demon Hunters
  2. Sinners
  3. The Minecraft Movie
  4. Happy Gilmore 2
  5. Thunderbolts*

Books

Contemporary romance and classic literature were the most searched genres. Colleen Hoover’s “Regretting You” and Rebecca Yarros’s “Onyx Storm” topped both the global and US charts, while George Orwell’s “Animal Farm” and “1984” saw a resurgence in popularity.

Global top 10:

  1. Regretting You – Colleen Hoover
  2. Onyx Storm – Rebecca Yarros
  3. Lights Out – Navessa Allen
  4. The Summer I Turned Pretty – Jenny Han
  5. The Housemaid – Freida McFadden
  6. Frankenstein – Mary Shelley
  7. It – Stephen King
  8. Animal Farm – George Orwell
  9. The Witcher – Andrzej Sapkowski
  10. Diary Of A Wimpy Kid – Jeff Kinney

US top 10:

  1. Regretting You – Colleen Hoover
  2. Onyx Storm – Rebecca Yarros
  3. Lights Out – Navessa Allen
  4. The Summer I Turned Pretty – Jenny Han
  5. The Housemaid – Freida McFadden
  6. It – Stephen King
  7. Animal Farm – George Orwell
  8. The Great Gatsby – F. Scott Fitzgerald
  9. To Kill a Mockingbird – Harper Lee
  10. 1984 – George Orwell

Podcasts

Podcast searches were driven by political commentary and celebrity-hosted shows. The Charlie Kirk Show ranked first worldwide, while the sports podcast New Heights topped the US list and Michelle Obama’s “IMO” also gained attention.

Global top 10:

  1. The Charlie Kirk Show
  2. New Heights
  3. This Is Gavin Newsom
  4. Khloé In Wonder Land
  5. Good Hang With Amy Poehler
  6. Candace
  7. The Meidastouch Podcast
  8. The Ruthless Podcast
  9. The Venus Podcast
  10. The Mel Robbins Podcast

US top 10:

  1. New Heights
  2. The Charlie Kirk Show
  3. IMO with Michelle Obama and Craig Robinson
  4. This Is Gavin Newsom
  5. Good Hang With Amy Poehler
  6. Khloé In Wonder Land
  7. The Severance Podcast
  8. The Rosary in a Year
  9. Unbothered
  10. The Bryce Crawford Podcast

Sports Events

International soccer and cricket tournaments attracted the most global sports searches. The FIFA Club World Cup, Asia Cup, and ICC Champions Trophy drew the most interest worldwide, while in the US, searches centered on events like the Ryder Cup and UFC championships.

Global top 10:

  1. FIFA Club World Cup
  2. Asia Cup
  3. ICC Champions Trophy
  4. ICC Women’s World Cup
  5. Ryder Cup
  6. EuroBasket
  7. Concacaf Gold Cup
  8. 4 Nations Face-Off
  9. UFC 313
  10. UFC 311

US top 10:

  1. Ryder Cup
  2. 4 Nations Face-Off
  3. UFC 313
  4. UFC 311
  5. College Football Playoff
  6. Super Bowl LX
  7. NBA Finals
  8. World Series
  9. Stanley Cup Finals
  10. March Madness

Lifestyle And Gaming

Anticipated game releases led search trends. ARC Raiders was the most-searched title globally, while Clair Obscur: Expedition 33 was the top search in the US, alongside popular titles such as Battlefield 6 and Hollow Knight: Silksong.

Global top 5 games:

  1. ARC Raiders
  2. Battlefield 6
  3. Strands
  4. Split Fiction
  5. Clair Obscur: Expedition 33

US top 5 games:

  1. Clair Obscur: Expedition 33
  2. Battlefield 6
  3. Hollow Knight: Silksong
  4. ARC Raiders
  5. The Elder Scrolls IV: Oblivion Remastered

Music (US Only)

Emerging artists and well-known musicians drove music searches. d4vd led musician searches, while Taylor Swift led the song rankings with several tracks, including “Wood” and “The Fate of Ophelia.”

Top 5 musicians:

  1. d4vd
  2. KATSEYE
  3. Bad Bunny
  4. Sombr
  5. Doechii

Top 5 songs:

  1. Wood – Taylor Swift
  2. DtMF – Bad Bunny
  3. Golden – HUNTR/X
  4. The Fate of Ophelia – Taylor Swift
  5. Father Figure – Taylor Swift

Travel (US Only)

Major cities and popular European destinations drove travel itinerary searches. Boston, Seattle, and Tokyo topped itinerary searches, while Prague and Edinburgh were notably popular for European trips.

Top 10 travel itinerary searches:

  1. Boston
  2. Seattle
  3. Tokyo
  4. New York
  5. Prague
  6. London
  7. San Diego
  8. Acadia National Park
  9. Edinburgh
  10. Miami

Google Maps

Google Maps data represents the most-searched locations on Maps in 2025.

Bookstores

Historic and iconic bookstores drew worldwide attention on Google Maps. Portugal’s Livraria Lello and Tokyo’s Animate Ikebukuro were the most searched internationally, while Powell’s City of Books in Portland ranked highest in US bookstore interest.

Global top 5:

  1. Livraria Lello, Porto District, Portugal
  2. animate Ikebukuro main store, Tokyo, Japan
  3. El Ateneo Grand Splendid, Buenos Aires, Argentina
  4. Shakespeare and Company, Île-de-France, France
  5. Libreria Acqua Alta, Veneto, Italy

US top 5:

  1. Powell’s City of Books, Portland, Oregon
  2. Strand Book Store, New York, New York
  3. The Last Bookstore, Los Angeles, California
  4. Kinokuniya New York, New York, New York
  5. Stanford University Bookstore, Stanford, California

Looking Back

That’s what caught attention in 2025. People searched for breaking news about natural disasters and political changes. They tracked sports tournaments and looked up new AI tools. They followed major world events.

And between those searches, they looked up actors after breakthrough performances, found recipes they saw on social feeds, and planned trips to places they’d been thinking about for years.

The trends don’t tell you what mattered most. They tell you what people were curious about when they had a spare moment, whether that was understanding a major news event or finding the perfect travel itinerary.

You can watch the full Google Year In Search video below:

The full Year in Search data is at trends.withgoogle.com/year-in-search/2025.


Do Faces Help YouTube Thumbnails? Here’s What The Data Says via @sejournal, @MattGSouthern

A claim about YouTube thumbnails is getting attention on X: that showing your face is “probably killing your views,” and that removing yourself will make click-through rates jump.

Nate Curtiss, Head of Content at 1of10 Media, pushed back, calling that kind of advice too absolute and pointing to a dataset that suggests the answer is more situational.

The dispute matters because thumbnail advice often gets reduced to rules. YouTube’s own product signals suggest the platform is trying to reward what keeps viewers watching, not whatever earns the fastest click.

Where The “Remove Your Face” Claim Comes From

In a recent post, vidIQ suggested that unless you’re already well-known, people click for ideas rather than creators, and that removing your face from thumbnails can raise CTR.

Curtiss responded by calling the claim unsupported, and linked to highlights from a long-form report based on a sample of high-performing YouTube videos.

The debate comes down to one side arguing that faces distract from the idea, while the other argues that faces can help or hurt depending on what you publish and who you publish for.

What The Data Says About Faces In Thumbnails

The report Curtiss linked to describes a dataset of more than 300,000 “viral” YouTube videos from 2025, spanning tens of thousands of channels. It defines “outlier” performance using an “Outlier Score,” calculated as a high-performing video’s views relative to the channel’s median views.
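As described, the score is a simple ratio; the numbers below are made up for illustration, and the report’s exact computation may differ:

```python
from statistics import median

# Sketch of the report's "Outlier Score" as described: a video's views
# divided by the channel's median views (illustrative, not the report's code).
def outlier_score(video_views: int, channel_views: list) -> float:
    return video_views / median(channel_views)

# A hypothetical channel where one video took off:
channel = [12_000, 15_000, 18_000, 14_000, 90_000]
print(outlier_score(90_000, channel))  # 6.0 -> six times the channel's median
```

Normalizing by the channel’s own median is what lets the report compare a breakout video on a small channel against one on a large channel.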

On faces specifically, the report’s top finding is that thumbnails with faces and thumbnails without faces perform similarly, even though faces appear on a large share of videos in the sample.

The differences show up when the report breaks down the data:

• In its channel-size breakdown, it finds that adding a face only helped channels above a certain subscriber threshold, and even then the lift was modest.
• In its niche segmentation, it finds that some categories performed better with faces while others performed worse. Finance is listed among the niches that performed better with faces, while Business is listed among the niches that performed worse.
• It also reports that thumbnails featuring multiple faces performed best compared to single-face thumbnails.

What YouTube Says About Faces In Thumbnails

Even if a thumbnail change increases CTR, YouTube’s own tooling suggests the algorithm is optimizing for what happens after the click.

In a YouTube blog post, Creator Liaison Rene Ritchie explains that the thumbnail testing tool runs until one variant achieves a higher percentage of watch time.

He also explains why results are returned as watch time rather than separate CTR and retention metrics, describing watch time as incorporating both the click and the ability to keep viewers watching.

Ritchie writes:

“Thumbnail Test & Compare returns watch time rather than separate metrics on click-through rate (CTR) and retention (AVP), because watch time includes both! You have to click to watch and you have to retain to build up time. If you over-index on CTR, it could become click-bait, which could tank retention, and hurt performance. This way, the tool helps build good habits — thumbnails that make a promise and videos that deliver on it!”

This helps explain why CTR-based thumbnail advice can be incomplete. A thumbnail that boosts clicks but leads to shorter viewing may not win in YouTube’s testing tool.
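Ritchie’s point can be shown with simple expected-value arithmetic. The CTRs and view durations below are entirely hypothetical; the point is only that watch time per impression folds clicks and retention into one number:

```python
# Hypothetical arithmetic: expected watch time per impression is
# CTR multiplied by average view duration. The numbers are invented.
def watch_time_per_impression(ctr: float, avg_view_seconds: float) -> float:
    return ctr * avg_view_seconds

clickbait = watch_time_per_impression(0.08, 90)   # more clicks, weak retention
accurate = watch_time_per_impression(0.05, 180)   # fewer clicks, strong retention
print(clickbait, accurate)
```

In this made-up scenario, the higher-CTR thumbnail loses the watch-time comparison, which is the trade-off YouTube’s tool is designed to surface.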

YouTube is leaning into A/B testing as a workflow inside Studio. In a separate YouTube blog post about new Studio features, YouTube describes how you can test and compare up to three titles and thumbnails per video.

The “Who” Matters: Subscribers vs. Strangers

YouTube’s Help Center suggests thinking about audience segments, such as new, casual, and regular viewers, and adapting your content strategy for each group rather than treating all viewers the same.

YouTube suggests thinking about who you’re trying to reach. Content aimed at subscribers can lean on familiar cues, while content aimed at casual viewers may need more universally readable actions or emotions.

That aligns with the report’s finding that faces helped larger channels more than smaller ones, which could reflect stronger audience familiarity.

What This Means

The practical takeaway is not “put your face in every thumbnail” or “go faceless.”

The data suggests faces are common and, on average, not dramatically different from no-face thumbnails. The interesting part is the segmentation: some topics appear to benefit from faces more than others, and multiple faces may generate more interest than a single reaction shot.

YouTube’s testing design keeps pulling the conversation back to viewer outcomes. Clicks matter, but so does whether the thumbnail matches the video and earns watch time once someone lands.

YouTube’s product team describes this as “Packaging,” a concept that treats the title, thumbnail, and the first 30 seconds of the video as a single unit.

On mobile, where videos often auto-play, the face in the thumbnail should naturally transition into the video’s intro. If the emotional cue in the thumbnail doesn’t match the opening of the video, it can hurt early retention.

Looking Ahead

This debate keeps resurfacing because creators want simple rules, and YouTube performance rarely works that way.

The debate overlooks a point that top creators like MrBeast emphasize: it’s more about how you show your face than whether you show it at all.

MrBeast previously mentioned that changing how he appears in thumbnails, like switching to closed-mouth expressions, increased watch time in his tests.

The 1of10 data supports the idea that faces in thumbnails aren’t a blanket rule. Results can vary by topic, format, and audience expectations.

A better way to look at it is fit. Faces can help signal trust, identity, or emotion, but they can also compete with the subject of the video depending on what you publish.

With YouTube adding more testing to Studio, you may get better results by validating thumbnail decisions against watch-time outcomes instead of relying on one-size-fits-all advice.


Featured Image: T. Schneider/Shutterstock

Redirection For Contact Form 7 WordPress Plugin Vulnerability via @sejournal, @martinibuster

A vulnerability in Redirection for Contact Form 7, a WordPress addon to the popular Contact Form 7 plugin installed on over 300,000 websites, enables unauthenticated attackers to copy arbitrary files on the server and, in some configurations, upload malicious files.

Redirection For Contact Form 7

The Redirection for Contact Form 7 WordPress plugin by Themeisle is an add-on to the popular Contact Form 7 plugin. It enables websites to redirect site visitors to any web page after a form submission, as well as store form submissions in a database, among other functions.

Vulnerable To Unauthenticated Attackers

What makes this vulnerability especially concerning is that it can be exploited by unauthenticated attackers, meaning an attacker doesn’t need to log in or acquire any level of user privilege (such as subscriber). This makes the flaw easier to take advantage of.

According to Wordfence:

“The Redirection for Contact Form 7 plugin for WordPress is vulnerable to arbitrary file uploads due to missing file type validation in the ‘move_file_to_upload’ function in all versions up to, and including, 3.2.7. This makes it possible for unauthenticated attackers to copy arbitrary files on the affected site’s server. If ‘allow_url_fopen’ is set to ‘On’, it is possible to upload a remote file to the server.”

That last part is what makes exploiting the vulnerability a little harder. The ‘allow_url_fopen’ setting controls whether PHP can open remote URLs as if they were local files. PHP ships with it set to “On,” but most shared hosting providers routinely set it to “Off” in order to prevent security vulnerabilities.
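The root cause Wordfence cites is missing file-type validation before moving an upload. A minimal sketch of the kind of allowlist check that prevents this class of bug, written in Python for illustration (the plugin itself is PHP; the function name mirrors the one Wordfence names, and the extension list is an arbitrary example):

```python
import os
import shutil

# Hypothetical allowlist of safe extensions; executable types like
# .php are deliberately absent.
ALLOWED_EXTENSIONS = {".jpg", ".jpeg", ".png", ".gif", ".pdf", ".txt"}

def move_file_to_upload(src_path: str, upload_dir: str) -> str:
    """Move a submitted file into the upload directory only if its
    extension is on the allowlist. The vulnerable versions skipped
    a check like this entirely."""
    ext = os.path.splitext(src_path)[1].lower()
    if ext not in ALLOWED_EXTENSIONS:
        raise ValueError(f"File type {ext!r} is not allowed")
    dest = os.path.join(upload_dir, os.path.basename(src_path))
    shutil.move(src_path, dest)
    return dest
```

Without the extension check, an attacker can place an attacker-controlled file (such as a PHP web shell) where the server may later execute or serve it.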

Although this is an unauthenticated vulnerability, which makes it easier to exploit, the fact that it relies on the PHP ‘allow_url_fopen’ setting being “On” reduces the likelihood of exploitation.

Users of the plugin are encouraged to update to version 3.2.8 or newer.

Featured Image by Shutterstock/katalinks

Google Files DMCA Suit Targeting SerpApi’s SERP Scraping via @sejournal, @MattGSouthern

Google sued SerpApi in the U.S. District Court for the Northern District of California, alleging the company developed methods to bypass protections Google deployed to prevent automated scraping of Search results and the licensed content they contain.

Why This Case Is Different

Unlike previous cases that focused on terms-of-service violations or broader scraping methods, Google’s complaint is built on DMCA anti-circumvention claims.

Google argues SearchGuard is a protection measure that controls access to copyrighted works appearing in Search results. The complaint describes SearchGuard as a system that sends a JavaScript “challenge” to requests from unrecognized sources and requires the browser to return specific information as a “solve.”
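The complaint’s description follows a familiar challenge/solve pattern: the server issues a challenge, and a request is only authorized if the client returns a verifiable answer. The sketch below is a generic, hypothetical illustration of that pattern; SearchGuard’s actual mechanism (JavaScript executed by a real browser) is not public, and this is not its implementation:

```python
import hashlib
import hmac
import os

# Server-side secret for a hypothetical challenge/solve gate.
SECRET = os.urandom(32)

def issue_challenge() -> bytes:
    # Server attaches a random nonce to the response it serves.
    return os.urandom(16)

def browser_solve(nonce: bytes) -> str:
    # Stands in for the browser running the served JavaScript and
    # returning the "specific information" the server expects.
    return hmac.new(SECRET, nonce, hashlib.sha256).hexdigest()

def authorize(nonce: bytes, solve: str) -> bool:
    # Server verifies the solve before honoring the search request.
    expected = hmac.new(SECRET, nonce, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, solve)
```

A scraper that cannot produce valid solves gets blocked at the gate; the complaint alleges SerpApi built tooling that produces them anyway.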

Google says the system launched in January and initially blocked SerpApi. The complaint claims SerpApi then developed ways to bypass it.

The complaint document reads:

“Google developed and deployed a technological measure, known as SearchGuard, that restricts access to its search results pages and the copyrighted content they contain. So that it could continue its free riding, however, SerpApi developed a means of circumventing SearchGuard. With the automated queries it submits, SerpApi engages in a wide variety of misrepresentations and evasions in order to bypass the technological protections Google deployed. But each time it employs these artifices, SerpApi violates federal law.”

Why DMCA Section 1201 Is The Center Of The Complaint

Google’s complaint leans on DMCA Section 1201, which targets circumvention of access controls as well as the sale of circumvention tools or services.

Google is bringing two claims: one focused on the act of circumvention (Section 1201(a)(1)) and another focused on “trafficking” in circumvention services or technology (Section 1201(a)(2)). The complaint says Google may elect statutory damages of $200 to $2,500 per violation.

The filing also notes that SerpApi “reportedly earns a few million dollars in annual revenue,” and Google is seeking an injunction to stop the alleged conduct.

What Google Claims SerpApi Did

Google claims SerpApi circumvented SearchGuard in multiple ways, including misrepresenting attributes of requests (such as device, software, or location) to obtain authorization to submit queries.

The complaint quotes SerpApi’s founder describing the process as:

“creating fake browsers using a multitude of IP addresses that Google sees as normal users.”

Google estimates SerpApi sends “hundreds of millions” of artificial search requests each day, and says that volume increased by as much as 25,000% over two years.

The Licensed Content Angle

Google’s issue is not just “SERP data.” It centers on copyrighted content embedded in Search features through licensing and partner relationships.

The complaint says Knowledge Panels “often contain copyrighted photographs that Google licenses from third parties,” and it points to other examples like merchant-supplied product images in Shopping and third-party imagery used in Maps.

Google alleges SerpApi “scrape[s] this copyrighted content and more from Google” and resells it to customers for a fee, without permission or compensation to rights holders.

Why This Matters For SEO Tools

If your workflows depend on third-party SERP data (rank tracking, feature monitoring, competitive intelligence), this case is worth watching because Google is asking for an injunction that could cut off a source of automated SERP access.

Bigger vendors typically run their own collection systems. Smaller products, internal dashboards, and custom tools are more likely to depend on outside SERP APIs, which can create a single point of failure if a provider is forced to shut down or change methods.

Industry Context: Scraping Lawsuits Are Increasing

Google’s filing follows other litigation over scraping and content reuse.

Reddit sued SerpApi and other scraping companies in October over alleged scraping tied to Perplexity, though Perplexity isn’t named in Google’s lawsuit.

Antitrust Context, Briefly

This also lands after Judge Amit Mehta’s August 2024 liability ruling in the U.S. search antitrust case, with remedies ordered in 2025 and appeals expected.

That case deals with distribution and defaults. This one is about automated access to Search results pages and the content embedded in them. Still, they both sit inside the same broader debate about how much control platforms can exert over access and reuse.

What People Are Saying

Some reaction on X has framed the lawsuit as an existential threat to AI products that depend on third-party access to Google results, with one post calling it “the end of ChatGPT.”

The court filing and Google’s announcement are narrower, focused on SerpApi’s alleged circumvention of SearchGuard and the resale of copyrighted content embedded in Google Search features.

SerpApi, for its part, says it will “vigorously defend” the case and characterizes it as an effort to limit competition from companies building “next-generation AI” and other applications.

What Comes Next

Google is asking the court for monetary damages and an order blocking the alleged circumvention. It also wants SerpApi compelled to destroy technology involved in the alleged violations.

If the case proceeds, the central issue is whether SearchGuard qualifies as a DMCA-protected access control for copyrighted works, or whether, as SerpApi may contend, it functions more like bot management and falls outside Section 1201.

                Microsoft Explains How Duplicate Content Affects AI Search Visibility via @sejournal, @MattGSouthern

                Microsoft has shared new guidance on duplicate content that’s aimed at AI-powered search.

                The post on the Bing Webmaster Blog discusses which URL serves as the “source page” for AI answers when several similar URLs exist.

                Microsoft describes how “near-duplicate” pages can end up grouped together for AI systems, and how that grouping can influence which URL gets pulled into AI summaries.

                How AI Systems Handle Duplicates

                Fabrice Canel and Krishna Madhavan, Principal Product Managers at Microsoft AI, wrote:

                “LLMs group near-duplicate URLs into a single cluster and then choose one page to represent the set. If the differences between pages are minimal, the model may select a version that is outdated or not the one you intended to highlight.”

                If multiple pages are interchangeable, the representative page might be an older campaign URL, a parameter version, or a regional page you didn’t mean to promote.

                Microsoft also notes that many LLM experiences are grounded in search indexes. If the index is muddied by duplicates, that same ambiguity can show up downstream in AI answers.
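Microsoft doesn’t describe the clustering mechanics, but one common way to reason about near-duplicate grouping is shingle-based Jaccard similarity. The sketch below is purely illustrative, not Bing’s actual algorithm; the page strings and the similarity cutoffs are made up for the example.

```python
def shingles(text: str, k: int = 3) -> set:
    """Break text into overlapping k-word shingles."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a: str, b: str) -> float:
    """Jaccard similarity of two documents' shingle sets (0.0 to 1.0)."""
    sa, sb = shingles(a), shingles(b)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

# Two near-identical campaign pages and one genuinely different page:
page_a = "Buy our winter boots with free shipping and easy returns today"
page_b = "Buy our winter boots with free shipping and easy returns now"
page_c = "Our guide to waterproofing leather hiking boots step by step"

print(jaccard(page_a, page_b))  # high: likely clustered as duplicates
print(jaccard(page_a, page_c))  # low: treated as a separate candidate
```

Pages whose similarity lands near the top of the scale are the ones at risk of being collapsed into a single representative, which is the scenario Microsoft describes.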

                How Duplicates Can Reduce AI Visibility

                Microsoft lays out several ways duplication can get in the way.

                One is intent clarity. If multiple pages cover the same topic with nearly identical copy, titles, and metadata, it’s harder to tell which URL best fits a query. Even when the “right” page is indexed, the signals are split across lookalikes.

                Another is representation. If the pages are clustered, you’re effectively competing with yourself for which version stands in for the group.

                Microsoft also draws a line between real page differentiation and cosmetic variants. A set of pages can make sense when each one satisfies a distinct need. But when pages differ only by minor edits, they may not carry enough unique signals for AI systems to treat them as separate candidates.

                Finally, Microsoft links duplication to update lag. If crawlers spend time revisiting redundant URLs, changes to the page you actually care about can take longer to show up in systems that rely on fresh index signals.

                Categories Of Duplicate Content Microsoft Highlights

                The guidance calls out a few repeat offenders.

                Syndication is one. When the same article appears across sites, identical copies can make it harder to identify the original. Microsoft recommends asking partners to use canonical tags that point to the original URL and to use excerpts instead of full reprints when possible.

                Campaign pages are another. If you’re spinning up multiple versions targeting the same intent and differing only slightly, Microsoft recommends choosing a primary page that collects links and engagement, then using canonical tags for the variants and consolidating older pages that no longer serve a distinct purpose.

                Localization comes up in the same way. Nearly identical regional pages can look like duplicates unless they include meaningful differences. Microsoft suggests localizing with changes that actually matter, such as terminology, examples, regulations, or product details.

                Then there are technical duplicates. The guidance lists common causes such as URL parameters, HTTP and HTTPS versions, uppercase and lowercase URLs, trailing slashes, printer-friendly versions, and publicly accessible staging pages.
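Most of the technical duplicates listed above can be caught by normalizing URLs before an audit. Here is a minimal sketch in Python; the list of tracking parameters to strip is an assumption for the example, not an official list.

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Tracking parameters that commonly spawn duplicate URLs (assumed list).
STRIP_PARAMS = {"utm_source", "utm_medium", "utm_campaign", "gclid", "fbclid"}

def normalize(url: str) -> str:
    """Collapse common technical-duplicate variants into one form:
    force https, lowercase the host and path, drop tracking
    parameters, and remove the trailing slash."""
    scheme, netloc, path, query, _ = urlsplit(url)
    netloc = netloc.lower()
    path = path.lower().rstrip("/") or "/"
    kept = [(k, v) for k, v in parse_qsl(query) if k not in STRIP_PARAMS]
    return urlunsplit(("https", netloc, path, urlencode(sorted(kept)), ""))

# HTTP vs HTTPS, uppercase paths, trailing slashes, and campaign
# parameters all collapse to a single URL:
variants = [
    "http://Example.com/Shoes/",
    "https://example.com/shoes?utm_source=newsletter",
    "https://example.com/shoes",
]
print({normalize(u) for u in variants})  # one URL, one cluster
```

Grouping a crawl export by the normalized URL surfaces exactly the parameter, casing, and protocol variants Microsoft’s guidance calls out.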

                The Role Of IndexNow

                Microsoft points to IndexNow as a way to shorten the cleanup cycle after consolidating URLs.

                When you merge pages, change canonicals, or remove duplicates, IndexNow can help participating search engines discover those changes sooner. Microsoft links that faster discovery to fewer outdated URLs lingering in results, and fewer cases where an older duplicate becomes the page that’s used in AI answers.
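Per the published protocol, an IndexNow submission is a plain JSON POST to a shared endpoint. The sketch below builds a batch payload; the host, key, and URLs are placeholders, and the live request is left commented out.

```python
import json

def build_indexnow_payload(host: str, key: str, urls: list[str]) -> str:
    """Build the JSON body for an IndexNow batch submission
    (POST https://api.indexnow.org/indexnow, Content-Type: application/json)."""
    return json.dumps({
        "host": host,
        "key": key,
        # The key file hosted on your site proves ownership of the host.
        "keyLocation": f"https://{host}/{key}.txt",
        "urlList": urls,
    })

# Hypothetical site and key; after consolidating duplicates,
# submit the surviving canonical URLs:
payload = build_indexnow_payload(
    "example.com",
    "a1b2c3d4e5f6",
    ["https://example.com/shoes", "https://example.com/boots"],
)
print(payload)

# To send for real (left commented out to avoid a live request):
# import urllib.request
# req = urllib.request.Request(
#     "https://api.indexnow.org/indexnow",
#     data=payload.encode(),
#     headers={"Content-Type": "application/json"},
# )
# urllib.request.urlopen(req)
```

One submission notifies all participating engines, which is what shortens the cleanup cycle Microsoft describes.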

                Microsoft’s Core Principle

                Canel and Madhavan wrote:

                “When you reduce overlapping pages and allow one authoritative version to carry your signals, search engines can more confidently understand your intent and choose the right URL to represent your content.”

                The message is consolidation first, technical signals second. Canonicals, redirects, hreflang, and IndexNow help, but they work best when you’re not maintaining a long tail of near-identical pages.

                Why This Matters

Duplicate content isn’t a penalty by itself. The downside is weaker visibility when signals are diluted and intent is unclear.

                Syndicated articles can keep outranking the original if canonicals are missing or inconsistent. Campaign variants can cannibalize each other if the “differences” are mostly cosmetic. Regional pages can blend together if they don’t clearly serve different needs.

                Routine audits can help you catch overlap early. Microsoft points to Bing Webmaster Tools as a way to spot patterns such as identical titles and other duplication indicators.

                Looking Ahead

                As AI answers become a more common entry point, the “which URL represents this topic” problem becomes harder to ignore.

                Cleaning up near-duplicates can influence which version of your content gets surfaced when an AI system needs a single page to ground an answer.

                Sam Altman Explains OpenAI’s Bet On Profitability via @sejournal, @martinibuster

In an interview with the Big Technology Podcast, Sam Altman seemed to struggle to answer tough questions about OpenAI’s path to profitability.

At about the 36-minute mark, the interviewer asked the big question about revenues and spending. Sam Altman said OpenAI’s losses are tied to continued increases in training costs while revenue is growing. He said the company would be profitable much earlier if it were not continuing to grow its training spend so aggressively.

                Altman said concern about OpenAI’s spending would be reasonable only if the company reached a point where it had large amounts of computing it could not monetize profitably.

                The interviewer asked:

                “Let’s, let’s talk about numbers since you brought it up. Revenue’s growing, compute spend is growing, but compute spend still outpaces revenue growth. I think the numbers that have been reported are OpenAI is supposed to lose something like 120 billion between now and 2028, 29, where you’re going to become profitable.

                So talk a little bit about like, how does that change? Where does the turn happen?”

                Sam Altman responded:

                “I mean, as revenue grows and as inference becomes a larger and larger part of the fleet, it eventually subsumes the training expense. So that’s the plan. Spend a lot of money training, but make more and more.

                If we weren’t continuing to grow our training costs by so much, we would be profitable way, way earlier. But the bet we’re making is to invest very aggressively in training these big models.”

At this point the interviewer pressed Altman harder about the path to profitability, this time weighing the reported $1.4 trillion in spending commitments against roughly $20 billion in revenue. This was not a softball question.

                The interviewer pushed back:

                “I think it would be great just to lay it out for everyone once and for all how those numbers are gonna work.”

Sam Altman’s first attempt at an answer stumbled into word salad:

                “It’s very hard to like really, I find that one thing I certainly can’t do it and very few people I’ve ever met can do it.

                You know, you can like, you have good intuition for a lot of mathematical things in your head, but exponential growth is usually very hard for people to do a good quick mental framework on.

                Like for whatever reason, there were a lot of things that evolution needed us to be able to do well with math in our heads. Modeling exponential growth doesn’t seem to be one of them.”

                Altman then regained his footing with a more coherent answer:

                “The thing we believe is that we can stay on a very steep growth curve of revenue for quite a while. And everything we see right now continues to indicate that we cannot do it if we don’t have the compute.

                Again, we’re so compute constrained, and it hits the revenue line so hard that I think if we get to a point where we have like a lot of compute sitting around that we can’t monetize on a profitable per unit of compute basis, it’d be very reasonable to say, okay, this is like a little, how’s this all going to work?

                But we’ve penciled this out a bunch of ways. We will of course also get more efficient on like a flops per dollar basis, as you know, all of the work we’ve been doing to make compute cheaper comes to pass.

                But we see this consumer growth, we see this enterprise growth. There’s a whole bunch of new kinds of businesses that, that we haven’t even launched yet, but will. But compute is really the lifeblood that enables all of this.

                We have always been in a compute deficit. It has always constrained what we’re able to do.

                I unfortunately think that will always be the case, but I wish it were less the case, and I’d like to get it to be less of the case over time, because I think there’s so many great products and services that we can deliver, and it’ll be a great business.”

                The interviewer then sought to clarify the answer, asking:

                “And then your expectation is through things like this enterprise push, through things like people being willing to pay for ChatGPT through the API, OpenAI will be able to grow revenue enough to pay for it with revenue.”

                Sam Altman responded:

                “Yeah, that is the plan.”

                Altman’s comments define a specific threshold for evaluating whether OpenAI’s spending is a problem. He points to unused or unmonetizable computing power as the point at which concern would be justified, rather than current losses or large capital commitments.

                In his explanation, the limiting factor is not willingness to pay, but how much computing capacity OpenAI can bring online and use. The follow-up question makes that explicit, and Altman’s confirmation makes clear that the company is relying on revenue growth from consumer use, enterprise adoption, and additional products to cover its costs over time.

                Altman’s path to profitability rests on a simple bet: that OpenAI can keep finding buyers for its computing as fast as it can build it. Eventually, that bet either keeps winning or the chips run out.

Watch the interview starting at about the 36-minute mark:

                Featured Image/Screenshot

                Core Web Vitals Champ: Open Source Versus Proprietary Platforms via @sejournal, @martinibuster

The Core Web Vitals Technology Report by the open source HTTPArchive community ranks content management systems by how well they perform on Google’s Core Web Vitals (CWV). The November 2025 data shows a significant gap between platforms: 84.87% of sites passed CWV on the highest-ranked CMS, versus 46.28% on the lowest.

What’s of interest this month is that the top three Core Web Vitals champs are all closed source, proprietary platforms, while the open source systems sit at the bottom of the pack.

                Importance Of Core Web Vitals

Core Web Vitals (CWV) are metrics created by Google to measure how fast, stable, and responsive a website feels to users. Websites that load quickly and respond smoothly keep visitors engaged and tend to perform better in terms of sales, reads, and ad impressions, while sites that fall short frustrate users, increase bounce rates, and perform less well against business goals. CWV scores reflect the quality of the user experience and how a site performs under real-world conditions.

                How the Data Is Collected

                The CWV Technology Report combines two public datasets.

                The Chrome UX Report (CrUX) uses data from Chrome users who opt in to share performance statistics as they browse. This reflects how real users experience websites.
                The HTTP Archive runs lab-based tests that analyze how sites are built and whether they follow performance best practices.

Together, these datasets power the report I generated, which provides a snapshot of how each content management system performs on Core Web Vitals.
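The report’s pass rates hinge on Google’s published “good” thresholds at the 75th percentile: LCP within 2.5 seconds, INP within 200 ms, and CLS within 0.1. A minimal sketch of that pass/fail check, using made-up p75 values for two hypothetical sites:

```python
# Core Web Vitals "good" thresholds at the 75th percentile
# (LCP and INP in milliseconds, CLS unitless), per Google's
# published guidance.
THRESHOLDS = {"lcp_ms": 2500, "inp_ms": 200, "cls": 0.10}

def passes_cwv(p75: dict) -> bool:
    """True when all three p75 metrics meet the 'good' thresholds,
    the same pass/fail notion behind the report's percentages."""
    return (
        p75["lcp_ms"] <= THRESHOLDS["lcp_ms"]
        and p75["inp_ms"] <= THRESHOLDS["inp_ms"]
        and p75["cls"] <= THRESHOLDS["cls"]
    )

# Illustrative p75 values for two hypothetical sites:
fast_site = {"lcp_ms": 1800, "inp_ms": 150, "cls": 0.05}
slow_site = {"lcp_ms": 3200, "inp_ms": 150, "cls": 0.05}
print(passes_cwv(fast_site))  # True
print(passes_cwv(slow_site))  # False: LCP over 2.5 s
```

A CMS’s score in the report is simply the share of its sites for which this check passes.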

                Ranking By November 2025 CWV Score

                Duda Is The Number One Ranked Core Web Vitals Champ

                Duda ranked first in November 2025, with 84.87% of sites built on the platform delivering a passing Core Web Vitals score. It was the only platform in this comparison where more than four out of five sites achieved a good CWV score. Duda has consistently ranked #1 for Core Web Vitals for several years now.

                Wix Ranked #2

Wix ranked second, with 74.86% of sites passing CWV. While it trailed Duda by ten percentage points, Wix was about four and a half percentage points ahead of the third-place CMS in this comparison.

                Squarespace Ranked #3

                Squarespace ranked third, at 70.39%. Its CWV pass rate placed it closer to Wix than to Drupal, maintaining a clear position in the top three ranked publishing platforms.

                Drupal Ranked #4

Drupal ranked fourth, with 63.27% of sites passing CWV. That score put Drupal in the middle of the comparison, below the three proprietary site builders. This is a curious situation because the bottom three CMSs in this comparison are all open source platforms.

                Joomla Ranked #5

                Joomla ranked fifth, at 56.92%. While more than half of Joomla sites passed CWV, the platform remained well behind the top performers.

WordPress Ranked Last At #6

WordPress ranked last, with 46.28% of sites passing Core Web Vitals. Fewer than half of WordPress sites met the CWV thresholds in this snapshot. What’s notable about WordPress’s poor ranking is that it lags behind fifth-place Joomla by about ten percentage points. So not only is WordPress last in this comparison, it’s decisively last.

                Why the Numbers Matter

Core Web Vitals scores translate into measurable differences in how users experience websites. Platforms at the top of the ranking deliver faster and more stable experiences across a larger share of sites, while platforms at the bottom expose a greater number of users to slower and less responsive pages. The gap between Duda and WordPress in the November 2025 comparison was 38.59 percentage points, nearly 40.

While an argument can be made that the WordPress ecosystem of plugins and themes may be to blame for the low CWV scores, the fact remains that WordPress is dead last in this comparison. Perhaps WordPress needs to become more proactive about how themes and plugins perform, such as by setting standards they must meet to earn a performance certification. That might push plugin and theme makers to prioritize performance.

                Do Content Management Systems Matter For Ranking?

                I have mentioned this before and will repeat it this month. There have been discussions and debates about whether the choice of content management system affects search rankings. Some argue that plugins and flexibility make WordPress easier to rank in Google. But the fact is that private platforms like Duda, Wix, and Squarespace have all focused on providing competitive SEO functionalities that automate a wide range of technical SEO tasks.

                Some people insist that Core Web Vitals make a significant contribution to their rankings and I believe them. But in general, the fact is that CWV performance is a minor ranking factor.

                Nevertheless, performance still matters for outcomes that are immediate and measurable, such as user experience and conversions, which means that the November 2025 HTTPArchive Technology Report should not be ignored.

The HTTPArchive report is available here, but it will soon be replaced. I’ve tried the new report and, unless I missed something, it lacks a way to constrain the report by date.

                Featured Image by Shutterstock/Red Fox studio

                Google Says Ranking Systems Reward Content Made For Humans via @sejournal, @martinibuster

Google’s Danny Sullivan discussed SEO and AI, observing that Google’s ranking systems are tuned for one thing regardless of whether it’s classic search or AI search: optimizing for people. That is something I suspect the search marketing industry will increasingly be talking about.

                Nothing New You Need To Be Doing For AI Search

The first thing Danny Sullivan discussed was that, despite the new search experiences powered by AI, there isn’t anything new that publishers need to be doing.

                John Mueller asked:

                “So everything kind of around AI, or is this really a new thing? It feels like these fads come and go. Is AI in fad? How do you think?”

                Danny Sullivan responded:

                “Oh gosh, my favorite thing is that we should be calling it LMNOPEO because there’s just so many acronyms for it. It’s GEO for generative engine optimization or AEO for answer engine optimization and AIEO. I don’t know. There’s so many different names for it.

                I used to write about SEO and search. I did that for like 20 years. And part of me is just so relieved. I don’t have to do that aspect of it anymore to try to keep up with everything that people are wondering about.

                And on the other hand, you still have to kind of keep up on it because we still try to explain to people what’s going on. And I think the good news is like, There’s not a lot you actually really need to be worrying about.

                It’s understandable. I think people keep having these questions, right? I mean, you see search formats changing, you see all sorts of things happening and you wonder, well, is there something new I should be doing? Totally get that.

                And remember, we, John and I and others, we all came together because we had this blog post we did in May, which we’ll drop a link to or we’ll point you to somehow to it, but it was… we were getting asked again and again, well, what should we be doing? What should we be thinking about?

                And we all put our heads together and we talked with the engineers and everything else. So we came up with nothing really that different.”

                Google’s Systems Are Tuned To Rank Human Optimized Content

                Danny Sullivan next turned to discussing what Google’s systems are designed to rank, which is content that satisfies humans. Robbie Stein, currently Vice President of Product for Google Search, recently discussed the signals Google uses to identify helpful content, discussing how human feedback contributes to helping ranking systems understand what helpful content looks like.

                While Danny didn’t get into exact details about the helpfulness signals the way Stein did, Danny’s comments confirmed the underlying point that Robbie Stein was making about how their systems are tuned to identify content that satisfies humans.

                Danny continued explaining what SEOs and creators should know about Google’s ranking systems. He began by acknowledging that it’s reasonable that people see a different search experience and conclude that they must be doing something different.

                He explained:

                “…I think people really see stuff and they think they want to be doing something different. …It is the natural reaction you have, but we talk about sort of this North Star or the point that you should be heading to.”

Next he explained how all of Google’s ranking systems are engineered to rank content that was made for humans, specifically calling out content created for search engines as an example of what not to do.

                Danny continued his answer:

                “And when it comes to all of our ranking systems, it’s about how are we trying to reward content that we think is great for people, that it was written for human beings in mind, not written for search algorithms, not written for LLMs, not written for LMNO, PEO, whatever you want to call it.

                It’s that everything we do and all the things that we tailor and all the things that we try to improve, it’s all about how do we reward content that human beings find satisfying and say, that was what I was looking for, that’s what I needed. So if all of our systems are lining up with that, it’s that thing about you’re going to be ahead of it if you’re already doing that.

                To whereas the more you’re trying to… Optimize or GEO or whatever you think it is for a specific kind of system, the more you’re potentially going to get away from the main goal, especially if those systems improve and get better, then you’re kind of having to shift and play a lot of catch up.

                So, you know, we’re going to talk about some of that stuff here with the big caveat, we’re only talking about Google, right? That’s who we work for. So we don’t say what, anybody else’s AI search, chat search, whatever you want to kind of deal with and kind of go with it from there. But we’ll talk about how we look at things and how it works.”

What Danny is clearly saying is that Google is tuned to rank content that’s written for humans, and that optimizing for specific LLMs could backfire.

                Why Optimizing For LLMs Is Misguided

Although Danny didn’t mention it, this is the right moment to point out that OpenAI, Perplexity, and Claude together account for less than 1% of referral traffic to websites. So it’s clearly a mistake to optimize content for LLMs at the risk of losing significant traffic from search engines.

                Content that is genuinely satisfying to people remains aligned with what Google’s systems are built to reward.

                Why SEOs Don’t Believe Google

Google’s insistence that its algorithms are tuned toward user satisfaction is not new. Google has been saying it for over two decades, and for years it was a given that the company was overstating its technology. That is no longer the case.

Arguably, since at least 2018’s Medic broad core update, Google has been making genuine strides toward delivering search results influenced by user behavior signals, which guide Google’s machines toward understanding what kind of content people like, along with AI and neural networks that are better able to match content to a search query.

                If there is any doubt about this, check out the interview with Robbie Stein, where he explains exactly how human feedback, in aggregate, influences the search results.

                Is Human Optimized Content The New SEO?

So now we are at a point where links are no longer the top ranking criterion. Google’s systems can understand queries and content and match one to the other. User behavior data, which has been part of Google’s algorithms since at least 2004, plays a strong role in helping Google understand what kinds of content satisfy users.

                It may be well past time for SEOs and creators to let go of the old SEO playbooks and start focusing on optimizing their websites for humans.

                Featured Image by Shutterstock/Bas Nastassia

                Coursera Acquiring Udemy via @sejournal, @martinibuster

                Coursera has agreed to acquire Udemy in a stock-for-stock transaction that will combine two large online learning platforms with consumer and enterprise businesses.

                Under the terms of the deal, each Udemy share will be exchanged for 0.800 Coursera shares. Following the transaction, Coursera shareholders will own approximately 59% of the combined company, while Udemy shareholders will own about 41%. The merged company will continue operating as Coursera, Inc., headquartered in Mountain View, California. Greg Hart will remain CEO, and Coursera’s Andrew Ng will serve as chairman. The companies expect the transaction to close in the second half of 2026, subject to shareholder approval and regulatory clearances.

                Coursera’s platform is built around partnerships with universities, institutions, and industry organizations, with a focus on credentialed learning programs. Udemy operates an open marketplace of instructors and provides training programs used by enterprise customers. The combined company is expected to offer academic courses, professional skills training, and enterprise learning programs through a single platform.

                The companies report a combined total of more than 270 million registered learners and nearly 19,000 enterprise customers. Coursera contributes institutional partnerships and credential-focused offerings, while Udemy contributes a large instructor marketplace and a broad enterprise customer base. Udemy generates a majority of its revenue outside North America, while Coursera generates a larger share of revenue in the United States.

                If completed, the transaction will bring together institutional learning programs and an open instructor marketplace within a single company.

General reaction online was surprise, with one Udemy instructor unsure where he stood, writing on X:

                “I can’t tell what this acquisition by Coursera means for my future as a Udemy instructor. Time will tell.
                I will definitely keep on teaching – on one platform or another.

                But learning that a brand that was THE main part of my professional life for the last 10 years will go away is really very, very sad.”

                Read more at the Coursera website:

                Coursera to combine with Udemy

                Featured Image by Shutterstock/ShutterStockies