Google’s New BlockRank Democratizes Advanced Semantic Search via @sejournal, @martinibuster

A research paper from Google DeepMind proposes an AI search ranking algorithm called BlockRank that works so well it puts advanced semantic search ranking within reach of individuals and organizations. The researchers conclude that it “can democratize access to powerful information discovery tools.”

In-Context Ranking (ICR)

The research paper describes the breakthrough of using In-Context Ranking (ICR), a way to rank web pages with a large language model’s contextual understanding.

It prompts the model with:

  1. Instructions for the task (for example, “rank these web pages”)
  2. Candidate documents (the pages to rank)
  3. The search query.
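
Concretely, an ICR prompt is simple string assembly. The template below is hypothetical, for illustration only; the function name and layout are not the paper’s actual prompt format:

```python
# Hypothetical sketch of an In-Context Ranking (ICR) prompt. The exact
# template Google used is not public; this just illustrates the three
# components: task instructions, candidate documents, and the query.

def build_icr_prompt(instructions: str, documents: list[str], query: str) -> str:
    """Concatenate instructions, numbered candidate documents, and the query."""
    doc_block = "\n".join(
        f"[Doc {i + 1}] {doc}" for i, doc in enumerate(documents)
    )
    return f"{instructions}\n\n{doc_block}\n\nQuery: {query}\nMost relevant document:"

prompt = build_icr_prompt(
    "Rank these web pages by relevance to the query.",
    ["Page about growing tomatoes.", "Page about JavaScript closures."],
    "how do closures work in JavaScript",
)
print(prompt)
```

The model’s answer is then read back as a ranking over the numbered documents.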

ICR is a relatively new approach first explored by researchers from Google DeepMind and Google Research in 2024 (Can Long-Context Language Models Subsume Retrieval, RAG, SQL, and More? PDF). That earlier study showed that ICR could match the performance of retrieval systems built specifically for search.

But that improvement came with a downside: it requires escalating computing power as the number of pages to be ranked increases.

When a large language model (LLM) compares multiple documents to decide which are most relevant to a query, it has to “pay attention” to every word in every document and how each word relates to all the others. This attention step slows down sharply as more documents are added, because the work grows quadratically with the total length of the input rather than linearly.
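
As a back-of-the-envelope illustration (not from the paper), the number of pairwise token interactions in full self-attention grows with the square of the total input length, so doubling the candidate set roughly quadruples the work. The document and query sizes below are made up:

```python
# Toy illustration: full self-attention compares every token with every
# other token, so cost scales with the square of total input length.

def attention_pairs(num_docs: int, tokens_per_doc: int, query_tokens: int = 20) -> int:
    """Count pairwise token interactions for one attention pass."""
    total = num_docs * tokens_per_doc + query_tokens
    return total * total

print(attention_pairs(10, 500))  # 10 documents
print(attention_pairs(20, 500))  # 20 documents: roughly 4x the work
```

Going from 10 to 20 documents of 500 tokens each takes the count from about 25 million to about 100 million pairs.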

The new research solves that efficiency problem, which is why the paper is titled Scalable In-context Ranking with Generative Models: it shows how to scale In-Context Ranking (ICR) with a method the researchers call BlockRank.

How BlockRank Was Developed

The researchers examined how the model actually uses attention during In-Context Ranking and found two patterns:

  • Inter-document block sparsity:
    The researchers found that when the model reads a group of documents, it tends to focus mainly on each document separately instead of comparing them all to each other. They call this “block sparsity,” meaning there’s little direct comparison between different documents. Building on that insight, they changed how the model reads the input so that it reviews each document on its own but still compares all of them against the question being asked. This keeps the part that matters, matching the documents to the query, while skipping the unnecessary document-to-document comparisons. The result is a system that runs much faster without losing accuracy.
  • Query-document block relevance:
    When the LLM reads the query, it doesn’t treat every word in that question as equally important. Some parts of the question, like specific keywords or punctuation that signal intent, help the model decide which document deserves more attention. The researchers found that the model’s internal attention patterns, particularly how certain words in the query focus on specific documents, often align with which documents are relevant. This behavior, which they call “query-document block relevance,” became something the researchers could train the model to use more effectively.
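
The first pattern can be sketched as a structured attention mask: document tokens attend only within their own block, while query tokens at the end attend across everything. This is a minimal NumPy sketch under assumed sizes and layout; the paper’s actual masking scheme may differ in details such as shared instruction tokens:

```python
import numpy as np

# Hypothetical sketch of a block-sparse attention mask suggested by the
# findings: each document's tokens attend only within that document, while
# query tokens attend to all documents. Sizes are illustrative only.

def block_sparse_mask(doc_lengths: list[int], query_len: int) -> np.ndarray:
    """True where attention is allowed; documents first, query last."""
    total = sum(doc_lengths) + query_len
    mask = np.zeros((total, total), dtype=bool)
    start = 0
    for length in doc_lengths:
        mask[start:start + length, start:start + length] = True  # intra-document only
        start += length
    mask[start:, :] = True  # query tokens attend to everything
    return mask

mask = block_sparse_mask([3, 4], query_len=2)
assert not mask[0, 3]            # a doc-1 token cannot see doc 2
assert mask[7, 0] and mask[8, 6]  # query tokens see all documents
```

Skipping the masked-out document-to-document comparisons is what makes the attention pass cheaper without touching the query-to-document matching that drives relevance.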

The researchers identified these two attention patterns and then designed a new approach informed by what they learned. The first pattern, inter-document block sparsity, revealed that the model was wasting computation by comparing documents to each other when that information wasn’t useful. The second pattern, query-document block relevance, showed that certain parts of a question already point toward the right document.

Based on these insights, they redesigned how the model handles attention and how it is trained. The result is BlockRank, a more efficient form of In-Context Ranking that cuts unnecessary comparisons and teaches the model to focus on what truly signals relevance.
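
The second pattern, query-document block relevance, suggests a simple scoring rule: rank documents by how much attention mass the query’s tokens place on each document’s block. This toy sketch uses made-up attention weights and is not the paper’s implementation:

```python
# Toy sketch of ranking by query-to-document attention mass.
# "attn" rows are query tokens; columns are input positions.
# "doc_spans" gives each document's (start, end) slice of positions.

def rank_by_query_attention(attn, doc_spans):
    """Return document indices sorted from most to least query attention."""
    scores = [
        sum(sum(row[start:end]) for row in attn)
        for start, end in doc_spans
    ]
    return sorted(range(len(doc_spans)), key=lambda i: -scores[i])

# Two query tokens over six positions: doc 0 occupies 0-2, doc 1 occupies 3-5.
attn = [
    [0.10, 0.10, 0.10, 0.20, 0.30, 0.20],
    [0.05, 0.05, 0.10, 0.30, 0.30, 0.20],
]
ranking = rank_by_query_attention(attn, [(0, 3), (3, 6)])
print(ranking)  # doc 1 receives more query attention, so it ranks first
```

Training the model so that this internal signal aligns more reliably with true relevance is, per the paper, part of what BlockRank’s training objective encourages.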

Benchmarking Accuracy Of BlockRank

The researchers tested BlockRank for how well it ranks documents on three major benchmarks:

  • BEIR
    A collection of many different search and question-answering tasks used to test how well a system can find and rank relevant information across a wide range of topics.
  • MS MARCO
    A large dataset of real Bing search queries and passages, used to measure how accurately a system can rank passages that best answer a user’s question.
  • Natural Questions (NQ)
    A benchmark built from real Google search questions, designed to test whether a system can identify and rank the passages from Wikipedia that directly answer those questions.

They used a 7-billion-parameter Mistral LLM and compared BlockRank to other strong ranking models, including FIRST, RankZephyr, RankVicuna, and a fully fine-tuned Mistral baseline.

BlockRank performed as well as or better than those systems on all three benchmarks, matching the results on MS MARCO and Natural Questions and doing slightly better on BEIR.

The researchers explained the results:

“Experiments on MSMarco and NQ show BlockRank (Mistral-7B) matches or surpasses standard fine-tuning effectiveness while being significantly more efficient at inference and training. This offers a scalable and effective approach for LLM-based ICR.”

They also acknowledged that they didn’t test multiple LLMs and that these results are specific to Mistral 7B.

Is BlockRank Used By Google?

The research paper says nothing about BlockRank being used in a live environment, so any suggestion that it is would be pure conjecture. It’s also natural to try to identify where BlockRank fits into AI Mode or AI Overviews, but the descriptions of how AI Mode’s FastSearch and RankEmbed work are vastly different from what BlockRank does, so it’s unlikely that BlockRank is related to either.

Why BlockRank Is A Breakthrough

What the research paper does say is that BlockRank is a breakthrough that puts an advanced ranking system within reach of individuals and organizations that wouldn’t normally have access to this kind of high-quality ranking technology.

The researchers explain:

“The BlockRank methodology, by enhancing the efficiency and scalability of In-context Retrieval (ICR) in Large Language Models (LLMs), makes advanced semantic retrieval more computationally tractable and can democratize access to powerful information discovery tools. This could accelerate research, improve educational outcomes by providing more relevant information quickly, and empower individuals and organizations with better decision-making capabilities.

Furthermore, the increased efficiency directly translates to reduced energy consumption for retrieval-intensive LLM applications, contributing to more environmentally sustainable AI development and deployment.

By enabling effective ICR on potentially smaller or more optimized models, BlockRank could also broaden the reach of these technologies in resource-constrained environments.”

SEOs and publishers are free to form their own opinions about whether Google could use this. I don’t think there’s evidence of that, but it would be interesting to ask a Googler about it.

Google appears to be preparing to release BlockRank on GitHub, but no code is available there yet.

Read about BlockRank here:
Scalable In-context Ranking with Generative Models

Featured Image by Shutterstock/Nithid

3 Things Stephanie Arnett is into right now

Dungeon Crawler Carl, by Matt Dinniman

This science fiction book series confronted me with existential questions like “Are we alone in the universe?” and “Do I actually like LitRPG??” (LitRPG, which stands for “literary role-playing game,” is a relatively new genre that merges the conventions of computer RPGs with those of science fiction and fantasy novels.) In the series, aliens destroy most of Earth, leaving the titular Carl and Princess Donut, his ex-girlfriend’s cat, to fight in a bloodthirsty game of survival with rules that are part reality TV and part video game dungeon crawl. I particularly recommend the audiobook, voiced by Jeff Hays, which makes the numerous characters easy to differentiate.

Journaling, offline and open-source

For years I’ve tried to find a perfect system to keep track of all my random notes and weird little rabbit holes of inspiration. None of my paper journals or paid apps have been able to top how customizable and convenient the developer-favorite notetaking app Obsidian is. Thanks to this app, I’ve been able to cancel subscription services I was using to track my reading habits, fitness goals, and journaling, and I also use it to track tasks I do for work, like drafting this article. It’s open-source and files are stored on my device, so I don’t have to worry about whether I’m sharing my private thoughts with a company that might scrape them for AI.

Bird-watching with Merlin 

Sometimes I have to make a conscious effort to step away from my screens and get out in the world. The latest version of the birding app Merlin, from the Cornell Lab of Ornithology, helps ease the transition. I can “collect” and identify species via step-by-step questions, photos, or, my favorite, audio that I record so that the app can analyze it to indicate which birds are singing in real time. Using the audio feature, I “captured” the red-eyed vireo flitting up in the tree canopy and backlit by the sun. Fantastic for my backyard feeder or while I’m out on the trail.

Dispatch: Partying at one of Africa’s largest AI gatherings

It’s late August in Rwanda’s capital, Kigali, and people are filling a large hall at one of Africa’s biggest gatherings of minds in AI and machine learning. The room is draped in white curtains, and a giant screen blinks with videos created with generative AI. A classic East African folk song by the Tanzanian singer Saida Karoli plays loudly on the speakers.

Friends greet each other as waiters serve arrowroot crisps and sugary mocktails. A man and a woman wearing leopard skins atop their clothes sip beer and chat; many women are in handwoven Ethiopian garb with red, yellow, and green embroidery. The crowd teems with life. “The best thing about the Indaba is always the parties,” computer scientist Nyalleng Moorosi tells me. Indaba means “gathering” in Zulu, and Deep Learning Indaba, where we’re meeting, is an annual AI conference where Africans present their research and technologies they’ve built.

Moorosi is a senior researcher at the Distributed AI Research Institute and has dropped in for the occasion from the mountain kingdom of Lesotho. Dressed in her signature “Mama Africa” headwrap, she makes her way through the crowded hall.

Moments later, a cheerful set of Nigerian music begins to play over the speakers. Spontaneously, people pop up and gather around the stage, waving flags of many African nations. Moorosi laughs as she watches. “The vibe at the Indaba—the community spirit—is really strong,” she says, clapping.

Moorosi is one of the founding members of the Deep Learning Indaba, which began in 2017 from a nucleus of 300 people gathered in Johannesburg, South Africa. Since then, the event has expanded into a prestigious pan-African movement with local chapters in 50 countries.

This year, nearly 3,000 people applied to join the Indaba; about 1,300 were accepted. They hail primarily from English-speaking African countries, but this year I noticed a new influx from Chad, Cameroon, the Democratic Republic of Congo, South Sudan, and Sudan. 

Moorosi tells me that the main “prize” for many attendees is to be hired by a tech company or accepted into a PhD program. Indeed, the organizations I’ve seen at the event include Microsoft Research’s AI for Good Lab, Google, the Mastercard Foundation, and the Mila–Quebec AI Institute. But she hopes to see more homegrown ventures create opportunities within Africa.

That evening, before the dinner, we’d both attended a panel on AI policy in Africa. Experts discussed AI governance and called for those developing national AI strategies to seek more community engagement. People raised their hands to ask how young Africans could access high-level discussions on AI policy, and whether Africa’s continental AI strategy was being shaped by outsiders. Later, in conversation, Moorosi told me she’d like to see more African priorities (such as African Union–backed labor protections, mineral rights, or safeguards against exploitation) reflected in such strategies. 

On the last day of the Indaba, I ask Moorosi about her dreams for the future of AI in Africa. “I dream of African industries adopting African-built AI products,” she says, after a long moment. “We really need to show our work to the world.” 

Abdullahi Tsanni is a science writer based in Senegal who specializes in narrative features. 

Job titles of the future: AI embryologist

Embryologists are the scientists behind the scenes of in vitro fertilization who oversee the development and selection of embryos, prepare them for transfer, and maintain the lab environment. They’ve been a critical part of IVF for decades, but their job has gotten a whole lot busier in recent years as demand for the fertility treatment skyrockets and clinics struggle to keep up. The United States is in fact facing a critical shortage of both embryologists and genetic counselors. 

Klaus Wiemer, a veteran embryologist and IVF lab director, believes artificial intelligence might help by predicting embryo health in real time and unlocking new avenues for productivity in the lab. 

Wiemer is the chief scientific officer and head of clinical affairs at Fairtility, a company that uses artificial intelligence to shed light on the viability of eggs and embryos before proceeding with IVF. The company’s algorithm, called CHLOE (for Cultivating Human Life through Optimal Embryos), has been trained on millions of embryo data points and outcomes and can quickly sift through a patient’s embryos to point the clinician to the ones with the highest potential for successful implantation. This, the company claims, will improve time to pregnancy and live births. While its effectiveness has been tested only retrospectively to date, CHLOE is the first and only FDA-approved AI tool for embryo assessment. 

Current challenge 

When a patient undergoes IVF, the goal is to make genetically normal embryos. Embryologists collect cells from each embryo and send them off for external genetic testing. The results of this biopsy can take up to two weeks, and the process can add thousands of dollars to the treatment cost. Moreover, passing the screen just means an embryo has the correct number of chromosomes. That number doesn’t necessarily reflect the overall health of the embryo. 

“An embryo has one singular function, and that is to divide,” says Wiemer. “There are millions of data points concerning embryo cell division, cell division characteristics, area and size of the inner cell mass, and the number of times the trophectoderm [the layer that contributes to the future placenta] contracts.”

The AI model allows for a group of embryos to be constantly measured against the optimal characteristics at each stage of development. “What CHLOE answers is: How well did that embryo develop? And does it have all the necessary components that are needed in order to make a healthy implantation?” says Wiemer. CHLOE produces an AI score reflecting all the analysis that’s been done within an embryo. 

In the near future, Wiemer says, reducing the percentage of abnormal embryos that IVF clinics transfer to patients should not require a biopsy: “Every embryology laboratory will be doing automatic assessments of embryo development.” 

A changing field

Wiemer, who started his career in animal science, says the difference between animal embryology and human embryology is the extent of paperwork. “Embryologists spend 40% of their time on non-embryology skills,” he adds. “AI will allow us to declutter the embryology field so we can get back to being true scientists.” This means spending more time studying the embryos, ensuring that they are developing normally, and using all that newfound information to get better at picking which embryos to transfer. 

“CHLOE is like having a virtual assistant in the lab to help with embryo selection, ensure conditions are optimal, and send out reports to patients and clinical staff,” he says. “Getting to study data and see what impacts embryo development is extremely rewarding, given that this capability was impossible a few years ago.” 

Amanda Smith is a freelance journalist and writer reporting on culture, society, human interest, and technology.

Inside the archives of the NASA Ames Research Center

At the southern tip of San Francisco Bay, surrounded by the tech giants Google, Apple, and Microsoft, sits the historic NASA Ames Research Center. Its rich history includes a grab bag of fascinating scientific research involving massive wind tunnels, experimental aircraft, supercomputing, astrobiology, and more.

Founded in 1939 as a West Coast lab for the National Advisory Committee for Aeronautics (NACA), NASA Ames was built to close the US gap with Germany in aeronautics research. Named for NACA founding member Joseph Sweetman Ames, the facility grew from a shack on Moffett Field into a sprawling compound with thousands of employees. A collection of 5,000 images from NASA Ames’s archives paints a vivid picture of bleeding-edge work at the heart of America’s technology hub. 

Wind tunnels

NASA AMES RESEARCH CENTER ARCHIVES

A key motivation for the new lab was the need for huge wind tunnels to jump-start America’s aeronautical research, which was far behind Germany’s. Smaller tunnels capable of speeds up to 300 miles per hour were built first, followed by a massive 40-by-80-foot tunnel for full-scale aircraft. Powered up in March 1941, these tunnels became vital after Pearl Harbor, helping scientists rapidly develop advanced aircraft.

Today, NASA Ames operates the world’s largest pressurized wind tunnel, with subsonic and transonic chambers for testing rockets, aircraft, and wind turbines.

Pioneer and Voyager 2


From 1965 to 1992, Ames managed the Pioneer missions, which explored the moon, Venus, Jupiter, and Saturn. It also contributed to Voyager 2, launched in 1977, which journeyed past four planets before entering interstellar space in 2018. Ames’s archive preserves our first glimpses of strange new worlds seen during these pioneering missions.

Odd aircraft


The skeleton of a hulking airship hangar, obsolete even before its completion, remains on NASA Ames’s campus.

Many odd-looking experimental aircraft, such as vertical take-off and landing (VTOL) aircraft, jets, and rotorcraft, have been developed and tested at the facility over the years, and new designs continue to take shape there today.

Vintage illustrations


Awe-inspiring retro illustrations in the Ames archives depict surfaces of distant planets, NASA spacecraft descending into surreal alien landscapes, and fantastical renderings of future ring-shaped human habitats in space. The optimism and excitement of the ’70s and ’80s is evident. 

Bubble suits and early VR


In the 1980s, NASA Ames researchers worked to develop next-generation space suits, such as the bulbous, hard-shelled AX-5 model. NASA Ames’s Human-Machine Interaction Group also did pioneering work in the 1980s with virtual reality and came up with some wild-looking hardware. Long before today’s AR/VR boom, Ames researchers glimpsed the technology’s potential, which was limited only by computing power.

Decades of federally funded research at Ames fueled breakthroughs in aviation, spaceflight, and supercomputing, an enduring legacy now at risk as federal grants for science face deep cuts.

A version of this story appeared on Beautiful Public Data (beautifulpublicdata.com), a newsletter by Jon Keegan that curates visually interesting data sets collected by local, state, and federal government agencies.

Introducing: the body issue

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

We’re thrilled to share the latest edition of MIT Technology Review magazine, digging into the future of the human body, and how it could change in the years ahead thanks to scientific and technological tinkering.

The below stories are just a taste of what you can expect from this fascinating issue. To read the full thing, subscribe now if you haven’t already.

+ A new field of science claims to be able to predict aesthetic traits, intelligence, and even moral character in embryos. But is this the next step in human evolution or something more dangerous? Read the full story.

+ How aging clocks can help us understand why we age—and if we could ever reverse it. Read the full story.

+ Instead of relying on the same old recipe biology follows, stem-cell scientist Jacob Hanna is coaxing the beginnings of animal bodies directly from stem cells. But should he?

+ The more we move, the more our muscle cells begin to make a memory of that exercise. Bonnie Tsui’s piece digs into how our bodies learn to remember.

MIT Technology Review Narrated: How Antarctica’s history of isolation is ending—thanks to Starlink

“This is one of the least visited places on planet Earth and I got to open the door,” Matty Jordan, a construction specialist at New Zealand’s Scott Base in Antarctica, wrote in the caption to the video he posted to Instagram and TikTok in October 2023. 

In the video, he guides viewers through the hut, pointing out where the men of Ernest Shackleton’s 1907 expedition lived and worked. 

The video has racked up millions of views from all over the world. It’s also kind of a miracle: until very recently, those who lived and worked on Antarctic bases had no hope of communicating so readily with the outside world. That’s starting to change, thanks to Starlink, the satellite constellation developed by Elon Musk’s company SpaceX to service the world with high-speed broadband internet.

This is our latest story to be turned into a MIT Technology Review Narrated podcast, which we’re publishing each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 OpenAI has launched its own web browser  
Atlas has an Ask ChatGPT sidebar and an agent mode to complete certain tasks. (TechCrunch)
+ It runs on Chromium, the open-source engine that powers Google’s Chrome. (Axios)
+ OpenAI believes the future of web browsing will involve chatting to its interface. (Ars Technica)
+ AI means the end of internet search as we’ve known it. (MIT Technology Review)

2 China is demanding US chip firms share their sales data
It’s conducting a probe into American suppliers, and it wants answers. (Bloomberg $)

3 AI pioneers are among those calling for a ban on superintelligent systems
Including Geoffrey Hinton and Yoshua Bengio. (The Guardian)
+ Prominent Chinese scientists have also signed the statement. (FT $)
+ Read our interview with Hinton on why he’s now scared of AI. (MIT Technology Review)

4 Anthropic promises its AI is not woke
Despite what the Trump administration’s “AI Czar” says. (404 Media)
+ Its CEO insists the company shares the same goals as the Trump administration. (CNBC)
+ Why it’s impossible to build an unbiased AI language model. (MIT Technology Review)

5 Climate scientists expect we’ll see more solar geoengineering attempts
But it’s a risky intervention with potentially huge repercussions. (New Scientist $)
+ The hard lessons of Harvard’s failed geoengineering experiment. (MIT Technology Review)

6 Why Silicon Valley is so fixated on China
It marvels at the country’s ability to move fast and break things, but should it? (NYT $)
+ How Trump is helping China extend its massive lead in clean energy. (MIT Technology Review)

7 YouTube has launched a likeness detector to foil AI doppelgängers
But that doesn’t guarantee that the fake videos will be removed. (Ars Technica)

8 Bots are threatening Reddit’s status as an oasis of human chat
Can it keep fighting off the proliferation of AI slop? (WP $)
+ It’s not just Reddit either—employers are worried about ‘workslop’ too. (FT $)
+ AI trained on AI garbage spits out AI garbage. (MIT Technology Review)

9 This AI-powered pet toy is surprisingly cute
Moflin is a guinea pig-like creature that learns to become more expressive. (TechCrunch)
+ AI toys are all the rage in China—and now they’re appearing on shelves in the US too. (MIT Technology Review)

10 You don’t need to know a lot about AI to get a job in AI
Make of that what you will. (Fast Company $)

Quote of the day

“It’s wild that Google wrote the Transformers paper (that birthed GPTs) AND open sourced Chromium, both of which will (eventually) lead to the downfall of their search monopoly. History lesson in there somewhere.”

—Investor Nikunj Kothari ponders the future of Google’s empire in the wake of the announcement of OpenAI’s new web browser in a post on X.

One more thing

The quest to protect farmworkers from extreme heat

Even as temperatures rise each summer, the people working outdoors to pick fruits, vegetables, and flowers have to keep laboring.

The consequences can be severe, leading to illnesses such as heat exhaustion, heatstroke, and even acute kidney injury.

Now, researchers are developing an innovative sensor that tracks multiple vital signs with a goal of anticipating when a worker is at risk of developing heat illness and issuing an alert. If widely adopted and consistently used, it could represent a way to make workers safer on farms even without significant heat protections. Read the full story.

—Kalena Thomhave

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Netflix is making a film based on the hit board game Catan, for some reason.
+ Why it’s time to embrace the beauty of slow running.
+ The Satellite Crayon Project takes colors from the natural world and turns them into vibrant drawing implements.
+ Mamma Mia has never sounded better.

New Ecommerce Tools: October 22, 2025

Every week we handpick a list of new products and services for ecommerce merchants. This installment includes updates on generative engine optimization, agentic commerce, cryptocurrencies, embedded payments, website builders, customer data tools, and product experience management.

Got an ecommerce product release? Email updates@practicalecommerce.com.

New Tools for Merchants

Introducing Syndigo OpenAI Connect and Syndigo GEO for agentic commerce. Syndigo, a provider of product experience management and syndication tools, has launched Syndigo OpenAI Connect and Syndigo Generative Engine Optimization to promote visibility, conversion, and loyalty in AI shopping. The offerings enable companies to publish content directly into ChatGPT using OpenAI’s Agentic Commerce Protocol specifications, making it easier for large language models to surface products and reach shoppers.

Home page of Syndigo

ACI Worldwide and BitPay partner on crypto and stablecoin payments. ACI Worldwide, a provider of global payments technology, has partnered with BitPay, a cryptocurrency payments processor, expanding the digital asset features for merchants and payment service providers via its Payments Orchestration platform. Merchants and PSPs can integrate BitPay, alongside other payment options, including the ability to accept, store, and spend stablecoins and other cryptocurrencies. BitPay’s platform also supports peer-to-peer payments and mobile point-of-sale applications.

MoonPay launches crypto payments service. MoonPay, a provider of blockchain-based infrastructures, has launched MoonPay Commerce, a platform for businesses to accept crypto payments worldwide. MoonPay Commerce enables users to set up checkouts, subscriptions, and deposits, offering real-time insights and customizable tools to create branded payment experiences. MoonPay Commerce also powers and maintains the Solana Pay integration for Shopify.

Commerce introduces BigCommerce Payments powered by PayPal. Commerce, parent company of BigCommerce and Feedonomics, has announced an embedded payment processing tool, powered by PayPal, to launch in 2026 in the U.S. BigCommerce merchants can gain access to advanced payment capabilities, simplified account management, and buy-now pay-later via PayPal’s Pay Later feature, all managed within the BigCommerce control panel. Features of BigCommerce Payments include real-time balance insights, top-ups and payouts, bank and card connections, and currency management.

Commerce launches Feedonomics Surface for D2C and B2B feed management. Commerce has also launched Feedonomics Surface, a tool that simplifies and automates the connection of product catalogs to advertising channels such as Google Shopping and Meta. Feedonomics Surface provides merchants with data optimization and automation tools to improve their return on advertising spend. Users can create, manage, and synchronize product feeds without extensive technical knowledge.

Home page of Feedonomics Surface

10Web launches AI-powered frontend builder with WordPress backend. 10Web, a developer, has launched Vibe for WordPress, an AI-native frontend builder with a WordPress backend. According to 10Web, Vibe for WordPress converts natural language prompts into WordPress websites and applications. Users describe their project, such as a landing page or a multi-page site, and the platform generates a version in minutes.

Cryptocurrency platform Coinbase launches commercial payment tools. Coinbase, a cryptocurrency exchange, has introduced two commercial payment tools: global payouts and payment links. With global payouts, businesses can send or email USDC (a stablecoin pegged to the U.S. dollar) for cross-border payments, funded from platform balances or a connected bank account. With payment links, businesses can create a shareable link to request a specific amount in USDC. Customers can use that link to pay.

Grapes Studio launches AI-powered, HTML-first site builder. Grapes Studio, a website builder, has launched with an AI-powered, HTML-first editor that lets users copy and rebuild existing sites in minutes. Users can import their websites directly into the editor, then drag, drop, or ask the AI to make targeted updates, such as “add a pricing section” or “match this to our brand colors.” Users can switch seamlessly between AI assistance and hands-on drag-and-drop editing. Open Core Ventures funded the platform.

Product management platform Salsify releases AI-powered ecommerce tools. Salsify, a product experience platform, has created Angie, a conversational AI assistant to help brands navigate the Salsify platform, knowledge base, and customer configurations. Salsify has also launched its OpenAI syndication channel to help brands power agentic experiences through ChatGPT using the Agentic Commerce Protocol. The channel will provide OpenAI with an authoritative source of product data directly from brands, to surface goods in search and shopping experiences.

Home page of Salsify

Mastercard launches Merchant Cloud to support global commerce. Mastercard has announced Merchant Cloud, a payments platform that unifies services from Mastercard and its partners, helping businesses navigate and expand worldwide commerce. The platform offers scheme-agnostic solutions for credential tokenization, guest checkout, fraud protection, identity verification, and approval rate optimization. It also provides gateway services, such as omnichannel experiences, efficient transaction routing, and access to data insights, along with capabilities to conduct agentic payments securely.

Rokt mParticle launches customer data platform on Snowflake AI Data Cloud. Rokt mParticle, a real-time customer data platform, has launched its Hybrid CDP on Snowflake AI Data Cloud, providing enterprise companies with real-time activation and cloud-native flexibility. According to Rokt, the Hybrid CDP lets enterprises choose the right data strategy for every need, whether responding to customer behavior in real time or running large-scale campaigns directly from a cloud data platform.

Salesforce announces support for Agentic Commerce Protocol. Salesforce has partnered with Stripe and OpenAI to build an Instant Checkout integration guided by the Agentic Commerce Protocol. The Protocol, co-developed by Stripe and OpenAI, provides a standardized framework for brands to interact with consumers through AI agents and facilitate a quick checkout. This partnership will allow merchants using Salesforce’s Agentforce Commerce to harness conversational AI and intelligent shopping experiences. Buyers can pay using various methods, including Link, Stripe’s consumer payments product.

FIS introduces Smart Basket for real-time purchase intelligence. FIS, a financial technology platform, has announced Smart Basket, a real-time, item-level engine and transaction gateway. Smart Basket analyzes an individual's shopping behavior to apply optimal rewards and payment methods at checkout. The tool leverages three components within FIS's ecosystem: the real-time payments gateway, loyalty platform, and filtered spend.

Home page of FIS

AI Assistants Show Significant Issues In 45% Of News Answers via @sejournal, @MattGSouthern

Leading AI assistants misrepresented or mishandled news content in nearly half of evaluated answers, according to a European Broadcasting Union (EBU) and BBC study.

The research assessed the free/consumer versions of ChatGPT, Copilot, Gemini, and Perplexity as they answered news questions in 14 languages, with 22 public-service media organizations across 18 countries taking part.

The EBU said in announcing the findings:

“AI’s systemic distortion of news is consistent across languages and territories.”

What The Study Found

In total, 2,709 core responses were evaluated, with qualitative examples also drawn from custom questions.

Overall, 45% of responses contained at least one significant issue, and 81% had some issue. Sourcing was the most common problem area, affecting 31% of responses at a significant level.

How Each Assistant Performed

Performance varied by platform. Google Gemini showed the most issues: 76% of its responses contained significant problems, driven largely by sourcing issues, which affected 72% of its responses.

The other assistants were at or below 37% for major issues overall and below 25% for sourcing issues.

Examples Of Errors

Accuracy problems included outdated or incorrect information.

For instance, several assistants identified Pope Francis as the current Pope in late May, despite his death in April, and Gemini incorrectly characterized changes to laws on disposable vapes.

Methodology Notes

Participants generated responses between May 24 and June 10, using a shared set of 30 core questions plus optional local questions.

The study focused on the free/consumer versions of each assistant to reflect typical usage.

Many organizations had technical blocks that normally restrict assistant access to their content. Those blocks were removed for the response-generation period and reinstated afterward.

Why This Matters

When using AI assistants for research or content planning, these findings reinforce the need to verify claims against original sources.

If you run a publication, these findings could affect how your content is represented in AI answers. The high error rate increases the risk of misattributed or unsupported statements appearing in summaries that cite your content.

Looking Ahead

The EBU and BBC published a News Integrity in AI Assistants Toolkit alongside the report, offering guidance for technology companies, media organizations, and researchers.

Reuters reports the EBU’s view that growing reliance on assistants for news could undermine public trust.

As EBU Media Director Jean Philip De Tender put it:

“When people don’t know what to trust, they end up trusting nothing at all, and that can deter democratic participation.”


Featured Image: Naumova Marina/Shutterstock

YouTube Expands Likeness Detection To All Monetized Channels via @sejournal, @MattGSouthern

YouTube is beginning to expand access to its likeness detection tool to all channels in the YouTube Partner Program over the next few months.

The technology helps you identify unauthorized videos where your facial likeness has been altered or generated with AI.

YouTube announced the expansion after testing the tool with a small group of creators.

The tool addresses a growing concern as AI-generated content becomes more sophisticated and accessible.

How Likeness Detection Works

Channels can access the tool through YouTube Studio’s content detection tab under a new likeness section.

The onboarding process requires identity verification. You scan a QR code with your phone’s camera, then submit a photo ID and record a brief selfie video performing specific motions.

YouTube processes this information on Google servers, typically granting access within a few days.

Once verified, creators see a dashboard displaying videos that match their facial likeness. The interface shows video titles, upload dates, upload channels, view counts, and subscriber numbers. YouTube’s systems flag some matches as higher priority for review.

Taking Action On Detected Content

You have three options when reviewing matches.

You can request removal under YouTube’s privacy guidelines, submit a copyright claim, or archive the video without action. The tool automatically fills legal name and email information when starting a removal request.

Privacy removal requests apply to altered or synthetic content that violates specific criteria. YouTube’s announcement highlighted two examples: AI-generated videos showing creators endorsing political candidates, and infomercials with creators’ faces added through AI.

Copyright claims follow different rules and must consider fair use exceptions. Videos using short clips from a creator’s channel may not qualify for privacy removal but could warrant copyright action.

See a demonstration in the video below:

Policy Differences

YouTube stressed the distinction between privacy and copyright policies.

Privacy policy violations involve altered or synthetic content judged against criteria including whether the content is parody, satire, or includes AI disclosure. Copyright infringement covers unauthorized use of original content, including videos cropped to avoid detection or videos with altered audio.

The tool surfaces some short clips from creators’ own channels. These don’t qualify for privacy removal but may be eligible for copyright claims if fair use doesn’t apply.

Why This Matters

This gives YouTube Partner Program creators direct control over how AI-generated content uses their likeness.

Monetized channels can now monitor unauthorized deepfakes and request removal when videos mislead viewers with endorsements or statements that were never made.

Looking Ahead

The tool will roll out to eligible creators over the next few months. Those who see no matches shouldn’t be concerned. YouTube says this indicates no detected unauthorized use of their likeness on the platform.

Channels can withdraw consent and stop using the tool at any time through the manage likeness detection settings.

PPC Trends 2026: AI, Automation, And The Fight For Visibility via @sejournal, @MattGSouthern

If you manage PPC campaigns, you’ve seen it. Platforms are making more decisions without asking you first.

Campaign types keep consolidating into AI-first formats like Performance Max and Demand Gen. The granular controls you used to rely on keep disappearing or moving behind automation.

A year ago, Performance Max still felt experimental. Now it’s often the default option, with AI generating ad copy, and automation selecting audiences based on signals you can’t always see. When performance drops, you have fewer levers to pull and less visibility into what’s actually happening.

This can be disorienting, and the trend isn't reversing.

We asked PPC professionals how they’re navigating this shift. Most aren’t pessimistic about AI-first campaigns. Many have found ways to work with platform automation without surrendering the strategic thinking that drives results.

You can use AI tools without losing your expertise in the process.

4 Key Findings From Industry Professionals

We surveyed professionals from agency, platform, and consultancy backgrounds for this year’s report. Clear patterns emerged in how they’re adapting to AI-first campaign management.

1. AI Tools Save Time But Still Need Babysitting

Most professionals now use AI daily for tasks like keyword research and ad copy variations. The tools are good enough to integrate into workflows.

But there’s a catch. Over half identify “inaccurate, unreliable, or inconsistent output quality” as the biggest limitation. AI accelerates production, but it hasn’t replaced the need for human oversight.

One contributor noted that in regulated industries where legal review is required, AI outputs often can’t be used without heavy editing.

The professionals who get results are the ones treating AI as an assistant, not a replacement.

2. “Control” Means Something Different Now

You can’t control exact search terms the way you used to. You can’t set precise bids on individual keywords or force campaigns to follow rigid parameters.

Several contributors argue you still have meaningful control; it just operates differently than before. One Google Ads coach compared it to giving a teenager the destination address and trusting they can navigate there, even if they take a few wrong turns along the way.

The new version of control means setting clear business objectives and providing high-quality conversion data. If your conversion tracking is messy or incomplete, AI will optimize toward the wrong goals.

3. Measurement Got More Honest (And More Uncomfortable)

Cookie deprecation was canceled in Chrome, but measurement challenges haven’t disappeared. What’s changed is how practitioners talk about attribution.

One agency founder admitted that focusing too heavily on perfect attribution might have been a strategic mistake. “Your marketing strategy should hold up even if granular tracking disappears.”

Other contributors emphasize that first-party data collection with proper consent is now essential for survival, especially in lead generation models.

Revenue remains the most reliable source of truth when platform-reported metrics conflict.

The most durable measurement approach involves choosing a limited set of reliable lenses rather than attempting to reconcile data from every available source.

4. Platform-Generated Creative Performs Better Than You’d Think

This finding surprises people. Several contributors report that AI-generated creative assets can perform competitively with human-created versions when they’re prompted effectively.

But “when prompted effectively” is doing substantial work in that sentence.

Quality depends heavily on how well you prompt the tools and how much brand context you provide. The tools still struggle with maintaining consistent brand voice and meeting legal compliance requirements in regulated industries.

Visual generation continues to need improvement, though contributors note it’s getting better for ecommerce product photography.

Most teams have settled on a hybrid workflow where AI handles idea generation and creates variations while humans manage final approval and anything requiring nuanced brand voice.

What Makes This Report Different

Previous years focused on specific platform changes or new features. This year’s questions dig into strategy.

How do you maintain visibility when platforms reduce transparency? What measurement techniques still work when attribution is murky? How do you adapt creative workflows when AI can generate assets on demand?

The contributors include:

  • Brooke Osmundson, Director of Growth Marketing, Smith Micro Software.
  • Gil Gildner, Agency Co-Founder, Discosloth.
  • Navah Hopkins, Product Liaison, Microsoft.
  • Jonathan Kagan, Director of Search & Media Strategy, Amsive.
  • Mike Ryan, Head of Ecommerce Insights, Smarter Ecommerce.
  • Jyll Saskin Gales, Google Ads Coach, Inside Google Ads.

The answers reflect an industry adapting in real time. Some contributors have embraced AI-first workflows fully, while others remain cautious about surrendering too much control. All are experimenting constantly because the platforms aren’t slowing down.

Why Download This Now

If you’re managing campaigns, you’re already wrestling with these challenges. Are you approaching them with a clear strategy, or just reacting to each platform change as it happens?

This report will show you how experienced professionals at agencies, platforms, and consultancies are thinking through the same problems you’re facing right now.

Download PPC Trends 2026 to see how industry professionals are adapting their strategies, maintaining accountability in automated campaigns, and finding ways to make AI-first advertising work without losing the strategic expertise that separates successful campaigns from mediocre ones.

PPC Trends 2026


Featured Image: Paulo Bobita/Search Engine Journal