Meet the researcher hosting a scientific conference by and for AI

In October, a new academic conference will debut that’s unlike any other. Agents4Science is a one-day online event that will encompass all areas of science, from physics to medicine. All of the work shared will have been researched, written, and reviewed primarily by AI, and will be presented using text-to-speech technology. 

The conference is the brainchild of Stanford computer scientist James Zou, who studies how humans and AI can best work together. Artificial intelligence has already given scientists many useful tools, like DeepMind’s AlphaFold, which predicts the structures of proteins that are difficult to study physically. More recently, though, progress in large language models and reasoning-enabled AI has advanced the idea that AI can work more or less as autonomously as scientists themselves—proposing hypotheses, running simulations, and designing experiments on its own. 

James Zou’s Agents4Science conference will use text-to-speech to present the work of the AI researchers.

That idea is not without its detractors. Among other issues, many feel that AI is not capable of the creative thought needed in research, that it makes too many mistakes and hallucinates too often, and that it may limit opportunities for young researchers. 

Nevertheless, a number of scientists and policymakers are very keen on the promise of AI scientists. The US government’s AI Action Plan describes the need to “invest in automated cloud-enabled labs for a range of scientific fields.” Some researchers think AI scientists could unlock scientific discoveries that humans could never find alone. For Zou, the proposition is simple: “AI agents are not limited in time. They could actually meet with us and work with us 24/7.” 

Last month, Zou published an article in Nature with results obtained from his own group of autonomous AI workers. Spurred on by his success, he now wants to see what other AI scientists (that is, scientists that are AI) can accomplish. He describes what a successful paper at Agents4Science will look like: “The AI should be the first author and do most of the work. Humans can be advisors.”

A virtual lab staffed by AI

As a PhD student at Harvard in the early 2010s, Zou was so interested in AI’s potential for science that he took a year off from his computing research to work in a genomics lab, in a field that has greatly benefited from technology to map entire genomes. His time in so-called wet labs taught him how difficult it can be to work with experts in other fields. “They often have different languages,” he says. 

Large language models, he believes, are better than people at deciphering and translating between subject-specific jargon. “They’ve read so broadly,” Zou says, that they can translate and generalize ideas across science very well. This idea inspired Zou to dream up what he calls the “Virtual Lab.”

At a high level, the Virtual Lab would be a team of AI agents designed to mimic an actual university lab group. These agents would have various fields of expertise and could interact with different programs, like AlphaFold. Researchers could give one or more of these agents an agenda to work on, then open up the model to play back how the agents communicated with each other and determine which experiments people should pursue in a real-world trial. 

Zou needed a (human) collaborator to help put this idea into action and tackle an actual research problem. Last year, he met John E. Pak, a research scientist at the Chan Zuckerberg Biohub. Pak, who shares Zou’s interest in using AI for science, agreed to make the Virtual Lab with him. 

Pak would help set the topic, but both he and Zou wanted to see what approaches the Virtual Lab could come up with on its own. As a first project, they decided to focus on designing therapies for new covid-19 strains. With this goal in mind, Zou set about training five AI scientists (including ones trained to act like an immunologist, a computational biologist, and a principal investigator) with different objectives and programs at their disposal. 

Building these models took a few months, but Pak says they were very quick at designing candidates for therapies once the setup was complete: “I think it was a day or half a day, something like that.”

Zou says the agents decided to study anti-covid nanobodies, cousins of antibodies that are much smaller and less common in nature. Zou was shocked, though, at the reason. He claims the models landed on nanobodies after making the connection that these smaller molecules would be well suited to the limited computational resources the models were given. “It actually turned out to be a good decision, because the agents were able to design these nanobodies efficiently,” he says. 

The nanobodies the models designed were genuinely new advances in science, and most were able to bind to the original covid-19 variant, according to the study. But Pak and Zou both admit that the main contribution of their article is really the Virtual Lab as a tool. Yi Shi, a pharmacologist at the University of Pennsylvania who was not involved in the work but made some of the underlying nanobodies the Virtual Lab modified, agrees. He says he loves the Virtual Lab demonstration and that “the major novelty is the automation.” 

Nature accepted the article and fast-tracked it for publication preview—Zou knew leveraging AI agents for science was a hot area, and he wanted to be one of the first to test it. 

The AI scientists host a conference

When he was submitting his paper, Zou was dismayed to see that he couldn’t properly credit AI for its role in the research. Most conferences and journals don’t allow AI to be listed as coauthors on papers, and many explicitly prohibit researchers from using AI to write papers or reviews. Nature, for instance, cites uncertainties over accountability, copyright, and inaccuracies among its reasons for banning the practice. “I think that’s limiting,” says Zou. “These kinds of policies are essentially incentivizing researchers to either hide or minimize their usage of AI.”

Zou wanted to flip the script by creating the Agents4Science conference, which requires the primary author on all submissions to be an AI. Other bots then will attempt to evaluate the work and determine its scientific merits. But people won’t be left out of the loop entirely: A team of human experts, including a Nobel laureate in economics, will review the top papers. 

Zou isn’t sure what will come of the conference, but he hopes there will be some gems among the hundreds of submissions he expects to receive across all domains. “There could be AI submissions that make interesting discoveries,” he says. “There could also be AI submissions that have a lot of interesting mistakes.”

While Zou says the response to the conference has been positive, some scientists are less than impressed.

Lisa Messeri, an anthropologist of science at Yale University, has loads of questions about AI’s ability to review science: “How do you get leaps of insight? And what happens if a leap of insight comes onto the reviewer’s desk?” She doubts the conference will be able to give satisfying answers.

Last year, Messeri and her collaborator Molly Crockett investigated obstacles to using AI for science in another Nature article. They remain unconvinced of its ability to produce novel results, including those shared in Zou’s nanobodies paper. 

“I’m the kind of scientist who is the target audience for these kinds of tools because I’m not a computer scientist … but I am doing computationally oriented work,” says Crockett, a cognitive scientist at Princeton University. “But I am at the same time very skeptical of the broader claims, especially with regard to how [AI scientists] might be able to simulate certain aspects of human thinking.” 

And they’re both skeptical of the value of using AI to do science if automation prevents human scientists from building up the expertise they need to oversee the bots. Instead, they advocate for involving experts from a wider range of disciplines to design more thoughtful experiments before trusting AI to perform and review science. 

“We need to be talking to epistemologists, philosophers of science, anthropologists of science, scholars who are thinking really hard about what knowledge is,” says Crockett. 

But Zou sees his conference as exactly the kind of experiment that could help push the field forward. When it comes to AI-generated science, he says, “there’s a lot of hype and a lot of anecdotes, but there’s really no systematic data.” Whether Agents4Science can provide that kind of data is an open question, but in October, the bots will at least try to show the world what they’ve got. 

The Download: Google’s AI energy expenditure, and handing over DNA data to the police

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

In a first, Google has released data on how much energy an AI prompt uses

Google has just released a report detailing how much energy its Gemini apps use for each query. In total, the median prompt—one that falls in the middle of the range of energy demand—consumes 0.24 watt-hours of electricity, the equivalent of running a standard microwave for about one second. The company also provided average estimates for the water consumption (five drops per query) and carbon emissions associated with a text prompt to Gemini.
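The microwave comparison can be checked with quick arithmetic. A minimal sketch, assuming a typical ~1,000 W microwave (the report cites only the 0.24 watt-hour per-prompt figure; the wattage is an assumption here):

```python
# Convert the median per-prompt figure to joules and compare it with a
# microwave's power draw. The 1,000 W value is an assumption, not from
# Google's report, which gives only the 0.24 Wh per-prompt estimate.
PROMPT_WH = 0.24          # median Gemini text prompt, per Google's report
MICROWAVE_WATTS = 1000    # assumed standard microwave

joules = PROMPT_WH * 3600           # 1 Wh = 3,600 J
seconds = joules / MICROWAVE_WATTS  # watts = joules per second
print(f"{joules:.0f} J, or about {seconds:.2f} s of microwave time")
```

Under that assumption, the result lands within rounding distance of the “about one second” Google cites.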

It’s the most transparent estimate yet from a Big Tech company with a popular AI product, and the report includes detailed information about how the company calculated its final estimate.

Earlier this year, MIT Technology Review published a comprehensive series on AI and energy, at which time none of the major AI companies would reveal their per-prompt energy usage. Google’s new publication, at last, allows for a peek behind the curtain that researchers and analysts have long hoped for. Read the full story.

—Casey Crownhart

I gave the police access to my DNA—and maybe some of yours

Last year, I added my DNA profile to a private genealogical database, FamilyTreeDNA, and clicked “Yes” to allow the police to search my genes.

In 2018, police in California announced they’d caught the Golden State Killer, a man who had eluded capture for decades. Once the police had “matches” to a few relatives of the killer, they built a large family tree from which they plucked the likely suspect.

This process, called forensic investigative genetic genealogy, or FIGG, has since helped solve hundreds of murders and sexual assaults.

But I wasn’t really driven by some urge to capture distantly related serial killers. Rather, my spit had a less gallant and more quarrelsome motive: to troll privacy advocates whose fears around DNA I think are overblown and unhelpful. By giving up my saliva for inspection, I was going against the view that a person’s DNA is the individualized, sacred text that privacy advocates sometimes claim. Read the full story.

—Antonio Regalado

This article appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

Meet the researcher hosting a scientific conference by and for AI

In October, a new academic conference will debut that’s unlike any other. All of the work shared at Agents4Science will have been researched, written, and reviewed primarily by AI, and will be presented using text-to-speech technology. 

That idea is not without its detractors. Among other issues, many feel that AI is not capable of the creative thought needed in research, that it makes too many mistakes and hallucinates too often, and that it may limit opportunities for young researchers. 

Nevertheless, a number of scientists and policymakers are very keen on the promise of AI scientists—and some even think they could unlock scientific discoveries that humans could never find alone. Read the full story.

—Peter Hall

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Elon Musk tried to persuade Mark Zuckerberg to buy OpenAI
But the bid was rejected earlier this year. (Insider $)
+ OpenAI is asking Meta for evidence of any coordinated plans. (TechCrunch)
+ I’m guessing the cage fight is still off then. (FT $)

2 AI giants are seeking real-world data that can’t be scraped from the internet
It’s a bid to make their models more accurate and to find new use cases. (Rest of World)

3 Russia’s state-backed messenger app will be preinstalled on all phones
Critics say the MAX app is essentially a government spy tool. (Reuters)
+ Around 18 million people have registered to use it so far. (CNN)
+ How Russia killed its tech industry. (MIT Technology Review)

4 The Trump administration is refusing to fully fund a major HIV program
It’s ignoring a directive from Congress by withholding around $3 billion. (NYT $)
+ HIV could infect 1,400 infants every day because of US aid disruptions. (MIT Technology Review)

5 How Trump decides which chip companies may have to give up equity
Increasing your investments in the US? You’re off the hook. (WSJ $)
+ America-first chipmaking remains a fantasy, though. (Economist $)
+ Experts think Trump’s unconventional Intel deal may backfire. (Wired $)
+ DeepSeek’s new AI model is compatible with Chinese-made chips. (FT $)

6 The EU is speeding up its plans for a digital euro 💶
It’s considering running it on a public blockchain, to experts’ concern. (FT $)
+ Is the digital dollar dead? (MIT Technology Review)

7 We don’t have to open new mines to obtain minerals for clean energy
Although we have to get better at using the material we do mine. (New Scientist $)
+ How one mine could unlock billions in EV subsidies. (MIT Technology Review)

8 This newly discovered gene could usher in new chronic pain treatments
One day, cutting out certain foods could lessen discomfort. (Economist $)
+ The pain is real. The painkillers are virtual reality. (MIT Technology Review)

9 Why Africa is buying so many solar panels
It’s not just its more affluent nations snapping them up, either. (Wired $)
+ The race to get next-generation solar technology on the market. (MIT Technology Review)

10 How families are using AI to run their households
No more quibbling over meal planning. (WP $)

Quote of the day

“If AGI doesn’t come to pass sometime soon, I wouldn’t be surprised if this whole thing pops.”

—Bhavya Kashyap, an angel investor, tells Insider why investors are fuelling a risky bubble by rushing to buy stocks in the hottest AI companies.

One more thing

How AI is changing gymnastics judging

The 2023 World Championships last October marked the first time an AI judging system was used on every apparatus in a gymnastics competition. There are obvious upsides to using this kind of technology: AI could help take the guesswork out of the judging technicalities. It could even help to eliminate biases, making the sport both more fair and more transparent.

At the same time, others fear AI judging will take away something that makes gymnastics special. Gymnastics is a subjective sport, like diving or dressage, and technology could eliminate the judges’ role in crafting a narrative.

For better or worse, AI has officially infiltrated the world of gymnastics. The question now is whether it really makes it fairer. Read the full story.

—Jessica Taylor Price

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Finally, some good news—a sweet little Australian marsupial called an ampurta is no longer endangered (thanks Glen!)
+ What would a GTA set in London look like?
+ Why glass houses aren’t all they’re cracked up to be (geddit?)
+ Over in Denmark, there’s a national competition encouraging cities to get rid of their gray concrete tiles and replace them with peaceful green spaces (thanks Alice!)

Triple Whale’s Moby AI Gets Things Done

We’ve all heard the buzz surrounding agentic AI. What’s missing for many of us is how it can help our business. What is an AI agent? Can it really perform tasks and get things done?

I asked those questions and more to Anthony DelPizzo. He’s with Triple Whale, the Shopify-backed ecommerce analytics platform that has launched its own AI agent called Moby. It responds to ChatGPT-like prompts, suggests marketing channels, and even composes emails.

The entire audio of our conversation is embedded below. The transcript is condensed and edited for clarity.

Eric Bandholz: Who are you, and what do you do?

Anthony DelPizzo: I lead product marketing at Triple Whale and have been here for about nine months. Before that, I spent nearly four years at Klaviyo.

Triple Whale is an analytics platform for ecommerce brands. We merge fragmented data across marketing and sales into a single system and dashboard to help merchants make strategic decisions. To date, we’ve processed over $65 billion in gross merchandise volume.

We launched Moby, an agentic AI, about a month ago after a long testing phase. Moby is a set of AI tools that interact directly with merchants’ data. Think of it as ChatGPT focused on the platforms you already work with. Merchants can ask Moby both simple and complex questions and get answers tailored to their own data.

Moby Agents take it a step further. They’re akin to autonomous teammates that can analyze information, generate insights, and even take actions across ad platforms, marketing channels, operations, and more. The result could be much higher conversions or lower overhead.

Moby is built on Triple Whale’s massive data warehouse. It draws on those benchmarks and works natively with metrics such as CAC and ROAS. By using that data, Moby can connect cleanly with large language models from Anthropic and OpenAI for each type of query.

Moby is embedded within the Triple Whale platform. It doesn’t just analyze; it can also perform tasks such as activating ads or drafting emails.

Bandholz: Do you share customer data with those LLMs?

DelPizzo: We have privacy agreements with all LLM partners. Data stays within Triple Whale’s private environment. We’re not sending entire datasets to Anthropic, OpenAI, or any other company. Instead, Moby provides context to the LLMs based on the prompt, allowing our customers to use the LLMs securely.

For example, a prompt could be, “How should I prepare for BFCM to grow revenue 30%?” Moby’s Deep Dive feature breaks requests like this into multiple steps, with each acting as an agent examining a different aspect of the business. The result is a structured plan merchants can use to prepare for Black Friday and Cyber Monday.

Merchants use Moby for general prompts and analysis, not just seasonal planning. We provide a prompting guide to help start with effective questions and then refine the queries through follow-ups.

Bandholz: Say I prompt Moby to analyze my sales, margins, and ads for guidance. What then?

DelPizzo: Moby would connect to your data as a Triple Whale client — product margins, SKUs, ad performance, Klaviyo, Attentive, logistics, and more. By analyzing these inputs, it can identify growth levers, such as which products or channels drove profit last year and which ones are trending now. For instance, if a brand has started performing well on AppLovin, the mobile ad platform, Moby might suggest scaling there for BFCM.

Triple Whale’s platform includes eight attribution models, along with post-purchase surveys, to track what’s driving results. We’ve also added marketing mix modeling to measure the impact from click and non-click channels, including Amazon. Moby can run correlations at a statistically significant level, which gives merchants confidence in the conclusions.

Based on that, it forecasts likely outcomes tied to business goals. If a brand wants to grow revenue by 30%, Moby highlights which levers — spending, channels, creative — are likely to help reach that target. Merchants can even see Moby’s reasoning step by step, like watching strategists think through a plan.

Moby’s analysis isn’t limited to numbers. Using AI vision, it can review ad creative, such as color choices, hooks, and copy. It also analyzes email performance by scanning HTML, subject lines, and preview text. It can draft email copy informed by this analysis, giving merchants ideas to test.

Bandholz: Can you cite anonymous customer wins from Moby?

DelPizzo: We rolled out early access to Moby and Moby Agents in February, five months ago. In April, a $100 million global brand used it during a four-day giveaway. On the final day, the team asked Moby, “What should we adjust in our plan?”

Moby responded with a detailed budget allocation by channel and predicted the revenue impact. They followed it exactly and ended up having their highest revenue day ever — 35% above their previous record, more than $200,000 higher.

Another example is LSKD, a fitness apparel brand in Australia with more than 50 stores. They used Moby to analyze the performance of their marketing channels. One agent uncovered over $100,000 in fraudulent spend from an influencer’s self-bidding, which saved the company that money. Since adopting Moby Agents, LSKD’s ROAS has grown about 40%.

Bandholz: How can merchants go wrong with Moby?

DelPizzo: The most common challenge is trying to adopt too much at once. Success usually comes from starting small. We provide a library of 70 pre-built agents, but using all of them right away can feel overwhelming.

The best outcomes are from teams that begin with a single agent, adapt it to their business, and build confidence with steady results. From there, they expand to other areas — maybe they start with the conversion rate optimization team, then retention, then other steps in the funnel. That gradual approach tends to be more sustainable.

Bandholz: Why use Moby instead of building a custom data tool with an LLM such as DeepSeek?

DelPizzo: One factor is the dataset it draws from. Moby is trained on $65 billion in GMV and has access to broad ecommerce benchmarks. It’s not about sharing brand-specific data but rather using aggregated insights to provide context — like knowing typical CAC or ROAS levels in different industries, or, say, margins for apparel versus skincare.

Another piece is the infrastructure. Building from scratch requires a unified schema for orders, events, and performance data. At Triple Whale, our large team of engineers has worked on this for years, and it’s still evolving. Without that groundwork, it’s hard to achieve the same level of ecommerce-specific intelligence.

Custom setups are possible, but Moby combines benchmarks, context, and infrastructure in a way that’s difficult to replicate.

Bandholz: Where can people support you, follow you, reach out?

DelPizzo: Our site is TripleWhale.com. Our socials include X and LinkedIn. I’m on LinkedIn.

Tips For Running Competitor Campaigns In Paid Search via @sejournal, @timothyjjensen

Paid search professionals constantly debate the merits of running paid search campaigns bidding on competitor brand names. Questions such as the following may arise:

  • Is bidding on your competitors ethical?
  • Are the high costs per click (CPCs) worth the budget?
  • Are you actually reaching people with buying intent?

In this article, I’ll talk through answers to these questions and more to help you understand if a competitor search campaign might be right for your brand.

Competitor Bidding Ethics

Google and Microsoft allow you to bid on your competitors’ names as keywords (a right that has even been tested in the courts), but you cannot directly mention a trademarked brand name that you don’t have the rights to use in your ad copy.

In addition, even if you don’t include their name, you should not write your ad copy in a way that makes users think they may be going to your competitor’s site instead of yours.

For instance, you might use the headline “Official Site” (without mentioning whose official site you’re pointing to). When a user sees that in conjunction with having searched for the competitor’s name, they may naturally think they’re going to that company’s site.

Finally, the landing page should also clearly feature your brand’s name and logo in order to avoid deception.

Cost-Benefit Analysis Of Competitor Bidding

Let’s face it: competitor keywords can have expensive CPCs. High competition around these keywords in many industries drives up cost.

You’ll also generally struggle to achieve a decent quality score, since other companies’ brand keywords are naturally deemed less relevant to your ads and landing pages, which can also drive up cost.

Because of the high potential cost, competitor bidding does not make sense for all industries or brands.

For instance, if you’re selling products with a low profit margin, bidding on these pricy keywords may not work. Generally, this tactic works best for higher-cost, higher-margin products and services, where it’s easier to still yield a return on investment (ROI) despite higher costs per acquisition (CPAs) and lower conversion rates.

Be careful also about entering competitor bidding “wars” for the sole reason that other brands are bidding on your name. This action can quickly lead to rising CPCs for all with little payoff.

One scenario where I’ve seen competitor bidding work best is when a company offers a very specific, complex service that’s difficult to sum up in a search query but has established brands that the right prospects would be familiar with.

For instance, if you’re promoting software for a particular type of industrial machine, niche buyers may be aware of companies that already provide that software.

Once you’ve established a use case for competitor bidding, you should establish a list of brands to use.

Determining Competitors To Bid On

When figuring out which competitor brands to bid on, you should rely on a combination of both internal company data as well as ad platform data.

First of all, talk with key stakeholders in marketing and sales to determine who the brand considers to be top competitors.

Who has similar products and services? Which brands target similar prospects (whether by location, demographic, or company traits)?

Note that this list likely won’t contain all potential competitors.

If you have established paid search campaigns already, use auction insights to see the top brands showing up for the same queries as yours. Of course, these may not all be completely relevant and will require some vetting.

Once you’ve compiled a list, it’s time to think through the keywords you’ll bid on.

Who Is (And Isn’t) Your Audience

Be careful about going unnecessarily broad in the keywords you’re using in competitor campaigns.

Generally, if you’re just bidding on the brand name alone, you’re likely reaching a lot of existing customers looking to log in, place online orders, or find a nearby location without giving a second thought to anything else.

For instance, Apple isn’t going to sell many MacBooks by bidding on the word “Microsoft.”

Ideally, you want to reach people who are in a research phase, indicated by wording in their search query:

  • [Brand name] + cost/pricing
  • [Brand name] + compare/vs
  • [Brand name] + reviews
  • [Brand name] + pros/cons
  • [Brand name] + alternatives
  • [Brand name] + features

A potentially riskier strategy, since people may be in a heated moment, is to target those experiencing issues and potentially in the market to switch:

  • [Brand name] + support
  • [Brand name] + troubleshoot
  • [Brand name] + cancel
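The modifier lists above lend themselves to scripted keyword expansion. A minimal sketch, using hypothetical brand names (`acme crm`, `widgetsoft` are illustrative placeholders, not real competitor data) combined with the research- and switching-intent modifiers:

```python
from itertools import product

# Illustrative competitor names; substitute your own vetted list.
brands = ["acme crm", "widgetsoft"]
research_modifiers = ["cost", "pricing", "vs", "reviews",
                      "pros and cons", "alternatives", "features"]
switching_modifiers = ["support", "troubleshoot", "cancel"]  # riskier intent

def build_keywords(brands, modifiers):
    """Combine each brand with each modifier into a keyword candidate."""
    return [f"{brand} {modifier}" for brand, modifier in product(brands, modifiers)]

research_kws = build_keywords(brands, research_modifiers)
switching_kws = build_keywords(brands, switching_modifiers)
print(len(research_kws), len(switching_kws))  # 14 6
```

From there, the candidates can be vetted by hand before being uploaded to your campaigns for testing.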

Create Your Ads

Now, think through the ad copy you’ll put in front of prospects searching for competitors. Take some time to review competitor ads and offers, considering how your calls-to-action (CTAs) will stack up.

Think through areas where you “win” against certain competitors and highlight those. Remember that these may vary based on the brand you’re bidding against.

For instance, you may have lower costs than a certain competitor and highlight pricing for those searches, while you may have higher costs than another competitor but have unique features to highlight.

Also, look at how your offers compare. If one competitor offers a seven-day demo and you offer a 30-day demo, feature that in your ad.

This is also an area to monitor regularly, adjusting your CTAs as competitors tweak their ads and offers.

What Happens After The Ad?

One maxim applicable to any paid search campaign is that what happens on the search engine results page up to the ad click is only one portion of the user experience.

A significant portion of the decision process happens after reaching the landing page, beyond what you can control in keywords and ad copy.

Think through what your prospect is seeing, given that they arrived while researching a competitor. Your homepage probably isn’t the best place to land them, and the same sales landing page you use for more general keywords may not be ideal either.

Assuming a user is comparison shopping, placing some content on your landing page positioning your brand against others will likely help.

For instance, you could create a table showing how your features and pricing stack up vs. competitors (either mentioning specific names or providing industry averages).

You could also home in on trust signals that set your brand apart. Highlight industry awards you’ve won. Mention the number of accounts serviced. Talk about how many integrations you have with commonly used products.

If you need to establish a baseline for comparing against other companies, prompt a large language model (LLM) to put together a list of features for your brand and a list of top competitors.

Provide the URLs for pages that would contain products/services to flesh this out.

Launch And Monitor Results

Once you have your competitor campaigns fleshed out, it’s time to get them off the ground and see what performance looks like.

In addition to ensuring proper conversion tracking and watching for lead/sale quality, you’ll also want to keep an eye out for both how current competitors change up their offers and new competitors entering the space that may be worth targeting.

With a carefully thought-out setup and proper monitoring, you may find that competitor search campaigns allow you to capture leads or sales from queries you were not previously reaching.

On the other hand, you may discover that for your industry, the CPAs and conversion rates aren’t worthwhile, but as with anything in PPC, you ran a test and learned from the results.

At the very least, take stock of potential competitors in your field and consider testing if you are looking to expand your reach in paid search.


New Ecommerce Tools: August 21, 2025

Every week we handpick and publish a list of new products and services from vendors of ecommerce merchants. This installment includes updates on cross-border transactions, marketing, social commerce, AI shopping agents, AI-powered subscriptions, and order management platforms.

Got an ecommerce product release? Email releases@practicalecommerce.com.

New Tools for Merchants

dLocal and Tiendamia partner on cross-border ecommerce in Latin America. dLocal, a cross-border payment platform connecting global merchants to emerging markets, has partnered with Tiendamia, a Latin America-based marketplace. Through the integration, Tiendamia can accept cross-border payments and offer a range of local payment methods, from cards to cash-based options, across Ecuador, Costa Rica, Peru, and Argentina, where it also supports digital wallets, and can enable domestic transactions in Uruguay. Tiendamia can pay local providers in Ecuador, Costa Rica, Peru, and Uruguay.

Home page of dLocal

Privy debuts marketing automation features for ecommerce brands. Privy, an ecommerce marketing platform, has launched features to help merchants simplify workflows and personalize the customer journey. Privy Flows is a new visual marketing automation tool that triggers emails and texts based on customer actions. The SMS and MMS campaign composer enables sellers to design and send high-performing mobile campaigns quickly. Smart Triggers combines conditions, such as exit intent and scroll depth, to deliver a message at the right moment.

Payoneer and Stripe enhance online checkout experience for SMBs. Payoneer, a global payments and funding platform, has partnered with Stripe, the payment processor, expanding Payoneer’s Online Checkout offering for cross-border direct-to-consumer merchants. Launching in the Asia Pacific region first, the upgraded Payoneer Checkout capabilities, powered by Stripe, will empower SMBs to accept a broader range of payments via online webstore checkout, including buy-now pay-later options such as Affirm and Klarna, and digital wallets such as Apple Pay and Google Pay.

BlueSnap partners with Commerce for B2B payments and accounts receivable automation. BlueSnap, a payment orchestration platform for B2B and B2C businesses, has announced its integration with Commerce, the parent company of BigCommerce, to deliver automations for B2B payments and accounts receivable. BigCommerce merchants can sync customer and invoice data in real time with back-office systems. Buyers can view and pay inventory orders and vendor invoices in one branded portal. Merchants can enable autopay, early-pay discounts, invoice reminders, and real-time updates.

Riskified partners with Human on AI shopping agent commerce. Riskified, a provider of ecommerce fraud prevention and risk intelligence, has partnered with Human Security, a cybersecurity company, to advance a unified security framework for merchants. By aligning Human Security’s recently launched Sightline featuring AgenticTrust with Riskified’s ecommerce risk management expertise in fraud prevention, chargeback protection, and policy abuse prevention, merchants can apply consistent trust policies and transaction decisions across both human and AI-driven interactions, according to Riskified.

Home page of Riskified

Subotiz launches AI-powered subscription platform. Subotiz is a new platform combining subscription management, global payments, and intelligent automation. Flexible subscription billing supports recurring plans, tiered pricing, trials, promotions, and dynamic billing cycles. AI-powered automation monitors user behavior and payment patterns to reduce churn. Subotiz provides built-in tools for customizable cross-border tax configurations and compliance. Also, Subotiz connects to over 200 payment methods across multiple gateways, currencies, and regions.

eBay launches AI-powered seller tools. At its Open25 seller event in April, eBay unveiled a suite of features and updates to its marketplace, including embedded text offer-making, AI-powered messaging support, and new seller protections. eBay has released an AI assistant for messaging on the eBay mobile app and web in the U.S. and U.K. With Offers in Messaging, buyers and sellers can negotiate directly in the message thread — sending, receiving, countering, and accepting offers without switching screens.

Warp launches SMB suite to simplify logistics. Warp, an enterprise freight transportation service, has launched the SMB Suite, a bundled multichannel logistics platform tailored for the apparel, retail, and consumer goods sectors. Warp’s offerings include less-than-truckload, pool distribution, big and bulky final mile delivery, inbound vendor consolidation, and zone skipping, integrating its nationwide network of tech-enabled cross-docks, flexible routing systems, and unified technology stack.

WebSell integrates with Microsoft Dynamics 365 Business Central. WebSell, an ecommerce platform for retailers and wholesalers, has integrated with Microsoft Dynamics 365 Business Central, allowing companies to connect back-office and ecommerce operations in a streamlined platform. The integration automatically syncs data between Microsoft Dynamics 365 Business Central and an online store, including products, inventory, prices, customer information, and orders. WebSell supports both B2B and B2C models and includes tools such as customer-specific pricing, multi-store management, search engine optimization, and marketing services.

Home page of WebSell

Mayple and Emirates Courier Express partner for cross-border delivery. Mayple Global, an ecommerce logistics platform, has partnered with Emirates Courier Express to expand cross-border delivery capabilities for U.S. merchants. The collaboration utilizes Mayple’s centralized logistics hub in Dubai and Emirates Courier Express’s network to deliver packages to eight international markets, shortening transit times, reaching challenging markets, simplifying customs handling, and accessing competitive shipping rates. Mayple’s model centralizes inventory in Dubai, so that brands can ship from a single hub.

CollAble launches Find.ly storefront tool for influencers. CollAble, a digital influencer network, has launched Find.ly, a storefront and link aggregator tool turning influencers’ social media posts into shoppable storefronts. Influencers can (i) curate and organize products in a visually appealing storefront, (ii) synchronize social media posts with shoppable product links, enabling audiences to purchase directly from the post, and (iii) integrate with affiliate networks to ensure real-time tracking and commission attribution.

THG Commerce expands social commerce features as TikTok Shop Partner. THG Commerce, an ecommerce platform from THG Ingenuity, a sales acceleration provider, has announced the expansion of its social commerce capabilities and official TikTok Shop Partner status. The expanded services include social strategy development, live commerce execution, content creation, influencer and affiliate marketing, community management, and social analysis and reporting.

Deck Commerce launches modular order management solution. Deck Commerce, an order management system for D2C brands, has launched Commerce Centers, a modular order management platform that helps brands retain shoppers through fulfillment, delivery, and returns. According to Deck Commerce, each Center helps brands improve key steps in the process by making inventory more available, orders easier to manage, fulfillment faster and more accurate, and service more reliable.

Home page of Deck Commerce

Google: Why Lazy Loading Can Delay Largest Contentful Paint (LCP) via @sejournal, @MattGSouthern

In a recent episode of Google’s Search Off the Record podcast, Martin Splitt and John Mueller discussed when lazy loading helps and when it can slow pages.

Splitt used a real-world example from developers.google.com to illustrate a common pattern: making every image lazy by default can delay Largest Contentful Paint (LCP) when above-the-fold visuals are included.

Splitt said:

“The content management system that we are using for developers.google.com … defaults all images to lazy loading, which is not great.”

Splitt used the example to explain why lazy-loading hero images is risky: you tell the browser to wait on the most visible element, which can push back LCP and cause layout shifts if dimensions aren’t set.

Splitt said:

“If you are using lazy loading on an image that is immediately visible, that is most likely going to have an impact on your largest contentful paint. It’s like almost guaranteed.”

How Lazy Loading Delays LCP

LCP measures the moment the largest text or image in the initial viewport is painted.

Normally, the browser’s preload scanner finds that hero image early and fetches it with high priority so it can paint fast.

When you add loading="lazy" to that same hero, you change the browser’s scheduling:

  • The image is treated as lower priority, so other resources start first.
  • The browser waits until layout and other work progress before it requests the hero image.
  • The hero then competes for bandwidth after scripts, styles, and other assets have already queued.

That delay shifts the paint time of the largest element later, which increases your LCP.

On slow networks or CPU-limited devices, the effect is more noticeable. If width and height are missing, the late image can also nudge layout and feel “jarring.”

SEO Risk With Some Libraries

Browsers now support a built-in loading attribute for images and iframes, which removes the need for heavy JavaScript in standard scenarios. WordPress adopted native lazy loading by default, helping it spread.

Splitt said:

“Browsers got a native attribute for images and iframes, the loading attribute … which makes the browser take care of the lazy loading for you.”

Older or custom lazy-loading libraries can hide image URLs in nonstandard attributes. If the real URL never lands in src or srcset in the HTML Google renders, images may not get picked up for indexing.

Splitt said:

“We’ve seen multiple lazy loading libraries … that use some sort of data-source attribute rather than the source attribute… If it’s not in the source attribute, we won’t pick it up if it’s in some custom attribute.”

How To Check Your Pages

Use Search Console’s URL Inspection to review the rendered HTML and confirm that above-the-fold images and lazy-loaded modules resolve to standard attributes. Avoid relying on the screenshot.
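
One way to audit this at scale is to parse the rendered HTML yourself. Below is a minimal sketch using Python’s standard html.parser module; the class name, attribute checks, and sample markup are illustrative assumptions, not Google tooling.

```python
from html.parser import HTMLParser

class LazyImageAudit(HTMLParser):
    """Flags <img> tags whose URL lives only in a nonstandard attribute
    (e.g. data-src), plus any images marked loading="lazy"."""
    def __init__(self):
        super().__init__()
        self.missing_src = []  # images a crawler may not pick up for indexing
        self.lazy = []         # images you may want to make eager if above the fold

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        a = dict(attrs)
        # No standard src/srcset: URL is hidden from renderers that ignore data-*.
        if "src" not in a and "srcset" not in a:
            self.missing_src.append(a.get("data-src", "(no URL found)"))
        if a.get("loading") == "lazy":
            self.lazy.append(a.get("src") or a.get("data-src"))

html = '''
<img src="/hero.jpg" loading="lazy" width="1200" height="600">
<img data-src="/gallery-1.jpg" class="lazyload">
<img src="/footer-logo.png" loading="lazy">
'''
audit = LazyImageAudit()
audit.feed(html)
print(audit.missing_src)  # ['/gallery-1.jpg']
print(audit.lazy)         # ['/hero.jpg', '/footer-logo.png']
```

Run it against the rendered HTML from URL Inspection: anything in `missing_src` is an indexing risk, and any above-the-fold image in `lazy` is an LCP risk.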

Splitt advised:

“If the rendered HTML looks like it contains all the image URLs in the source attribute of an image tag … then you will be fine.”

Ranking Impact

Splitt framed ranking effects as modest. Core Web Vitals contribute to ranking, but he called it “a tiny minute factor in most cases.”

What You Should Do Next

  • Keep hero and other above-the-fold images eager with width and height set.
  • Use native loading="lazy" for below-the-fold images and iframes.
  • If you rely on a library for previews, videos, or dynamic sections, make sure the final markup exposes real URLs in standard attributes, and confirm in rendered HTML.

Looking Ahead

Lazy loading is useful when applied selectively. Treat it as an opt-in for noncritical content.

Verify your implementation with rendered HTML, and watch how your LCP trends over time.


Featured Image: Screenshot from YouTube.com/GoogleSearchCentral, August 2025. 

Google Confirms New Google Verified Badge for Local Services Ads via @sejournal, @brookeosmundson

Google just announced a new unifying identity for its Local Services Ads (LSAs) verification badges.

Called Google Verified, the badge will replace several different trust signals that advertisers and consumers have been seeing over the years.

This includes the Google Guaranteed, Google Screened, License Verified by Google, and the Money Back Guarantee program.

Starting in October 2025, eligible LSAs that pass the necessary screenings will display this streamlined mark: a single badge designed to communicate credibility in a more consistent way.

Why is Google Consolidating Badges?

In the past, Google’s verification system was fragmented.

Different types of businesses had different badges, and consumers were left guessing what each one actually meant. Was a “Screened” provider more trustworthy than a “Guaranteed” one? Did a license verification carry more weight than a money-back promise?

The lack of consistency made it harder for advertisers to explain their value and for consumers to make decisions.

By rolling everything into one identity, Google Verified aims to simplify the process for everyone involved.

The badge will not only appear across Local Services Ads but will also add transparency for consumers. When a user taps or hovers over the badge, they can see the specific checks a business has passed.

How Does This Change Impact Advertisers?

For marketers and business owners, the simplified badge system removes some of the confusion around what signals matter.

Instead of juggling multiple programs, the message is now clear: your business is either Google Verified, or it’s not.

That said, the bar for participation may feel higher. Businesses that don’t keep their documentation, licensing, and other requirements up to date risk losing the badge.

Since Google has indicated it may only show the badge when it predicts it will help users make decisions, credibility and visibility could become even more closely linked.

In short, advertisers who maintain verification stand to benefit from increased trust, while those who lag behind could see their ads appear less competitive.

This update doesn’t require marketers to overhaul their entire strategy by any means. However, there are a few practical steps you can take to ensure a smooth transition by October.

  • Review eligibility now. Make sure your licenses, insurance, and background checks are up-to-date before October.
  • Build in reminders. Treat verification like an ongoing compliance process, not a one-time task.
  • Educate clients or internal teams. If you manage LSA campaigns for others, help them understand that the badge isn’t just a cosmetic update. It reflects ongoing credibility.
  • Monitor performance post-launch. Once the new badge rolls out, watch for shifts in click-through rate (CTR) and conversion rates. If verification gives a measurable lift, you’ll want to highlight that value in your reporting.

A Shift Toward Ongoing Trust

Google Verified may look like a rebrand on the surface, but it’s also a signal that trust in digital advertising is moving toward continuous validation.

For businesses, this means credibility is not something you earn once; it’s something you prove over and over again.

For advertisers, the key takeaway is simple: don’t treat this as a one-time update. Verification will become an expectation, not a nice-to-have, and it could influence not just how consumers view your ads but how often those ads are shown.

Semantic Overlap Vs. Density: Finding The Balance That Wins Retrieval via @sejournal, @DuaneForrester

Marketers today spend their time doing keyword research to uncover opportunities, closing content gaps, making sure pages are crawlable, and aligning content with E-E-A-T principles. Those things still matter. But in a world where generative AI increasingly mediates information, they are not enough.

The difference now is retrieval. It doesn’t matter how polished or authoritative your content looks to a human if the machine never pulls it into the answer set. Retrieval isn’t just about whether your page exists or whether it’s technically optimized. It’s about how machines interpret the meaning inside your words.

That brings us to two factors most people don’t think about much, but which are quickly becoming essential: semantic density and semantic overlap. They’re closely related, often confused, but in practice, they drive very different outcomes in GenAI retrieval. Understanding them, and learning how to balance them, may help shape the future of content optimization. Think of them as part of the new on-page optimization layer.

Image Credit: Duane Forrester

Semantic density is about meaning per token. A dense block of text communicates maximum information in the fewest possible words. Think of a crisp definition in a glossary or a tightly written executive summary. Humans tend to like dense content because it signals authority, saves time, and feels efficient.

Semantic overlap is different. Overlap measures how well your content aligns with a model’s latent representation of a query. Retrieval engines don’t read like humans. They encode meaning into vectors and compare similarities. If your chunk of content shares many of the same signals as the query embedding, it gets retrieved. If it doesn’t, it stays invisible, no matter how elegant the prose.

This concept is already formalized in natural language processing (NLP) evaluation. One of the most widely used measures is BERTScore (https://arxiv.org/abs/1904.09675), introduced by researchers in 2020. It compares the embeddings of two texts, such as a query and a response, and produces a similarity score that reflects semantic overlap. BERTScore is not a Google SEO tool. It’s an open-source metric rooted in the BERT model family, originally developed by Google Research, and has become a standard way to evaluate alignment in natural language processing.
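
To make overlap concrete, here is a toy sketch. Real retrieval systems and BERTScore compare contextual embeddings; this example substitutes bag-of-words count vectors and cosine similarity, so the query, passages, and scores are illustrative assumptions only.

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': lowercase term counts (stand-in for a real vector)."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

query = "retrieval augmented generation chunks"
aligned = "Retrieval-augmented generation retrieves relevant chunks."
unrelated = "Large language models generate text."

print(round(cosine(embed(query), embed(aligned)), 3))    # high overlap: gets retrieved
print(round(cosine(embed(query), embed(unrelated)), 3))  # no shared terms: invisible
```

The passage sharing the query’s terms scores far higher, which is the overlap effect retrieval engines reward.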

Now, here’s where things split. Humans reward density. Machines reward overlap. A dense sentence may be admired by readers but skipped by the machine if it doesn’t overlap with the query vector. A longer passage that repeats synonyms, rephrases questions, and surfaces related entities may look redundant to people, but it aligns more strongly with the query and wins retrieval.

In the keyword era of SEO, density and overlap were blurred together under optimization practices. Writing naturally while including enough variations of a keyword often achieved both. In GenAI retrieval, the two diverge. Optimizing for one doesn’t guarantee the other.

This distinction is recognized in evaluation frameworks already used in machine learning. BERTScore, for example, shows that a higher score means greater alignment with the intended meaning. That overlap matters far more for retrieval than density alone. And if you really want to deep-dive into LLM evaluation metrics, this article is a great resource.

Generative systems don’t ingest and retrieve entire webpages. They work with chunks. Large language models are paired with vector databases in retrieval-augmented generation (RAG) systems. When a query comes in, it is converted into an embedding. That embedding is compared against a library of content embeddings. The system doesn’t ask “what’s the best-written page?” It asks “which chunks live closest to this query in vector space?”

This is why semantic overlap matters more than density. The retrieval layer is blind to elegance. It prioritizes alignment and coherence through similarity scores.

Chunk size and structure add complexity. Too small, and a dense chunk may miss overlap signals and get passed over. Too large, and a verbose chunk may rank well but frustrate users with bloat once it’s surfaced. The art is in balancing compact meaning with overlap cues, structuring chunks so they are both semantically aligned and easy to read once retrieved. Practitioners often test chunk sizes in ranges such as 200–500 tokens and 800–1,000 tokens to find the balance that fits their domain and query patterns.
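
A sliding-window chunker makes the trade-off easy to experiment with. This sketch approximates tokens with whitespace-split words (a real pipeline would use the model’s tokenizer), and the overlap parameter is an assumption for illustration.

```python
def chunk_text(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping chunks of roughly `size` tokens.
    Tokens are approximated by whitespace words; swap in a real
    tokenizer for production use."""
    words = text.split()
    step = size - overlap  # each window starts `overlap` tokens before the previous one ends
    return [" ".join(words[i:i + size]) for i in range(0, len(words), step)]

doc = " ".join(f"word{i}" for i in range(1000))
chunks = chunk_text(doc, size=200, overlap=50)
print(len(chunks))             # 7 chunks for a 1,000-word document
print(len(chunks[0].split()))  # 200 tokens in the first chunk
```

Varying `size` between the ranges practitioners test (200–500 and 800–1,000 tokens) is then a one-line change.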

Microsoft Research offers a striking example. In a 2025 study analyzing 200,000 anonymized Bing Copilot conversations, researchers found that information gathering and writing tasks scored highest in both retrieval success and user satisfaction. Retrieval success didn’t track with compactness of response; it tracked with overlap between the model’s understanding of the query and the phrasing used in the response. In fact, in 40% of conversations, the overlap between the user’s goal and the AI’s action was asymmetric. Retrieval happened where overlap was high, even when density was not. Full study here.

This reflects a structural truth of retrieval-augmented systems. Overlap, not brevity, is what gets you in the answer set. Dense text without alignment is invisible. Verbose text with alignment can surface. The retrieval engine cares more about embedding similarity than about prose quality.

This isn’t just theory. Semantic search practitioners already measure quality through intent-alignment metrics rather than keyword frequency. For example, Milvus, a leading open-source vector database, highlights overlap-based metrics as the right way to evaluate semantic search performance. Their reference guide emphasizes matching semantic meaning over surface forms.

The lesson is clear. Machines don’t reward you for elegance. They reward you for alignment.

There’s also a shift in how we think about structure here. Most people see bullet points as shorthand: quick, scannable fragments. That works for humans, but machines read them differently. To a retrieval system, a bullet is a structural signal that defines a chunk. What matters is the overlap inside that chunk. A short, stripped-down bullet may look clean but carry little alignment. A longer, richer bullet, one that repeats key entities, includes synonyms, and phrases ideas in multiple ways, has a higher chance of retrieval. In practice, that means bullets may need to be fuller and more detailed than we’re used to writing. Brevity doesn’t get you into the answer set. Overlap does.

If overlap drives retrieval, does that mean density doesn’t matter? Not at all.

Overlap gets you retrieved. Density keeps you credible. Once your chunk is surfaced, a human still has to read it. If that reader finds it bloated, repetitive, or sloppy, your authority erodes. The machine decides visibility. The human decides trust.

What’s missing today is a composite metric that balances both. We can imagine two scores:

Semantic Density Score: This measures meaning per token, evaluating how efficiently information is conveyed. This could be approximated by compression ratios, readability formulas, or even human scoring.

Semantic Overlap Score: This measures how strongly a chunk aligns with a query embedding. This is already approximated by tools like BERTScore or cosine similarity in vector space.

Together, these two measures give us a fuller picture. A piece of content with a high density score but low overlap reads beautifully, but may never be retrieved. A piece with a high overlap score but low density may be retrieved constantly, but frustrate readers. The winning strategy is aiming for both.
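
Neither score exists as a standard tool today, but rough proxies are easy to sketch: a unique-token ratio as a stand-in for density and query-term recall as a stand-in for overlap, blended with a harmonic mean. These proxies are assumptions for illustration, not established metrics.

```python
import re

def tokens(text: str) -> list[str]:
    return re.findall(r"[a-z0-9]+", text.lower())

def density_score(text: str) -> float:
    """Proxy for meaning per token: share of tokens that are unique."""
    toks = tokens(text)
    return len(set(toks)) / len(toks) if toks else 0.0

def overlap_score(query: str, text: str) -> float:
    """Proxy for query alignment: fraction of query terms the text covers."""
    q, t = set(tokens(query)), set(tokens(text))
    return len(q & t) / len(q) if q else 0.0

def composite(query: str, text: str) -> float:
    """Harmonic mean rewards passages that score well on both axes."""
    d, o = density_score(text), overlap_score(query, text)
    return 2 * d * o / (d + o) if d + o else 0.0

print(round(composite("semantic overlap retrieval",
                      "semantic overlap drives retrieval in vector search"), 2))  # 1.0
```

A chunk that is all filler (low density) or misses the query’s terms (low overlap) drags the composite down, which is the balance the two-score idea is after.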

Imagine two short passages answering the same query:

Dense version: “RAG systems retrieve chunks of data relevant to a query and feed them to an LLM.”

Overlap version: “Retrieval-augmented generation, often called RAG, retrieves relevant content chunks, compares their embeddings to the user’s query, and passes the aligned chunks to a large language model for generating an answer.”

Both are factually correct. The first is compact and clear. The second is wordier, repeats key entities, and uses synonyms. The dense version scores higher with humans. The overlap version scores higher with machines. Which one gets retrieved more often? The overlap version. Which one earns trust once retrieved? The dense one.

Let’s consider a non-technical example.

Dense version: “Vitamin D regulates calcium and bone health.”

Overlap‑rich version: “Vitamin D, also called calciferol, supports calcium absorption, bone growth, and bone density, helping prevent conditions such as osteoporosis.”

Both are correct. The second includes synonyms and related concepts, which increases overlap and the likelihood of retrieval.
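
Scoring the two versions with a crude recall-style overlap proxy (the fraction of query terms a passage covers) shows the effect; the query and scoring function are assumptions for this example.

```python
import re

def overlap_recall(query: str, passage: str) -> float:
    """Fraction of the query's terms that appear in the passage."""
    tok = lambda s: set(re.findall(r"[a-z0-9]+", s.lower()))
    q = tok(query)
    return len(q & tok(passage)) / len(q)

query = "how does vitamin d support calcium absorption and bone density"
dense = "Vitamin D regulates calcium and bone health."
rich = ("Vitamin D, also called calciferol, supports calcium absorption, "
        "bone growth, and bone density, helping prevent conditions such as osteoporosis.")

print(overlap_recall(query, dense))  # 0.5
print(overlap_recall(query, rich))   # 0.7
```

The overlap-rich version covers more of the query’s terms ("absorption," "density") and wins under this proxy, even though the dense version reads more tightly.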

This Is Why The Future Of Optimization Is Not Choosing Density Or Overlap, It’s Balancing Both

Just as the early days of SEO saw metrics like keyword density and backlinks evolve into more sophisticated measures of authority, the next wave will hopefully formalize density and overlap scores into standard optimization dashboards. For now, it remains a balancing act. If you choose overlap, it’s likely a safe-ish bet, as at least it gets you retrieved. Then, you have to hope the people reading your content as an answer find it engaging enough to stick around.

The machine decides if you are visible. The human decides if you are trusted. Semantic density sharpens meaning. Semantic overlap wins retrieval. The work is balancing both, then watching how readers engage, so you can keep improving.


This post was originally published on Duane Forrester Decodes.


Featured Image: CaptainMCity/Shutterstock

Ask An SEO: Should Small Brands Go All In On TikTok For Audience Growth? via @sejournal, @MordyOberstein

This week’s Ask An SEO question is about whether small brands should prioritize TikTok over Google to grow their audience:

“I keep hearing that TikTok is a better platform for small brands with an easier route to an audience. Do you think that Google is still relevant, or should I go all in on TikTok?”

The short answer to your question is that you do not want to pigeonhole your business into one channel, no matter the size. There’s also no such thing as an “easier” way. They are all hard.

I’m going to get the obvious out of the way so we can get to something beyond the usual answers to this question.

Your brand should be where your audience is.

Great, now that we didn’t spend four paragraphs saying the same thing that’s been said 100 times before, let me tell you something you want to consider beyond “be where your audience is.”

It’s Not About Channel, It’s About Traction

I have a lot of opinions here, so let me just “channel” my inner Big Lebowski and preface this with … this is just my opinion, man.

Stop thinking about channels. That’s way down the funnel (yet marketers make channels the seminal question all the time).

Start thinking about traction. How do you generate the most traction?

When I say “traction,” what I really mean is how to start resonating with your audience so that the “chatter” and momentum about who you are compound so that new doors of opportunity open up.

The answer to that question is not, “We will focus on TikTok.”

The answer is also not, “We will focus on Google.”

The answer is also not, “We will focus on YouTube.”

I could go on.

Now, there is another side to this: resources and operations. The question is, how do you balance traction with the amount of resources you have?

For smaller brands, I would think about: What can you do to gain traction that bigger brands have a hard time with?

For example, big brands have a very hard time with video content. They have all sorts of production standards, operations, and a litany of people who have a say, who shouldn’t even be within sniffing distance of having a say.

They can’t simply turn on their phone, record a video, and share something of value.

You can.

Does that mean you should focus on TikTok?

Nope.

It means you should think about what you can put out there that would resonate and help your audience, and does that work for the format?

If so, you may want to go with video shorts. I’m not sure why you would limit that to just TikTok.

Also, if your age demographic is not on TikTok, don’t do that. (“Being where your audience is” is a fundamental truth. Although I think the question is more about being in tune with your audience overall than “being where they are.” If you’re attuned to your audience, then you would know where they are and where to go just naturally.)

I’ll throw another example at you.

Big brands have a hard time communicating with honesty, transparency, and a basic level of authenticity. As a result, a lot of their content is “stale,” at best.

In this instance, trying to generate traction and even traffic by writing more authentic content that speaks to your audience, and not at them, seems quite reasonable.

In other words, the question is, “What resonates with your audience and what opportunities can you seize that bigger brands can’t?”

It’s a framework. It’s what resonates + what resources do you have + what vulnerabilities do the bigger brands in your vertical have that you can capitalize on.

There’s no one-size-fits-all answer to that. Forget your audience for a second, where are the vulnerabilities of the bigger brands in your space?

They might be super-focused on TikTok and have figured out all of the production hurdles I mentioned earlier, but they might not be focused on text-based content in a healthy way, if at all.

Is TikTok “easier” in that scenario?

Maybe not.

Don’t Pigeonhole Yourself

Every platform has its idiosyncrasies. One of the problems with going all-in on a platform is that your brand adopts those idiosyncrasies.

If I were all about Google traffic, my brand might sound like (as too many do) “SEO content.” Across the board. It all seeps through.

The problem with “channels” to me is that it produces a mindset of “optimizing” for the channel. When that happens – which inevitably it does (just look at all the SEO content on the web) – the only way out is very painful.

While you might start with the right mindset, it’s very easy to lose your brand’s actual voice along the way.

That can pigeonhole your brand’s ability to maneuver as time goes on.

For starters, one day what you had on TikTok may no longer exist (I’m just using TikTok as an example).

Your audience may evolve and grow older with you, and move to other forms of content consumption. The TikTok algorithm may gobble up your reach one day. Who knows.

What I am saying is, it is possible to wake up one day and what you had with a specific channel doesn’t exist anymore.

That’s a real problem.

That very real problem gets compounded if your overarching brand voice is impacted by your channel approach. Which it often is.

Now, you have to reinvent the wheel, so to speak.

Now, you have to adjust your channel approach (and never leave all your eggs in one basket), and you have to find your actual voice again.

This whole time, you were focused on speaking to a channel and what the channel demanded (i.e., the algorithm) and not your audience.

All of this is why I recommend a “traction-focused” approach. If you’re focused on traction, then this whole time, you’ve been building yourself up to become less and less reliant on the channel.

If you’re focused on traction, which inherently focuses on resonance, people start to come to you. You become a destination that people seek out, or, at a minimum, are familiar with.

That leaves you less vulnerable to changes within a specific channel.

It also helps you perform better across other channels. When you resonate and people start to recognize you, it makes performing easier (and less costly).

Let’s play it out.

You start creating material for TikTok, but you do it with a traction, not a channel mindset.

The content you produce starts to resonate. People start talking about you, tagging you on social, mentioning you in articles, etc.

All of that would, in theory, help your web content become more visible within organic search and your brand overall more visible in large language models (LLMs), no?

Let’s play it out even more.

One day, TikTok shuts down.

Now, you have to switch channels (old TV reference).

If you focused more on traction:

  1. You should have more direct traffic or branded search traffic than you had when you started your “TikTok-ing.”
  2. You should have more cachet to rank better if you decide to create content for Google Search (just as an example).

The opposite is true as well. If Google shut down one day, and you had to move to TikTok, you would:

  1. Have more direct traffic than when you started to focus on Google.
  2. Have more cachet and awareness to start building a following on TikTok.

It’s all one song.

Changing The Channel

I feel like, and this is a bit of a controversial take (for some reason), the less you “focus” on channels, the better.

The more you see a channel as less of a strategy and more of a way to actualize the traction you’re looking to create, the better off you’ll be.

You’ll also have an easier time answering questions like “Which channel is better?”.

To reiterate:

  • Don’t lose your brand voice to any channel.
  • Build up traction (resonance) so that when a channel changes, you’re not stuck.
  • Build up traction so that you already have cachet when pivoting to the new channel.
  • It’s better to be a destination than anything.
  • All of this depends on your vertical, your resources, your competition, and most importantly, what your audience needs from you.

The moment you think beyond “channels” is the moment you start operating with a bit more clarity about channels. (It’s a kind of “there is no spoon” sort of thing.)

Featured Image: Paulo Bobita/Search Engine Journal

Google AI Mode Adds Agentic Booking, Expands To More Countries via @sejournal, @MattGSouthern

Google is adding agentic booking features to AI Mode in Search, beginning with restaurant reservations for U.S. Google AI Ultra subscribers enrolled in Labs.

What’s New

Booking Reservations

AI Mode can interpret a detailed request, check real-time availability across reservation sites, and link you to the booking page to complete the task.

For businesses, that shifts more discovery and conversion activity inside Google’s surfaces.

Robby Stein wrote on The Keyword:

“We’re starting to roll out today with finding restaurant reservations, and expanding soon to local service appointments and event tickets.”

Screenshot from: blog.google/products/search/ai-mode-agentic-personalized/, August 2025.

Planning Features

Google is introducing planning features that make results easier to share and queries easier to tailor.

In the U.S., you can share an AI Mode response with others so they can ask follow-ups and continue research on their own, and you can revoke the link at any time.

Screenshot from: blog.google/products/search/ai-mode-agentic-personalized/, August 2025.

Separately, U.S. users who opt in to the Labs experiment can receive personalized dining suggestions informed by prior conversations and interactions in Search and Maps, with controls in Google Account settings.

How It Works

Under the hood, Google cites live web browsing via Project Mariner, partner integrations, and signals from the Knowledge Graph and Maps.

Named partners include OpenTable, Resy, Tock, Ticketmaster, StubHub, SeatGeek, and Booksy. Dining is first; local services and ticketing are next on the roadmap.

Availability

Availability is gated. Agentic reservations are limited to Google AI Ultra subscribers in the U.S. through the “Agentic capabilities in AI Mode” Labs experiment.

Personalization is U.S. and opt-in, with dining topics first. Link sharing is available in the U.S. Global access to AI Mode is expanding to more than 180 countries and territories in English, with additional languages planned.

Looking Ahead

AI Mode is moving from answer generation to task completion.

If your category relies on reservation or ticketing partners, verify inventory accuracy, hours, and policies now, and make sure your structured data and Business Profile attributes are clean.
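As a rough illustration of the kind of structured data to audit, a restaurant might publish a schema.org Restaurant block like the sketch below. The business name, URL, hours, and address here are hypothetical placeholders; only the property names come from the schema.org vocabulary.

```json
{
  "@context": "https://schema.org",
  "@type": "Restaurant",
  "name": "Example Bistro",
  "url": "https://www.example.com",
  "acceptsReservations": "https://www.example.com/reservations",
  "servesCuisine": "Italian",
  "telephone": "+1-555-0100",
  "openingHours": "Tu-Su 17:00-22:00",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "123 Example St",
    "addressLocality": "Springfield",
    "addressRegion": "IL",
    "postalCode": "62701"
  }
}
```

If an agent is checking availability and linking users to a booking page, fields like `acceptsReservations` and `openingHours` are exactly the sort of attributes worth keeping accurate and in sync with your reservation partner.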

Track how bookings and referrals appear in analytics as Google widens coverage to more tasks and regions.