Google AI Overviews Target Of Legal Complaints In The UK And EU

The Movement For An Open Web and other organizations filed a legal challenge against Google, alleging harm to UK news publishers. The crux of the legal filing is the allegation that Google’s AI Overviews product is using news content as part of its summaries and for grounding AI answers, but not allowing publishers to opt out of that use without also opting out of appearing in search results.

The Movement For An Open Web (MOW) in the UK published details of a complaint to the UK’s Competition and Markets Authority (CMA):

“Last week, the CMA announced plans to consult on how to make Google search fairer, including providing “more control and transparency for publishers over how their content collected for search is used, including in AI-generated responses.” However, the complaint from Foxglove, the Alliance and MOW warns that news organisations are already being harmed in the UK and action is needed immediately.

In particular, publishers urgently need the ability to opt out of Google’s AI summaries without being removed from search altogether. This is a measure that has already been proposed by other leading regulators, including the US Department of Justice and the South African Competition Commission. Foxglove is warning that without immediate action, the UK – and its news industry – risks being left behind, while other states take steps to protect independent news from Google.

Foxglove is therefore seeking interim measures to prevent Google misusing publisher content pending the outcome of the CMA’s more detailed review.”

Reuters reports that an EU antitrust complaint filed in Brussels seeks relief on the same grounds:

“Google’s core search engine service is misusing web content for Google’s AI Overviews in Google Search, which have caused, and continue to cause, significant harm to publishers, including news publishers in the form of traffic, readership and revenue loss.”

Publishers And SEOs Critical Of AI Overviews

Google is under increasing criticism from the publisher and SEO communities for sending fewer clicks to websites, although Google itself insists it is sending more traffic than ever. This may be one of those occasions where the phrase “let the judge decide” describes where this is all heading, because there are no signs that Google is backing down from its decade-long trend of showing fewer links and more answers.

Featured Image by Shutterstock/nitpicker

5 Content Marketing Ideas for August 2025

Marketers hoping to drive traffic and convert visitors in August 2025 can produce content tailored to students, pet owners, readers, spa enthusiasts, and value shoppers.

Content marketing is the act of creating, publishing, and promoting articles, videos, podcasts, and the like to attract, engage, and retain customers.

A downside of the tactic is the seemingly unending need to produce new material. With this in mind, here are five content marketing ideas your company can use in August 2025.

Discoverable Back-to-School Lists

A mom and a grade-school daughter in front of a school bus

Back-to-school product listicles can appear in Google Discover, leading to a surge in traffic.

Google Discover is a personalized article feed in Google’s Search mobile app, Chrome app, and various mobile pages.

The feature is Google’s way of helping folks discover relevant, interesting, and timely content, with an emphasis on timely.

Some professional search engine optimizers believe that Discover favors recent articles, such as news stories or seasonal shopping listicles. There is no guarantee Google Discover will pick up an article, but it can drive significant traffic when it does.

Most content marketers launch back-to-school content in July, yet August could be the month to publish product listicles aimed at Discover.

Here are some example titles:

  • “21 Essentials Every High School Student Forgot to Buy.”
  • “15 Back-to-School Deals You Cannot Afford to Miss.”
  • “10 STEM Toys to Boost Your Kid’s Grades.”

Celebrate Cats and Dogs

Photo of a cat and a dog

August 2025 has a “day” for both cats and dogs.

August 2025 features International Cat Day on the 8th and International Dog Day on the 26th.

These two pet-centered observances honor our feline and canine companions while also raising awareness about their overall well-being.

For content marketers, the cat and dog days offer an opportunity to engage with the millions of pet-loving shoppers.

Roughly two-thirds of American households own at least one pet, according to Forbes. Sixty-five million families have a dog, and 47 million keep a cat.

Pet supply retailers can certainly capitalize on the two occasions, but nearly any online store can connect pets to the products it sells. Here are some example titles:

  • A Pet Supply Store: “10 Ways to Spoil Your Pup on International Dog Day”
  • An Outdoor Gear Company: “The Ultimate Checklist for Hiking with Your Dog”
  • A Home Goods Retailer: “5 Tips for a Stylish and Pet-Proof Home”
  • A Car Accessories Store: “The Best Car Accessories for a Dog”

National Book Lovers Day

Photo of a woman reading a book on an outdoor patio

National observances offer an opportunity to associate content with real-world events.

Almost any national observance — such as National Book Lovers Day on August 9 — can serve as a content anchor. It’s an opportunity to associate your marketing with timely, real-world happenings, however niche.

The trick is connecting your products to the day’s theme.

Imagine an online home decor shop. The company does not sell books, but it can still write about Book Lovers Day. For example, it could publish an article titled “How to Decorate the Perfect Reading Nook.”

Similarly, an electronics store could produce a video sharing “The Top eReaders for National Book Lovers Day.” A tea merchant might publish clever genre pairing guides.

National Relaxation Day

Photo of a woman in her 20s in a swimming pool

Relaxation can mean different things to consumers, making it ideal for content marketers.

Observed on August 15, 2025, National Relaxation Day is about taking a breather. For some, it will be a day at the spa. For others, relaxation will be watching the Seattle Mariners play the New York Mets at Citi Field.

Regardless, National Relaxation Day comes at an opportune time. As summer ends, many folks look to unwind. It’s an opportunity for businesses to position products for self-care and stress relief.

Here are some ideas.

  • Beauty boutique: “Step-by-Step Guide to an At-Home Spa Day”
  • Candle purveyor: “5 Calming Scents for Your Home”
  • Hobby shop: “5 Screen-Free Hobbies for Relaxation”

Interactive Pricing

Content marketing is evolving to include AI-generated interactive site experiences.

Generative artificial intelligence has become ubiquitous. Content marketers often prompt genAI platforms for article topics and outlines.

In August 2025, take your company’s AI use to the next level. Instead of just generating articles, create an interactive price-related tool using your favorite AI model and a code-generation platform such as Replit.

Here’s an example using an online secondhand clothing shop.

This shop carefully curates clothing from thrift shops and estate sales. The staff cleans, repairs, and sells the items on the shop’s ecommerce site. But some shoppers question the store’s prices. “Aren’t these items just used shirts and pants?”

To respond, the store’s content team uses AI to generate an interactive “cost per wear” calculator, reframing the conversation from “price” to “value.” The result is a tangible, data-driven justification for a higher-priced, quality purchase.
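The underlying logic is simple enough to sketch. The Python below is a hypothetical version of the calculation (an AI-generated tool for product pages would more likely ship as in-page JavaScript); the function name and parameters are invented for this example:

```python
def cost_per_wear(price: float, wears_per_month: int, months_of_use: int) -> float:
    """Spread the purchase price over the expected number of wears."""
    total_wears = wears_per_month * months_of_use
    if total_wears <= 0:
        raise ValueError("expected wears must be positive")
    return round(price / total_wears, 2)

# A $45 curated vintage shirt, worn 4 times a month for a year:
print(cost_per_wear(45.00, 4, 12))  # 0.94 -- under a dollar per wear
```

An interactive version would let the shopper adjust the wear assumptions themselves, which is what shifts the framing from sticker price to long-term value.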

Once generated, deploy the tool on product detail pages, category pages, and even social media campaigns.

Don’t let hype about AI agents get ahead of reality

Google’s recent unveiling of what it calls a “new class of agentic experiences” feels like a turning point. At its I/O 2025 event in May, for example, the company showed off a digital assistant that didn’t just answer questions; it helped work on a bicycle repair by finding a matching user manual, locating a YouTube tutorial, and even calling a local store to ask about a part, all with minimal human nudging. Such capabilities could soon extend far outside the Google ecosystem. The company has introduced an open standard called Agent2Agent, or A2A, which aims to let agents from different companies talk to each other and work together.

The vision is exciting: Intelligent software agents that act like digital coworkers, booking your flights, rescheduling meetings, filing expenses, and talking to each other behind the scenes to get things done. But if we’re not careful, we’re going to derail the whole idea before it has a chance to deliver real benefits. As with many tech trends, there’s a risk of hype racing ahead of reality. And when expectations get out of hand, a backlash isn’t far behind.

Let’s start with the term “agent” itself. Right now, it’s being slapped on everything from simple scripts to sophisticated AI workflows. There’s no shared definition, which leaves plenty of room for companies to market basic automation as something much more advanced. That kind of “agentwashing” doesn’t just confuse customers; it invites disappointment. We don’t necessarily need a rigid standard, but we do need clearer expectations about what these systems are supposed to do, how autonomously they operate, and how reliably they perform.

And reliability is the next big challenge. Most of today’s agents are powered by large language models (LLMs), which generate probabilistic responses. These systems are powerful, but they’re also unpredictable. They can make things up, go off track, or fail in subtle ways—especially when they’re asked to complete multistep tasks, pulling in external tools and chaining LLM responses together. A recent example: Users of Cursor, a popular AI programming assistant, were told by an automated support agent that they couldn’t use the software on more than one device. There were widespread complaints and reports of users canceling their subscriptions. But it turned out the policy didn’t exist. The AI had invented it.

In enterprise settings, this kind of mistake could create immense damage. We need to stop treating LLMs as standalone products and start building complete systems around them—systems that account for uncertainty, monitor outputs, manage costs, and layer in guardrails for safety and accuracy. These measures can help ensure that the output adheres to the requirements expressed by the user, obeys the company’s policies regarding access to information, respects privacy issues, and so on. Some companies, including AI21 (which I cofounded and which has received funding from Google), are already moving in that direction, wrapping language models in more deliberate, structured architectures. Our latest launch, Maestro, is designed for enterprise reliability, combining LLMs with company data, public information, and other tools to ensure dependable outputs.
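The Cursor incident suggests what the simplest such guardrail might look like. The sketch below is a hypothetical illustration, not Maestro or any real product: `call_llm` is a stand-in for an LLM client, and the policy store is invented for the example. The idea is that a policy claim reaches the user only if it is grounded in an authoritative source:

```python
# Hypothetical guardrail sketch -- not Maestro or any real product.
# `call_llm` stands in for a real LLM client; the policy store, not the
# model, is the authoritative source for what company policy says.
KNOWN_POLICIES = {
    "refunds within 30 days",
    "one account per email address",
}

def call_llm(prompt: str) -> str:
    # Stand-in for a model call; here it invents a policy, much as the
    # Cursor support agent did.
    return "Per policy, the software may only be used on one device."

def answer_policy_question(prompt: str) -> str:
    draft = call_llm(prompt)
    # Guardrail: only surface answers grounded in the policy store.
    if not any(policy in draft.lower() for policy in KNOWN_POLICIES):
        return "I can't confirm that policy; escalating to a human agent."
    return draft

print(answer_policy_question("Can I use this on two devices?"))
# -> "I can't confirm that policy; escalating to a human agent."
```

Real systems layer many such checks (uncertainty estimates, output monitors, access controls), but the principle is the same: the model proposes, the system verifies.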

Still, even the smartest agent won’t be useful in a vacuum. For the agent model to work, different agents need to cooperate (booking your travel, checking the weather, submitting your expense report) without constant human supervision. That’s where Google’s A2A protocol comes in. It’s meant to be a universal language that lets agents share what they can do and divide up tasks. In principle, it’s a great idea.

In practice, A2A still falls short. It defines how agents talk to each other, but not what they actually mean. If one agent says it can provide “wind conditions,” another has to guess whether that’s useful for evaluating weather on a flight route. Without a shared vocabulary or context, coordination becomes brittle. We’ve seen this problem before in distributed computing. Solving it at scale is far from trivial.
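To make the semantic gap concrete, here is a deliberately naive toy (invented for illustration; it does not use the real A2A schema or message format). Two agents share only free-text capability labels, so a relevant provider is missed because the words never line up:

```python
# Illustration only: NOT the real A2A schema, just a toy showing why
# shared syntax without shared semantics makes coordination brittle.
weather_agent = {"capabilities": ["wind conditions", "precipitation outlook"]}

def can_help(provider: dict, need: str) -> bool:
    # Naive keyword matching -- the only "semantics" available when
    # agents exchange free-text capability labels.
    words = {w for cap in provider["capabilities"] for w in cap.split()}
    return any(w in need.split() for w in words)

# A travel agent asks about weather along a flight route:
print(can_help(weather_agent, "en-route weather for a flight path"))  # False
```

The weather agent is exactly what the travel agent needs, but without a shared vocabulary or ontology, neither side can tell.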

There’s also the assumption that agents are naturally cooperative. That may hold inside Google or another single company’s ecosystem, but in the real world, agents will represent different vendors, customers, or even competitors. For example, if my travel planning agent is requesting price quotes from your airline booking agent, and your agent is incentivized to favor certain airlines, my agent might not be able to get me the best or least expensive itinerary. Without some way to align incentives through contracts, payments, or game-theoretic mechanisms, expecting seamless collaboration may be wishful thinking.

None of these issues are insurmountable. Shared semantics can be developed. Protocols can evolve. Agents can be taught to negotiate and collaborate in more sophisticated ways. But these problems won’t solve themselves, and if we ignore them, the term “agent” will go the way of other overhyped tech buzzwords. Already, some CIOs are rolling their eyes when they hear it.

That’s a warning sign. We don’t want the excitement to paper over the pitfalls, only to let developers and users discover them the hard way and develop a negative perspective on the whole endeavor. That would be a shame. The potential here is real. But we need to match the ambition with thoughtful design, clear definitions, and realistic expectations. If we can do that, agents won’t just be another passing trend; they could become the backbone of how we get things done in the digital world.

Yoav Shoham is a professor emeritus at Stanford University and cofounder of AI21 Labs. His 1993 paper on agent-oriented programming received the AI Journal Classic Paper Award. He is coauthor of Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations, a standard textbook in the field.

Google’s electricity demand is skyrocketing

We got two big pieces of energy news from Google this week. The company announced that it’s signed an agreement to purchase electricity from a fusion company’s forthcoming first power plant. Google also released its latest environmental report, which shows that its energy use from data centers has doubled since 2020.

Taken together, these two bits of news offer a fascinating look at just how desperately big tech companies are hunting for clean electricity to power their data centers as energy demand and emissions balloon in the age of AI. Of course, we don’t know exactly how much of this pollution is attributable to AI because Google doesn’t break that out. (Also a problem!) So, what’s next and what does this all mean? 

Let’s start with fusion: Google’s deal with Commonwealth Fusion Systems is intended to provide the tech giant with 200 megawatts of power. This will come from Commonwealth’s first commercial plant, a facility planned for Virginia that the company refers to as the Arc power plant. The agreement represents half the plant’s planned capacity.

What’s important to note here is that this power plant doesn’t exist yet. In fact, Commonwealth still needs to get its Sparc demonstration reactor, located outside Boston, up and running. That site, which I visited in the fall, should be completed in 2026.

(An aside: This isn’t the first deal between Big Tech and a fusion company. Microsoft signed an agreement with Helion a couple of years ago to buy 50 megawatts of power from a planned power plant, scheduled to come online in 2028. Experts expressed skepticism in the wake of that deal, as my colleague James Temple reported.)

Nonetheless, Google’s announcement is a big moment for fusion, in part because of the size of the commitment and also because Commonwealth, a spinout company from MIT’s Plasma Science and Fusion Center, is seen by many in the industry as a likely candidate to be the first to get a commercial plant off the ground. (MIT Technology Review is owned by MIT but is editorially independent.)

Google leadership was very up-front about the length of the timeline. “We would certainly put this in the long-term category,” said Michael Terrell, Google’s head of advanced energy, in a press call about the deal.

The news of Google’s foray into fusion comes just days after the tech giant’s release of its latest environmental report. While the company highlighted some wins, some of the numbers in this report are eye-catching, and not in a positive way.

Google’s emissions have increased by over 50% since 2019, rising 6% in the last year alone. That’s decidedly the wrong direction for a company that’s set a goal to reach net-zero greenhouse-gas emissions by the end of the decade.

It’s true that the company has committed billions to clean energy projects, including big investments in next-generation technologies like advanced nuclear and enhanced geothermal systems. Those deals have helped dampen emissions growth, but keeping pace with the energy demand the company is seeing is arguably an impossible task.

Google’s electricity consumption from data centers was up 27% from the year before. It has doubled since 2020, reaching over 30 terawatt-hours, which is nearly the annual electricity consumption of the entire country of Ireland.

As an outsider, it’s tempting to point the finger at AI, since that technology has crashed into the mainstream and percolated into every corner of Google’s products and business. And yet the report downplays the role of AI. Here’s one bit that struck me:

“However, it’s important to note that our growing electricity needs aren’t solely driven by AI. The accelerating growth of Google Cloud, continued investments in Search, the expanding reach of YouTube, and more, have also contributed to this overall growth.”

There is enough wiggle room in that statement to drive a large electric truck through. When I asked about the relative contributions here, company representative Mara Harris said via email that they don’t break out what portion comes from AI. When I followed up asking if the company didn’t have this information or just wouldn’t share it, she said she’d check but didn’t get back to me.

I’ll make the point here that we’ve made before, including in our recent package on AI and energy: Big companies should be disclosing more about the energy demands of AI. We shouldn’t be guessing at this technology’s effects.

Google has put a ton of effort and resources into setting and chasing ambitious climate goals. But as its energy needs and those of the rest of the industry continue to explode, it’s obvious that this problem is getting tougher, and it’s also clear that more transparency is a crucial part of the way forward.

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

Inside India’s scramble for AI independence

In Bengaluru, India, Adithya Kolavi felt a mix of excitement and validation as he watched DeepSeek unleash its disruptive language model on the world earlier this year. The Chinese technology rivaled the best of the West in terms of benchmarks, but it had been built with far less capital in far less time. 

“I thought: ‘This is how we disrupt with less,’” says Kolavi, the 20-year-old founder of the Indian AI startup CognitiveLab. “If DeepSeek could do it, why not us?” 

But for Abhishek Upperwal, founder of Soket AI Labs and architect of one of India’s earliest efforts to develop a foundation model, the moment felt more bittersweet. 

Upperwal’s model, called Pragna-1B, had struggled to stay afloat with tiny grants while he watched global peers raise millions. The multilingual model had a relatively modest 1.25 billion parameters and was designed to reduce the “language tax,” the extra costs that arise because India—unlike the US or even China—has a multitude of languages to support. His team had trained it, but limited resources meant it couldn’t scale. As a result, he says, the project became a proof of concept rather than a product. 

“If we had been funded two years ago, there’s a good chance we’d be the ones building what DeepSeek just released,” he says.

Kolavi’s enthusiasm and Upperwal’s dismay reflect the spectrum of emotions among India’s AI builders. Despite its status as a global tech hub, the country lags far behind the likes of the US and China when it comes to homegrown AI. That gap has opened largely because India has chronically underinvested in R&D, institutions, and invention. Meanwhile, since no single native language is spoken by a majority of the population, training language models is far more complicated than it is elsewhere.

Historically known as the global back office for the software industry, India has a tech ecosystem that evolved with a services-first mindset. Giants like Infosys and TCS built their success on efficient software delivery, but invention was neither prioritized nor rewarded. Meanwhile, India’s R&D spending hovered at just 0.65% of GDP ($25.4 billion) in 2024, far behind China’s 2.68% ($476.2 billion) and the US’s 3.5% ($962.3 billion). The muscle to invent and commercialize deep tech, from algorithms to chips, was just never built.

Isolated pockets of world-class research do exist within government agencies like the DRDO (Defense Research & Development Organization) and ISRO (Indian Space Research Organization), but their breakthroughs rarely spill into civilian or commercial use. India lacks the bridges to connect risk-taking research to commercial pathways, the way DARPA does in the US. Meanwhile, much of India’s top talent migrates abroad, drawn to ecosystems that better understand and, crucially, fund deep tech.

So when the open-source foundation model DeepSeek-R1 suddenly outperformed many global peers, it struck a nerve. This launch by a Chinese startup prompted Indian policymakers to confront just how far behind the country was in AI infrastructure, and how urgently it needed to respond.

India responds

In January 2025, 10 days after DeepSeek-R1’s launch, the Ministry of Electronics and Information Technology (MeitY) solicited proposals for India’s own foundation models, which are large AI models that can be adapted to a wide range of tasks. Its public tender invited private-sector cloud and data‑center companies to reserve GPU compute capacity for government‑led AI research. 

Providers including Jio, Yotta, E2E Networks, Tata, AWS partners, and CDAC responded. Through this arrangement, MeitY suddenly had access to nearly 19,000 GPUs at subsidized rates, repurposed from private infrastructure and allocated specifically to foundational AI projects. This triggered a surge of proposals from companies wanting to build their own models. 

Within two weeks, it had 67 proposals in hand. That number tripled by mid-March. 

In April, the government announced plans to develop six large-scale models by the end of 2025, plus 18 additional AI applications targeting sectors like agriculture, education, and climate action. Most notably, it tapped Sarvam AI to build a 70-billion-parameter model optimized for Indian languages and needs. 

For a nation long restricted by limited research infrastructure, things moved at record speed, marking a rare convergence of ambition, talent, and political will.

“India could do a Mangalyaan in AI,” said Gautam Shroff of IIIT-Delhi, referencing the country’s cost-effective, and successful, Mars orbiter mission. 

Jaspreet Bindra, cofounder of AI&Beyond, an organization focused on teaching AI literacy, captured the urgency: “DeepSeek is probably the best thing that happened to India. It gave us a kick in the backside to stop talking and start doing something.”

The language problem

One of the most fundamental challenges in building foundational AI models for India is the country’s sheer linguistic diversity. With 22 official languages, hundreds of dialects, and millions of people who are multilingual, India poses a problem that few existing LLMs are equipped to handle.

Whereas a massive amount of high-quality web data is available in English, Indian languages collectively make up less than 1% of online content. The lack of digitized, labeled, and cleaned data in languages like Bhojpuri and Kannada makes it difficult to train LLMs that understand how Indians actually speak or search.

Global tokenizers, which break text into units a model can process, also perform poorly on many Indian scripts, misinterpreting characters or skipping some altogether. As a result, even when Indian languages are included in multilingual models, they’re often poorly understood and inaccurately generated.

And unlike OpenAI and DeepSeek, which achieved scale using structured English-language data, Indian teams often begin with fragmented and low-quality data sets encompassing dozens of Indian languages. This makes the early steps of training foundation models far more complex.

Nonetheless, a small but determined group of Indian builders is starting to shape the country’s AI future.

For example, Sarvam AI has created OpenHathi-Hi-v0.1, an open-source Hindi language model that shows the Indian AI field’s growing ability to address the country’s vast linguistic diversity. The model, built on Meta’s Llama 2 architecture, was trained on 40 billion tokens of Hindi and related Indian-language content, making it one of the largest open-source Hindi models available to date.

Pragna-1B, the multilingual model from Upperwal, is more evidence that India could solve for its own linguistic complexity. Trained on 300 billion tokens for just $250,000, it introduced a technique called “balanced tokenization” to address a unique challenge in Indian AI, enabling a 1.25-billion-parameter model to behave like a much larger one.

The issue is that Indian languages use complex scripts and agglutinative grammar, where words are formed by stringing together many smaller units of meaning using prefixes and suffixes. Unlike English, which separates words with spaces and follows relatively simple structures, Indian languages like Hindi, Tamil, and Kannada often lack clear word boundaries and pack a lot of information into single words. Standard tokenizers struggle with such inputs. They end up breaking Indian words into too many tokens, which bloats the input and makes it harder for models to understand the meaning efficiently or respond accurately.
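A rough way to see the inflation, assuming a byte-level tokenizer as the worst case: every Devanagari code point occupies three bytes in UTF-8, so byte-oriented vocabularies fragment Hindi words far more than English ones. (Real subword tokenizers soften this effect but do not eliminate it.)

```python
# Rough illustration, not a production tokenizer: count byte-level units.
# Each Devanagari code point is 3 bytes in UTF-8, so byte-oriented
# vocabularies see far more units per word in Hindi than in English.
def byte_tokens(text: str) -> int:
    return len(text.encode("utf-8"))

print(byte_tokens("namaste"))  # 7  (7 ASCII characters, 1 byte each)
print(byte_tokens("नमस्ते"))    # 18 (6 code points, 3 bytes each)
```

Every extra unit consumes context window and compute, which is the bloat a balanced tokenizer is designed to avoid.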

With the new technique, however, “a billion-parameter model was equivalent to a 7 billion one like Llama 2,” Upperwal says. This performance was particularly marked in Hindi and Gujarati, where global models often underperform because of limited multilingual training data. It was a reminder that with smart engineering, small teams could still push boundaries.

Upperwal eventually repurposed his core tech to build speech APIs for 22 Indian languages, a more immediate solution better suited to rural users who are often left out of English-first AI experiences.

“If the path to AGI is a hundred-step process, training a language model is just step one,” he says. 

At the other end of the spectrum are startups with more audacious aims. Krutrim-2, for instance, is a 12-billion-parameter multilingual language model optimized for English and 22 Indian languages. 

Krutrim-2 is attempting to solve India’s specific problems of linguistic diversity, low-quality data, and cost constraints. The team built a custom Indic tokenizer, optimized training infrastructure, and designed models for multimodal and voice-first use cases from the start, crucial in a country where text interfaces can be a problem.

Krutrim’s bet is that its approach will not only enable Indian AI sovereignty but also offer a model for AI that works across the Global South.

Besides public funding and compute infrastructure, India also needs institutional support for talent, the research depth, and the long-horizon capital that together produce globally competitive science.

While venture capital still hesitates to bet on research, new experiments are emerging. Paras Chopra, an entrepreneur who previously built and sold the software-as-a-service company Wingify, is now personally funding Lossfunk, a Bell Labs–style AI residency program designed to attract independent researchers with a taste for open-source science. 

“We don’t have role models in academia or industry,” says Chopra. “So we’re creating a space where top researchers can learn from each other and have startup-style equity upside.”

Government-backed bet on sovereign AI

The clearest marker of India’s AI ambitions came when the government selected Sarvam AI to develop a model focused on Indian languages and voice fluency.

The idea is that it would not only help Indian companies compete in the global AI arms race but benefit the wider population as well. “If it becomes part of the India stack, you can educate hundreds of millions through conversational interfaces,” says Bindra. 

Sarvam was given access to 4,096 Nvidia H100 GPUs for training a 70-billion-parameter Indian language model over six months. (The company previously released a 2-billion-parameter model trained in 10 Indian languages, called Sarvam-1.)

Sarvam’s project and others are part of a larger strategy called the IndiaAI Mission, a $1.25 billion national initiative launched in March 2024 to build out India’s core AI infrastructure and make advanced tools more widely accessible. Led by MeitY, the mission is focused on supporting AI startups, particularly those developing foundation models in Indian languages and applying AI to key sectors such as health care, education, and agriculture.

Under its compute program, the government is deploying more than 18,000 GPUs, including nearly 13,000 high-end H100 chips, to a select group of Indian startups that currently includes Sarvam, Upperwal’s Soket Labs, Gnani AI, and Gan AI.

The mission also includes plans to launch a national multilingual data set repository, establish AI labs in smaller cities, and fund deep-tech R&D. The broader goal is to equip Indian developers with the infrastructure needed to build globally competitive AI and ensure that the results are grounded in the linguistic and cultural realities of India and the Global South.

According to Abhishek Singh, CEO of IndiaAI and an officer with MeitY, India’s broader push into deep tech is expected to raise around $12 billion in research and development investment over the next five years. 

This includes approximately $162 million through the IndiaAI Mission, with about $32 million earmarked for direct startup funding. The National Quantum Mission is contributing another $730 million to support India’s ambitions in quantum research. In addition to this, the national budget document for 2025-26 announced a $1.2 billion Deep Tech Fund of Funds aimed at catalyzing early-stage innovation in the private sector.

The rest, nearly $9.9 billion, is expected to come from private and international sources including corporate R&D, venture capital firms, high-net-worth individuals, philanthropists, and global technology leaders such as Microsoft. 

IndiaAI has now received more than 500 applications from startups proposing use cases in sectors like health, governance, and agriculture. 

“We’ve already announced support for Sarvam, and 10 to 12 more startups will be funded solely for foundational models,” says Singh. Selection criteria include access to training data, talent depth, sector fit, and scalability.

Open or closed?

The IndiaAI program, however, is not without controversy. Sarvam’s model is being built as a closed system, not open-source, despite its public funding. That has sparked debate about the proper balance between private enterprise and the public good.

“True sovereignty should be rooted in openness and transparency,” says Amlan Mohanty, an AI policy specialist. He points to DeepSeek-R1, which despite its 671-billion-parameter size was made freely available for commercial use.

Its release allowed developers around the world to fine-tune it on low-cost GPUs, creating faster variants and extending its capabilities to non-English applications.

“Releasing an open-weight model with efficient inference can democratize AI,” says Hancheng Cao, an assistant professor of information systems and operations management at Emory University. “It makes it usable by developers who don’t have massive infrastructure.”

IndiaAI, however, has taken a neutral stance on whether publicly funded models should be open-source. 

“We didn’t want to dictate business models,” says Singh. “India has always supported open standards and open source, but it’s up to the teams. The goal is strong Indian models, whatever the route.”

There are other challenges as well. In late May, Sarvam AI unveiled Sarvam-M, a 24-billion-parameter multilingual LLM fine-tuned for 10 Indian languages and built on top of Mistral Small, an efficient model developed by the French company Mistral AI. Sarvam’s cofounder Vivek Raghavan called the model “an important stepping stone on our journey to build sovereign AI for India.” But its download numbers were underwhelming, with only 300 in the first two days. The venture capitalist Deedy Das called the launch “embarrassing.”

And the issues go beyond the lukewarm early reception. Many developers in India still lack easy access to GPUs, and the broader ecosystem for Indian-language AI applications is still nascent.

The compute question

Compute scarcity is emerging as one of the most significant bottlenecks in generative AI, not just in India but across the globe. For countries still heavily reliant on imported GPUs and lacking domestic fabrication capacity, the cost of building and running large models is often prohibitive. 

India still imports most of its chips rather than producing them domestically, and training large models remains expensive. That’s why startups and researchers alike are focusing on software-level efficiencies that involve smaller models, better inference, and fine-tuning frameworks that optimize for performance on fewer GPUs.
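One widely used example of such a software-level efficiency (not tied to any specific Indian lab) is low-rank adaptation, or LoRA, which fine-tunes a small adapter instead of a full weight matrix. The sketch below only does the parameter-count arithmetic, with a hypothetical layer size:

```python
# Parameter-count arithmetic for full fine-tuning vs. low-rank adapters (LoRA),
# one widely used way to fine-tune large models on fewer GPUs.
# The 4096 x 4096 layer below is hypothetical, not taken from any Indian model.

def full_finetune_params(d_in: int, d_out: int) -> int:
    """Trainable parameters when updating the full d_in x d_out weight matrix."""
    return d_in * d_out

def lora_params(d_in: int, d_out: int, rank: int) -> int:
    """Trainable parameters for a rank-r adapter pair: d_in x r plus r x d_out."""
    return d_in * rank + rank * d_out

full = full_finetune_params(4096, 4096)   # 16,777,216 trainable parameters
lora = lora_params(4096, 4096, rank=8)    # 65,536 trainable parameters

print(f"full fine-tune: {full:,} params")
print(f"LoRA rank-8:    {lora:,} params ({full // lora}x fewer)")
```

At rank 8 the adapter trains 256 times fewer parameters for this layer, which is the kind of saving that makes fine-tuning feasible on modest GPU clusters.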

“The absence of infrastructure doesn’t mean the absence of innovation,” says Cao. “Supporting optimization science is a smart way to work within constraints.” 

Yet Singh of IndiaAI argues that the tide is turning on the infrastructure challenge thanks to the new government programs and private-public partnerships. “I believe that within the next three months, we will no longer face the kind of compute bottlenecks we saw last year,” he says.

India also has a cost advantage.

According to Gupta, building a hyperscale data center in India costs about $5 million, roughly half what it would cost in markets like the US, Europe, or Singapore. That’s thanks to affordable land, lower construction and labor costs, and a large pool of skilled engineers. 

For now, India’s AI ambitions seem less about leapfrogging OpenAI or DeepSeek and more about strategic self-determination. Whether its approach takes the form of smaller sovereign models, open ecosystems, or public-private hybrids, the country is betting that it can chart its own course. 

While some experts argue that the government’s action, or reaction (to DeepSeek), is performative and aligned with its nationalistic agenda, many startup founders are energized. They see the growing collaboration between the state and the private sector as a real opportunity to overcome India’s long-standing structural challenges in tech innovation.

At a Meta summit held in Bengaluru last year, Nandan Nilekani, the chairman of Infosys, urged India to resist chasing a me-too AI dream. 

“Let the big boys in the Valley do it,” he said of building LLMs. “We will use it to create synthetic data, build small language models quickly, and train them using appropriate data.” 

His view that India should prioritize strength over spectacle met a divided reception. But it reflects a growing debate over whether India should play a different game altogether.

“Trying to dominate every layer of the stack isn’t realistic, even for China,” says Shobhankita Reddy, a researcher at the Takshashila Institution, an Indian public policy nonprofit. “Dominate one layer, like applications, services, or talent, so you remain indispensable.” 

Correction: We amended Reddy’s name.

The Download: India’s AI independence, and predicting future epidemics

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Inside India’s scramble for AI independence

Despite its status as a global tech hub, India lags far behind the likes of the US and China when it comes to homegrown AI.

That gap has opened largely because India has chronically underinvested in R&D, institutions, and invention. Meanwhile, since no single native language is spoken by a majority of the population, training language models is far more complicated than it is elsewhere.

So when the open-source foundation model DeepSeek-R1 suddenly outperformed many global peers, it struck a nerve. This launch by a Chinese startup prompted Indian policymakers to confront just how far behind the country was in AI infrastructure—and how urgently it needed to respond. Read the full story.

—Shadma Shaikh

Job titles of the future: Pandemic oracle

Officially, Conor Browne is a biorisk consultant. Based in Belfast, Northern Ireland, he has advanced degrees in security studies and medical and business ethics, along with United Nations certifications in counterterrorism and conflict resolution.

Early in the emergence of SARS-CoV-2, international energy conglomerates seeking expert guidance on navigating the potential turmoil in markets and transportation became his main clients. 

Having studied the 2002 SARS outbreak, he predicted the exponential spread of the new airborne virus. In fact, he forecast the epidemic’s broadscale impact and its implications for business so accurately that he has come to be seen as a pandemic oracle. Read the full story.

—Britta Shoot

This story is from the most recent print edition of MIT Technology Review, which explores power—who has it, and who wants it. Subscribe here to receive future copies once they drop.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Donald Trump’s ‘big beautiful bill’ has passed
Which is terrible news for the clean energy industry. (Vox)
+ An energy-affordability crisis is looming in the US. (The Atlantic $)
+ The President struck deals with House Republican holdouts to get it over the line. (WSJ $)
+ The Trump administration has shut down more than 100 climate studies. (MIT Technology Review)

2 Daniel Gross is joining Meta’s superintelligence lab 
He’s jumping ship from the startup he co-founded with Ilya Sutskever. (Bloomberg $)
+ Sutskever is stepping into the CEO role in his absence. (TechCrunch)
+ Here’s what we can infer from Meta’s recent hires. (Semafor)

3 AI’s energy demands could destabilize the global supply
That’s according to the head of the world’s largest transformer maker. (FT $)
+ We did the math on AI’s energy footprint. Here’s the story you haven’t heard. (MIT Technology Review)

4 Elon Musk is threatening to start his own political party
Would anyone vote for him, though? (WP $)
+ You’d think his bruising experience in the White House would have put him off. (NY Mag $)

5 The US has lifted export curbs on chip design software to China
It suggests that frosty relations between the nations may be thawing. (Reuters)

6 Trump officials are going after this ICE warning app
But lawyers say there’s nothing illegal about it. (Wired $)
+ Downloads of ICEBlock are rising. (NBC News)

7 Wildfires are making it harder to monitor air pollutants
Current tracking technology isn’t built to accommodate shifting smoke. (Undark)
+ How AI can help spot wildfires. (MIT Technology Review)

8 Apple’s iOS 26 software can detect nudity on FaceTime calls
The feature will pause the call and ask if you want to continue. (Gizmodo)

9 Threads has finally launched DMs
But users are arguing there should be a way to opt out of them entirely. (TechCrunch)

10 You can hire a robot to write a handwritten note 🖊🤖
Or, y’know, pick up a pen and write it yourself. (Insider $)

Quote of the day

“It’s almost like we never even spoke.”

Richard Wilson, an online dater who is convinced his most recent love interest used a chatbot to converse with him online before they awkwardly met in person, tells the Washington Post about his disappointment.

One more thing

Deepfakes of your dead loved ones are a booming Chinese business

Once a week, Sun Kai has a video call with his mother, and they discuss his day-to-day life. But Sun’s mother died five years ago, and the person he’s talking to isn’t actually a person, but a digital replica he made of her.

There are plenty of people like Sun who want to use AI to preserve, animate, and interact with lost loved ones as they mourn and try to heal. The market is particularly strong in China, where at least half a dozen companies are now offering such technologies and thousands of people have already paid for them.

But some question whether interacting with AI replicas of the dead is truly a healthy way to process grief, and it’s not entirely clear what the legal and ethical implications of this technology may be. Read the full story.

—Zeyi Yang

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)
+ There’s nothing cooler than wooden interiors right now.
+ Talented artist Ian Robinson creates beautiful paintings of people’s vinyl collections.
+ You’ll find me in every one of Europe’s top wine destinations this summer.
+ Here’s everything you need to remember before Stranger Things returns this fall.

MNLY’s At-Home AI Powers Men’s Health

Next-gen health and wellness is an apt description of MNLY. Luke Hartelust launched the platform in 2021, pronouncing it “manly,” and then pivoted twice while remaining focused on modern care for men.

The current version combines AI with home-based testing, diagnoses, and nutrition. Customers pay an upfront fee and a monthly subscription afterward.

In our recent conversation, Luke shared the company’s origins, growth, mistakes, and more. The entire audio of that discussion is embedded below. The transcript is condensed and edited for clarity.

Eric Bandholz: Tell us about your work.

Luke Hartelust: I’m the founder and CEO of MNLY, a men’s health and wellness platform. We use at-home diagnostics, AI, and advanced tech to create custom supplement, lifestyle, and nutrition solutions.

My background is in fitness franchising. I led multiple locations across Southern California and worked closely with male entrepreneurs and executives. That experience revealed gaps in men’s healthcare, particularly in the lack of proactive, preventative approaches.

Telehealth has improved access to care, but the model has flaws. Most providers have long waitlists — often up to 90 days for lab results and treatment plans due to backlogged consultations.

At MNLY, we streamlined the process. We removed the practitioner bottleneck and built a scientific advisory board to train a complex AI model. The result is an automated analysis and quick, personalized health recommendations, going from signup to actionable results much faster than traditional telehealth providers.

Bandholz: Walk me through the customer journey.

Hartelust: Customers start by purchasing our at-home blood sample kit — a simple finger prick using dried blood spot sampling, eliminating the friction of in-person visits. Once received, our partner lab processes samples within hours.

While awaiting results, users complete an 86-question health assessment. It focuses on seven areas: concentration, confidence, stamina, mood, sleep, libido, and recovery.

We combine lab and assessment data — roughly 100 data points per user — to generate a clean, easy-to-understand health dashboard. It explains results and provides reference ranges, visuals, and comparison metrics. An overall health score benchmarks the data.

Next, our AI builds a personalized health plan, including nutrition suggestions based on biomarkers and lifestyle hacks such as breathwork and even testicular cooling for hormone support.

Finally, we formulate a custom dietary supplement. Based on the user’s data, our AI prescribes specific nutrients and doses. We then manufacture the supplement and ship every 30 days. It’s fully automated.

Bandholz: What does it cost customers?

Hartelust: The initial lab kit is $199. Supplements are $249 per month.

We recommend retesting with new blood samples every three to five months. Each time new bloodwork is submitted, our system updates all biomarkers, adjusts supplement dosages, and revises the health plan. Users experience clear visual progress, including changes to their overall health score.

We’ve just completed our first year in business. It’s our third iteration under the MNLY brand; we launched in 2021 as a nutritional subscription box provider and made two attempts at that model.

A year ago, with this version, we didn’t prioritize retention. Our small team focused on product development, and we lacked an automated customer journey to guide and remind users about retesting. We started those reminders 90 days ago.

From an ecommerce perspective, not building that journey sooner was one of our biggest missteps. Many customers experienced strong results in the first six weeks — improved libido, mood, sleep, recovery, and focus — but when those effects plateaued, some dropped off around the five- or six-month mark. Even though biological improvements continued, users weren’t always aware without updated data. That’s why consistent testing and communication are now central to our retention strategy.

Bandholz: What’s your growth strategy?

Hartelust: As a startup raising capital in a tough market, I needed a strategic partner to expand our reach. I secured a deal last year with Hyrox, an indoor fitness competition, as its exclusive U.S. men’s health partner. I landed the deal with just a minimum viable product and a pitch deck, right before Hyrox’s U.S. expansion took off.

The company’s events grew in a year from 2,000 athletes to 14,000, and its audience — 50,000 social followers, 30,000 email subscribers, and 200 gym partners — aligned perfectly with our brand. We paid for the sponsorship, but it gave us massive exposure, credibility, and direct access to our core demographic.

We could have taken out, say, $100,000 in Meta Ads. That same $100,000 in a strategic Hyrox sponsorship gets us brand equity, athletes, investors, and a much lower acquisition cost — around $200 per customer, far better than we could achieve with ads alone.

Bandholz: How do you convert Hyrox athletes?

Hartelust: A presence on-site at the competitions is our most effective strategy. We recently wrapped an eight-month national tour where we set up our brand installation inside each venue. Our core leadership team was there to bring deep product knowledge, passion, and real connection.

The sponsorship provided us with access to email lists and social media audiences. Before the competition, we emailed attendees with offers, a discount code, and booth details. We reminded them of the promotion during the event and shared recaps after. We encouraged the participants to show the code at the booth for a lower rate.

Bandholz: How did you raise the capital to fund such a complex launch?

Hartelust: I spent the first six years of my career building wellness and fitness studios and nurturing strategic relationships. When we sold the company in 2021 for several million dollars, I reinvested some capital to start MNLY. But, again, before our current model, MNLY failed twice as a subscription box concept. I lost a lot on those early versions before pivoting to what we have now.

Launching this model required more than just personal funds, so I began raising a true pre-seed round about 18 months ago. I had raised capital before, but never for a startup. I tapped every possible connection — friends, family, clients — and hired a virtual assistant for cold outreach. One of our venture capital partners shared a valuable investor database. I ended up doing roughly 250 pitches and raised just under $800,000.

This round focused on micro angels rather than traditional VCs. Many brands rely heavily on Meta ads and lack a real connection. We leveraged our Hyrox community and offered equity to athlete ambassadors, which provided us with additional operational capital. That blend of brand, relationships, and community has fueled our growth.

Bandholz: Where can people support you?

Hartelust: Our website is getMNLY.com. We’re @getMNLY on Instagram and Facebook. I’m on LinkedIn.

What Is Paid Media: The Different Types & Examples via @sejournal, @brookeosmundson

Paid media is often treated like a checklist item in a marketing plan: launch a few search ads, run a Meta campaign, maybe test YouTube if there’s budget left.

But not all paid media is created equal, and treating every channel the same is a fast way to burn through budget with little to show for it.

Whether you’re working in-house or managing campaigns for clients, understanding the different types of paid media (and what each one is actually good for) can help you prioritize the right tactics, set realistic expectations, and answer the dreaded question: “What are we getting out of this?”

This article breaks down the main types of paid media with real-world examples so you can make smarter decisions about where to spend your money.

What Is Paid Media?

Paid media is any type of marketing where you pay to get in front of your audience. That includes things like search ads, social ads, display banners, video pre-roll, and even influencer sponsorships.

Paid media is sometimes used interchangeably with cost-per-click (CPC) advertising, but the two aren’t the same: CPC is just one pricing model, and paid media can also be bought by impressions, views, or completed actions.

It’s the part of your marketing strategy that gives you scale and control. You’re not waiting for someone to discover your blog post or share your Instagram reel organically.

You’re putting money behind your message to drive attention right now.

Paid media works best when it’s tied to a clear goal, like driving leads, sales, or downloads. Without a strategy, it’s just noise with a price tag.
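The pricing metrics behind paid media, such as cost per click, reduce to simple arithmetic. Here is a minimal sketch; all campaign numbers are invented for illustration:

```python
# Toy comparison of common paid-media pricing metrics using made-up numbers.
# CPC = cost per click, CPM = cost per thousand impressions, CPA = cost per action.

spend = 1_000.00        # hypothetical campaign spend in dollars
impressions = 250_000
clicks = 2_000
conversions = 50

cpm = spend / impressions * 1_000   # cost per thousand impressions
cpc = spend / clicks                # cost per click
cpa = spend / conversions           # cost per conversion

print(f"CPM: ${cpm:.2f}  CPC: ${cpc:.2f}  CPA: ${cpa:.2f}")
```

The same spend looks cheap or expensive depending on which metric you report, which is why tying paid media to a clear goal matters before you pick a channel.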

The Difference Between Earned, Owned, And Paid Media

Think of paid, owned, and earned media as different ways to get your message out. You need a mix of all three, but each serves a different purpose.

  • Paid media is when you pay for attention. Think of tactics like search ads, social ads, sponsored posts, affiliate placements, etc.
  • Owned media is what you control. Think of assets like your website, blog, email list, and social channels.
  • Earned media is what others say about you. This often comes in the form of reviews, PR coverage, social shares, and more.

Some examples of earned media include:

  • Social sharing from customers.
  • Customer reviews.
  • External media coverage (public relations).

Owned media examples include:

  • Your website and blog.
  • Your email list and newsletters.
  • Your social media profiles.

The overlap matters, too. A paid campaign might drive traffic to a landing page (owned media), which then gets shared by a happy customer (earned media). When these channels work together, your efforts go further.

Types Of Paid Media Channels

Now that we’ve identified the definition of paid media, let’s take a look at the different types of paid media channels and the purposes they serve.

Before we dive into the different paid media channels, it’s also important to note the difference between ad formats and ad channels.

Ad formats are the types of ads shown in a particular channel. Examples include:

  • A text ad at the top of search results.
  • A single-image or video ad in a social newsfeed.
  • A carousel or Stories ad on Instagram.
  • A banner ad on a website or in an app.

So, while ad formats are important and will depend on the channel, below we will focus on the channels themselves.

There are other types of paid media channels available that are not listed here, such as more traditional methods like direct mail or billboards. These paid media channels have a more physical presence.

Here, we will focus on digital channels.

Paid Search

Paid search puts your ads at the top of search results for specific keywords. It’s often the first paid channel marketers try because it targets people already looking for what you offer.

Platforms like Google Ads and Microsoft Ads let you bid on search terms so your ad shows when someone types in something relevant.

Google is the leading search engine in market share, with its sites generating 60.4% of user searches in the United States.

It’s high-intent, measurable, and scalable. But, it’s also competitive, especially in industries like legal, finance, or ecommerce.

Success here depends on more than just bidding. Your landing page, ad copy, keyword match types, and conversion tracking all matter. You’re not just paying for clicks – you’re paying for the opportunity to convert interest into action.

Paid Social

Paid social platforms let you reach people based on who they are, not just what they search.

Many of the platforms offer detailed targeting based on demographics, interests, behaviors, and even job titles.

Some of the most common paid social platforms include:

  • Meta (Facebook).
  • Instagram.
  • LinkedIn.
  • TikTok.
  • Pinterest.
  • Snapchat.
  • X (Twitter).

The most common ad format in social channels appears within a user’s newsfeed as they scroll. These ads consist of one or more static images, or a video, as the main visual.

It’s not just about brand awareness. Many brands use social to drive signups, sales, or downloads. You can run video ads, carousels, static images, or Stories, depending on what fits your brand and goal.

Some paid social platforms are more beneficial for B2B companies than for B2C brands.

For example, LinkedIn advertising consists mainly of B2B brands marketing their product or service to other professionals.

Other platforms like TikTok and Snapchat may be better suited for B2C or ecommerce brands.

The tricky part? Creative fatigue is real.

If you’re not refreshing your assets often or testing different hooks, performance will drop fast. Social ads require constant iteration, but the upside is speed: you can test ideas and get feedback quickly.

Programmatic & Display

Display advertising is what most people think of as “banner ads.” These are the visual ads you see on news sites, blogs, or apps, usually managed through platforms like the Google Display Network or programmatic buying platforms.

The upside is scale. You can reach millions of people across the web without relying on social platforms. The downside? Banner blindness is real. If your creative isn’t compelling, people will scroll right past.

That’s why display works best for remarketing or supporting a broader campaign. Use it to stay top of mind, promote limited-time offers, or drive awareness ahead of a product launch. Just don’t expect cold traffic to convert on the first click.

Affiliate Marketing

Affiliate marketing is a way to scale your reach by letting others promote your product for you. You only pay when they drive a sale or lead, which makes it one of the lowest-risk paid media options available.

This model works especially well in industries like fashion, tech, travel, and finance, where bloggers, influencers, or content sites already have built-in audiences.

The key to making affiliate work? Vet your partners. A bunch of low-quality traffic from coupon sites won’t move the needle.

Look for affiliates who create content, have authority, or drive meaningful referral traffic.

And keep an eye on attribution. Affiliate-driven sales often overlap with other paid efforts, so tracking needs to be tight.

Examples Of Paid Media

This is where the ad formats are married to the paid media channels.

Below are examples of paid media ads from the popular channels listed above. These examples can help provide context when deciding what types of paid media to run.

Search Examples

When searching for [top parental control apps] in Google, the first three positions are examples of search ads.

Screenshot from Google search for [top parental control apps], Google, May 2025

While conducting the same search on Microsoft Bing, the ads look slightly different.

There’s even a section above the sponsored ads showcasing different brands and a brief description about what they do.

Screenshot from Bing search for [top parental control apps], Microsoft Bing, May 2025

When searching for a product like [nike shoes for women], the ads below are a shopping ad format.

Screenshot from Google search for [nike shoes for women], Google, May 2025

Paid Social Examples

Each social platform’s ad formats look different within their respective newsfeeds.

Here is a LinkedIn newsfeed example:

Screenshot from author’s LinkedIn newsfeed, desktop ad, May 2025

A Facebook ad newsfeed example:

Screenshot from author’s Facebook newsfeed, desktop ad, May 2025

Instagram also offers ads in its “Stories” placement. An example from Fountainhead is below:

Screenshot from author’s Instagram Stories feed, Stories ad, May 2025

Display Examples

Display ads can be in all shapes and sizes, depending on the website or app.

Below is an example of two different display ads shown on one webpage.

Screenshot from author, May 2025

Affiliate Examples

Sometimes, affiliate ads can be difficult to spot.

A common example is the “listicle”: a “Top” products article in which the publisher is paid by the featured brands to include them.

Screenshot from FamilyOnlineSafety.com, May 2025

However, if you take a closer look at this example’s “Advertising Disclosure,” you’ll notice that this publisher is paid by the brands for exclusive placement:

Screenshot of an affiliate marketing disclosure from FamilyOnlineSafety.com, May 2025

Summary

Paid media doesn’t have to be a guessing game. When you understand the role each channel plays, you’re in a much better spot to build campaigns that actually drive results, not just impressions.

From keyword-targeted search ads to affiliate partnerships and social retargeting, each paid media type has its own strengths. Use them deliberately.

Think about where your audience is, how they like to interact, and what action you want them to take.

Remember: success isn’t just about being present on every channel. It’s about showing up with the right message, in the right place, at the right time.

Featured Image: Lana Sham/Shutterstock

SEO Rockstar “Proves” You Don’t Need Meta Descriptions via @sejournal, @martinibuster

An SEO shared on social media that his SEO tests proved that not using a meta description resulted in a lift in traffic. Coincidentally, another well-known SEO published an article that claims that SEO tests misunderstand how Google and the internet actually work and lead to the deprioritization of meaningful changes. Who is right?

SEO Says Pages Without Meta Descriptions Received Ranking Improvement

Mark Williams-Cook posted the results of his SEO test on LinkedIn about using and omitting meta descriptions, concluding that pages lacking a meta description received an average traffic lift of approximately 3%.

Here’s some of what he wrote:

“This will get some people’s backs up, but we don’t recommend writing meta descriptions anymore, and that’s based on data and testing.

We have consistently found a small, usually around 3%, but statistically significant uplift to organic traffic on groups of pages with no meta descriptions vs test groups of pages with meta descriptions via SEOTesting.

I’ve come to the conclusion if you’re writing meta descriptions manually, you’re wasting time. If you’re using AI to do it, you’re probably wasting a small amount of time.”

Williams-Cook asserted that Google rewrites around 80% of meta descriptions and insisted that the best meta descriptions are query-dependent, meaning the ideal description is one custom-written for the specific queries a page ranks for, which is what Google does when the meta description is missing.

He expressed the opinion that omitting the meta description increases the likelihood that Google will step in and inject a query-relevant meta description into the search results, one that will “outperform” a static description optimized for whatever the page is about.
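For anyone auditing which of their pages currently declare a meta description before experimenting either way, a check can be sketched with Python’s standard library alone. This is illustrative tooling, not Williams-Cook’s methodology or any particular SEO suite:

```python
# A minimal audit sketch, using only Python's standard library, for checking
# whether a page's HTML declares a meta description.
from html.parser import HTMLParser

class MetaDescriptionFinder(HTMLParser):
    """Collects the content of <meta name="description"> if present."""

    def __init__(self):
        super().__init__()
        self.description = None

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attr_map = dict(attrs)
            if (attr_map.get("name") or "").lower() == "description":
                self.description = attr_map.get("content")

def find_meta_description(html: str):
    """Return the meta description string, or None when the tag is absent."""
    finder = MetaDescriptionFinder()
    finder.feed(html)
    return finder.description

page = '<head><meta name="description" content="Hand-written summary."></head>'
print(find_meta_description(page))                             # Hand-written summary.
print(find_meta_description("<head><title>x</title></head>"))  # None
```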

Although I have reservations about SEO tests in general, his suggestion is intriguing and has the ring of plausibility.

Are SEO Tests Performative Theater?

Coincidentally, Jono Alderson, a technical SEO consultant, last week published an article titled “Stop testing. Start shipping.” In it, he calls SEO testing “performative theater.”

Alderson writes:

“The idea of SEO testing appeals because it feels scientific. Controlled. Safe…

You tweak one thing, you measure the outcome, you learn, you scale. It works for paid media, so why not here?

Because SEO isn’t a closed system. …It’s architecture, semantics, signals, and systems. And trying to test it like you would test a paid campaign misunderstands how the web – and Google – actually work.

Your site doesn’t exist in a vacuum. Search results are volatile. …Even the weather can influence click-through rates.

Trying to isolate the impact of a single change in that chaos isn’t scientific. It’s theatre.

…A/B testing, as it’s traditionally understood, doesn’t even cleanly work in SEO.

…most SEO A/B testing isn’t remotely scientific. It’s just a best-effort simulation, riddled with assumptions and susceptible to confounding variables. Even the cleanest tests can only hint at causality – and only in narrowly defined environments.”

Jono makes a valid point about the unreliability of tests where the inputs and the outputs are not fully controlled.

Statistical tests are generally done within a closed system where all the data being compared follow the same rules and patterns. But if you compare multiple sets of pages, where some pages target long-tail phrases and others target high-volume queries, then the pages will differ in their potential outcomes. External changes (daily traffic fluctuation, users clicking on the search results) aren’t controllable. As Jono suggested, even the weather can influence click rates.
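To see why a small uplift is hard to certify, consider a toy two-sample permutation test on synthetic per-page traffic. Every number below is invented, and even in this controlled setting the test only speaks to the synthetic data, not to a live site’s confounders:

```python
# Toy two-sample permutation test on synthetic per-page traffic. Every number
# here is invented; even in this controlled setting, distinguishing a ~3%
# difference in means from noise requires an explicit significance test.
import random

random.seed(42)
with_meta = [random.gauss(100, 15) for _ in range(200)]
without_meta = [random.gauss(103, 15) for _ in range(200)]  # ~3% higher true mean

def perm_test(a, b, n_perm=5000, seed=0):
    """Two-sided permutation test on the difference in group means."""
    rng = random.Random(seed)
    observed = sum(b) / len(b) - sum(a) / len(a)
    pooled = a + b
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = sum(pooled[len(a):]) / len(b) - sum(pooled[:len(a)]) / len(a)
        if abs(diff) >= abs(observed):
            hits += 1
    return observed, hits / n_perm

diff, p = perm_test(with_meta, without_meta)
print(f"observed uplift: {diff:.2f} clicks/page, p = {p:.3f}")
```

On a live site, though, the exchangeability assumption this test relies on is exactly what Jono argues doesn’t hold: pages differ in query mix, seasonality, and SERP volatility.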

Although Williams-Cook asserted that he had a control group for testing purposes, it is extremely difficult to isolate a single variable on live websites because of uncontrollable external factors, as Jono points out.

So, even though Williams-Cook asserts that the 3% change he noted is consistent and statistically significant, the unobservable factors inside Google’s black-box algorithm make it difficult to treat that result as a reliable causal finding in the way a truly controlled, fully observable statistical test would allow.

If it’s not possible to isolate one change, it’s very difficult to make reliable claims about what an SEO test shows.

Focus On Meaningful SEO Improvements

Jono’s article calls out the shortcomings of SEO tests, but the larger point of his essay is that whatever can be tested and measured tends to get prioritized over the “meaningful” changes that should be made but aren’t, because they can’t be measured. He argues that it’s important to focus on what matters in today’s search environment: content and a better user experience.

And that’s where we circle back to Williams-Cook, because even if statistically valid A/B SEO tests are “theatre,” as Jono suggests, that doesn’t mean Williams-Cook’s suggestion is wrong. He may well be correct that it’s better to omit meta descriptions and let Google write them.

SEO is subjective, which means what’s good for one site might not be a priority for another. So the question remains: is removing all meta descriptions a meaningful change?

Featured Image by Shutterstock/baranq

New Ecommerce Tools: July 3, 2025

Every week we select and publish a rundown of new products and services from vendors to ecommerce merchants. This installment includes updates on AI-powered storefronts, B2B payments, generative AI content, email marketing, stablecoin payments, reverse logistics, and product information management.

Got an ecommerce product release? Email releases@practicalecommerce.com.

New Tools for Merchants

Adobe Commerce releases two product offerings. Adobe has announced two new product offerings to help brands deliver personalized shopping journeys. Adobe Commerce as a Cloud Service is a full-stack, cloud-native ecommerce platform. It enables brands to create global, multi-brand B2B and B2C storefronts, with genAI-powered content creation and streamlined digital asset management. Adobe Commerce Optimizer is a headless storefront that empowers brands to upgrade the front end of their ecommerce sites while allowing the backend system to remain unchanged.

Home page of Adobe Commerce

Mirakl launches connector for Shopify. Mirakl, a provider of ecommerce software, is releasing Mirakl Platform Connector for Shopify, designed to accelerate marketplace deployment through pre-built synchronization of taxonomy, products, offers, sellers, and orders. The connector syncs critical marketplace operations, with features including a built-in checkout module, out-of-the-box technical integration, and support for themes and headless storefronts. Per Mirakl, businesses can benefit from standardized and guided integration processes, automated product catalog onboarding, streamlined order management, dedicated technical support, and real-time synchronization of operations.

Balance and Alibaba.com partner on flexible B2B payments for U.S. SMBs. Balance, an AI-powered financial infrastructure platform for B2B commerce, has launched “Pay Later for Business” on Alibaba.com. The embedded financing tool gives U.S. small and medium-sized businesses greater purchasing power and more control over how and when they pay. Alibaba.com’s U.S. business users now have the option to access instant credit at checkout. This buy-now-pay-later (BNPL) feature is powered by Balance’s AI risk infrastructure with real-time credit management.

Feedonomics and BigCommerce team up with Perplexity for AI product search. Feedonomics, a data feed management platform, and BigCommerce have announced that their customers have access to Perplexity, an AI-powered search engine, to optimize visibility and relevance in search results. Feedonomics now provides Perplexity with pre-optimized, structured product data, ensuring that the LLM understands and recognizes merchants’ products, leading to search results that favor the brand.

Home page of Feedonomics

Bolt launches Connect for marketplace onboarding and support for stablecoin payments. Bolt, a checkout, identity, and payments platform, has announced Connect to help marketplaces onboard merchants. Bolt Connect gives marketplace operators a single integration to support one-click merchant onboarding, built-in compliance workflows, and low-fee or no-fee payouts. With Bolt managing the infrastructure, marketplaces can grow efficiently while controlling their brand experience and business model. Bolt also announced support for stablecoin payments, giving merchants and shoppers enhanced flexibility through new digital payment infrastructure.

Akeneo joins Shopify’s partner program, providing product content solutions. Akeneo, a provider of product information management tools, is now a Premier Partner of Shopify. Akeneo helps brands, manufacturers, distributors, and retailers centralize, enrich, and optimize product content with AI tools, while Shopify provides a composable infrastructure for high-performing digital storefronts. With the partnership, Akeneo has launched its App for Shopify, allowing merchants to connect Akeneo’s PIM to Shopify storefronts, eliminating manual data entry by automating the syndication of localized product data.

Omnisend launches AI-powered suite for ecommerce email marketing. Omnisend, an email and SMS marketing platform for ecommerce brands, has launched a suite of AI-powered tools. The AI Segment Builder lets marketers describe audiences in plain language and instantly generate precise segments. New AI content tools include generators for subject lines and preheaders as well as direct copywriting — all in multiple languages, using previous campaign insights, and the brand’s tone of voice. The new Personalized Product Recommender suggests products tailored to each customer’s browsing behavior.

Home page of Omnisend

Amazon expands Prime delivery to thousands of small towns and rural areas. Amazon is expanding Same-Day and Next-Day delivery to over 4,000 small U.S. cities, towns, and rural communities by the end of 2025. Additionally, Amazon is investing over $4 billion to triple the size of its delivery network by 2026, with a focus on small towns and rural communities across the U.S. Customers can see whether their area has Same-Day Delivery by visiting Amazon.com/samedaystore and browsing by category, price point, and retail store.

Geodis unveils two complementary returns tools. Geodis, a global logistics provider, has launched two returns tools: workflow automation and returns management. The workflow automation module enables the end consumer to create a shipping label to initiate a return or exchange without requiring the shipper’s involvement. The returns management module optimizes the reverse logistics process inside the warehouse. Geodis integrates with Shopify, BigCommerce, WooCommerce, Magento, and other ecommerce platforms.

Tulip AI debuts for personalized customer engagement. Tulip, a cloud-based retail customer engagement platform, has launched a suite of AI tools to create stronger customer experiences. Tulip AI acts as a virtual assistant, streamlining daily tasks and enabling more meaningful customer connections at scale, including intelligent message writing, instant profile summaries, customer search, and segmentation, according to Tulip.

Klaviyo launches AI-powered platform enhancements for omnichannel marketing. Klaviyo has announced AI-powered platform enhancements. Omnichannel Campaign Builder helps plan, launch, and measure complex, multi-day campaigns across email, SMS/RCS, push, and WhatsApp. Channel Affinity automatically learns customer preferences, then delivers messages where and when they’re most likely to convert. Multi-Touch Attribution provides real-time visibility into what’s driving revenue.

Klaviyo home page