Inside India’s scramble for AI independence

In Bengaluru, India, Adithya Kolavi felt a mix of excitement and validation as he watched DeepSeek unleash its disruptive language model on the world earlier this year. The Chinese technology rivaled the best of the West in terms of benchmarks, but it had been built with far less capital in far less time. 

“I thought: ‘This is how we disrupt with less,’” says Kolavi, the 20-year-old founder of the Indian AI startup CognitiveLab. “If DeepSeek could do it, why not us?” 

But for Abhishek Upperwal, founder of Soket AI Labs and architect of one of India’s earliest efforts to develop a foundation model, the moment felt more bittersweet. 

Upperwal’s model, called Pragna-1B, had struggled to stay afloat with tiny grants while he watched global peers raise millions. The multilingual model had a relatively modest 1.25 billion parameters and was designed to reduce the “language tax,” the extra costs that arise because India—unlike the US or even China—has a multitude of languages to support. His team had trained it, but limited resources meant it couldn’t scale. As a result, he says, the project became a proof of concept rather than a product. 

“If we had been funded two years ago, there’s a good chance we’d be the ones building what DeepSeek just released,” he says.

Kolavi’s enthusiasm and Upperwal’s dismay reflect the spectrum of emotions among India’s AI builders. Despite its status as a global tech hub, the country lags far behind the likes of the US and China when it comes to homegrown AI. That gap has opened largely because India has chronically underinvested in R&D, institutions, and invention. Meanwhile, since no one native language is spoken by the majority of the population, training language models is far more complicated than it is elsewhere. 

Historically known as the global back office for the software industry, India has a tech ecosystem that evolved with a services-first mindset. Giants like Infosys and TCS built their success on efficient software delivery, but invention was neither prioritized nor rewarded. And India’s R&D spending hovered at just 0.65% of GDP ($25.4 billion) in 2024, far behind China’s 2.68% ($476.2 billion) and the US’s 3.5% ($962.3 billion). The muscle to invent and commercialize deep tech, from algorithms to chips, was simply never built.

Isolated pockets of world-class research do exist within government agencies like the DRDO (Defence Research and Development Organisation) and ISRO (Indian Space Research Organisation), but their breakthroughs rarely spill into civilian or commercial use. India lacks the bridges that connect risk-taking research to commercial pathways, the way DARPA does in the US. Meanwhile, much of India’s top talent migrates abroad, drawn to ecosystems that better understand and, crucially, fund deep tech.

So when the open-source foundation model DeepSeek-R1 suddenly outperformed many global peers, it struck a nerve. This launch by a Chinese startup prompted Indian policymakers to confront just how far behind the country was in AI infrastructure, and how urgently it needed to respond.

India responds

In January 2025, 10 days after DeepSeek-R1’s launch, the Ministry of Electronics and Information Technology (MeitY) solicited proposals for India’s own foundation models, which are large AI models that can be adapted to a wide range of tasks. Its public tender invited private-sector cloud and data-center companies to reserve GPU compute capacity for government-led AI research.

Providers including Jio, Yotta, E2E Networks, Tata, AWS partners, and CDAC responded. Through this arrangement, MeitY suddenly had access to nearly 19,000 GPUs at subsidized rates, repurposed from private infrastructure and allocated specifically to foundational AI projects. This triggered a surge of proposals from companies wanting to build their own models. 

Within two weeks, it had 67 proposals in hand. That number tripled by mid-March. 

In April, the government announced plans to develop six large-scale models by the end of 2025, plus 18 additional AI applications targeting sectors like agriculture, education, and climate action. Most notably, it tapped Sarvam AI to build a 70-billion-parameter model optimized for Indian languages and needs. 

For a nation long restricted by limited research infrastructure, things moved at record speed, marking a rare convergence of ambition, talent, and political will.

“India could do a Mangalyaan in AI,” said Gautam Shroff of IIIT-Delhi, referencing the country’s famously cost-effective and successful Mars orbiter mission.

Jaspreet Bindra, cofounder of AI&Beyond, an organization focused on teaching AI literacy, captured the urgency: “DeepSeek is probably the best thing that happened to India. It gave us a kick in the backside to stop talking and start doing something.”

The language problem

One of the most fundamental challenges in building foundational AI models for India is the country’s sheer linguistic diversity. With 22 official languages, hundreds of dialects, and millions of people who are multilingual, India poses a problem that few existing LLMs are equipped to handle.

Whereas a massive amount of high-quality web data is available in English, Indian languages collectively make up less than 1% of online content. The lack of digitized, labeled, and cleaned data in languages like Bhojpuri and Kannada makes it difficult to train LLMs that understand how Indians actually speak or search.

Global tokenizers, which break text into units a model can process, also perform poorly on many Indian scripts, misinterpreting characters or skipping some altogether. As a result, even when Indian languages are included in multilingual models, they’re often poorly understood and inaccurately generated.

And unlike OpenAI and DeepSeek, which achieved scale using structured English-language data, Indian teams often begin with fragmented and low-quality data sets encompassing dozens of Indian languages. This makes the early steps of training foundation models far more complex.

Nonetheless, a small but determined group of Indian builders is starting to shape the country’s AI future.

For example, Sarvam AI has created OpenHathi-Hi-v0.1, an open-source Hindi language model that shows the Indian AI field’s growing ability to address the country’s vast linguistic diversity. The model, built on Meta’s Llama 2 architecture, was trained on 40 billion tokens of Hindi and related Indian-language content, making it one of the largest open-source Hindi models available to date.

Pragna-1B, the multilingual model from Upperwal, is more evidence that India could solve for its own linguistic complexity. Trained on 300 billion tokens for just $250,000, it introduced a technique called “balanced tokenization” to address a unique challenge in Indian AI, enabling a 1.25-billion-parameter model to behave like a much larger one.

The issue is that Indian languages use complex scripts and agglutinative grammar, where words are formed by stringing together many smaller units of meaning using prefixes and suffixes. Unlike English, which separates words with spaces and follows relatively simple structures, Indian languages like Hindi, Tamil, and Kannada often lack clear word boundaries and pack a lot of information into single words. Standard tokenizers struggle with such inputs. They end up breaking Indian words into too many tokens, which bloats the input and makes it harder for models to understand the meaning efficiently or respond accurately.
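The fragmentation problem is visible even with nothing but the Python standard library. The sketch below is my own illustration, not any Indian lab’s actual tokenizer: many BPE tokenizers fall back to raw UTF-8 bytes for scripts that are rare in their training data, and because every Devanagari character costs three UTF-8 bytes, a short Hindi greeting turns into far more raw units than its English counterpart.

```python
# Illustration only: byte-level fallback (common in BPE tokenizers for
# under-represented scripts) inflates Indic text. Each Devanagari
# character occupies 3 bytes in UTF-8, versus 1 byte per ASCII character.

english = "Hello, how are you?"
hindi = "नमस्ते, आप कैसे हैं?"  # roughly the same greeting in Hindi

for label, text in [("English", english), ("Hindi", hindi)]:
    chars = len(text)
    byte_units = len(text.encode("utf-8"))
    print(f"{label}: {chars} characters -> {byte_units} byte-level units")

# A tokenizer paying per byte "sees" the Hindi sentence as more than
# twice as long, wasting context window and training compute.
```

The balanced tokenization Upperwal describes presumably counters this by giving Indic scripts dedicated vocabulary entries, so that whole syllable clusters become single tokens and a Hindi sentence costs a number of tokens comparable to its English equivalent.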

With the new technique, however, “a billion-parameter model was equivalent to a 7 billion one like Llama 2,” Upperwal says. This performance was particularly marked in Hindi and Gujarati, where global models often underperform because of limited multilingual training data. It was a reminder that with smart engineering, small teams could still push boundaries.

Upperwal eventually repurposed his core tech to build speech APIs for 22 Indian languages, a more immediate solution better suited to rural users who are often left out of English-first AI experiences.

“If the path to AGI is a hundred-step process, training a language model is just step one,” he says. 

At the other end of the spectrum are startups with more audacious aims. Krutrim-2, for instance, is a 12-billion-parameter multilingual language model optimized for English and 22 Indian languages. 

Krutrim-2 is attempting to solve India’s specific problems of linguistic diversity, low-quality data, and cost constraints. The team built a custom Indic tokenizer, optimized training infrastructure, and designed models for multimodal and voice-first use cases from the start, crucial in a country where text interfaces can be a problem.

Krutrim’s bet is that its approach will not only enable Indian AI sovereignty but also offer a model for AI that works across the Global South.

Besides public funding and compute infrastructure, India also needs institutional support: the talent pipeline, research depth, and long-horizon capital that produce globally competitive science.

While venture capital still hesitates to bet on research, new experiments are emerging. Paras Chopra, an entrepreneur who previously built and sold the software-as-a-service company Wingify, is now personally funding Lossfunk, a Bell Labs–style AI residency program designed to attract independent researchers with a taste for open-source science. 

“We don’t have role models in academia or industry,” says Chopra. “So we’re creating a space where top researchers can learn from each other and have startup-style equity upside.”

Government-backed bet on sovereign AI

The clearest marker of India’s AI ambitions came when the government selected Sarvam AI to develop a model focused on Indian languages and voice fluency.

The idea is that it would not only help Indian companies compete in the global AI arms race but benefit the wider population as well. “If it becomes part of the India stack, you can educate hundreds of millions through conversational interfaces,” says Bindra. 

Sarvam was given access to 4,096 Nvidia H100 GPUs for training a 70-billion-parameter Indian language model over six months. (The company previously released a 2-billion-parameter model trained in 10 Indian languages, called Sarvam-1.)

Sarvam’s project and others are part of a larger strategy called the IndiaAI Mission, a $1.25 billion national initiative launched in March 2024 to build out India’s core AI infrastructure and make advanced tools more widely accessible. Led by MeitY, the mission is focused on supporting AI startups, particularly those developing foundation models in Indian languages and applying AI to key sectors such as health care, education, and agriculture.

Under its compute program, the government is deploying more than 18,000 GPUs, including nearly 13,000 high-end H100 chips, to a select group of Indian startups that currently includes Sarvam, Upperwal’s Soket Labs, Gnani AI, and Gan AI.

The mission also includes plans to launch a national multilingual data set repository, establish AI labs in smaller cities, and fund deep-tech R&D. The broader goal is to equip Indian developers with the infrastructure needed to build globally competitive AI and ensure that the results are grounded in the linguistic and cultural realities of India and the Global South.

According to Abhishek Singh, CEO of IndiaAI and an officer with MeitY, India’s broader push into deep tech is expected to raise around $12 billion in research and development investment over the next five years. 

This includes approximately $162 million through the IndiaAI Mission, with about $32 million earmarked for direct startup funding. The National Quantum Mission is contributing another $730 million to support India’s ambitions in quantum research. In addition to this, the national budget document for 2025-26 announced a $1.2 billion Deep Tech Fund of Funds aimed at catalyzing early-stage innovation in the private sector.

The rest, nearly $9.9 billion, is expected to come from private and international sources including corporate R&D, venture capital firms, high-net-worth individuals, philanthropists, and global technology leaders such as Microsoft. 

IndiaAI has now received more than 500 applications from startups proposing use cases in sectors like health, governance, and agriculture. 

“We’ve already announced support for Sarvam, and 10 to 12 more startups will be funded solely for foundational models,” says Singh. Selection criteria include access to training data, talent depth, sector fit, and scalability.

Open or closed?

The IndiaAI program, however, is not without controversy. Sarvam is being built as a closed model, not open-source, despite its public tech roots. That has sparked debate about the proper balance between private enterprise and the public good. 

“True sovereignty should be rooted in openness and transparency,” says Amlan Mohanty, an AI policy specialist. He points to DeepSeek-R1, which despite its 671-billion-parameter size was made freely available for commercial use.

Its release allowed developers around the world to fine-tune it on low-cost GPUs, creating faster variants and extending its capabilities to non-English applications.

“Releasing an open-weight model with efficient inference can democratize AI,” says Hancheng Cao, an assistant professor of information systems and operations management at Emory University. “It makes it usable by developers who don’t have massive infrastructure.”

IndiaAI, however, has taken a neutral stance on whether publicly funded models should be open-source. 

“We didn’t want to dictate business models,” says Singh. “India has always supported open standards and open source, but it’s up to the teams. The goal is strong Indian models, whatever the route.”

There are other challenges as well. In late May, Sarvam AI unveiled Sarvam‑M, a 24-billion-parameter multilingual LLM fine-tuned for 10 Indian languages and built on top of Mistral Small, an efficient model developed by the French company Mistral AI. Sarvam’s cofounder Vivek Raghavan called the model “an important stepping stone on our journey to build sovereign AI for India.” But its download numbers were underwhelming, with only 300 in the first two days. The venture capitalist Deedy Das called the launch “embarrassing.”

And the issues go beyond the lukewarm early reception. Many developers in India still lack easy access to GPUs and the broader ecosystem for Indian-language AI applications is still nascent. 

The compute question

Compute scarcity is emerging as one of the most significant bottlenecks in generative AI, not just in India but across the globe. For countries still heavily reliant on imported GPUs and lacking domestic fabrication capacity, the cost of building and running large models is often prohibitive. 

India still imports most of its chips rather than producing them domestically, and training large models remains expensive. That’s why startups and researchers alike are focusing on software-level efficiencies that involve smaller models, better inference, and fine-tuning frameworks that optimize for performance on fewer GPUs.

“The absence of infrastructure doesn’t mean the absence of innovation,” says Cao. “Supporting optimization science is a smart way to work within constraints.” 

Yet Singh of IndiaAI argues that the tide is turning on the infrastructure challenge thanks to the new government programs and private-public partnerships. “I believe that within the next three months, we will no longer face the kind of compute bottlenecks we saw last year,” he says.

India also has a cost advantage.

According to Gupta, building a hyperscale data center in India costs about $5 million, roughly half what it would cost in markets like the US, Europe, or Singapore. That’s thanks to affordable land, lower construction and labor costs, and a large pool of skilled engineers. 

For now, India’s AI ambitions seem less about leapfrogging OpenAI or DeepSeek and more about strategic self-determination. Whether its approach takes the form of smaller sovereign models, open ecosystems, or public-private hybrids, the country is betting that it can chart its own course. 

While some experts argue that the government’s action, or reaction (to DeepSeek), is performative and aligned with its nationalistic agenda, many startup founders are energized. They see the growing collaboration between the state and the private sector as a real opportunity to overcome India’s long-standing structural challenges in tech innovation.

At a Meta summit held in Bengaluru last year, Nandan Nilekani, the chairman of Infosys, urged India to resist chasing a me-too AI dream. 

“Let the big boys in the Valley do it,” he said of building LLMs. “We will use it to create synthetic data, build small language models quickly, and train them using appropriate data.” 

His view that India should prioritize strength over spectacle had a divided reception. But it reflects a growing debate over whether India should play a different game altogether.

“Trying to dominate every layer of the stack isn’t realistic, even for China,” says Shobhankita Reddy, a researcher at the Takshashila Institution, an Indian public policy nonprofit. “Dominate one layer, like applications, services, or talent, so you remain indispensable.” 

Correction: We amended Reddy’s name.

The Download: India’s AI independence, and predicting future epidemics

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Inside India’s scramble for AI independence

Despite its status as a global tech hub, India lags far behind the likes of the US and China when it comes to homegrown AI.

That gap has opened largely because India has chronically underinvested in R&D, institutions, and invention. Meanwhile, since no one native language is spoken by the majority of the population, training language models is far more complicated than it is elsewhere.

So when the open-source foundation model DeepSeek-R1 suddenly outperformed many global peers, it struck a nerve. This launch by a Chinese startup prompted Indian policymakers to confront just how far behind the country was in AI infrastructure—and how urgently it needed to respond. Read the full story.

—Shadma Shaikh

Job titles of the future: Pandemic oracle

Officially, Conor Browne is a biorisk consultant. Based in Belfast, Northern Ireland, he has advanced degrees in security studies and medical and business ethics, along with United Nations certifications in counterterrorism and conflict resolution.

Early in the emergence of SARS-CoV-2, international energy conglomerates seeking expert guidance on navigating the potential turmoil in markets and transportation became his main clients. 

Having studied the 2002 SARS outbreak, he predicted the exponential spread of the new airborne virus. In fact, he forecast the epidemic’s broadscale impact and its implications for business so accurately that he has come to be seen as a pandemic oracle. Read the full story.

—Britta Shoot

This story is from the most recent print edition of MIT Technology Review, which explores power—who has it, and who wants it. Subscribe here to receive future copies once they drop.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Donald Trump’s ‘big beautiful bill’ has passed 

Which is terrible news for the clean energy industry. (Vox)
+ An energy-affordability crisis is looming in the US. (The Atlantic $)
+ The President struck deals with House Republican holdouts to get it over the line. (WSJ $)
+ The Trump administration has shut down more than 100 climate studies. (MIT Technology Review)

2 Daniel Gross is joining Meta’s superintelligence lab 
He’s jumping ship from the startup he co-founded with Ilya Sutskever. (Bloomberg $)
+ Sutskever is stepping into the CEO role in his absence. (TechCrunch)
+ Here’s what we can infer from Meta’s recent hires. (Semafor)

3 AI’s energy demands could destabilize the global supply
That’s according to the head of the world’s largest transformer maker. (FT $)
+ We did the math on AI’s energy footprint. Here’s the story you haven’t heard. (MIT Technology Review)

4 Elon Musk is threatening to start his own political party
Would anyone vote for him, though? (WP $)
+ You’d think his bruising experience in the White House would have put him off. (NY Mag $)

5 The US has lifted export curbs on chip design software to China
It suggests that frosty relations between the nations may be thawing. (Reuters)

6 Trump officials are going after this ICE warning app
But lawyers say there’s nothing illegal about it. (Wired $)
+ Downloads of ICEBlock are rising. (NBC News)

7 Wildfires are making it harder to monitor air pollutants
Current tracking technology isn’t built to accommodate shifting smoke. (Undark)
+ How AI can help spot wildfires. (MIT Technology Review)

8 Apple’s iOS 26 software can detect nudity on FaceTime calls
The feature will pause the call and ask if you want to continue. (Gizmodo)

9 Threads has finally launched DMs
But users are arguing there should be a way to opt out of them entirely. (TechCrunch)

10 You can hire a robot to write a handwritten note 🖊🤖
Or, y’know, pick up a pen and write it yourself. (Insider $)

Quote of the day

“It’s almost like we never even spoke.”

Richard Wilson, an online dater who is convinced his most recent love interest used a chatbot to converse with him online before they awkwardly met in person, tells the Washington Post about his disappointment.

One more thing

Deepfakes of your dead loved ones are a booming Chinese business

Once a week, Sun Kai has a video call with his mother, and they discuss his day-to-day life. But Sun’s mother died five years ago, and the person he’s talking to isn’t actually a person, but a digital replica he made of her.

There are plenty of people like Sun who want to use AI to preserve, animate, and interact with lost loved ones as they mourn and try to heal. The market is particularly strong in China, where at least half a dozen companies are now offering such technologies and thousands of people have already paid for them.

But some question whether interacting with AI replicas of the dead is truly a healthy way to process grief, and it’s not entirely clear what the legal and ethical implications of this technology may be. Read the full story.

—Zeyi Yang

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)
+ There’s nothing cooler than wooden interiors right now.
+ Talented artist Ian Robinson creates beautiful paintings of people’s vinyl collections.
+ You’ll find me in every one of Europe’s top wine destinations this summer.
+ Here’s everything you need to remember before Stranger Things returns this fall.

MNLY’s At-Home AI Powers Men’s Health

Next-gen health and wellness is an apt description of MNLY. Luke Hartelust launched the platform, pronounced “manly,” in 2021, and he has since pivoted twice while remaining focused on modern care for men.

The current version combines AI with home-based testing, diagnoses, and nutrition. Customers pay an upfront fee and a monthly subscription afterward.

In our recent conversation, Luke shared the company’s origins, growth, mistakes, and more. The entire audio of that discussion is embedded below. The transcript is condensed and edited for clarity.

Eric Bandholz: Tell us about your work.

Luke Hartelust: I’m the founder and CEO of MNLY, a men’s health and wellness platform. We use at-home diagnostics, AI, and advanced tech to create custom supplement, lifestyle, and nutrition solutions.

My background is in fitness franchising. I led multiple locations across Southern California and worked closely with male entrepreneurs and executives. That experience revealed gaps in men’s healthcare, particularly in the lack of proactive, preventative approaches.

Telehealth has improved access to care, but the model has flaws. Most providers have long waitlists — often up to 90 days for lab results and treatment plans due to backlogged consultations.

At MNLY, we streamlined the process. We removed the practitioner bottleneck and built a scientific advisory board to train a complex AI model. The result is an automated analysis and quick, personalized health recommendations, going from signup to actionable results much faster than traditional telehealth providers.

Bandholz: Walk me through the customer journey.

Hartelust: Customers start by purchasing our at-home blood sample kit — a simple finger prick using dried blood spot sampling, eliminating the friction of in-person visits. Once received, our partner lab processes samples within hours.

While awaiting results, users complete an 86-question health assessment. It focuses on seven areas: concentration, confidence, stamina, mood, sleep, libido, and recovery.

We combine lab and assessment data — roughly 100 data points per user — to generate a clean, easy-to-understand health dashboard. It explains results and provides reference ranges, visuals, and comparison metrics. An overall health score benchmarks the data.

Next, our AI builds a personalized health plan, including nutrition suggestions based on biomarkers and lifestyle hacks such as breathwork and even testicular cooling for hormone support.

Finally, we formulate a custom dietary supplement. Based on the user’s data, our AI prescribes specific nutrients and doses. We then manufacture the supplement and ship every 30 days. It’s fully automated.

Bandholz: What does it cost customers?

Hartelust: The initial lab kit is $199. Supplements are $249 per month.

We recommend retesting with new blood samples every three to five months. Each time new bloodwork is submitted, our system updates all biomarkers, adjusts supplement dosages, and revises the health plan. Users experience clear visual progress, including changes to their overall health score.

We’ve just completed our first year in business with our third iteration of the MNLY brand. We launched in 2021 as a nutritional subscription box provider and made two attempts at that model before pivoting.

A year ago, with this version, we didn’t prioritize retention. Our small team focused on product development, and we lacked an automated customer journey to guide and remind users about retesting. We started those reminders 90 days ago.

From an ecommerce perspective, not building that journey sooner was one of our biggest missteps. Many customers experienced strong results in the first six weeks — improved libido, mood, sleep, recovery, and focus — but when those effects plateaued, some dropped off around the five- or six-month mark. Even though biological improvements continued, users weren’t always aware without updated data. That’s why consistent testing and communication are now central to our retention strategy.

Bandholz: What’s your growth strategy?

Hartelust: As a startup raising capital in a tough market, I needed a strategic partner to expand our reach. I secured a deal last year with Hyrox, an indoor fitness competition, as its exclusive U.S. men’s health partner. I landed the deal with just a minimum viable product and a pitch deck, right before Hyrox’s U.S. expansion took off.

The company’s events grew in a year from 2,000 athletes to 14,000, and its audience — 50,000 social followers, 30,000 email subscribers, and 200 gym partners — aligned perfectly with our brand. We paid for the sponsorship, but it gave us massive exposure, credibility, and direct access to our core demographic.

We could have spent, say, $100,000 on Meta ads. That same $100,000 in a strategic Hyrox sponsorship gets us brand equity, athletes, investors, and a much lower acquisition cost — around $200 per customer, far better than we could achieve with ads alone.

Bandholz: How do you convert Hyrox athletes?

Hartelust: A presence on-site at the competitions is our most effective strategy. We recently wrapped an eight-month national tour where we set up our brand installation inside each venue. Our core leadership team was there to bring deep product knowledge, passion, and real connection.

The sponsorship provided us with access to email lists and social media audiences. Before the competition, we emailed attendees with offers, a discount code, and booth details. We reminded them of the promotion during the event and shared recaps after. We encouraged the participants to show the code at the booth for a lower rate.

Bandholz: How did you raise the capital to fund such a complex launch?

Hartelust: I spent the first six years of my career building wellness and fitness studios and nurturing strategic relationships. When we sold the company in 2021 for several million dollars, I reinvested some capital to start MNLY. But, again, before our current model, MNLY failed twice as a subscription box concept. I lost a lot on those early versions before pivoting to what we have now.

Launching this model required more than just personal funds, so I began raising a true pre-seed round about 18 months ago. I had raised capital before, but never for a startup. I tapped every possible connection — friends, family, clients — and hired a virtual assistant for cold outreach. One of our venture capital partners shared a valuable investor database. I ended up doing roughly 250 pitches and raised just under $800,000.

This round focused on micro angels rather than traditional VCs. Many brands rely heavily on Meta ads and lack a real connection. We leveraged our Hyrox community and offered equity to athlete ambassadors, which provided us with additional operational capital. That blend of brand, relationships, and community has fueled our growth.

Bandholz: Where can people support you?

Hartelust: Our website is getMNLY.com. We’re @getMNLY on Instagram and Facebook. I’m on LinkedIn.

What Is Paid Media: The Different Types & Examples via @sejournal, @brookeosmundson

Paid media is often treated like a checklist item in a marketing plan: launch a few search ads, run a Meta campaign, maybe test YouTube if there’s budget left.

But not all paid media is created equal, and treating every channel the same is a fast way to burn through budget with little to show for it.

Whether you’re working in-house or managing campaigns for clients, understanding the different types of paid media (and what each one is actually good for) can help you prioritize the right tactics, set realistic expectations, and answer the dreaded question: “What are we getting out of this?”

This article breaks down the main types of paid media with real-world examples so you can make smarter decisions about where to spend your money.

What Is Paid Media?

Paid media is any type of marketing where you pay to get in front of your audience. That includes things like search ads, social ads, display banners, video pre-roll, and even influencer sponsorships.

While paid media is often used interchangeably with the term cost-per-click (CPC), the two aren’t synonymous: CPC describes one pricing model, while paid media covers every placement you pay for, however it’s billed.

It’s the part of your marketing strategy that gives you scale and control. You’re not waiting for someone to discover your blog post or share your Instagram reel organically.

You’re putting money behind your message to drive attention right now.

Paid media works best when it’s tied to a clear goal, like driving leads, sales, or downloads. Without a strategy, it’s just noise with a price tag.

The Difference Between Earned, Owned, And Paid Media

Think of paid, owned, and earned media as different ways to get your message out. You need a mix of all three, but each serves a different purpose.

  • Paid media is when you pay for attention. Think of tactics like search ads, social ads, sponsored posts, affiliate placements, etc.
  • Owned media is what you control. Think of assets like your website, blog, email list, and social channels.
  • Earned media is what others say about you. This often comes in the form of reviews, PR coverage, social shares, and more.

Some examples of earned media include:

  • Social sharing from customers.
  • Customer reviews.
  • External media coverage (public relations).

Owned media examples include:

  • Your website and blog.
  • Email newsletters.
  • Your social media profiles.

The overlap matters, too. A paid campaign might drive traffic to a landing page (owned media), which then gets shared by a happy customer (earned media). When these channels work together, your efforts go further.

Types Of Paid Media Channels

Now that we’ve defined paid media, let’s take a look at the different types of paid media channels and the purposes they serve.

Before we dive into the different paid media channels, it’s also important to note the difference between ad formats and ad channels.

Ad formats are the types of ads shown in a particular channel – for example, a text ad in search results, a carousel or Stories ad in a social feed, or a pre-roll video ad.

So, while ad formats are important and will depend on the channel, below we will focus on the channels themselves.

There are other types of paid media channels available that are not listed here, such as more traditional methods like direct mail or billboards. These paid media channels have a more physical presence.

Here, we will focus on digital channels.

Paid Search

Paid search puts your ads at the top of search results for specific keywords. It’s often the first paid channel marketers try because it targets people already looking for what you offer.

Platforms like Google Ads and Microsoft Ads let you bid on search terms so your ad shows when someone types in something relevant.

Google is the leading search engine in market share, with its sites generating 60.4% of user searches in the United States.

It’s high-intent, measurable, and scalable. But, it’s also competitive, especially in industries like legal, finance, or ecommerce.

Success here depends on more than just bidding. Your landing page, ad copy, keyword match types, and conversion tracking all matter. You’re not just paying for clicks – you’re paying for the opportunity to convert interest into action.
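
To make that concrete, here is a minimal sketch – with entirely hypothetical numbers – of how budget, average CPC, and landing-page conversion rate combine into a cost per acquisition:

```python
# Hypothetical paid search economics: what a click actually costs you
# once conversion rate is factored in.
budget = 1_000.00          # monthly spend in dollars
avg_cpc = 2.50             # average cost per click
conversion_rate = 0.04     # 4% of clicks convert on the landing page

clicks = budget / avg_cpc
conversions = clicks * conversion_rate
cpa = budget / conversions  # cost per acquisition
print(f"{clicks:.0f} clicks -> {conversions:.0f} conversions, CPA ${cpa:.2f}")
```

At a $2.50 average CPC and a 4% conversion rate, each conversion effectively costs $62.50 – which is why the landing page and conversion tracking matter as much as the bid.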

Paid Social

Paid social platforms let you reach people based on who they are, not just what they search.

Many of the platforms offer detailed targeting based on demographics, interests, behaviors, and even job titles.

Some of the most common paid social platforms include:

  • Meta (Facebook).
  • Instagram.
  • LinkedIn.
  • TikTok.
  • Pinterest.
  • Snapchat.
  • X (Twitter).

The most common social ad format is placed within a user’s newsfeed as they scroll. These ads consist of one or more static images, or a video, as the main visual.

It’s not just about brand awareness. Many brands use social to drive signups, sales, or downloads. You can run video ads, carousels, static images, or Stories, depending on what fits your brand and goal.

Some paid social platforms are more beneficial for B2B companies than for B2C brands.

For example, LinkedIn advertising consists mainly of B2B brands marketing their product or service to other professionals.

Other platforms like TikTok and Snapchat may be better suited for B2C or ecommerce brands.

The tricky part? Creative fatigue is real.

If you’re not refreshing your assets often or testing different hooks, performance will drop fast. Social ads require constant iteration, but the upside is speed: you can test ideas and get feedback quickly.

Programmatic & Display

Display advertising is what most people think of as “banner ads.” These are the visual ads you see on news sites, blogs, or apps, usually managed through platforms like the Google Display Network or programmatic buying platforms.

The upside is scale. You can reach millions of people across the web without relying on social platforms. The downside? Banner blindness is real. If your creative isn’t compelling, people will scroll right past.

That’s why display works best for remarketing or supporting a broader campaign. Use it to stay top of mind, promote limited-time offers, or drive awareness ahead of a product launch. Just don’t expect cold traffic to convert on the first click.

Affiliate Marketing

Affiliate marketing is a way to scale your reach by letting others promote your product for you. You only pay when they drive a sale or lead, which makes it one of the lowest-risk paid media options available.

This model works especially well in industries like fashion, tech, travel, and finance, where bloggers, influencers, or content sites already have built-in audiences.

The key to making affiliate work? Vet your partners. A bunch of low-quality traffic from coupon sites won’t move the needle.

Look for affiliates who create content, have authority, or drive meaningful referral traffic.

And keep an eye on attribution. Affiliate-driven sales often overlap with other paid efforts, so tracking needs to be tight.

Examples Of Paid Media

This is where the ad formats are married to the paid media channels.

Below are examples of paid media ads from the popular channels listed above. These examples can help provide context when deciding what types of paid media to run.

Search Examples

When searching for [top parental control apps] in Google, the first three positions are examples of search ads.

Screenshot from Google search for [top parental control apps], Google, May 2025

While conducting the same search on Microsoft Bing, the ads look slightly different.

There’s even a section above the sponsored ads showcasing different brands and a brief description of what they do.

Screenshot from Bing search for [top parental control apps], Microsoft Bing, May 2025

When searching for a product like [nike shoes for women], the ads below are a shopping ad format.

Screenshot from Google search for [nike shoes for women], Google, May 2025

Paid Social Examples

Each social platform’s ad formats look different within their respective newsfeeds.

Here is a LinkedIn newsfeed example:

Screenshot from author’s LinkedIn newsfeed, desktop ad, May 2025

A Facebook ad newsfeed example:

Screenshot from author’s Facebook newsfeed, desktop ad, May 2025

Instagram also offers ads in its “Stories” placement. An example from Fountainhead is below:

Screenshot from author’s Instagram Stories feed, Stories ad, May 2025

Display Examples

Display ads can be in all shapes and sizes, depending on the website or app.

Below is an example of two different display ads shown on one webpage.

Screenshot from author, May 2025

Affiliate Examples

Sometimes, affiliate ads can be difficult to spot.

A common example is the “listicle” article, where a publisher is paid by other brands to be included in a “Top” product roundup.

Screenshot from FamilyOnlineSafety.com, May 2025

However, if you take a closer look at this example’s “Advertising Disclosure,” you’ll notice that this publisher is paid by the brands for exclusive placement:

Screenshot from FamilyOnlineSafety.com, May 2025

Summary

Paid media doesn’t have to be a guessing game. When you understand the role each channel plays, you’re in a much better spot to build campaigns that actually drive results, not just impressions.

From keyword-targeted search ads to affiliate partnerships and social retargeting, each paid media type has its own strengths. Use them deliberately.

Think about where your audience is, how they like to interact, and what action you want them to take.

Remember: success isn’t just about being present on every channel. It’s about showing up with the right message, in the right place, at the right time.


Featured Image: Lana Sham/Shutterstock

SEO Rockstar “Proves” You Don’t Need Meta Descriptions via @sejournal, @martinibuster

An SEO shared on social media that his SEO tests proved that not using a meta description resulted in a lift in traffic. Coincidentally, another well-known SEO published an article that claims that SEO tests misunderstand how Google and the internet actually work and lead to the deprioritization of meaningful changes. Who is right?

SEO Says Pages Without Meta Descriptions Received Ranking Improvement

Mark Williams-Cook posted the results of his SEO test on LinkedIn about using and omitting meta descriptions, concluding that pages lacking a meta description received an average traffic lift of approximately 3%.

Here’s some of what he wrote:

“This will get some people’s backs up, but we don’t recommend writing meta descriptions anymore, and that’s based on data and testing.

We have consistently found a small, usually around 3%, but statistically significant uplift to organic traffic on groups of pages with no meta descriptions vs test groups of pages with meta descriptions via SEOTesting.

I’ve come to the conclusion if you’re writing meta descriptions manually, you’re wasting time. If you’re using AI to do it, you’re probably wasting a small amount of time.”

Williams-Cook asserted that Google rewrites around 80% of meta descriptions and insisted that the best meta descriptions are query dependent, meaning that the ideal meta description would be one that’s custom written for the specific queries the page is ranking for, which is what Google does when the meta description is missing.

He expressed the opinion that omitting the meta description increases the likelihood that Google will step in and inject a query-relevant meta description into the search results which will “outperform” the normal meta description that’s optimized for whatever the page is about.

Although I have reservations about SEO tests in general, his suggestion is intriguing and has the ring of plausibility.

Are SEO Tests Performative Theater?

Coincidentally, Jono Alderson, a technical SEO consultant, published an article last week titled, “Stop testing. Start shipping.” where he discusses his view of SEO tests, calling the practice “performative theater.”

Alderson writes:

“The idea of SEO testing appeals because it feels scientific. Controlled. Safe…

You tweak one thing, you measure the outcome, you learn, you scale. It works for paid media, so why not here?

Because SEO isn’t a closed system. …It’s architecture, semantics, signals, and systems. And trying to test it like you would test a paid campaign misunderstands how the web – and Google – actually work.

Your site doesn’t exist in a vacuum. Search results are volatile. …Even the weather can influence click-through rates.

Trying to isolate the impact of a single change in that chaos isn’t scientific. It’s theatre.

…A/B testing, as it’s traditionally understood, doesn’t even cleanly work in SEO.

…most SEO A/B testing isn’t remotely scientific. It’s just a best-effort simulation, riddled with assumptions and susceptible to confounding variables. Even the cleanest tests can only hint at causality – and only in narrowly defined environments.”

Jono makes a valid point about the unreliability of tests where the inputs and the outputs are not fully controlled.

Statistical tests are generally done within a closed system where all the data being compared follow the same rules and patterns. But if you compare multiple sets of pages, where some pages target long-tail phrases and others target high-volume queries, then the pages will differ in their potential outcomes. External changes (daily traffic fluctuation, users clicking on the search results) aren’t controllable. As Jono suggested, even the weather can influence click rates.

Although Williams-Cook asserted that he had a control group for testing purposes, it’s extremely difficult to isolate a single variable on live websites due to the uncontrollable external factors as Jono points out.

So, even though Williams-Cook asserts that the 3% change he noted is consistent and statistically relevant, the unobservable factors inside Google’s black-box algorithm make it difficult to treat that result as a reliable causal finding in the way one could with a truly controlled and observable statistical test.

If it’s not possible to isolate one change then it’s very difficult to make reliable claims about the resulting SEO test results.
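
For readers who want to see what “statistically significant” means in this context, here is a minimal sketch – with entirely hypothetical impression and click counts – of a two-proportion z-test comparing the click-through rates of a control group and a test group of pages:

```python
import math

def two_proportion_z_test(clicks_a, impr_a, clicks_b, impr_b):
    """Two-sided z-test for a difference in click-through rate
    between two groups of pages."""
    p_a = clicks_a / impr_a
    p_b = clicks_b / impr_b
    # Pooled proportion under the null hypothesis of equal CTRs
    p_pool = (clicks_a + clicks_b) / (impr_a + impr_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / impr_a + 1 / impr_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p_value

# Entirely hypothetical numbers: control pages (with meta descriptions)
# vs. test pages (without), showing a ~3% relative CTR lift.
z, p = two_proportion_z_test(5000, 100_000, 5150, 100_000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

With these hypothetical numbers, the p-value lands above the conventional 0.05 threshold, which underlines how much data even a clean test of a small lift would need – before any of the uncontrollable factors Jono describes enter the picture.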

Focus On Meaningful SEO Improvements

Jono’s article calls out the shortcomings of SEO tests but the point of his essay is to call attention to how focusing on what can be tested and measured can become prioritized over the “meaningful” changes that should be made but aren’t because they cannot be measured. He argues that it’s important to focus on the things that matter in today’s search environment that are related to content and a better user experience.

And that’s where we circle back to Williams-Cook, because although statistically valid A/B SEO tests may be “theatre,” as Jono suggests, it doesn’t mean that Williams-Cook’s suggestion is wrong. He may actually be correct that it’s better to omit the meta description and let Google rewrite it.

SEO is subjective which means what’s good for one might not be a priority for someone else. So the question remains, is removing all meta descriptions a meaningful change?

Featured Image by Shutterstock/baranq

New Ecommerce Tools: July 3, 2025

Every week we select and publish a rundown of new products and services from vendors to ecommerce merchants. This installment includes updates on AI-powered storefronts, B2B payments, generative AI content, email marketing, stablecoin payments, reverse logistics, and product information management.

Got an ecommerce product release? Email releases@practicalecommerce.com.

New Tools for Merchants

Adobe Commerce releases two product offerings. Adobe has announced two new product offerings to help brands deliver personalized shopping journeys. Adobe Commerce as a Cloud Service is a full-stack, cloud-native ecommerce platform. It enables brands to create global, multi-brand B2B and B2C storefronts, with genAI-powered content creation and streamlined digital asset management. Adobe Commerce Optimizer is a headless storefront that empowers brands to upgrade the front end of their ecommerce sites while allowing the backend system to remain unchanged.

Home page of Adobe Commerce

Adobe Commerce

Mirakl launches connector for Shopify. Mirakl, a provider of ecommerce software, is releasing Mirakl Platform Connector for Shopify, designed to accelerate marketplace deployment through pre-built synchronization of taxonomy, products, offers, sellers, and orders. The connector syncs critical marketplace operations, with features including a built-in checkout module, out-of-the-box technical integration, and support for themes and headless storefronts. Per Mirakl, businesses can benefit from standardized and guided integration processes, automated product catalog onboarding, streamlined order management, dedicated technical support, and real-time synchronization of operations.

Balance and Alibaba.com partner on flexible B2B payments for U.S. SMBs. Balance, an AI-powered financial infrastructure platform for B2B commerce, has launched “Pay Later for Business” on Alibaba.com. The embedded financing tool gives U.S. small and medium-sized businesses greater purchasing power and more control over how and when they pay. Alibaba.com’s U.S. business users now have the option to access instant credit at checkout. This BNPL feature is powered by Balance’s AI risk infrastructure with real-time credit management.

Feedonomics and BigCommerce team up with Perplexity for AI product search. Feedonomics, a data feed management platform, and BigCommerce have announced that their customers have access to Perplexity, an AI-powered search engine, to optimize visibility and relevance in search results. Feedonomics now provides Perplexity with pre-optimized, structured product data, ensuring that the LLM understands and recognizes merchants’ products, leading to search results that favor the brand.

Home page of Feedonomics

Feedonomics

Bolt launches Connect for marketplace onboarding and support for stablecoin payments. Bolt, a checkout, identity, and payments platform, has announced Connect to help marketplaces onboard merchants. Bolt Connect gives marketplace operators a single integration to support one-click merchant onboarding, built-in compliance workflows, and low-fee or no-fee payouts. With Bolt managing the infrastructure, marketplaces can grow efficiently while controlling their brand experience and business model. Bolt also announced support for stablecoin payments, giving merchants and shoppers enhanced flexibility through new digital payment infrastructure.

Akeneo joins Shopify’s partner program, providing product content solutions. Akeneo, a provider of product information management tools, is now a Premier Partner of Shopify. Akeneo helps brands, manufacturers, distributors, and retailers centralize, enrich, and optimize product content with AI tools, while Shopify provides a composable infrastructure for high-performing digital storefronts. With the partnership, Akeneo has launched its App for Shopify, allowing merchants to connect Akeneo’s PIM to Shopify storefronts, eliminating manual data entry by automating the syndication of localized product data.

Omnisend launches AI-powered suite for ecommerce email marketing. Omnisend, an email and SMS marketing platform for ecommerce brands, has launched a suite of AI-powered tools. The AI Segment Builder lets marketers describe audiences in plain language and instantly generate precise segments. New AI content tools include generators for subject lines and preheaders as well as direct copywriting — all in multiple languages, using previous campaign insights, and the brand’s tone of voice. The new Personalized Product Recommender suggests products tailored to each customer’s browsing behavior.

Home page of Omnisend

Omnisend

Amazon expands Prime delivery to thousands of small towns and rural areas. Amazon is expanding Same-Day and Next-Day delivery to over 4,000 small U.S. cities, towns, and rural communities by the end of 2025. Additionally, Amazon is investing over $4 billion to triple the size of its delivery network by 2026, with a focus on small towns and rural communities across the U.S. Customers can see whether their area has Same-Day Delivery by visiting Amazon.com/samedaystore and browsing by category, price point, and retail store.

Geodis unveils two complementary returns tools. Geodis, a global logistics provider, has launched two returns tools: workflow automation and management. The returns workflow automation module enables the end consumer to create a shipping label to initiate the return or exchange without requiring the shipper’s involvement. The returns management module optimizes the reverse logistics process inside the warehouse. Geodis integrates with Shopify, BigCommerce, WooCommerce, Magento, and other ecommerce platforms.

Tulip AI debuts for personalized customer engagement. Tulip, a cloud-based retail customer engagement platform, has launched a suite of AI tools to create stronger customer experiences. Tulip AI acts as a virtual assistant, streamlining daily tasks and enabling more meaningful customer connections at scale, including intelligent message writing, instant profile summaries, customer search, and segmentation, according to Tulip.

Klaviyo launches AI-powered platform enhancements for omnichannel marketing. Klaviyo has announced AI-powered platform enhancements. Omnichannel Campaign Builder helps plan, launch, and measure complex, multi-day campaigns across email, SMS/RCS, push, and WhatsApp. Channel Affinity automatically learns customer preferences, then delivers messages where and when they’re most likely to convert. Multi-Touch Attribution provides real-time visibility into what’s driving revenue.

Klaviyo home page

Klaviyo

Are You Still Optimizing for Rankings? AI Search May Not Care. [Webinar] via @sejournal, @hethr_campbell

No ranking data. No impression data. 

So, how do you measure success when AI-generated answers appear and disappear, prompt by prompt?

With these significant changes to how we optimize for search, many brands are seeking to understand how to achieve SEO success.

Some Brands Are Winning in Search. Others? Invisible.

If your content isn’t appearing in AI-generated responses, like AI Overviews, ChatGPT, or Perplexity, you’re already losing ground to competitors.

👉 RSVP: Learn from the brands still dominating SERPs through AI search

In This Free Webinar, You’ll Learn:

  • Data-backed insights on what drives visibility and performance in AI search.
  • A proven framework to drive results in AI search, and why this approach works.
  • Purpose-built content strategies for driving success in Answer Engine Optimization (AEO) and Generative Engine Optimization (GEO).

This webinar helps enterprise SEOs and executives move from “I don’t know what’s happening in AI search” to “I have a data-driven strategy to compete and win.”

This session is designed for:

  • Marketing managers and SEO strategists looking to stay ahead.
  • Brand leaders managing performance visibility across platforms.
  • Content teams building for modern search behaviors.

You’ll walk away with a usable playbook and a better understanding of how to optimize for the answer, not the query.

Learn from what today’s winning brands are doing right.

Secure your spot, plus get the recording sent straight to your inbox if you can’t make it live.

Explaining The Great Decoupling To C-Level via @sejournal, @TaylorDanRW

Something important is happening in Google Search.

If you’ve looked at your website data in Google Search Console, you may have noticed something odd. Your pages are showing up more often, but fewer people are clicking through.

These two signals – impressions and clicks – used to rise and fall together. Now, they’re drifting apart.

We call this “The Great Decoupling.”

Screenshot from Jim Thornton (with permission to use), The SEO Community Slack Group, June 2025

And it’s not just your business. This is happening across industries and most website types.

It became more noticeable as Google rolled out something called AI Overviews – automated summaries that answer questions directly in search results.

If your site traffic from search is falling, but your rankings look fine, this article will help explain why.

We’ll examine the changes, their causes, how they manifest in analytics tools, and the responses of leading companies.

What’s Happening And Why It Matters

The Great Decoupling describes a new disconnect. Your website can appear more often in search results but get fewer clicks.

Until now, that was the expected behavior only when the SERP included features like featured snippets or other special content blocks from Google.

We’ve seen this clearly in client data during the first half of 2025.

Screenshot from Itamar Bauer (with permission to use), Studio Hawk, June 2025

Near the end of 2024, impressions and clicks were still closely linked. But by early 2025, impressions kept going up while clicks went down.

The click-through rate, the percentage of people who click, dropped sharply.
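
The decoupling is easy to see in the numbers themselves. Here is a minimal sketch, with hypothetical GSC-style monthly totals, of how rising impressions and falling clicks translate into a shrinking CTR:

```python
# Hypothetical Search Console monthly totals illustrating the decoupling:
# impressions climb while clicks fall, so CTR drops.
months = {
    "2024-11": {"impressions": 120_000, "clicks": 3_600},
    "2025-01": {"impressions": 150_000, "clicks": 3_300},
    "2025-03": {"impressions": 190_000, "clicks": 2_900},
}

for month, m in months.items():
    ctr = m["clicks"] / m["impressions"] * 100  # CTR as a percentage
    print(f"{month}: {m['impressions']:>7} impressions, "
          f"{m['clicks']:>5} clicks, CTR {ctr:.2f}%")
```

In this invented series, CTR falls from 3.00% to roughly 1.53% even as impressions grow by more than half – exactly the shape of the charts circulating from real accounts.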

This trend is widespread. Whether your site is an ecommerce store, a B2B company, or a blog, the same thing is happening – more visibility but less engagement.

Martin Splitt has said that when your pages are shown in AI Overviews, you may get more impressions but fewer clicks.

He also said that people might still convert later, perhaps after seeing your brand in search results, even if they never click the first time.

So, we’re in a new “normal”; impressions alone no longer signal opportunity. It’s what happens after the impression that counts.

Why This Is Happening

Google’s move toward AI-powered results is driving this change. The most significant shift is the introduction of AI Overviews.

AI Overviews are summaries shown at the top of search results.

Instead of a list of websites, Google provides an instant answer. That answer is generated from various sources across the web, including yours, without requiring the user to click.

Your Content May Appear Twice, But It Only Gets One Chance To Earn A Click

Your site may show up as both a traditional link and as part of the AI Overview. That boosts impressions but often reduces clicks. People get what they need from the overview.

Less Friction Means Fewer Visits

The AI Overview gives users what they want quickly. However, if their need is met before they reach your site, your traffic will drop.

Some Search Terms Are Hit Harder Than Others

Generic questions, how-tos, and mid-funnel queries are more likely to trigger AI Overviews. These are often top-of-funnel keywords marketers use to drive discovery.

On the other hand, brand searches and high-intent queries are more resilient.

The point is that it’s not just about where you rank. It’s about whether Google decides to answer the question for the user without needing you.

Zero-Click Search Isn’t New

This isn’t entirely new. For years, Google has provided users with quick answers. Featured snippets, “People Also Ask” boxes, and knowledge panels all reduced the need to click.

AI Overviews are just the next step. They are more advanced, appear more often, and answer a broader range of questions. But, the principle is the same: to reduce the effort for the user.

We’ve adapted before. We can adapt again. However, this shift is more significant and impacts multiple stages of the customer journey, necessitating a more strategic approach.

What This Looks Like In Your Analytics

In Google Search Console, the gap between impressions and clicks is clear. In Google Analytics 4, you see the impact on your traffic and behavior metrics.

Organic Traffic Is Falling

Your GA4 report shows fewer sessions from Google, even though your rankings haven’t changed. That’s the result of fewer clicks.

Engagement May Look Better

Because fewer but more qualified visitors are reaching your site, session length and conversion rates may look stronger. But, overall, reach is down.

Attribution Becomes Less Clear

GA4 does not show traffic that came through AI Overviews separately.

Some visitors might return later and be counted as “direct” traffic. Others won’t be tracked at all. This makes it more challenging to attribute SEO’s role in brand discovery.

To understand what’s happening, you need to look at GSC and GA4 together. One shows the visibility. The other shows the outcomes.

How I Think You Should Adjust & Act

The most forward-thinking businesses are making strategic shifts to protect and grow their visibility. Here are four things they’re doing:

1. Strengthen Brand

When users search for you by name, Google is less likely to intervene. These clicks are holding steady and, in some cases, growing.

Investing in brand and trust is generic advice thrown around a lot at the moment, but I think you should look at your brand in the context of the user journey, and at how much scope AI platforms have to alter that journey before your brand is discovered.

Image from author, June 2025

This also means working to understand how well-known your brand is before a user starts at the “top of the funnel,” and whether or not they’re more likely to steer towards your brand due to previous positive brand touchpoints, or the sentiment and user stories of others online.

2. Publish Content That AI Can’t Copy

If your content is generic, Google’s AI can summarize it. If it’s unique, based on experience, data, or opinion, it’s much harder to replace.

Focus on:

  • Original research.
  • Customer stories.
  • Side-by-side product comparisons.
  • Tools and calculators.
  • Real customer feedback.

3. Build Around Topics, Not Keywords

Create clusters of related content around a theme. This signals authority to search engines and gives users more reasons to explore your site.

4. Turn Product Pages Into Useful Resources

Don’t just list specs. Add real information that helps the buyer:

  • FAQs.
  • Reviews.
  • Comparison tables.
  • Guides and videos.

You want to help buyers better forecast their experience with the product or service, and deepen their understanding of your brand.

Be upfront about as much information as possible, as a negative brand or product experience can be damaging in the long run.

Why SEO Still Matters

Yes, SEO remains highly relevant despite the rise of AI.

While AI tools are changing how search works and how users find answers, they haven’t replaced the need for a smart, well-executed SEO strategy.

SEO is evolving and becoming more important in new ways.

AI Needs High-Quality Content To Learn From

AI Overviews don’t invent answers. They draw from trusted online sources. That means Google still relies on high-quality, well-optimized content to build its responses.

SEO helps ensure your content meets the standards of E-E-A-T: experience, expertise, authoritativeness, and trustworthiness.

Search Engines Still Rank Pages

Even with AI features in search results, users still scroll through traditional listings and click on websites.

SEO ensures your content performs well in these results, whether it’s in the top 10 links, a featured snippet, or a “People Also Ask” box.

AI Enhances, Not Replaces, SEO

AI tools can automate certain aspects of SEO, such as keyword research and content suggestions. But, they don’t replace strategic thinking.

SEO experts continue to guide site architecture, content structure, technical fixes, and intent-based optimization – tasks that AI can’t fully handle alone.

SEO isn’t going away; it’s becoming more sophisticated.

The businesses that succeed will be the ones that blend innovative tools with strategic thinking and treat SEO as a long-term investment in visibility and value.

The new wave of SEO isn’t just about driving traffic. It’s about showing up where your customers are asking questions, building credibility, and creating a footprint that supports all your other channels.

  • Visibility builds trust. Even if someone doesn’t click, seeing your name in search results reinforces brand awareness.
  • SEO feeds other channels. The insights you gain from search – what people ask, how they ask it, and what ranks – help shape your messaging everywhere else.
  • Strong content earns attention. Helpful, original content can drive engagement on-site, across social media, and in sales conversations.
  • It remains one of the most cost-effective ways to acquire leads, especially for branded and high-intent queries.

Search may not deliver the same volume of clicks, but it still shapes perception, influence, and decision-making.

SEO remains one of the most effective ways to stay visible and valuable in an increasingly AI-driven world.

Change What You Measure

The Great Decoupling is not just an SEO story. It’s a business visibility story. More people may see your brand, but fewer will visit your site.

That means you can’t just measure success by traffic. You need to consider engagement, recall, and brand strength.

Search is becoming a reputation game. If people trust you, they’ll find you, even if they don’t click the first time.

The companies that win won’t be the ones who chase rankings; they’ll be the ones who earn attention. Attention is potentially the “new click.”


Featured Image: Master1305/Shutterstock

How generative AI could help make construction sites safer

Last winter, during the construction of an affordable housing project on Martha’s Vineyard, Massachusetts, a 32-year-old worker named Jose Luis Collaguazo Crespo slipped off a ladder on the second floor and plunged to his death in the basement. He was one of more than 1,000 construction workers who die on the job each year in the US, making it the most dangerous industry for fatal slips, trips, and falls.

“Everyone talks about [how] ‘safety is the number-one priority,’” entrepreneur and executive Philip Lorenzo said during a presentation at Construction Innovation Day 2025, a conference at the University of California, Berkeley, in April. “But then maybe internally, it’s not that high priority. People take shortcuts on job sites. And so there’s this whole tug-of-war between … safety and productivity.”

To combat the shortcuts and risk-taking, Lorenzo is working on a tool for the San Francisco–based company DroneDeploy, which sells software that creates daily digital models of work progress from videos and images, known in the trade as “reality capture.”  The tool, called Safety AI, analyzes each day’s reality capture imagery and flags conditions that violate Occupational Safety and Health Administration (OSHA) rules, with what he claims is 95% accuracy.

That means that for any safety risk the software flags, there is 95% certainty that the flag is accurate and relates to a specific OSHA regulation. Launched in October 2024, the tool is now in use on hundreds of construction sites in the US, Lorenzo says, and versions tailored to the building regulations of countries including Canada, the UK, South Korea, and Australia have also been rolled out.
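The 95% figure is effectively a precision claim: of all the flags the software raises, 95% are expected to be genuine violations. A minimal sketch of that arithmetic, with hypothetical counts purely for illustration:

```python
def precision(true_positives: int, false_positives: int) -> float:
    """Share of raised flags that are genuine violations."""
    return true_positives / (true_positives + false_positives)

# Hypothetical day of flags: 95 real violations, 5 false alarms.
print(precision(95, 5))  # 0.95
```

Note that precision says nothing about violations the system misses entirely; that is the separate question of recall.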

Safety AI is one of multiple AI construction safety tools that have emerged in recent years, from Silicon Valley to Hong Kong to Jerusalem. Many of these rely on teams of human “clickers,” often in low-wage countries, to manually draw bounding boxes around images of key objects like ladders, in order to label large volumes of data to train an algorithm.

Lorenzo says Safety AI is the first one to use generative AI to flag safety violations, which means an algorithm that can do more than recognize objects such as ladders or hard hats. The software can “reason” about what is going on in an image of a site and draw a conclusion about whether there is an OSHA violation. This is a more advanced form of analysis than the object detection that is the current industry standard, Lorenzo claims. But as the 95% success rate suggests, Safety AI is not a flawless and all-knowing intelligence. It requires an experienced safety inspector as an overseer.  

A visual language model in the real world

Robots and AI tend to thrive in controlled, largely static environments, like factory floors or shipping terminals. But construction sites are, by definition, changing a little bit every day. 

Lorenzo thinks he’s built a better way to monitor sites, using a type of generative AI called a visual language model, or VLM. A VLM is an LLM with a vision encoder, allowing it to “see” images of the world and analyze what is going on in the scene. 

Using years of reality capture imagery gathered from customers, with their explicit permission, Lorenzo’s team has assembled what he calls a “golden data set” encompassing tens of thousands of images of OSHA violations. Having carefully stockpiled this specific data for years, he is not worried that even a billion-dollar tech giant will be able to “copy and crush” him.

To help train the model, Lorenzo has a smaller team of construction safety pros ask strategic questions of the AI. The trainers input test scenes from the golden data set to the VLM and ask questions that guide the model through the process of breaking down the scene and analyzing it step by step the way an experienced human would. If the VLM doesn’t generate the correct response—for example, it misses a violation or registers a false positive—the human trainers go back and tweak the prompts or inputs. Lorenzo says that rather than simply learning to recognize objects, the VLM is taught “how to think in a certain way,” which means it can draw subtle conclusions about what is happening in an image. 

Examples from nine categories of safety risks at construction sites that Safety AI can detect.
COURTESY DRONEDEPLOY

As an example, Lorenzo says VLMs are much better than older methods at analyzing ladder usage, which is responsible for 24% of the fall deaths in the construction industry. 

“With traditional machine learning, it’s very difficult to answer the question of ‘Is a person using a ladder unsafely?’” says Lorenzo. “You can find the ladders. You can find the people. But to logically reason and say ‘Well, that person is fine’ or ‘Oh no, that person’s standing on the top step’—only the VLM can logically reason and then be like, ‘All right, it’s unsafe. And here’s the OSHA reference that says you can’t be on the top rung.’”

Answers to multiple questions (Does the person on the ladder have three points of contact? Are they using the ladder as stilts to move around?) are combined to determine whether the ladder in the picture is being used safely. “Our system has over a dozen layers of questioning just to get to that answer,” Lorenzo says. DroneDeploy has not publicly released its data for review, but he says he hopes to have his methodology independently audited by safety experts.  
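The layered questioning Lorenzo describes can be sketched as a conjunction of yes/no checks over a scene description. The questions, field names, and functions below are illustrative assumptions, not DroneDeploy's actual system:

```python
from typing import Callable

# Each check stands in for a yes/no question the VLM answers about the scene.
LadderCheck = Callable[[dict], bool]

def has_three_points_of_contact(scene: dict) -> bool:
    return scene.get("points_of_contact", 0) >= 3

def not_on_top_rung(scene: dict) -> bool:
    return not scene.get("on_top_rung", False)

def not_used_as_stilts(scene: dict) -> bool:
    return not scene.get("used_as_stilts", False)

CHECKS: list[LadderCheck] = [
    has_three_points_of_contact,
    not_on_top_rung,
    not_used_as_stilts,
]

def ladder_use_is_safe(scene: dict) -> bool:
    # Every layered question must pass for the usage to be deemed safe.
    return all(check(scene) for check in CHECKS)

scene = {"points_of_contact": 3, "on_top_rung": True}
print(ladder_use_is_safe(scene))  # False: the worker is on the top rung
```

A real system would run over a dozen such layers per Lorenzo, and each answer would come from the VLM's reasoning over imagery rather than a dictionary lookup.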

The missing 5%

Using vision language models for construction AI shows promise, but there are “some pretty fundamental issues” to resolve, including hallucinations and the problem of edge cases, those anomalous hazards for which the VLM hasn’t trained, says Chen Feng. He leads New York University’s AI4CE lab, which develops technologies for 3D mapping and scene understanding in construction robotics and other areas. “Ninety-five percent is encouraging—but how do we fix that remaining 5%?” he asks of Safety AI’s success rate.

Feng points to a 2024 paper called “Eyes Wide Shut?”—written by Shengbang Tong, a PhD student at NYU, and coauthored by AI luminary Yann LeCun—that noted “systematic shortcomings” in VLMs.  “For object detection, they can reach human-level performance pretty well,” Feng says. “However, for more complicated things—these capabilities are still to be improved.” He notes that VLMs have struggled to interpret 3D scene structure from 2D images, don’t have good situational awareness in reasoning about spatial relationships, and often lack “common sense” about visual scenes.

Lorenzo concedes that there are “some major flaws” with LLMs and that they struggle with spatial reasoning. So Safety AI also employs some older machine-learning methods to help create spatial models of construction sites. These methods include the segmentation of images into crucial components and photogrammetry, an established technique for creating a 3D digital model from 2D images. Safety AI has also been trained heavily on 10 different problem areas, including ladder usage, to anticipate the most common violations.

Even so, Lorenzo admits there are edge cases that the LLM will fail to recognize. But he notes that for overworked safety managers, who are often responsible for as many as 15 sites at once, having an extra set of digital “eyes” is still an improvement.

Aaron Tan, a concrete project manager based in the San Francisco Bay Area, says that a tool like Safety AI could be helpful for these overextended safety managers, who will save a lot of time if they can get an emailed alert rather than having to make a two-hour drive to visit a site in person. And if the software can demonstrate that it is helping keep people safe, he thinks workers will eventually embrace it.  

However, Tan notes that workers also fear that these types of tools will be “bossware” used to get them in trouble. “At my last company, we implemented cameras [as] a security system. And the guys didn’t like that,” he says. “They were like, ‘Oh, Big Brother. You guys are always watching me—I have no privacy.’”

Older doesn’t mean obsolete

Izhak Paz, CEO of a Jerusalem-based company called Safeguard AI, has considered incorporating VLMs, but he has stuck with the older machine-learning paradigm because he considers it more reliable. The “old computer vision” based on machine learning “is still better, because it’s hybrid between the machine itself and human intervention on dealing with deviation,” he says. To train the algorithm on a new category of danger, his team aggregates a large volume of labeled footage related to the specific hazard and then optimizes the algorithm by trimming false positives and false negatives. The process can take anywhere from weeks to over six months, Paz says.
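The optimization step Paz describes, trimming false positives and false negatives against labeled footage, resembles tuning a detector's confidence cutoff. A toy sketch of that idea, with entirely hypothetical scores and labels:

```python
# Labeled examples: (detector confidence score, ground-truth hazard present?)
labeled = [(0.95, True), (0.85, True), (0.80, False), (0.60, True),
           (0.55, False), (0.30, False), (0.20, False)]

def errors_at(threshold: float) -> int:
    """Count false positives plus false negatives at a given cutoff."""
    fp = sum(1 for score, hazard in labeled if score >= threshold and not hazard)
    fn = sum(1 for score, hazard in labeled if score < threshold and hazard)
    return fp + fn

# Sweep a grid of cutoffs and keep the first one with the fewest total errors.
best = min((t / 100 for t in range(101)), key=errors_at)
```

In practice the tuning Paz describes operates on large volumes of footage per hazard category and involves human review, which is part of why it can take weeks to months per category.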

With training completed, Safeguard AI performs a risk assessment to identify potential hazards on the site. It can “see” the site in real time by accessing footage from any nearby internet-connected camera. Then it uses an AI agent to push instructions on what to do next to the site managers’ mobile devices. Paz declines to give a precise price tag, but he says his product is affordable only for builders at the “mid-market” level and above, specifically those managing multiple sites. The tool is in use at roughly 3,500 sites in Israel, the United States, and Brazil.

Buildots, a company based in Tel Aviv that MIT Technology Review profiled back in 2020, doesn’t do safety analysis but instead creates once- or twice-weekly visual progress reports of sites. Buildots also uses the older method of machine learning with labeled training data. “Our system needs to be 99%—we cannot have any hallucinations,” says CEO Roy Danon. 

He says that gaining labeled training data is actually much easier than it was when he and his cofounders began the project in 2018, since gathering video footage of sites means that each object, such as a socket, might be captured and then labeled in many different frames. But the tool is high-end—about 50 builders, most with revenue over $250 million, are using Buildots in Europe, the Middle East, Africa, Canada, and the US. It’s been used on over 300 projects so far.

Ryan Calo, a specialist in robotics and AI law at the University of Washington, likes the idea of AI for construction safety. Since experienced safety managers are already spread thin in construction, however, Calo worries that builders will be tempted to automate humans out of the safety process entirely. “I think AI and drones for spotting safety problems that would otherwise kill workers is super smart,” he says. “So long as it’s verified by a person.”

Andrew Rosenblum is a freelance tech journalist based in Oakland, CA.

The Download: how AI could improve construction site safety, and our Roundtables conversation with Karen Hao

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

How generative AI could help make construction sites safer

More than 1,000 construction workers die on the job each year in the US, making it the most dangerous industry for fatal slips, trips, and falls.

A new AI tool called Safety AI could help to change that. It analyzes the progress made on a construction site each day, and flags conditions that violate Occupational Safety and Health Administration rules, with what its creator Philip Lorenzo claims is 95% accuracy.


Lorenzo says Safety AI is the first of several emerging AI construction safety tools to use generative AI to flag safety violations. But as the 95% success rate suggests, Safety AI is not a flawless and all-knowing intelligence. Read the full story.

—Andrew Rosenblum

Roundtables: Inside OpenAI’s Empire with Karen Hao

Earlier this week, we held a subscriber-only Roundtable discussion with author and former MIT Technology Review senior editor Karen Hao about her new book Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI.

You can watch her conversation with our executive editor Niall Firth here, and if you aren’t already, you can subscribe to us here.

MIT Technology Review Narrated: The tech industry can’t agree on what open-source AI means. That’s a problem.

What counts as ‘open-source AI’? The answer could determine who gets to shape the future of the technology.

This is our latest story to be turned into a MIT Technology Review Narrated podcast, which we’re publishing each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 China’s digital IDs are coming
And they’re unlikely to stay voluntary for long. (Economist $)
+ The country’s AI models are becoming increasingly popular worldwide. (WSJ $)

2 Donald Trump has mused about using DOGE to deport Elon Musk
Musk’s comments about the President’s ‘Big Beautiful Bill’ have touched a nerve. (Axios)
+ Turns out AI models are quite good at fact-checking Trump. (WP $)
+ DOGE’s tech takeover threatens the safety and stability of our critical data. (MIT Technology Review)

3 Google must pay California’s Android users $314.6m
After a jury ruled it had misused their data. (Reuters)

4 Many AI detectors overpromise and underdeliver
But that hasn’t stopped Californian colleges from investing millions in them. (Undark)
+ What’s next for college writing? Nothing good. (New Yorker $)
+ Educators are working out how to integrate AI into computer science. (NYT $)
+ AI-text detection tools are really easy to fool. (MIT Technology Review)

5 Google is making its first foray into fusion
The world’s first grid-scale fusion power plant is due to come online in the 2030s. (NBC News)
+ Google will buy half its output. (TechCrunch)
+ Inside a fusion energy facility. (MIT Technology Review)

6 China is banning certain portable batteries from flights
In the wake of two major manufacturers recalling millions of power banks. (NYT $)
+ The ban is catching travellers out. (SCMP)

7 The deepfake economy is spiralling out of control
Small business owners are drowning in online scams. (Insider $)

8 Chipmaking companies are attractive prospects for investors
And they’re likely to be better bets. (WSJ $)
+ OpenAI has denied that it plans to use Google’s in-house chip. (Reuters)

9 How cancer studies in dogs could help develop treatments for humans
The disease presents very similarly across both species. (Knowable Magazine)
+ Cancer vaccines are having a renaissance. (MIT Technology Review)

10 X is planning to task AI agents with writing Community Notes
Thankfully, humans will still review them. (Bloomberg $)
+ Why does AI hallucinate? (MIT Technology Review)

Quote of the day

“Missionaries will beat mercenaries.”

—OpenAI CEO Sam Altman takes aim at Meta’s recent spree of attempting to hire his staff, Wired reports.

One more thing

The world’s next big environmental problem could come from space

In September, a unique chase took place in the skies above Easter Island. From a rented jet, a team of researchers captured a satellite’s last moments as it fell out of space and blazed into ash across the sky, using cameras and scientific equipment. Their hope was to gather priceless insights into the physical and chemical processes that occur when satellites burn up as they fall to Earth at the end of their missions.

This kind of study is growing more urgent. The number of satellites in the sky is rapidly rising, with a tenfold increase forecast by the end of the decade. Letting these satellites burn up in the atmosphere at the end of their lives helps keep the quantity of space junk to a minimum. But doing so deposits satellite ash in the Earth’s atmosphere. This metallic ash could potentially alter the climate, and we don’t yet know how serious the problem is likely to be. Read the full story.

—Tereza Pultarova

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ The new Running Man film looks pretty good, even if it is without Arnold.
+ Maybe it’s just not worth trying to understand our dogs after all.
+ Cynthia Erivo, who knows a thing or two about belting out a tune, really loves The Thong Song, and who can blame her?
+ Show your face, colossal squid!