What’s next for EV batteries in 2026

MIT Technology Review’s What’s Next series looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of them here.

Demand for electric vehicles and the batteries that power them has never been hotter.

In 2025, EVs made up over a quarter of new vehicle sales globally, up from less than 5% in 2020. Some regions are seeing even higher uptake: In China, more than 50% of new vehicle sales last year were battery electric or plug-in hybrids. In Europe, more purely electric vehicles hit the roads in December than gas-powered ones. (The US is the notable exception here, dragging down the global average with a small sales decline from 2024.)

As EVs become increasingly common on the roads, the battery world is growing too. Looking ahead, we could soon see wider adoption of new chemistries, including some that deliver lower costs or higher performance. Meanwhile, the geopolitics of batteries are shifting, and so is the policy landscape. Here’s what’s coming next for EV batteries in 2026 and beyond.

A big opportunity for sodium-ion batteries

Lithium-ion batteries are the default chemistry used in EVs, personal devices, and even stationary storage systems on the grid today. But in tough markets like the US, there’s growing interest in cheaper alternatives. Right now, automakers care mostly about batteries’ cost rather than performance improvements, says Kara Rodby, a technical principal at Volta Energy Technologies, a venture capital firm that focuses on energy storage technology.

Sodium-ion cells have long been held up as a potentially less expensive alternative to lithium-ion. The batteries are limited in their energy density, so they deliver a shorter range than lithium-ion. But sodium is much more abundant than lithium, so the cells could be cheaper.

Sodium’s growth has been cursed, however, by the very success of lithium-based batteries, says Shirley Meng, a professor of molecular engineering at the University of Chicago. A lithium-ion battery cell cost $568 per kilowatt-hour in 2013, but that cost had fallen to just $74 per kilowatt-hour by 2025—quite the moving target for cheaper alternatives to chase.

Sodium-ion batteries currently cost about $59 per kilowatt-hour on average. That’s less expensive than the average lithium-ion battery. But if you consider only lithium iron phosphate (LFP) cells, a lower-end type of lithium-ion battery that averages $52 per kilowatt-hour, sodium is still more expensive today. 

We could soon see an opening for sodium-ion batteries, though. Lithium prices have been ticking up in recent months, a shift that could soon slow or reverse the steady downward march of prices for lithium-based batteries.

Sodium-ion batteries are already being used commercially, largely for stationary storage on the grid. But we’re starting to see sodium-ion cells incorporated into vehicles, too. The Chinese companies Yadea, JMEV, and HiNa Battery have all started producing sodium-ion batteries in limited numbers for EVs, including small, short-range cars and electric scooters that don’t require a battery with high energy density. CATL, a Chinese battery company that’s the world’s largest, says it recently began producing sodium-ion cells. The company plans to launch its first EV using the chemistry by the middle of this year.

Today, both production and demand for sodium-ion batteries are heavily centered in China. That’s likely to continue, especially after a cutback in tax credits and other financial support for the battery and EV industries in the US. One of the biggest sodium-battery companies in the US, Natron, ceased operations last year after running into funding issues.

We could also see progress in sodium-ion research: Companies and researchers are developing new materials for components including the electrolyte and electrodes, which could bring the cells closer to lower-end lithium-ion cells in energy density, Meng says.

Major tests for solid-state batteries

As we enter the second half of this decade, many eyes in the battery world are on big promises and claims about solid-state batteries.

These batteries could pack more energy into a smaller package by removing the liquid electrolyte, the material that ions move through when a battery is charging and discharging. With a higher energy density, they could unlock longer-range EVs.

Companies have been promising solid-state batteries for years. Toyota, for example, once planned to have them in vehicles by 2020. That timeline has been delayed several times, though the company says it’s now on track to launch the new cells in cars in 2027 or 2028.

Historically, battery makers have struggled to produce solid-state batteries at the scale needed to deliver a commercially relevant supply for EVs. There’s been progress in manufacturing techniques, though, and companies could soon actually make good on their promises, Meng says. 

Factorial Energy, a US-based company making solid-state batteries, provided cells for a Mercedes test vehicle that drove over 745 miles on a single charge in a real-world test in September. The company says it plans to bring its tech to market as soon as 2027. QuantumScape, another major solid-state player in the US, is testing its cells with automotive partners and plans to have its batteries in commercial production later this decade.

Before we see true solid-state batteries, we could see hybrid technologies, often referred to as semi-solid-state batteries. These commonly use materials like gel electrolytes, reducing the liquid inside cells without removing it entirely. Many Chinese companies are looking to build semi-solid-state batteries before transitioning to entirely solid-state ones, says Evelina Stoikou, head of battery technologies and supply chains at BloombergNEF, an energy consultancy.

A global patchwork

The picture for the near future of the EV industry looks drastically different depending on where you’re standing.

Last year, China overtook Japan as the country with the most global auto sales. And more than one in three EVs made in 2025 had a CATL battery. Simply put, China is dominating the global battery industry, and that doesn’t seem likely to change anytime soon.

China’s influence outside its domestic market is growing especially quickly. CATL is expected to begin production this year at its second European site; the factory, located in Hungary, is an $8.2 billion project that will supply automakers including BMW and the Mercedes-Benz group. Canada recently signed a deal that will lower the import tax on Chinese EVs from 100% to roughly 6%, effectively opening the Canadian market for Chinese EVs.

Some countries that haven’t historically been major EV markets could become bigger players in the second half of the decade. Annual EV sales in Thailand and Vietnam, where the market was virtually nonexistent just a few years ago, broke 100,000 in 2025. Brazil, in particular, could see its new EV sales more than double in 2026 as major automakers including Volkswagen and BYD set up or ramp up production in the country. 

On the flip side, EVs are facing a real test in 2026 in the US, as this will be the first calendar year after the sunset of federal tax credits that were designed to push more drivers to purchase the vehicles. With those credits gone, growth in sales is expected to continue lagging. 

One bright spot for batteries in the US is outside the EV market altogether. Battery manufacturers are starting to produce low-cost LFP batteries in the US, largely for energy storage applications. LG opened a massive factory to make LFP batteries in mid-2025 in Michigan, and the Korean battery company SK On plans to start making LFP batteries at its facility in Georgia later this year. Those plants could help battery companies cash in on investments as the US EV market faces major headwinds. 

Even as the US lags behind, the world is electrifying transportation. By 2030, 40% of new vehicles sold around the world are projected to be electric. As we approach that milestone, expect to see more global players, a wider selection of EVs, and an even wider menu of batteries to power them. 

The Download: inside a deepfake marketplace, and EV batteries’ future

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Inside the marketplace powering bespoke AI deepfakes of real women

Civitai—an online marketplace for buying and selling AI-generated content, backed by the venture capital firm Andreessen Horowitz—is letting users buy custom instruction files for generating celebrity deepfakes. Some of these files were specifically designed to make pornographic images banned by the site, a new analysis has found.

The study, from researchers at Stanford and Indiana University, looked at people’s requests for content on the site, called “bounties.” The researchers found that between mid-2023 and the end of 2024, most bounties asked for animated content—but a significant portion were for deepfakes of real people, and 90% of these deepfake requests targeted women. Read the full story.

—James O’Donnell

What’s next for EV batteries in 2026

Demand for electric vehicles and the batteries that power them has never been hotter.

In 2025, EVs made up over a quarter of new vehicle sales globally, up from less than 5% in 2020. Some regions are seeing even higher uptake: In China, more than 50% of new vehicle sales last year were battery electric or plug-in hybrids. In Europe, more purely electric vehicles hit the roads in December than gas-powered ones. (The US is the notable exception here, dragging down the global average with a small sales decline from 2024.)

As EVs become increasingly common on the roads, the battery world is growing too. Here’s what’s coming next for EV batteries in 2026 and beyond.

—Casey Crownhart

This story is part of MIT Technology Review’s What’s Next series, which examines industries, trends, and technologies to give you a first look at the future. You can read the rest of them here.

TR10: Base-edited baby

Kyle “KJ” Muldoon Jr. was born with a rare, potentially fatal genetic disorder that left his body unable to remove toxic ammonia from his blood. The University of Pennsylvania offered his parents an alternative to a liver transplant: gene-editing therapies.

The team set to work developing a tailored treatment using base editing—a form of CRISPR that can correct genetic “misspellings” by changing single bases, the basic units of DNA. KJ received an initial low dose when he was seven months old, and later received two higher doses. Today, KJ is doing well. At an event in October last year, his happy parents described how he was meeting all his developmental milestones.

Others have received gene-editing therapies intended to treat conditions including sickle cell disease and a predisposition to high cholesterol. But KJ was the first to receive a personalized treatment—one that was designed just for him and will probably never be used again. Read why we made it one of our 10 Breakthrough Technologies this year, and check out the rest of the list.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 A social network for AI agents is vulnerable to abuse
A misconfiguration meant anyone could take control of any agent. (404 Media)
+ Moltbook is loosely modeled on Reddit, but humans are unable to post. (FT $)

2 Google breached its own ethics rules to help an Israeli contractor
It helped a military worker to analyze drone footage, a whistleblower has claimed. (WP $)

3 Capgemini is selling its unit linked to ICE
After the French government asked it to clarify its work for the agency. (Bloomberg $) 
+ The company has signed $12.2 million in contracts under the Trump administration. (FT $)
+ Here’s how to film ICE activities as safely as possible. (Wired $)

4 China has a plan to prime its next generation of AI experts 
Thanks to its elite genius class system. (FT $)
+ The country is going all-in on AI healthcare. (Rest of World)
+ The State of AI: Is China about to win the race? (MIT Technology Review)

5 Indonesia has reversed its ban on xAI’s Grok
After it announced plans to improve its compliance with the country’s laws. (Reuters)
+ Indonesia maintains a strict stance against pornographic content. (NYT $)
+ Malaysia and the Philippines have also lifted bans on the chatbot. (TechCrunch)

6 Don’t expect to hitch a ride on a Blue Origin rocket anytime soon
Jeff Bezos’ venture won’t be taking tourists into space for at least two years. (NYT $)
+ Artemis II astronauts are due to set off for the moon soon. (IEEE Spectrum)
+ Commercial space stations are on our list of 10 Breakthrough Technologies for 2026. (MIT Technology Review)

7 America’s push for high-speed internet is under threat
There aren’t enough skilled workers to meet record demand. (WSJ $)

8 Can AI help us grieve better?
A growing cluster of companies are trying to find out. (The Atlantic $)
+ Technology that lets us “speak” to our dead relatives has arrived. Are we ready? (MIT Technology Review)

9 How to fight future insect infestations 🍄
A certain species of fungus could play a key role. (Ars Technica)
+ How do fungi communicate? (MIT Technology Review)

10 What a robot-made latte tastes like, according to a former barista
Damn fine, apparently. (The Verge)

Quote of the day

 “It feels like a wild bison rampaging around in my computer.”

—A user who signed up to AI agent Moltbot remarks on the bot’s unpredictable behavior, Rest of World reports.

One more thing

How Wi-Fi sensing became usable tech

Wi-Fi sensing is a tantalizing concept: that the same routers bringing you the internet could also detect your movements. But, as a way to monitor health, it’s mostly been eclipsed by other technologies, like ultra-wideband radar. 

Despite that, Wi-Fi sensing hasn’t gone away. Instead, it has quietly become available in millions of homes, supported by leading internet service providers, smart-home companies, and chip manufacturers.

Soon it could be invisibly monitoring our day-to-day movements for all sorts of surprising—and sometimes alarming—purposes. Read the full story.

—Meg Duff

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ These intrepid Scottish bakers created the largest ever Empire biscuit (a classic shortbread cookie covered in icing) 🍪
+ My, what big tentacles you have!
+ If you’ve been feeling like you’re stuck in a rut lately, this advice could be exactly what you need to overcome it.
+ These works of psychedelic horror are guaranteed to send a shiver down your spine.

The crucial first step for designing a successful enterprise AI system

Many organizations rushed into generative AI, only to see pilots fail to deliver value. Now, companies want measurable outcomes—but how do you design for success?

At Mistral AI, we partner with global industry leaders to co-design tailored AI solutions that solve their most difficult problems. Whether it’s increasing CX productivity with Cisco, building a more intelligent car with Stellantis, or accelerating product innovation with ASML, we start with open frontier models and customize AI systems to deliver impact for each company’s unique challenges and goals.

Our methodology starts by identifying an iconic use case, the foundation for AI transformation that sets the blueprint for future AI solutions. Choosing the right use case can mean the difference between true transformation and endless tinkering and testing.

Identifying an iconic use case

Mistral AI has four criteria that we look for in a use case: strategic, urgent, impactful, and feasible.

First, the use case must be strategically valuable, addressing a core business process or a transformative new capability. It needs to be more than an optimization; it needs to be a game-changer. The use case needs to be strategic enough to excite an organization’s C-suite and board of directors.

For example, use cases like an internal-facing HR chatbot are nice to have, but they are easy to solve and are not enabling any new innovation or opportunities. On the other end of the spectrum, imagine an externally facing banking assistant that can not only answer questions, but also help take actions like blocking a card, placing trades, and suggesting upsell/cross-sell opportunities. This is how a customer-support chatbot is turned into a strategic revenue-generating asset.

Second, the best use case to move forward with should be highly urgent and solve a business-critical problem that people care about right now. This project will take time out of people’s days—it needs to be important enough to justify that time investment. And it needs to help business users solve immediate pain points.

Third, the use case should be pragmatic and impactful. From day one, our shared goal with our customers is to deploy into a real-world production environment, so the solution can be tested with real users and feedback gathered. Many AI prototypes end up in the graveyard of fancy demos that are not good enough to put in front of customers and lack any scaffolding for evaluation and improvement. We work with customers to ensure prototypes are stable enough to release, and that they have the necessary support and governance frameworks.

Finally, the best use case is feasible. There may be several urgent projects, but choosing one that can deliver a quick return on investment helps to maintain the momentum needed to continue and scale.

This means looking for a project that can be in production within three months, with a prototype live within a few weeks. It’s important to get a prototype in front of end users as fast as possible to confirm the project is on track, and to pivot as needed.

Where use cases fall short

Enterprises are complex, and the path forward is not usually obvious. To sift through all the possibilities and uncover the right first use case, Mistral AI runs workshops with our customers, hand in hand with subject-matter experts and end users.

Representatives from different functions will demo their processes and discuss business cases that could be candidates for a first use case—and together we agree on a winner. Here are some examples of types of projects that don’t qualify.

Moonshots: Ambitious bets that excite leadership but lack a path to quick ROI. While these projects can be strategic and urgent, they rarely meet the feasibility and impact requirements.

Future investments: Long-term plays that can wait. While these projects can be strategic and feasible, they rarely meet the urgency and impact requirements.

Tactical fixes: Firefighting projects that solve immediate pain but don’t move the needle. While these cases can be urgent and feasible, they rarely meet the strategy and impact requirements.

Quick wins: Useful for building momentum, but not transformative. While they can be impactful and feasible, they rarely meet the strategy and urgency requirements.

Blue sky ideas: These projects are game-changers, but they need maturity to be viable. While they can be strategic and impactful, they rarely meet the urgency and feasibility requirements.

Hero projects: These are high-pressure initiatives that lack executive sponsorship or realistic timelines. While they can be urgent and impactful, they rarely meet the strategy and feasibility requirements.
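The six disqualifying patterns above are complementary: each satisfies exactly two of the four criteria (strategic, urgent, impactful, feasible) and misses the other two, while the iconic use case satisfies all four. A minimal Python sketch makes that mapping explicit; the category names come from this article, but the encoding itself is an illustrative assumption, not anything Mistral publishes:

```python
# Each shortfall pattern is the pair of criteria a use case DOES meet.
PATTERNS = {
    frozenset({"strategic", "urgent"}): "moonshot",
    frozenset({"strategic", "feasible"}): "future investment",
    frozenset({"urgent", "feasible"}): "tactical fix",
    frozenset({"impactful", "feasible"}): "quick win",
    frozenset({"strategic", "impactful"}): "blue sky idea",
    frozenset({"urgent", "impactful"}): "hero project",
}

def classify(criteria_met: set[str]) -> str:
    """Label a candidate use case by which of the four criteria it meets."""
    if criteria_met == {"strategic", "urgent", "impactful", "feasible"}:
        return "iconic use case"
    return PATTERNS.get(frozenset(criteria_met), "not a candidate")

print(classify({"strategic", "urgent", "impactful", "feasible"}))  # iconic use case
print(classify({"impactful", "feasible"}))  # quick win
```

Reading the workshop output through a table like this is one way to see quickly why a proposal that excites everyone may still be only two criteria strong.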

Moving from use case to deployment

Once a clearly defined, strategic use case is identified and ready for development, it’s time to move into the validation phase. This means doing an initial data exploration and data mapping, identifying a pilot infrastructure, and choosing a target deployment environment.

This step also involves agreeing on a draft pilot scope, identifying who will participate in the proof of concept, and setting up a governance process.

Once this is complete, it’s time to move into the building phase. Companies that partner with Mistral work with our in-house applied AI scientists who build our frontier models. We work together to design, build, and deploy the first solution.

During this phase, we focus on co-creation, so we can transfer knowledge and skills to the organizations we’re partnering with. That way, they can be self-sufficient far into the future. The output of this phase is a deployed AI solution with empowered teams capable of independent operation and innovation.

The first step is everything

After the first win, it’s imperative to use the momentum and learnings from the iconic use case to identify more high-value AI solutions to roll out. Success is when we have a scalable AI transformation blueprint with multiple high-value solutions across the organization.

But none of this could happen without successfully identifying that first iconic use case. This first step is not just about selecting a project—it’s about setting the foundation for your entire AI transformation.

It’s the difference between scattered experiments and a strategic, scalable journey toward impact. At Mistral AI, we’ve seen how this approach unlocks measurable value, aligns stakeholders, and builds momentum for what comes next.

The path to AI success starts with a single, well-chosen use case: one that is bold enough to inspire, urgent enough to demand action, and pragmatic enough to deliver.

This content was produced by Mistral AI. It was not written by MIT Technology Review’s editorial staff.

What we’ve been getting wrong about AI’s truth crisis

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

What would it take to convince you that the era of truth decay we were long warned about—where AI content dupes us, shapes our beliefs even when we catch the lie, and erodes societal trust in the process—is now here? A story I published last week pushed me over the edge. It also made me realize that the tools we were sold as a cure for this crisis are failing miserably. 

On Thursday, I reported the first confirmation that the US Department of Homeland Security, which houses immigration agencies, is using AI video generators from Google and Adobe to make content that it shares with the public. The news comes as immigration agencies have flooded social media with content to support President Trump’s mass deportation agenda—some of which appears to be made with AI (like a video about “Christmas after mass deportations”).

But I received two types of reactions from readers that may explain just as much about the epistemic crisis we’re in. 

One was from people who weren’t surprised, because on January 22 the White House had posted a digitally altered photo of a woman arrested at an ICE protest, one that made her appear hysterical and in tears. Kaelan Dorr, the White House’s deputy communications director, did not respond to questions about whether the White House altered the photo but wrote, “The memes will continue.”

The second was from readers who saw no point in reporting that DHS was using AI to edit content shared with the public, because news outlets were apparently doing the same. They pointed to the fact that the news network MS Now (formerly MSNBC) shared an image of Alex Pretti that was AI-edited and appeared to make him look more handsome, a fact that led to many viral clips this week, including one from Joe Rogan’s podcast. Fight fire with fire, in other words? A spokesperson for MS Now told Snopes that the news outlet aired the image without knowing it was edited.

There is no reason to collapse these two cases of altered content into the same category, or to read them as evidence that truth no longer matters. One involved the US government sharing a clearly altered photo with the public and declining to answer whether it was intentionally manipulated; the other involved a news outlet airing a photo it should have known was altered but taking some steps to disclose the mistake.

What these reactions reveal instead is a flaw in how we were collectively preparing for this moment. Warnings about the AI truth crisis revolved around a core thesis: that not being able to tell what is real will destroy us, so we need tools to independently verify the truth. My two grim takeaways are that these tools are failing, and that while vetting the truth remains essential, it is no longer capable on its own of producing the societal trust we were promised.

For example, there was plenty of hype in 2024 about the Content Authenticity Initiative, cofounded by Adobe and adopted by major tech companies, which would attach labels to content disclosing when it was made, by whom, and whether AI was involved. But Adobe applies automatic labels only when the content is wholly AI-generated. Otherwise the labels are opt-in on the part of the creator.

And platforms like X, where the altered arrest photo was posted, can strip content of such labels anyway (a note that the photo was altered was added by users). Platforms can also simply choose not to show the label; indeed, when Adobe launched the initiative, it noted that the Pentagon’s website for sharing official images, DVIDS, would display the labels to prove authenticity, but a review of the website today shows no such labels.

Noticing how much traction the White House’s photo got even after it was shown to be AI-altered, I was struck by the findings of a very relevant new paper published in the journal Communications Psychology. In the study, participants watched a deepfake “confession” to a crime, and the researchers found that even when they were told explicitly that the evidence was fake, participants relied on it when judging an individual’s guilt. In other words, even when people learn that the content they’re looking at is entirely fake, they remain emotionally swayed by it. 

“Transparency helps, but it isn’t enough on its own,” the disinformation expert Christopher Nehring wrote recently about the study’s findings. “We have to develop a new masterplan of what to do about deepfakes.”

AI tools to generate and edit content are getting more advanced, easier to operate, and cheaper to run—all reasons why the US government is increasingly paying to use them. We were well warned of this, but we responded by preparing for a world in which the main danger was confusion. What we’re entering instead is a world in which influence survives exposure, doubt is easily weaponized, and establishing the truth does not serve as a reset button. And the defenders of truth are already trailing way behind.

Update: This story was updated on February 2 with details about how Adobe applies its content authenticity labels.

Better Metrics for AI Search Visibility

The rise of AI-generated search and discovery is pushing merchants to measure their products’ visibility on those platforms. Many search optimizers are attempting to apply traditional metrics such as traffic from genAI and rankings in the answers. Both fall short.

Traffic. Focusing on traffic obscures the purpose of AI answers: to satisfy a need on-site, not to generate clicks.

AI-generated answers do not typically include links to branded websites. Google’s AI Overviews, for example, sometimes link product names to organic search listings instead.

Thus visibility does not equate to traffic. A merchant’s products could appear in an AI answer and receive no clicks.


Brand names cited in Google’s AI Overviews often link to organic search listings, such as this example for North Face hiking boots.

Rankings. AI answers often include lists. Many sellers are trying to track those lists to rank at or near the top. Yet tracking such rankings is impossible.

AI answers are unpredictable. A recent study by SparkToro found that AI platforms recommend different brands, in a different order, every time the same person asks the same question.

Better AI Metrics

Here are better metrics to measure AI visibility.

Product or brand positioning in LLM training data

Training data is fundamental to AI visibility because large language models default to what they know. Even when they query Google or other live sources, LLMs often use their training data to guide the search terms.

It’s therefore essential to track what LLMs retain about your brand and competitors and, importantly, what is incorrect or outdated. Then focus on providing missing or corrected data on your site and across all owned channels.

Manual prompting in ChatGPT, Claude, and Gemini (at least) will help identify the gaps. The prompts can be:

  • “What do you know about [MY PRODUCT]?”
  • “Compare [MY PRODUCT] vs [MY COMPETITOR’S PRODUCT].”
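When auditing more than a handful of products, the two prompt templates above can be generated programmatically before being run by hand or loaded into a tracking tool. A minimal sketch; the product and competitor names are hypothetical placeholders:

```python
def build_gap_prompts(product: str, competitors: list[str]) -> list[str]:
    """Build the manual training-data audit prompts for one product."""
    prompts = [f"What do you know about {product}?"]
    prompts += [f"Compare {product} vs {rival}." for rival in competitors]
    return prompts

# Hypothetical product and competitor names, for illustration only.
for prompt in build_gap_prompts("Acme TrailRunner boots", ["Summit Striders"]):
    print(prompt)
```

The generated list maps one-to-one onto the prompts a tracker like those mentioned below would monitor over time.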

Profound, Peec AI, and other AI visibility trackers can set up these prompts to monitor product positioning over time.

When using such visibility tools, keep in mind:

  • AI tracking tools enter prompts via LLMs’ APIs. Humans often see different results due to personalization and differences among AI models. API results are better for checking training data because LLMs likely return results from that data (versus live searches) to save resources.
  • The tools’ visibility scores depend entirely on the prompt. In the tools, separate branded prompts in a folder, as they will likely score 100%. Also, focus on non-branded prompts that reflect a product’s value proposition. Prompts irrelevant to an item’s key features will likely score 0%.

Most cited sources

LLM platforms increasingly conduct live searches when responding to prompts. They may query Google or Bing — yes, organic search drives AI visibility — or crawl other sources such as Reddit.

Citations, such as articles or videos, from those live searches influence the AI responses. But the citations vary widely because LLMs fan out across different (often unrelated) queries. So, trying to get included in every cited source is not realistic.

However, prompts often produce the same, influential sources repeatedly. These are worth exploring to include your brand or product. AI visibility trackers can collect the most cited URLs for your brand, product, or industry.

Brand mentions and branded search volume

Use Search Console or other traditional analytics tools to track:

  • Queries that contain your brand name or a version of it.
  • Number of clicks from those queries.
  • Impressions from those queries. The more AI answers include a brand name, the more humans will search for it.

In Search Console, create a filter in the “Performance” section to view data for branded queries.


Create a filter in Search Console’s “Performance” section to view data for branded queries.
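The same branded/non-branded split can be reproduced offline when working with exported query data. A rough sketch of what such a filter does; the brand variants here are hypothetical examples:

```python
# Hypothetical brand-name variants a merchant might match on.
BRAND_VARIANTS = ("north face", "northface", "the north face")

def is_branded(query: str) -> bool:
    """True if the query contains the brand name or a common variant of it."""
    q = query.lower()
    return any(variant in q for variant in BRAND_VARIANTS)

queries = ["north face hiking boots", "best hiking boots", "NorthFace jacket sale"]
branded = [q for q in queries if is_branded(q)]
print(branded)  # keeps the first and third queries
```

Tracking the branded share of clicks and impressions over time is what reveals whether AI mentions are translating into brand searches.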

LinkedIn Shares What Works For AI Search Visibility

LinkedIn published findings from its internal testing on what drives visibility in AI-generated search results.

The company, reportedly among the most-cited sources in AI responses, shared what worked for improving its presence in LLMs and AI Overviews. For practitioners adjusting to AI search, this is a rare look at what a heavily cited source tested and measured.

In a blog post, Inna Meklin, Director of Digital Marketing at LinkedIn, and Cassie Dell, Group Manager, Organic Growth at LinkedIn, detailed the tactics that got results.

Content Structure And Markup

LinkedIn found that how you organize content affects whether LLMs can extract and surface it. The authors wrote that headings and information hierarchy matter because “the more structured and logical your content is, the easier it is for LLMs to understand and surface.”

Semantic HTML markup also played a role, with clear structure helping LLMs interpret what each section is for. The authors called this “AI readability.”

The takeaway is that content structure isn’t just a UX consideration anymore. Proper heading hierarchy and clean markup may affect whether your content gets cited.

Expert Authorship And Timestamps

LinkedIn’s testing also pointed to credibility signals. The authors wrote:

“LLMs favor content that signals credibility and relevance, authored by real experts, clearly time-stamped, and written in a conversational, insight-driven style.”

Named authors with visible credentials and clear publication dates appeared to perform better in LinkedIn’s testing than anonymous or undated content.

The Measurement Change

LinkedIn added new KPIs alongside traffic for awareness-stage content, tracking citation share, visibility rate, and LLM mentions using AI visibility software. The company also said it’s creating a new traffic source in its internal analytics specifically for LLM-driven visits, and monitoring LLM bot behavior in CMS logs.

The authors acknowledged the measurement challenge:

“We simply couldn’t quantify how visibility within LLM responses impacts the bottom line.”

For teams still reporting traffic as the primary SEO metric, there’s a gap here. If non-brand informational content is increasingly consumed inside AI answers rather than on your site, traffic may undercount your actual reach.

Why This Matters

What caught my attention is how much this overlaps with what AI platforms themselves are saying.

SEJ’s Roger Montti recently interviewed Jesse Dwyer from Perplexity about what drives AI search visibility. Dwyer explained that Perplexity retrieves content at the sub-document level, pulling granular fragments rather than reasoning over full pages. That means how you structure content affects whether it gets extracted at all.

LinkedIn’s findings point in the same direction from the publisher side. Structure and markup matter because LLMs parse content in fragments. The credibility signals LinkedIn identified, like expert authorship and timestamps, appear to affect which fragments get surfaced.

When a heavily-cited source and an AI search platform land on the same conclusions independently, you have something to work with beyond speculation.

Looking Ahead

The authors are adopting a different mindset that practitioners can learn from:

“We are moving away from ‘search, click, website’ thinking toward a new model: Be seen, be mentioned, be considered, be chosen.”

LinkedIn indicated Part 3 of the series will include a guide on optimizing owned content for AI search, covering answer blocks and explicit definitions.

What is NLWeb (Natural Language Web)?

Natural language is quickly becoming the default way people interact with online tools. Instead of typing a few keywords, users now ask full questions, give detailed instructions, and are starting to expect clear, conversational answers. So, how can you make sure your content provides the answer to their question? Or better yet, how can you make it possible for them to interact with your website in a similar way? That’s where Microsoft’s NLWeb comes in. 

Meet NLWeb, Microsoft’s new open project

NLWeb, short for Natural Language Web, is an open project recently launched by Microsoft. The aim of this project is to bring conversational interfaces directly to websites, rather than users having to use an external chatbot that’s in control of what’s shown. Instead of relying on traditional navigation or search bars, NLWeb is designed to allow users to ask questions and explore content in a more personal, conversational way. 

At its core, NLWeb connects website content to AI-powered tools. It enables AI to understand what a website is about, what information it contains, and how that information should be interpreted for the purpose of returning personalized results. With this project, Microsoft is moving toward a more interoperable, standards-based, and open web that allows everyone to prepare their website for the future of search.  

This project was initiated and realized by R.V. Guha, CVP and Technical Fellow at Microsoft. Guha is one of the creators of widely used web standards such as RSS and Schema.org.  

How NLWeb works

NLWeb works by combining structured data, standardized APIs, and AI models capable of understanding natural language. Every NLWeb instance acts as a Model Context Protocol (MCP) server, which makes your content discoverable to any agent operating in the MCP ecosystem. 

Website owners use structured data to present their content in a machine-readable way. AI applications can then consume this data and answer user questions accurately by matching them to the most relevant information. The result is a conversational experience powered by existing content, available either directly on a website or through an online search tool, and usable by both human users and the AI agents collecting information. 

An important thing to note is that NLWeb is an open project. It’s not a closed ecosystem, meaning that Microsoft wants to make it accessible to everyone. The idea is to make it easy for any website owner to create an intelligent, natural language experience for their site, while also preparing their content to interact with and be discovered by other online agents, such as AI tools and search engines.  
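Structured data is the machine-readable layer this all rests on. Here is a minimal sketch of generating a Schema.org description with Python's json module; the product and its fields are hypothetical, and real NLWeb deployments will have their own ingestion details.

```python
import json

# A hypothetical product described in Schema.org vocabulary, the kind of
# machine-readable data that NLWeb-style tooling consumes.
product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Trail Runner X",
    "description": "Waterproof trail-running shoe.",
    "offers": {"@type": "Offer", "price": "129.00", "priceCurrency": "EUR"},
}

# Serialized, this would typically sit in a
# <script type="application/ld+json"> tag in the page's HTML.
jsonld = json.dumps(product, indent=2)
print(jsonld)
```

The same vocabulary a search engine reads for rich results is what lets an AI agent answer "do you have waterproof trail shoes?" against your catalog.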

How does natural language work? 

Natural language simply refers to the way we speak and write: full sentences that leave room for intent, context, and nuance. Unlike keywords or short commands, natural language reflects how people think and exactly what they are looking for. 

To give you an example: a focus keyphrase might be “running shoes trail.” Using natural language, the same request would look more like this: “What are the best running shoes for trail running in wet conditions?” 

Natural language in AI tools 

Modern AI tools are designed to understand this kind of input. The large language models behind these tools can analyze intent and context to generate responses that fulfill the given request. This is why conversational interfaces feel more intuitive than traditional search or forms. 

Tools like AI chat assistants, voice search, and even traditional search engines rely heavily on natural language understanding, and users have quickly adapted to it. 

The current state of search 

The way people find information online is changing fast, and that change is heavily influenced by AI-powered tools. We now expect personalized answers instead of a list of results to sort through ourselves. AI chatbots also let us follow up on our original search query, which turns search into a conversation instead of a series of clicks. 

Research from McKinsey & Company shows that AI adoption and natural language interfaces are becoming mainstream, with 50% of consumers already using AI-driven tools for information discovery. The majority even say it’s the top digital source they use to make buying decisions. As these habits continue to grow, websites that aren’t optimized for natural language risk becoming invisible in AI-generated answers. 

Why this is interesting for you 

The shift to natural language isn’t just a technical trend. As discussed above, it directly impacts your online visibility and competitive position. 

If users ask an AI system for information, only a handful of sources will be referenced in the response. Like search engines, AI platforms need to be able to read the information on your website. Being one of those sources can be the difference between being discovered and being overlooked. 

NLWeb collaborates with Yoast 

With NLWeb, you are communicating your website’s content clearly and in a standardized way. That means your brand, products, or expertise can appear in AI-powered answers instead of your competitors. To help as many website owners as possible benefit from this shift, Yoast is collaborating with NLWeb.   

The best part? If you’re a user of any of our Yoast plans designed for WordPress, you’re well ahead here. Yoast’s integration with NLWeb will roll out in phases, starting with functionality that helps our WordPress users express their content in ways AI systems can interpret accurately, with no additional setup required. So sit tight and let us help you prepare your website for the new world of search! 

NLWeb aims to make your content understandable not just for people, but for the AI systems that are increasingly relevant to your website’s discovery. 

Read more: Yoast collaborates with Microsoft to help AI understand Open Web »

YouTube CEO Reveals Your Video Marketing Strategy For 2026 via @sejournal, @gregjarboe

Every January, YouTube’s CEO publishes a letter outlining where the platform is headed. In most years, these updates read like a product roadmap. Neal Mohan’s 2026 letter reads more like a strategic manifesto.

“YouTube is the epicenter of culture,” Mohan writes, arguing that creators are now “reinventing entertainment and building the media companies of the future,” while YouTube becomes the infrastructure powering that transformation.

For digital marketers, this matters because YouTube is no longer simply a distribution channel for video ads or brand content. It is simultaneously:

  • A global television network.
  • A creator marketplace.
  • A commerce platform.
  • A discovery engine powered by AI.

Each of these identities has direct implications for how SEOs, content marketers, social media managers, and executives should plan their video strategies in 2026 and beyond.

Mohan organizes YouTube’s priorities around four themes: reinventing entertainment, building the best place for kids and teens, powering the creator economy, and supercharging and safeguarding creativity. When examined through a marketing lens, these themes reveal a clear message: The future of video marketing is integrated, creator-led, commerce-enabled, and increasingly measurable.

Creators Are Now Studios – And Brands Must Think Like Co-Producers

Mohan states bluntly that the era of dismissing YouTube content as “UGC” is over. Many creators now operate like full-scale studios, purchasing production facilities, hiring teams, and developing episodic series that rival traditional television.

This is more than a branding exercise. It represents a structural shift in how entertainment is financed, produced, and distributed.

Historically, brands approached creators as distribution partners. A product placement, a sponsored segment, or a one-off integration was often sufficient. But when creators control their own intellectual property and audience relationships, that transactional model breaks down.

The more effective model is co-production.

In a co-production model, brands are involved from the very beginning in shaping content formats, creative development is approached as a collaborative process, and campaigns are designed to unfold across multiple episodes or even entire seasons rather than as one-off executions.

This approach aligns with my coverage of the rising performance of long-term creator partnerships compared to short-term influencer activations.

From a business standpoint, this also improves efficiency. Instead of briefing dozens of creators on the same campaign, brands can focus on a smaller number of deep partnerships that generate recurring assets usable across organic, paid, and owned channels.

Practical actions:

  • Identify creators whose content themes align with your product category and brand values.
  • Propose multi-video or episodic collaborations rather than single integrations.
  • Negotiate usage rights so creator content can be repurposed in paid media.

Why this helps you work smarter:

One strong partnership can outperform 10 shallow ones.

Shorts At 200 Billion Daily Views Have Redefined Discovery

Mohan revealed that YouTube Shorts now average 200 billion daily views and that YouTube plans to integrate additional formats, such as image posts, directly into the Shorts feed. This confirms what many marketers have already observed: Shorts are now YouTube’s primary discovery surface. But the strategic implication goes deeper.

Shorts are not just a short-form video product. They are evolving into a multi-format social feed that blends elements of TikTok, Instagram Reels, and traditional social posts. For marketers, this means Shorts should be treated as the front end of a larger content system.

A high-performing ecosystem works by guiding audiences through different layers of engagement: short-form content introduces an idea, long-form videos explore it in depth, community posts and livestreams sustain engagement, and paid ads are used strategically to amplify what’s already working.

My guidance on optimizing YouTube Shorts emphasizes hook-driven openings, concise storytelling, and native formatting. Mohan’s roadmap reinforces that these are not “nice to have” best practices; they are essential for visibility.

Practical actions:

  • Build Shorts in clusters around a single topic.
  • Include subtle prompts directing viewers to long-form content.
  • Repurpose Shorts into vertical ads.

Why this helps you work smarter:

One long-form video can generate dozens of Shorts that extend its lifespan.

YouTube Is The New TV – Plan Accordingly

Mohan cites Nielsen data showing YouTube has been #1 in streaming watchtime in the U.S. for nearly three years. He also highlights YouTube TV innovations like customizable multiview and specialized subscription plans.

This reinforces a critical point: YouTube now dominates living-room viewing. For marketers, this collapses the old distinction between digital video and television.

If YouTube is increasingly functioning like television, production quality starts to matter again, long-form storytelling becomes a more viable format, and episodic content begins to make far more sense as a sustainable strategy.

This does not mean every brand needs a Netflix-style series. But it does mean brands should consider developing signature formats rather than only campaign-based videos.

Examples of this approach include monthly shows hosted by subject-matter experts, structured series focused on product education, and documentary-style content that showcases real customer success stories.

YouTube ads increasingly resemble connected TV buys, making YouTube an essential component of omnichannel planning.

Practical actions:

  • Develop at least one recurring video series.
  • Test YouTube Select or CTV placements.
  • Optimize thumbnails and titles for large-screen browsing.

Why this helps you work smarter:

A consistent series builds audience equity over time.

YouTube’s Commerce Push Turns Video Into A Direct Revenue Channel

Mohan’s emphasis on YouTube Shopping and frictionless in-app purchases signals a major evolution: YouTube is becoming a transactional platform. Historically, video excelled at awareness and consideration. Conversions often happened elsewhere. That model is changing.

With in-app purchasing, attribution becomes clearer, funnels shorten, and return on investment (ROI) improves.

For performance marketers, this means YouTube deserves a seat alongside search and social in lower-funnel planning.

I previously covered YouTube’s shoppable ad formats and best practices for measuring performance-driven video campaigns.

Practical actions:

  • Integrate product feeds with YouTube.
  • Tag videos with product links.
  • Use retargeting to reach viewers who watched product-related content.

Why this helps you work smarter:

Video can now drive measurable revenue, not just brand lift.

AI Will Multiply Output – But Strategy Will Separate Winners

Mohan notes that over 1 million channels use YouTube’s AI creation tools daily and that new capabilities will allow creators to generate Shorts using their own likeness and experiment with music and games. At the same time, YouTube is actively combating low-quality “AI slop.”

This dual message is important: AI is welcome, but quality is non-negotiable. For marketers, AI should be treated as an accelerator, not a replacement for thinking.

AI excels at handling many of the executional tasks in content creation, such as drafting scripts, generating multiple variations, translating content into different languages, and automating captions at scale.

Humans, however, continue to lead where deeper judgment and creativity are required, understanding audiences, crafting compelling narratives, and defining a clear, authentic brand voice.

It’s widely reported that AI-generated content without differentiation struggles to perform in search.

Practical actions:

  • Use AI for ideation and first drafts.
  • Apply human editorial oversight.
  • Maintain clear brand voice guidelines.

Why this helps you work smarter:

AI reduces production time so you can focus on strategy.

Measurement Is Shifting Toward Business Impact

Mohan’s focus on diversified monetization signals YouTube’s broader emphasis on outcomes. For marketers, this means moving beyond surface-level metrics.

Rather than defaulting to surface-level questions like “How many views did we get?”, it’s more useful to ask whether watch time increased, brand lift improved, and conversions actually rose.

I’ve previously outlined frameworks for measuring video ROI that connect engagement to revenue.

Practical actions:

  • Track watch time and retention.
  • Use brand lift studies.
  • Attribute conversions where possible.

Why this helps you work smarter:

You optimize based on results, not vanity metrics.

The Strategic Bottom Line

Neal Mohan’s 2026 roadmap reveals that YouTube is evolving into a unified ecosystem where creators, commerce, AI, and entertainment converge. For digital marketers, the opportunity is not to chase every new feature. It is to design integrated systems that:

  • Use Shorts for discovery.
  • Use long form for depth.
  • Use creators for trust.
  • Use paid media for scale.
  • Use commerce integrations for conversion.

The marketers who succeed in 2026 will not be the ones who produce the most videos. They will be the ones who build the smartest video ecosystems.


15 Fixes To Improve Low Conversion Rates In Google Ads via @sejournal, @brookeosmundson

Many Google Ads accounts generate steady traffic but struggle to turn that traffic into outcomes the business actually values, such as purchases, qualified leads, or demo requests.

That disconnect usually isn’t caused by a lack of demand or a broken platform. It’s more often the result of small, fixable issues across the account that quietly compound over time.

Keyword targeting drifts. Ad copy loses alignment with landing pages. Bid strategies stop matching how users actually convert.

None of these problems feel dramatic on their own, but together they can pull conversion rates down and make performance harder to scale.

The good news?

Improving conversion rates in Google Ads rarely requires rebuilding an account from scratch. In most cases, it comes down to tightening fundamentals, being more intentional with the levers already in place, and using performance data with a bit more discipline.

This article walks through 15 practical ways PPC managers can improve Google Ads conversion rates using changes that are realistic to implement and straightforward to test. The goal isn’t more traffic. It’s getting better results from the traffic you already pay for.

1. Implement Proper Conversion Tracking

This first one seems like a no-brainer, but many accounts still overlook it.

The only way to understand whether your Google Ads campaigns are performing is to set up conversion tracking properly.

The most common ways to implement Google Ads conversion tracking are:

  • The Google tag (gtag.js) placed directly on your website.
  • Google Tag Manager.
  • Importing conversions from Google Analytics 4.

The other key component of proper conversion tracking is identifying which conversions make sense to track.

Oftentimes, brands have one big conversion in mind. For ecommerce, that is likely a purchase or a sale. For B2B companies, it’s likely a lead or a demo signup.

But what about all the other available touchpoints before a customer makes that leap?

Consider tracking “micro” conversions on your sites to really identify the positive impact your PPC campaigns have.

Examples of “micro” conversions to track include:

  • Email newsletter signups.
  • Free samples.
  • Whitepaper download.
  • Webinar signup.
  • And more.

Taking a step back from the ins and outs of the platforms helps you see the journey through the lens of a consumer. Accurately measuring each step of the purchase journey can make a big impact on how you structure and optimize your Google Ads campaigns.

2. Optimize Keyword Lists

The second way to help increase Google Ads conversion rates is continuous optimization of keyword lists.

The Google Ads search terms report is a perfect tool for this. Not only can you see what users are searching for, in their own words, that leads to conversions, but you can also see what is not converting.

We’ll get to negative keywords later.

A Google Ads search terms report with click and conversion rate data.
Screenshot taken by author, January 2026

Keep in mind which match types you’re using throughout the keyword optimization process.

Broad match keywords have the most leniency when it comes to which searches can trigger your ad. They also have the largest reach because of their flexible nature.

Turning some of your top-performing Broad match keywords into Exact match can help increase Quality Scores, which can lead to lower costs per click (CPCs) and better efficiency for your campaigns.
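The triage described above can be sketched in a few lines. This assumes a search terms report exported as (search term, clicks, conversions) rows; the column shape, the 20-click minimum, and the 5% conversion-rate threshold are all illustrative assumptions, not Google Ads defaults.

```python
def triage_search_terms(rows, min_clicks=20):
    """Sort search terms into Exact match candidates and negative candidates.

    rows: (search_term, clicks, conversions) tuples, e.g. aggregated
    from an exported search terms report (the shape is an assumption).
    """
    exact_candidates, negative_candidates = [], []
    for term, clicks, conversions in rows:
        if clicks < min_clicks:
            continue  # not enough data to judge either way
        if conversions == 0:
            negative_candidates.append(term)   # spends clicks, never converts
        elif conversions / clicks >= 0.05:     # 5% CVR cutoff, arbitrary
            exact_candidates.append(term)      # promote to Exact match
    return exact_candidates, negative_candidates

sample = [
    ("running shoes trail", 120, 9),
    ("free shoe repair", 45, 0),
    ("trail shoes sale", 10, 1),
]
print(triage_search_terms(sample))
```

Terms below the click minimum are deliberately left alone; cutting them too early risks pruning queries that simply haven't had enough traffic yet.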

3. Match Ad Copy To Landing Pages

Alright, so you’ve gotten a user to click on your ad. Great!

But you’re finding that not a lot of people are actually purchasing. What gives?

Surely, it must be a problem with the PPC campaigns.

Not always.

Typically, one of the most common reasons users leave a website right after clicking on an ad has to do with a mismatch of expectations.

Simply put, what the user was promised in an ad was not present or prominent on the landing page.

A great way to optimize conversion rates is to ensure the landing page copy is tailored to match your PPC ad copy.

Doing this ensures a relatively seamless user experience, which can help speed up the purchase process.

4. Use Clear Call-To-Action

If a user isn’t performing the actions you’d expect to after clicking on an ad, it may be time to review your ad copy.

Since the emergence of Responsive Search Ads (RSAs), I’ve seen many redundant headlines and generic calls-to-action (CTAs).

No wonder a user doesn’t know what you want them to do!

When creating CTAs either in ad copy or on the landing page, keep these principles in mind:

  • Use action-oriented language that clearly communicates what you want them to do.
  • For landing pages, make sure the CTA button is visually distinct and easily clickable. It helps if a CTA is shown before a user has to scroll down to find it.
  • Test different CTAs to determine what resonates best with users.

Examples of action-oriented CTA language could sound like:

  • “Download Now.”
  • “Request A Quote.”
  • “Shop Now.”

Try steering away from generic language such as “Learn More” unless you’re truly running a more top-of-funnel (TOF) campaign.

5. Optimize For Mobile

With mobile phones so prevalent in our society, it’s shocking how many websites are still not optimizing their mobile experience!

Creating a landing page with desktop top-of-mind should really be revisited, given that mobile traffic has overtaken desktop.

So, what can you do to help increase your conversion rates on mobile?

  • Use a responsive web design to accommodate different mobile layouts.
  • Make sure the site loads quickly.
  • Adapt mobile-specific elements, like CTA placement, so they’re easy to see and tap.
  • Optimize form fills for mobile devices.

6. Experiment With Ad Copy Testing

Ad copy is one of the biggest levers you can control in your PPC campaigns.

Even slight changes or tweaks to a headline or description can have a big impact on CTR and conversion rates.

Having multiple ad copy variants is crucial when trying to understand what resonates most with users.

Part of the beauty of Google’s Responsive Search Ads is the number of headline inputs you can have at once. Google’s algorithm then determines the best-performing ad copy combinations to increase conversion rates.

Google Ads also has tools built into the platform for more controlled testing if that is a route you want to take.

You can create ad variants or create an experiment directly in Google Ads for more precise A/B testing.

A screenshot of where to find Google Ads Experiments in the online interface.
Screenshot taken by author, January 2026

It’s also important to test one element at a time to isolate the impact of each change. Testing too many elements at once can muddy up analysis.
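When comparing two ad variants, a quick significance check helps separate real lifts from noise before declaring a winner. Below is a minimal sketch using a two-proportion z-test with the normal approximation; the click and conversion counts are made up for illustration.

```python
from math import erf, sqrt

def ab_significance(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for a two-proportion z-test (normal approximation)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical test: variant B converts 5.5% vs. 4.0% for A over 2,000 clicks each.
p = ab_significance(80, 2000, 110, 2000)
print(f"p-value: {p:.3f}")
```

A p-value under 0.05 is the usual (if arbitrary) bar; with smaller click volumes, differences that look big are often not distinguishable from chance.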

7. Utilize Ad Assets

Ad assets are a great way to help influence a click to your website, which can help improve conversion rates.

Assets like callouts, structured snippets, and sitelinks can provide additional detail that couldn’t be shown in headlines or descriptions.

When your Ad Rank is higher, you have a better likelihood of showing ad assets, which helps increase the overall visibility of your ad.

Your ad assets can be customized to fit your campaign goals, and can even show specific promotions, special product features, and social proof like seller ratings.

8. Don’t Be Shy With Negative Keywords

A sound negative keyword strategy is one of the best ways to improve Google Ads conversion rates.

You may be wasting your paid search budget on keywords that aren’t producing conversions.

You may also notice that some broad keywords have gone rogue and are triggering your ads for terms they definitely shouldn’t be showing up for!

As mentioned earlier, the search terms report can help mitigate a lot of these types of keywords.

You can choose to add negative keywords at the following levels:

  • Ad group.
  • Campaign.
  • Negative keyword lists to apply to campaigns.

You also have the ability to add negative keywords as broad, phrase, or exact match.

Eliminating poor-performing keywords frees up your budget to optimize for the core keyword sets that lead to conversions.

9. Set Proper Bid Strategies

The type of bid strategy you choose for your Google Ads campaigns can make or break performance.

In recent years, Google has moved towards its fully automated bidding strategies, using machine learning to align performance with the chosen goal and bid strategy.

Currently, Google has four Smart Bidding strategies focused on conversion-based goals:

  • Target CPA (Cost-Per-Action): Helps increase conversions while targeting a specific CPA.
  • Target ROAS (Return on Ad Spend): Helps increase conversions while targeting a specific ROAS.
  • Maximize Conversions: Optimizes for as many conversions as possible, with no CPA target, and spends the entire budget.
  • Maximize Conversion Value: Optimizes for conversion value, with no ROAS target, and spends the entire budget.

Choosing the right bidding strategy is just one piece of the puzzle.

The inputs of the chosen bid strategy are just as important, where more context is needed to have a successful campaign.

For example, suppose you choose a Target CPA bid strategy for a search campaign and set the target CPA to $50.

However, in that campaign, you notice that your average CPC ranges anywhere from $10-$20.

Suddenly, your impressions go down, and you’re not sure what’s happening!

It could be your bid strategy inputs.

In the example above, if you have high CPCs but set your target CPA to just slightly higher than the CPCs, that means you need to have a stellar conversion rate in order to stay within that $50 CPA threshold.

Additionally, many make the mistake of setting the same target CPA for all campaigns, regardless of brand or non-brand intent.

Most often, non-brand keywords will have much higher CPAs than brand terms, so the inputs should be set accordingly based on performance.

Make sure you initially set your Target CPA thresholds high enough for the campaigns to gather the data they need before tightening them toward your goal.
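The arithmetic behind the example above is simple: since CPA = CPC / conversion rate, the conversion rate you would need is CPC divided by target CPA. A quick check with the numbers from the example:

```python
def required_cvr(avg_cpc, target_cpa):
    """Conversion rate needed for a given average CPC to hit a target CPA.

    CPA = CPC / CVR, so the required CVR = CPC / CPA.
    """
    return avg_cpc / target_cpa

# The example above: $10-$20 average CPCs against a $50 target CPA.
for cpc in (10, 20):
    print(f"CPC ${cpc}: need a {required_cvr(cpc, 50):.0%} conversion rate")
```

A 20-40% conversion rate is far above what most accounts see, which is exactly why a $50 target CPA throttles impressions at those CPCs.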

10. Add Audience Segmentation

As keyword match types tend to get looser, there is more emphasis on leveraging audience segmentation to reach the right people.

Using audience segments allows you to tailor your ads towards specific groups or utilize audiences as exclusions so your ads aren’t triggered.

Examples of audience segments within Google Ads include:

  • Demographics: Can be based on gender, age, household income, education, and other areas.
  • Interests and behaviors: Based on hobbies, lifestyle choices, website browsing behavior, and purchase history.
  • Actively researching or planning: Based on a user’s past or recent purchase intent.
  • Past interactions with your business: Can be based on previous engagements like website visits, add-to-cart, other online interactions, existing customer relationship management (CRM) data, and more.

By segmenting audiences within your PPC campaigns, you can customize ad messaging based on those segments.

This can lead to maximizing relevance and engagement, ultimately increasing conversion rates.

You can also use insights from GA4 to inform your segmentation strategy to identify high-value audience segments.

11. Create A Retargeting Strategy

On average, ecommerce conversion rates range from 2.5-3%.

That means roughly 97% of visitors leave a website without purchasing. Talk about a missed opportunity!

With a retargeting strategy in place, you have the opportunity to win back those missed customers and turn them into your brand champions.

Retargeting keeps track of website or app visitors who don’t take the action you’d like them to. You can make retargeting lists as niche or as broad as you prefer, but keep in mind that audiences must reach a minimum size before they’re eligible to serve.

Examples of retargeting segments could be:

  • Users segmented by the category pages of a website they visited.
  • Users who added an item to their cart but didn’t purchase.
  • Users who viewed at least three to five pages.

These segments can be used to create retargeting campaigns, which show those users ads to help increase the likelihood of them converting. Be sure to set those ad frequencies within the campaign so you don’t annoy your audience, though!

12. Offer Incentives

These days, shoppers have grown accustomed to expecting a discount whenever they purchase.

There’s certainly an argument that programming people to buy only during a sale can diminish a product’s value perception.

However, there are strategies that can boost sales and conversion rates without devaluing the product.

If possible, try making the offers more personal towards the user and their behavior.

Additionally, you can set smaller windows of sale times and incorporate real-time purchase behavior so users can see how many people have taken advantage of the sale.

13. Choose The Right Location Settings

One of the easiest ways to waste precious PPC dollars is to set up location targeting wrong.

Google Ads offers multiple ways to geo-target locations within the campaign settings to help reach your goals.

Location targeting allows you to set specific locations for your ads to show, including:

  • City.
  • Region.
  • State.
  • Country.
  • Radius.

For example, if you have products that can only be purchased in the United States, you would likely target “United States” within the campaign setting.

Nowadays, it’s not as easy as just choosing “United States” (in this example). This is where advanced settings come in.

Within the Google campaign settings, you have two location-targeting options:

  • Presence or interest: People in, regularly in, or who’ve shown interest in your targeted location.
  • Presence: People in or regularly in your targeted locations.
Google Ads location targeting options.
Screenshot taken by author, January 2026

In the example above, it would make sense to choose “Presence” – otherwise, the campaign could show ads in areas where the products aren’t available.

If users in those countries click on the ad but see they can’t purchase when they get to the website, that is a recipe for poor conversion rates.

14. Use Social Proof To Build Trust

Brands can leverage social proof in their Google Ads campaigns to help boost conversion rates.

The goal of using social proof is to incorporate elements that demonstrate positive sentiment from customers, endorsements, or validation that the customer’s needs will be met.

There are many ways brands can add social proof to their campaigns:

  • Using the seller ratings ad asset.
  • Using callout ad assets.
  • Adding customer reviews and testimonials to the landing page.
  • Sharing case studies and success stories on the landing page.

Additionally, strategies like creating limited-time offers with an emphasis on social proof can help boost sales and conversion rates.

This could mean showing in real time how many customers have taken advantage of the offer, creating urgency for the customer to act.

Focusing on social proof and validation can build trust, credibility, and confidence among potential customers – ultimately leading to higher conversion rates.

15. Schedule Your Ads Based On Performance

Ad scheduling is an underestimated tool in Google Ads that helps improve conversion rates.

The beauty of ad scheduling is that you can control when your ad will or will not show.

Make sure you have ample budget, and schedule ads for when potential customers are most actively searching and most engaged.

This can lead to higher effectiveness of the campaign and increased conversion rates.

For example, if you run a B2B software company, it’s highly unlikely that potential customers are searching in the middle of the night.

Optimize your spend by not showing ads at certain times of the day (such as the middle of the night) or days of the week (like weekends).

Google Ads scheduling capabilities.
Screenshot taken by author, January 2026

If you’re not sure how to start optimizing campaigns by time, consider the following:

  • Use tools like GA4 to understand when most purchases are happening on the website.
  • Look for trends in website traffic, conversion times, engagement rates, and similar metrics by time of day.
  • Align your ad schedule with peak business operations times, especially if customer service is involved.
  • Adjust ad schedules around key events like holidays or peak seasonality.

Turning Conversion Rate Optimization Into A Habit

Improving conversion rates in Google Ads is rarely tied to a single optimization or setting change. Strong performance usually comes from a series of small decisions that are reviewed, tested, and refined over time.

When those decisions stop getting attention, efficiency tends to slip, even in accounts with solid traffic and budgets.

The most effective PPC teams treat conversion rate optimization as an ongoing process rather than a one-time project. They regularly question assumptions, revisit historical decisions, and adjust based on how users behave today, not how the account was originally built.

If there’s one takeaway from these 15 tactics, it’s that better results don’t always come from spending more. They come from making the traffic you already earn more relevant, more intentional, and easier to convert.

Featured Image: Billion Photos/Shutterstock

Controversial Proposal To Label Sections Of AI Generated Content via @sejournal, @martinibuster

A new proposal was published for an HTML attribute that would notify crawlers which parts of a web page are AI generated. The proposal is quickly becoming relevant because of new rules coming into effect in Europe this summer, but some question whether it is the right solution to that problem.

AI Disclosure

The proposal was created by David E. Weekly (LinkedIn profile), who noted that existing proposals provide a more general signal that an entire web page is AI generated, but nothing labels a single section of an otherwise human-authored page.

Weekly’s proposal acknowledges the reality that many web pages are partially AI generated. One example is the AI generated summaries of news content. The proposal specifically mentions news sites that contain a sidebar with AI generated summaries.

The proposal suggests creating an HTML attribute that can be applied at the section level using the