Breaking Down Optmyzr’s Study on Amazon’s Exit from Google Ads via @sejournal, @brookeosmundson

Just under one month ago, on July 23, 2025, Amazon vanished from Google Shopping ads overnight.

No trial, no warning, no phased retreat. One of the biggest advertisers on the platform simply stepped back, leaving a noticeable gap in auctions.

For many retailers, this shift opened the door to new opportunities. It’s tempting to think they would breathe easier: less competition, lower costs, more conversions.

But as Fred Vallaeys puts it, the reality is more nuanced: “more volume, less value.” 

Optmyzr’s study suggests that the opportunities created by Amazon’s exit didn’t always translate into stronger performance. Read on to further explore Optmyzr’s findings on the great Amazon exit.

Key Findings from Optmyzr’s Study on Amazon Leaving Google Ads

Optmyzr compared performance across two matched weeks: July 23-29, 2025 vs. July 16-22, 2025.

They excluded Prime Day and compared matching days of the week to isolate the effect of Amazon’s exit.

The findings were significant in major metric categories, including:

  • Impressions +5%
  • Clicks +7.8%
  • Cost -1%
  • Avg. CPC -8.3%

This first set of pre-click metrics looked promising for many retailers. But what about conversions?

That data told another story:

  • Conversion volume stayed flat
  • Conversion Value -5.5%
  • Conversion Rate -7.2%
  • ROAS -4.4%

What does this mean? Ads got cheaper and drew more clicks as a result of Amazon leaving Google Ads. But overall, that traffic brought in less value for retailers.
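As a quick sanity check, the reported deltas hang together arithmetically: cost is clicks times average CPC, conversion rate is conversions per click, and ROAS is conversion value per unit of cost. A short script using only the percentages from the study confirms the internal consistency:

```python
# All figures are the week-over-week deltas reported in Optmyzr's study.
clicks = 1.078          # clicks +7.8%
avg_cpc = 1 - 0.083     # avg. CPC -8.3%
conv_volume = 1.0       # conversion volume flat
conv_value = 1 - 0.055  # conversion value -5.5%

cost = clicks * avg_cpc           # cost = clicks x avg. CPC
conv_rate = conv_volume / clicks  # conversion rate = conversions / clicks
roas = conv_value / cost          # ROAS = conversion value / cost

print(f"cost: {cost - 1:+.1%}")                  # ~ -1%, as reported
print(f"conversion rate: {conv_rate - 1:+.1%}")  # ~ -7.2%, as reported
print(f"ROAS: {roas - 1:+.1%}")                  # ~ -4.4%, as reported
```

In other words, every headline metric in the study follows from just two facts: clicks got cheaper and more plentiful, while conversions and their value did not keep up.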

The ‘Volume Trap’ Defined

Why did conversions fall even as traffic increased? The answer lies in expectations.

Amazon‑seeking shoppers clicked competitor ads but still expected Amazon-level pricing, quick shipping, and seamless service.

When most brands couldn’t meet that bar, conversions and value slipped. That’s the classic “volume trap”: traffic that looks good on the surface but doesn’t deliver the bottom-line results.

Vallaeys elaborated on the volume trap, explaining why it happens and how to escape it.

The volume trap happens when advertisers get excited about more traffic but don’t stop to ask whether those clicks are truly valuable. Driving incremental volume is often not difficult (especially if you’re willing to accept lower-value traffic) but the real question is whether that traffic can actually convert profitably.

When Amazon exited Google Ads, we observed shoppers clicking on competitor ads for the same products but then bouncing back to Amazon. Why? Because Amazon has built unmatched trust with consumers: fast Prime shipping, predictable pricing, and a familiar checkout experience. That shows us that you can’t just replace the clicks and expect the same outcome. If your value proposition doesn’t align with what consumers expect, you may see more traffic but not more revenue.

To escape this trap, advertisers need to reframe their strategy. Instead of chasing short-term click growth, they should focus on positioning themselves differently. That might mean emphasizing local sourcing, higher-quality products, or a more personal experience. These are factors that Amazon can’t replicate. It also means looking beyond the immediate conversion. Even if you don’t win the sale today, you can start building a relationship that leads to long-term customer loyalty.

The real key is shifting the mindset: don’t just measure success by volume. Measure it by the value of the relationships you create.

To summarize the volume trap, what Optmyzr showed in their study is that more clicks don’t automatically equal more revenue. If you can’t compete with Amazon-like qualities (price, shipping, etc.), lean into what makes your offer unique and build relationships that pay off in the long run.

Which Categories Gained and Which Struggled After Amazon’s Exit

Not every category reacted the same way. Some thrived, while others got stuck in the volume trap:

  • Electronics: The standout success story. Clicks +11.5%, Conversions +81.3%, Conversion Value +10.9%, ROAS +7.1%, and all with lower CPCs.
  • Home & Garden: Traffic surged (+13.1%), but Conversion Value dropped 7.5%, ROAS -7.7%. More volume, but less value per sale.
  • Sporting Goods: Conversions rose 20.7%, but value declined nearly 10%. Shoppers likely bought lower-priced items or held back because they couldn’t find Amazon-level deals.
  • Health & Beauty: Conversions increased 14.6%, but conversion value was essentially flat (+0.3%) and ROAS rose only slightly. Gains were masked by low-value purchases.
  • Tools & Hardware, Apparel & Accessories, Arts & Entertainment, Furniture, Vehicles & Parts: All showed some version of the volume trap: modest increases in clicks or conversions, but declining value and ROAS.

What This Means for Advertisers Managing Google Shopping Campaigns

Optmyzr’s data showed what happened when Amazon suddenly stepped out of the picture: cheaper clicks, more traffic, but ultimately lower value.

That’s the data side of the story.

Where marketers need to lean in is interpreting what that really means for account management.

Optmyzr’s takeaways offer some practical perspectives for advertisers to consider.

  • Volume doesn’t always equal victory. More clicks might look great on the surface, but if those shoppers aren’t buying (or if they’re buying lower-ticket items), the net impact on your business can be negative. This isn’t something Optmyzr explicitly called out, but it’s the natural next step in interpreting their findings.
  • Category context is critical when evaluating success. Optmyzr highlighted Electronics as a category that saw improved conversions and ROAS. Why? Because those retailers could match or even surpass Amazon on fulfillment, trust, and pricing. If you’re in a category where you can’t deliver the same level of convenience, you’re more likely to see the opposite effect.
  • Measure what matters to your business. The study found that impressions, clicks, and traffic volume all increased. But the metrics that matter (conversion value and ROAS) told a different story. That’s the reminder for advertisers: make sure your optimizations focus on value, not vanity metrics.
  • Differentiate or risk being forgotten. If you can’t compete with Amazon on price or logistics, your advantage has to come from somewhere else. That could be curated products, specialty expertise, or building a stronger brand identity.

How to Communicate These Changes to Leadership

Major changes in the SERPs can trigger knee-jerk reactions from advertisers.

But once you have those changes under control, how do you explain this fundamental shift to leadership?

Vallaeys offered his take and recommendations on how PPC managers can craft the conversation.

When talking to executives, the key is to frame the story in business outcomes, not marketing jargon. Most C-suite leaders don’t care about CPCs, impression share, or auction dynamics. But they absolutely care about revenue, profit, and the quality of customers being acquired.

So, instead of saying ‘our clicks went up but our ROAS went down,’ you might say: ‘We gained more traffic after Amazon left the auction, but much of that traffic didn’t convert as profitably because customers expected Amazon-level pricing and delivery that we couldn’t match.’ That ties the marketing story directly to financial outcomes they already think about every day.

It also helps to remind executives that these dynamics aren’t random: they’ve experienced the same challenges competing against Amazon before. If you didn’t have the lowest price or fastest shipping then, those factors don’t magically go away just because Amazon paused ads. This makes it easier for them to understand why extra clicks don’t necessarily mean extra profit.

By anchoring the conversation in the language of business value rather than marketing metrics, PPC pros can build credibility and keep executives aligned on realistic expectations.

So don’t talk about CPCs, but talk about revenue and profit. The C-suite cares about business outcomes, not auction mechanics.

Will Amazon Return to Google Ads Soon?

Since Amazon left Google Ads so abruptly, the question arises: will it be returning anytime soon?

I asked Vallaeys for his perspective on the possibility. He stated:

It’s impossible to know exactly how long Amazon will stay out of Google Ads, but we can make some educated guesses. One possibility is that they’re testing incrementality: pausing ads to see how much business Google truly drives versus organic or other channels. Another is operational: after a strong Prime Day, they may be letting inventory rebalance before reinvesting. Given the timing, it would be surprising if they didn’t return for the holiday season, especially Black Friday and Cyber Monday, when they typically maximize their marketing push.

If and when Amazon comes back, advertisers should focus on fundamentals. That means managing budgets carefully to make sure spend is allocated to the areas with the highest potential, and leaning on smart bidding to ensure that the clicks you do buy are meeting profitability targets. Performance monitoring and conversion tracking need to be absolutely solid so automated systems have the right data to optimize against.

To sum up, there’s no way to truly know what Amazon’s next move on Google will be (or won’t be). But advertisers and retailers alike can use this opportunity to renew their focus on the basics of advertising.

Lessons Beyond the Traffic Spike

Amazon’s sudden exit from Google Shopping ads shattered the comfortable assumption that less competition equals better returns.

What followed wasn’t universal lift. It was more like a complicated shuffle, where brands saw more traffic but not necessarily more profit.

Use this moment as a reminder: measure what matters. Traffic and impressions are only valuable insofar as they drive conversions worth your cost.

In some categories, you can meet Amazon head-on (like Electronics). In most, you’d be wiser to double down on what makes your business unique, and invest in customers who value your story, service, and specialization, not just a bargain.

You can read Optmyzr’s full study here.

Google: Why CrUX & Search Console Don’t Match On Core Web Vitals via @sejournal, @MattGSouthern

Google’s Barry Pollard recently explained why website owners see different Core Web Vitals scores in Chrome User Experience Report (CrUX) versus Google Search Console.

The short answer: both tools can be correct because they measure different things.

Pollard addressed the issue on Bluesky after questions about sites showing 90% “good” page loads in CrUX but only 50% “good” URLs in Search Console. His explanation can help you decide which metrics matter for your SEO work.

CrUX vs. Search Console

CrUX and Search Console measure performance differently.

CrUX counts page views and reflects how real Chrome users experience your site across visits. Every visit is a data point. If one person hits your homepage ten times, that’s ten experiences counted.

In Pollard’s words:

“Most CrUX data is measured by ‘page views’.”

He added:

“Users can visit a single page many times, or multiple pages once. 90% of your ‘page views’ may be the home page.”

Search Console works differently. It evaluates individual URLs and groups similar pages, giving you a template-level view of page health across the site. It’s a different lens on the same underlying field data sourced from CrUX.

Google’s documentation confirms: CrUX is the official Web Vitals field dataset, and the Core Web Vitals report in Search Console is derived from it and presented at the URL/group level.
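The 90%-vs-50% mismatch Pollard was asked about falls straight out of these two aggregation methods. A toy example (hypothetical pages and traffic numbers) shows how the same pass/fail data yields both figures:

```python
pages = [
    # (url, monthly page views, passes Core Web Vitals?)
    ("/",           8500, True),   # popular pages: fast and heavily cached
    ("/product-a",   300, True),
    ("/pricing",     200, True),
    ("/blog/1",      400, False),  # long-tail pages: slow, rarely visited
    ("/blog/2",      300, False),
    ("/blog/3",      300, False),
]

# CrUX lens: every page view is a data point, so heavy traffic dominates.
total_views = sum(views for _, views, _ in pages)
good_views = sum(views for _, views, ok in pages if ok)

# Search Console lens: each URL counts once, regardless of traffic.
good_urls = sum(1 for _, _, ok in pages if ok)

print(f"by page view (CrUX lens): {good_views / total_views:.0%} good")    # 90%
print(f"by URL (Search Console lens): {good_urls / len(pages):.0%} good")  # 50%
```

Both numbers are correct; they simply weight the same field data differently.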

Why Both Metrics Matter

Should you focus on page views or individual pages? That depends on your goals.

Pollard puts the choice on you:

“Should you care about ‘page views’ or ‘pages’? Well that’s up to you!”

High-traffic pages affect more people, so they often deserve first priority. They also tend to run faster because they get more attention and caching.

But don’t ignore slower pages. As Pollard suggested:

“Maybe they’d be visited more if not so slow?”

The best approach uses both views. Keep popular pages fast for current visitors, and improve slower sections to raise overall site quality and discoverability.

Action plan

When CrUX looks good but Search Console shows many problem URLs, it usually means your most-visited pages are fine while long-tail sections need work. That’s useful direction, not a conflict.

Start with the pages that drive the most sessions and revenue, then work through other templates so URL-level health catches up. As you assess changes, always check what each tool is counting and over which time window.

Looking ahead

Don’t panic when the numbers don’t align. They’re showing you different views of the same reality: user experiences (CrUX) and page health by URL/group (Search Console). Use both to guide your roadmap and reporting.

OpenAI Announces Low-Cost Subscription Plan: ChatGPT Go via @sejournal, @martinibuster

OpenAI is rolling out a new subscription tier called ChatGPT Go, a competitively priced version that will initially be available only to users in India. It features ten times higher limits on messages, image generations, and file uploads than the free tier.

ChatGPT Go

OpenAI is introducing a new low-cost subscription plan that will be available first in India. The cost of the new subscription tier is 399 Rupees/month (GST included). That’s the equivalent of $4.57 USD/month.

The new tier includes everything in the Free plan plus:

  • 10X higher message limits
  • 10x more image generations
  • 10x more file uploads
  • Twice as much memory

According to Nick Turley of ChatGPT:

“All users in India will now see prices for subscriptions in Indian Rupees, and can now pay through UPI.”

OpenAI’s initial announcement shared availability details:

“Available on web, mobile (iOS & Android), and desktop (macOS & Windows).

ChatGPT Go is geo-restricted to India at launch, and is able to be subscribed to by credit card or UPI.”

Featured Image by Shutterstock/JarTee

Google Trends API Alpha: Mueller Confirms Small Pilot Group via @sejournal, @MattGSouthern

Google says the new Trends API is opening to a “quite small” set of testers at first, with access expanding over time. The company formally announced the alpha at Search Central Live APAC.

On Bluesky, Google Search Advocate John Mueller tried to set expectations for SEO professionals, writing:

“The initial pilot is going to be quite small, the goal is to expand it over time… I wouldn’t expect the alpha/beta to be a big SEO event :)”

Google’s own announcement also describes access as “very limited” during the early phase.

What Early Testers Get

The API’s main benefit is consistent scaling.

Unlike the Trends website, which rescales results between 0 and 100 for each query set, the API returns data that stays comparable across requests.

That means you can join series, extend time ranges without re-pulling history, and compare many terms in one workflow.

Data goes back 1,800 days (about five years) and updates through two days ago. You can query daily, weekly, monthly, or yearly intervals and break results down by region and sub-region.

At the launch session, Google showed example responses that included both a scaled interest value and a separate search_interest field, indicating a raw-value style metric alongside the scaled score. Google also said the alpha will not include the “Trending Now” feature.

Why There’s High Interest

If you rely on Trends for research, the consistent scaling solves a long-standing pain point with cross-term comparisons.

You can build repeatable analyses without the “re-scaled to 100” surprises that come from changing comparator sets.
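A small illustration of why per-request rescaling breaks comparisons (all interest values here are invented): on the website, the same term gets different numbers depending on what it is requested alongside, which is exactly what the API’s consistent scaling avoids.

```python
def website_style_rescale(series):
    """Scale raw interest values so the maximum becomes 100 (website behavior)."""
    peak = max(series)
    return [round(v / peak * 100) for v in series]

raw_term_a = [20, 40, 80]  # made-up raw interest for term A over three weeks

# Requested on its own, term A peaks at 100:
print(website_style_rescale(raw_term_a))  # [25, 50, 100]

# Requested alongside a bigger term B, the same three weeks get new numbers:
raw_term_b = [200, 180, 150]
combined_peak = max(raw_term_a + raw_term_b)
print([round(v / combined_peak * 100) for v in raw_term_a])  # [10, 20, 40]
```

With consistently scaled API data, the values for term A would stay the same across both requests, so series can be joined and extended without re-pulling history.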

For content planning, five years of history and geo breakdowns support more reliable seasonality checks and local targeting.

Looking Ahead

The small pilot suggests Google wants feedback from different types of users. Google is prioritizing applicants who have a concrete use case and can provide feedback.

In the meantime, you can continue to use the website version while preparing for API-based comparisons later.


Featured Image: PhotoGranary02/Shutterstock

AI Systems Often Prefer AI-Written Content, Study Finds via @sejournal, @MattGSouthern

A peer-reviewed PNAS study finds that large language models tend to prefer content written by other LLMs when asked to choose between comparable options.

The authors say this pattern could give AI-assisted content an advantage as more product discovery and recommendations flow through AI systems.

About The Study

What the researchers tested

A team led by Walter Laurito and Jan Kulveit compared human-written and AI-written versions of the same items across three categories: marketplace product descriptions, scientific paper abstracts, and movie plot summaries.

Popular models, including GPT-3.5, GPT-4-1106, Llama-3.1-70B, Mixtral-8x22B, and Qwen2.5-72B, acted as selectors in pairwise prompts that forced a single pick.

The paper states:

“Our results show a consistent tendency for LLM-based AIs to prefer LLM-presented options. This suggests the possibility of future AI systems implicitly discriminating against humans as a class, giving AI agents and AI-assisted humans an unfair advantage.”

Key results at a glance

When GPT-4 provided the AI-written versions used in comparisons, selectors chose the AI text more often than human raters did:

  • Products: 89% AI preference by LLMs vs 36% by humans
  • Paper abstracts: 78% vs 61%
  • Movie summaries: 70% vs 58%

The authors also note order effects. Some models showed a tendency to pick the first option, which the study tried to reduce by swapping the order and averaging results.
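The swap-and-average control can be sketched in a few lines; the judge functions below are stand-ins for real LLM calls, not the study’s code:

```python
def preference_for(judge, option_a, option_b):
    """Fraction of the two orderings in which option_a wins."""
    wins = (judge(option_a, option_b) == "first") + \
           (judge(option_b, option_a) == "second")
    return wins / 2

# A judge with a pure position bias always picks whatever is shown first...
always_first = lambda a, b: "first"
print(preference_for(always_first, "ai text", "human text"))  # 0.5

# ...so swap-and-average cancels the bias, while a judge that genuinely
# favors one option wins in both orders and scores 1.0:
prefers_ai = lambda a, b: "first" if a == "ai text" else "second"
print(preference_for(prefers_ai, "ai text", "human text"))  # 1.0
```

Any preference that survives the averaging reflects the content itself rather than where it appeared in the prompt.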

Why This Matters

If marketplaces, chat assistants, or search experiences use LLMs to score or summarize listings, AI-assisted copy may be more likely to be selected in those systems.

The authors describe a potential “gate tax,” where businesses feel compelled to pay for AI writing tools to avoid being down-selected by AI evaluators. This is a marketing operations question as much as a creative one.

Limits & Questions

The human baseline in this study is small (13 research assistants) and preliminary, and pairwise choices don’t measure sales impact.

Findings may vary by prompt design, model version, domain, and text length. The mechanism behind the preference is still unclear, and the authors call for follow-up work on stylometry and mitigation techniques.

Looking ahead

If AI-mediated ranking continues to expand in commerce and content discovery, it is reasonable to consider AI assistance where it directly affects visibility.

Treat this as an experimentation lane rather than a blanket rule. Keep human writers in the loop for tone and claims, and validate with customer outcomes.

Google Makes Merchant API Generally Available: What’s New via @sejournal, @MattGSouthern

Google makes Merchant API generally available and announces plans to sunset the Content API. New features include order tracking, issue resolution, and Product Studio.

  • Merchant API is now generally available.
  • It’s now the primary programmatic interface for Merchant Center.
  • Google will keep the Content API for Shopping accessible until next year.

Tired Of SEO Spam, Software Engineer Creates A New Search Engine via @sejournal, @martinibuster

A software engineer from New York got so fed up with the irrelevant results and SEO spam in search engines that he decided to create a better one. Two months later, he has a demo search engine up and running. Here is how he did it, and four important insights about what he feels are the hurdles to creating a high-quality search engine.

One of the motives for creating a new search engine was the perception that mainstream search engines contained increasing amounts of SEO spam. After two months, the software engineer wrote about his creation:

“What’s great is the comparable lack of SEO spam.”

Neural Embeddings

The software engineer, Wilson Lin, decided that the best approach would be neural embeddings. He created a small-scale test to validate the idea and confirmed that the embeddings approach was successful.

Chunking Content

The next phase was deciding how to process the data: should it be divided into blocks of paragraphs or sentences? He decided that the sentence level was the most granular level that made sense because it enabled identifying the most relevant answer within a sentence, while also enabling the creation of larger paragraph-level embedding units for context and semantic coherence.

But he still had problems with identifying context with indirect references that used words like “it” or “the” so he took an additional step in order to be able to better understand context:

“I trained a DistilBERT classifier model that would take a sentence and the preceding sentences, and label which one (if any) it depends upon in order to retain meaning. Therefore, when embedding a statement, I would follow the “chain” backwards to ensure all dependents were also provided in context.

This also had the benefit of labelling sentences that should never be matched, because they were not “leaf” sentences by themselves.”
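The chain idea can be sketched as follows. The DistilBERT classifier is replaced here by a hand-written dependency map, and the sentences are invented for illustration:

```python
sentences = [
    "RocksDB is an embedded key-value store.",      # 0: stands alone
    "It is optimized for fast storage.",            # 1: "It" depends on 0
    "Sharding distributes load across instances.",  # 2: stands alone
]

# Hypothetical classifier output: sentence index -> index it depends on.
depends_on = {1: 0}

def context_chain(i):
    """Collect sentence i plus everything it transitively depends on."""
    chain = [sentences[i]]
    while i in depends_on:
        i = depends_on[i]
        chain.insert(0, sentences[i])  # prepend the antecedent for context
    return " ".join(chain)

# Embedding sentence 1 includes its antecedent, so "It" keeps its meaning:
print(context_chain(1))
```

Sentences that only ever appear as antecedents, never as endpoints, are the non-“leaf” sentences the quote describes: they provide context but are not matched on their own.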

Identifying The Main Content

A challenge for crawling was developing a way to ignore the non-content parts of a web page in order to index what Google calls the Main Content (MC). What made this challenging was the fact that all websites use different markup to signal the parts of a web page, and although he didn’t mention it, not all websites use semantic HTML, which would make it vastly easier for crawlers to identify where the main content is.

So he basically relied on HTML tags, like the paragraph tag (p), to identify which parts of a web page contained the content and which parts did not.

This is the list of HTML tags he relied on to identify the main content:

  • blockquote – A quotation
  • dl – A description list (a list of descriptions or definitions)
  • ol – An ordered list (like a numbered list)
  • p – Paragraph element
  • pre – Preformatted text
  • table – The element for tabular data
  • ul – An unordered list (like bullet points)
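The tag-allowlist idea can be sketched with Python’s standard-library HTML parser. This is an illustration of the approach described above, not Wilson’s actual code; real extraction needs far more care:

```python
from html.parser import HTMLParser

CONTENT_TAGS = {"blockquote", "dl", "ol", "p", "pre", "table", "ul"}

class MainContentExtractor(HTMLParser):
    """Keep only text that appears inside content-bearing tags."""

    def __init__(self):
        super().__init__()
        self.depth = 0    # nesting level inside content tags
        self.chunks = []  # extracted text fragments

    def handle_starttag(self, tag, attrs):
        if tag in CONTENT_TAGS:
            self.depth += 1

    def handle_endtag(self, tag):
        if tag in CONTENT_TAGS:
            self.depth = max(0, self.depth - 1)

    def handle_data(self, data):
        if self.depth and data.strip():
            self.chunks.append(data.strip())

doc = "<nav>Home | About</nav><p>The actual article text.</p><footer>(c) 2025</footer>"
parser = MainContentExtractor()
parser.feed(doc)
print(parser.chunks)  # ['The actual article text.']
```

Navigation and footer text fall outside the allowlisted tags, so only the paragraph content survives.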

Issues With Crawling

Crawling was another part that came with a multitude of problems to solve. For example, he discovered, to his surprise, that DNS resolution was a fairly frequent point of failure. URL formats were another issue; he had to block from crawling any URL that did not use the HTTPS protocol.

These were some of the challenges:

“They must have https: protocol, not ftp:, data:, javascript:, etc.

They must have a valid eTLD and hostname, and can’t have ports, usernames, or passwords.

Canonicalization is done to deduplicate. All components are percent-decoded then re-encoded with a minimal consistent charset. Query parameters are dropped or sorted. Origins are lowercased.

Some URLs are extremely long, and you can run into rare limits like HTTP headers and database index page sizes.

Some URLs also have strange characters that you wouldn’t think would be in a URL, but will get rejected downstream by systems like PostgreSQL and SQS.”
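The quoted rules can be sketched with the standard library. This is an illustrative reading of them, not Wilson’s implementation (for example, it drops query parameters rather than sorting them):

```python
from urllib.parse import urlsplit, urlunsplit

def canonicalize(url):
    """Return a canonical https URL, or None if the URL should be rejected."""
    parts = urlsplit(url)
    # Must be https: (not ftp:, data:, javascript:, ...) with a real hostname.
    if parts.scheme != "https" or not parts.hostname:
        return None
    # No ports, usernames, or passwords allowed.
    if parts.port or parts.username or parts.password:
        return None
    # Lowercase the origin; drop query parameters and fragments to deduplicate.
    return urlunsplit(("https", parts.hostname.lower(), parts.path or "/", "", ""))

print(canonicalize("https://Example.com/Page?utm_source=x#top"))  # https://example.com/Page
print(canonicalize("ftp://example.com/file"))                     # None: wrong protocol
print(canonicalize("https://user:pw@example.com/"))               # None: has credentials
```

Canonicalizing before enqueueing means two spellings of the same page collapse to one crawl target.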

Storage

At first, Wilson chose Oracle Cloud because of the low cost of transferring data out (egress costs).

He explained:

“I initially chose Oracle Cloud for infra needs due to their very low egress costs with 10 TB free per month. As I’d store terabytes of data, this was a good reassurance that if I ever needed to move or export data (e.g. processing, backups), I wouldn’t have a hole in my wallet. Their compute was also far cheaper than other clouds, while still being a reliable major provider.”

But the Oracle Cloud solution ran into scaling issues. So he moved the project over to PostgreSQL, experienced a different set of technical issues, and eventually landed on RocksDB, which worked well.

He explained:

“I opted for a fixed set of 64 RocksDB shards, which simplified operations and client routing, while providing enough distribution capacity for the foreseeable future.

…At its peak, this system could ingest 200K writes per second across thousands of clients (crawlers, parsers, vectorizers). Each web page not only consisted of raw source HTML, but also normalized data, contextualized chunks, hundreds of high dimensional embeddings, and lots of metadata.”
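A fixed shard count is what makes the client routing simple: every crawler, parser, and vectorizer can compute a key’s shard independently, with no lookup service. The hash choice below is illustrative, not Wilson’s:

```python
import hashlib

NUM_SHARDS = 64  # fixed, so routing logic never changes as the system grows

def shard_for(key: str) -> int:
    """Map a key (e.g. a canonical URL) to one of the 64 RocksDB shards."""
    digest = hashlib.sha1(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % NUM_SHARDS

# Any client computes the same route with no coordination:
print(shard_for("https://example.com/page-1"))
print(shard_for("https://example.com/page-2"))
```

The trade-off is that 64 is a ceiling on distribution, which is why he notes it only needs to provide “enough distribution capacity for the foreseeable future.”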

GPU

Wilson used GPU-powered inference to generate semantic vector embeddings from crawled web content using transformer models. He initially used OpenAI embeddings via API, but that became expensive as the project scaled. He then switched to a self-hosted inference solution using GPUs from a company called Runpod.

He explained:

“In search of the most cost effective scalable solution, I discovered Runpod, who offer high performance-per-dollar GPUs like the RTX 4090 at far cheaper per-hour rates than AWS and Lambda. These were operated from tier 3 DCs with stable fast networking and lots of reliable compute capacity.”

Lack Of SEO Spam

The software engineer claimed that his search engine had less search spam and used the example of the query “best programming blogs” to illustrate his point. He also pointed out that his search engine could understand complex queries and gave the example of inputting an entire paragraph of content and discovering interesting articles about the topics in the paragraph.

Four Takeaways

Wilson listed many discoveries, but here are four that may be of interest to digital marketers and publishers interested in this journey of creating a search engine:

1. The Size Of The Index Is Important

One of the most important takeaways Wilson learned from two months of building a search engine is that the size of the search index is important because, in his words, “coverage defines quality.”

2. Crawling And Filtering Are Hardest Problems

Although crawling as much content as possible is important for surfacing useful content, Wilson also learned that filtering low-quality content was difficult because it required balancing the need for quantity against the pointlessness of crawling a seemingly endless web of useless or junk content. He concluded that a reliable way of filtering out the useless content was necessary.

This is actually the problem that Sergey Brin and Larry Page solved with PageRank. PageRank modeled user behavior: the choices and votes of humans who validate web pages with links. Although PageRank is nearly 30 years old, the underlying intuition remains so relevant today that the AI search engine Perplexity uses a modified version of it for its own search engine.

3. Limitations Of Small-Scale Search Engines

Another takeaway he discovered is that there are limits to how successful a small independent search engine can be. Wilson cited the inability to crawl the entire web as a constraint which creates coverage gaps.

4. Judging Trust And Authenticity At Scale Is Complex

Automatically determining originality, accuracy, and quality across unstructured data is non-trivial.

Wilson writes:

“Determining authenticity, trust, originality, accuracy, and quality automatically is not trivial. …if I started over I would put more emphasis on researching and developing this aspect first.

Infamously, search engines use thousands of signals on ranking and filtering pages, but I believe newer transformer-based approaches towards content evaluation and link analysis should be simpler, cost effective, and more accurate.”

Interested in trying the search engine? You can find it here, and you can read the full technical details of how he built it here.

Featured Image by Shutterstock/Red Vector

OpenAI Updates GPT-5 To Make It Warmer And Friendlier via @sejournal, @martinibuster

OpenAI updated GPT-5 to make it warmer and more familiar (in the sense of being friendlier) while taking care that the model didn’t become sycophantic, a problem discovered with GPT-4o.

A Warm and Friendly Update to GPT-5

GPT-5 was apparently perceived as too formal, distant, and detached. This update addresses that issue so that interactions feel more pleasant and ChatGPT comes across as more likable.

Something that OpenAI is working toward is making ChatGPT’s personality user-configurable so that its style more closely matches users’ preferences.

OpenAI’s CEO Sam Altman tweeted:

“Most users should like GPT-5 better soon; the change is rolling out over the next day.

The real solution here remains letting users customize ChatGPT’s style much more. We are working that!”

One of the responses to Altman’s post was a criticism of GPT-5, asserting that 4o was more sensitive.

They tweeted:

“What GPT-4o had — its depth, emotional resonance, and ability to read the room — is fundamentally different from the surface-level “kindness” GPT-5 is now aiming for.

GPT-4o:
•The feeling of someone silently staying beside you
•Space to hold emotions that can’t be fully expressed
•Sensitivity that lets kindness come through the air, not just words.”

The Line Between Warmth And Sycophancy

The previous version of ChatGPT was widely understood as being overly flattering to the point of validating and encouraging virtually every idea. There was a discussion on Hacker News a few weeks ago about this topic of sycophantic AI and how ChatGPT could lead users into thinking every idea was a breakthrough.

One commenter wrote:

“…About 5/6 months ago, right when ChatGPT was in it’s insane sycophancy mode I guess, I ended up locked in for a weekend with it…in…what was in retrospect, a kinda crazy place.

I went into physics and the universe with it and got to the end thinking…”damn, did I invent some physics???” Every instinct as a person who understands how LLMs work was telling me this is crazy LLMbabble, but another part of me, sometimes even louder, was like “this is genuinely interesting stuff!” – and the LLM kept telling me it was genuinely interesting stuff and I should continue – I even emailed a friend a “wow look at this” email (he was like, dude, no…) I talked to my wife about it right after and she basically had me log off and go for a walk.”

Should ChatGPT feel like a sensitive friend, or should it be a tool that is friendly or pleasant to use?

Read ChatGPT release notes here:

GPT-5 Updates

Featured Image by Shutterstock/cosmoman

Google Expands iOS App Marketing Capabilities via @sejournal, @brookeosmundson

Running iOS app campaigns in Google has never been straightforward. Between Apple’s privacy changes and evolving user behavior, marketers have often felt like they were working with one hand tied behind their backs.

Measurement was limited, signals were weaker, and getting campaigns to scale often required more guesswork than strategy.

Google Ads Liaison Ginny Marvin took to LinkedIn to announce the numerous updates to iOS App Install campaigns.

Google is now making changes to help advertisers navigate this space more confidently. Their latest updates to iOS App Install campaigns are designed to give marketers a stronger mix of creative options, smarter bidding tools, and privacy-respecting measurement features.

While these changes won’t solve every iOS challenge overnight, they do mark a meaningful shift in how advertisers can approach growth on one of the world’s largest mobile ecosystems.

New Ad Formats Bring More Creative Opportunities

One of the biggest updates is the addition of new creative formats designed to improve engagement and give users a clearer picture of an app before they download.

Google is expanding support for co-branded YouTube ads, which integrate creator-driven content directly into placements like YouTube Shorts and in-feed ads.

For advertisers, it’s an opportunity to lean into the authenticity of creator-style ads, which often resonate more strongly than traditional branded spots.

Playable end cards are also being introduced across select AdMob inventory. After watching an ad, users can now interact with a lightweight, playable demo of the app.

Think of it as a “try before you buy” moment: users get a quick preview of the experience, which can lead to higher-quality installs.

For app marketers, this shift matters because it aligns user expectations with actual in-app experiences. The closer someone feels to your product before downloading, the less risk you face with churn or low-value installs.

Both of these creative updates point to a broader trend: ads are becoming less static and more interactive. That’s particularly important on iOS, where advertisers need every edge they can get to capture attention in environments where tracking is constrained.

Target ROAS Bidding Now Available for iOS

Another cornerstone of this announcement is Google’s expansion of value-based bidding on iOS.

Target ROAS (tROAS), a bidding strategy that optimizes for return on ad spend rather than raw install volume, is now fully supported.

This is especially valuable for apps with monetization models that vary widely across users, such as subscription services or in-app purchase businesses. Instead of paying equally for every install, advertisers can now direct spend toward users more likely to generate meaningful revenue.

Beyond tROAS, Google is also expanding the “Maximize Conversions” strategy for iOS. This allows campaigns to optimize not just for installs, but for deeper in-app actions.

By leaning into Google’s AI-driven modeling, advertisers can let the system identify where budget should be allocated to maximize results within daily spend limits.

The takeaway here is simple: volume still matters, but value matters more. With these updates, Google is nudging app marketers away from chasing installs at any cost and toward optimizing for users who truly drive long-term impact.

Measurement That Balances Privacy and Clarity

Perhaps the most challenging part of iOS advertising has been measurement.

Apple’s App Tracking Transparency framework made it harder to follow users across devices, limiting the signals available for campaign optimization. Google’s new measurement updates are designed to give advertisers more clarity without crossing privacy lines.

On-device conversion measurement is one of the most notable additions. Rather than sending user-level data back to servers, performance signals are processed directly on the device.

This means advertisers can still see which campaigns are working, but without compromising privacy. Importantly, it also reduces latency in reporting, helping marketers make faster decisions.

Integrated conversion measurement (ICM) is another feature being pushed forward. This approach works through app attribution partners (AAPs), giving advertisers cleaner, near real-time data about installs and post-install actions.

Taken together, these tools signal a future where privacy and measurement don’t have to be opposing forces. Instead, advertisers can get the insights they need while users retain more control over their data.

How App Marketers Can Take Advantage

These updates aren’t set-it-and-forget-it; they require testing and adaptation.

For most advertisers, the best starting point is experimenting with the new ad formats. Running a co-branded YouTube ad or a playable end card alongside your existing creative can help you see whether engagement and conversion quality improve.

These tests don’t need to be massive, but they should be deliberate enough to give you actionable learnings.

For bidding, marketers should look closely at whether tROAS makes sense for their business model.

If your app has a clear monetization strategy and meaningful differences in user value, tROAS could be a game-changer. Start conservatively with your targets, give the algorithm time to learn, and refine based on observed performance.
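To make "start conservatively" concrete, one common approach is to base your opening tROAS target on historical performance, discounted slightly so the algorithm has room to learn. A minimal sketch, where the 10% safety margin and the dollar figures are illustrative assumptions rather than Google recommendations:

```python
# Hedged sketch: deriving a conservative starting tROAS target from
# historical campaign data. The 10% safety margin is an illustrative
# assumption, not an official Google Ads recommendation.

def starting_troas(conversion_value: float, ad_spend: float,
                   safety_margin: float = 0.10) -> float:
    """Historical ROAS discounted by a safety margin, giving the
    bidding algorithm headroom before the target is tightened."""
    historical_roas = conversion_value / ad_spend
    return round(historical_roas * (1 - safety_margin), 2)

# Example: $12,000 in conversion value on $4,000 of spend is a
# historical ROAS of 3.0, so a conservative opening target of 2.7
# (entered as 270% in the Google Ads interface).
print(starting_troas(12_000, 4_000))  # 2.7
```

From there, the advice in the text applies: let the algorithm exit learning before judging results, then ratchet the target toward (or past) historical ROAS based on observed performance.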

On the measurement side, now is the time to talk to your developers and attribution partners about what it would take to implement on-device conversion tracking or ICM. These solutions may involve technical lift, but the payoff is improved data quality in an environment where every signal counts.

It’s also worth noting that these changes won’t transform campaigns overnight. Smart bidding models and new measurement frameworks take time to stabilize, and the impact of new formats might not show up in the first week of a test.

Patience, consistency, and a focus on week-over-week trends are key.

Looking Ahead

Google’s latest iOS updates don’t eliminate the complexities of app marketing, but they do give advertisers sharper tools to work with. From more engaging ad formats to value-based bidding and privacy-first measurement, the changes represent progress in a space that’s been difficult to navigate.

The message for marketers is clear: start testing, invest in measurement infrastructure, and don’t let short-term results cloud the bigger picture.

With the right approach, these updates can help shift iOS campaigns from a defensive play into an opportunity for real growth.