Reddit today rolled out a series of updates aimed at making it easier for small and medium-sized businesses (SMBs) to advertise on the platform.
The changes focus on simplifying the ad creation process, improving signal quality, and helping advertisers move campaigns from other platforms like Meta with fewer headaches.
These updates follow Reddit’s continued push to make its Ads Manager more accessible, especially for smaller businesses that may not have the luxury of dedicated ad ops teams or outside agencies.
Launching Campaigns Faster With New Tools
In the announcement, Reddit introduced two new tools to streamline campaign creation:
Campaign Import.
Simplified Campaign Quality Assurance (QA).
The first of the additions is Campaign Import, a tool that lets advertisers bring campaigns over from Meta directly into Reddit Ads Manager.
The process is straightforward — after connecting their Meta account, advertisers can select an existing campaign, import it, and make any necessary adjustments to suit Reddit’s environment.
This isn’t just a time-saver; it gives brands a quick way to leverage proven creative and targeting strategies while adapting them to Reddit’s unique audiences.
Another welcome update is Reddit’s new Campaign Quality Assurance (QA) system. Instead of clicking back and forth between settings pages, advertisers now get a consolidated review page summarizing all key campaign details.
If something looks off — budget, targeting, placements, or creative — users can jump directly to the relevant section and make fixes before going live.
It may seem small, but anyone who’s fumbled through nested ad platforms under tight deadlines knows how much this improves workflow.
Improved Quality Signals In Reddit Ads
In addition to the streamlined campaign creation tools, Reddit also announced two features to improve the quality of audience and user behavior signals:
1-click Google Tag Manager integration for Reddit Pixel.
Event Manager Quality Assurance (QA).
The platform now offers a 1-click integration with Google Tag Manager (GTM) for the Reddit Pixel, dramatically reducing the friction of installing and configuring conversion tags.
Advertisers can now fire up GTM, install the Reddit Pixel in minutes, and start sending conversion data without needing to pull in a developer. This update alone will make performance-focused advertisers breathe a little easier.
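For context, installing the Reddit Pixel without the new 1-click integration typically means pasting a loader snippet into a GTM Custom HTML tag. The sketch below is illustrative only: it follows the pixel’s documented `rdt('init', …)` / `rdt('track', …)` pattern, but the pixel ID is a placeholder and the exact loader stub should always be copied from Reddit Ads Manager.

```html
<!-- Illustrative GTM Custom HTML tag body for the Reddit Pixel (not the
     official snippet -- copy the real one from Reddit Ads Manager).
     The stub queues rdt() calls until Reddit's script finishes loading. -->
<script>
!function(w, d) {
  if (!w.rdt) {
    var p = w.rdt = function() {
      // Queue calls made before the pixel script loads
      p.sendEvent ? p.sendEvent.apply(p, arguments) : p.callQueue.push(arguments);
    };
    p.callQueue = [];
    var t = d.createElement("script");
    t.src = "https://www.redditstatic.com/ads/pixel.js";
    t.async = true;
    var s = d.getElementsByTagName("script")[0];
    s.parentNode.insertBefore(t, s);
  }
}(window, document);
rdt('init', 't2_examplepixelid'); // placeholder pixel ID
rdt('track', 'PageVisit');       // standard page-view event
</script>
```

The 1-click integration automates exactly this step: the tag, trigger, and pixel ID are configured for you inside GTM.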
Reddit also upgraded its Event Manager QA tools. The revamped Events Overview now gives a clearer breakdown of conversion events coming from both the Reddit Pixel and the Conversions API (CAPI).
Advertisers can spot data discrepancies faster and ensure their lower-funnel campaigns are set up for success.
Jim Squires, EVP of Business Marketing and Growth at Reddit, noted that SMBs have always been an essential part of the platform’s community and advertising base.
“We continue to make improvements to the Reddit Ads Manager that make it easier to launch and manage campaigns, so they can focus on what matters most: growing and running their businesses.”
Reddit Ads Continues To Push Forward
With these latest updates, Reddit continues refining its ad platform for a broader range of advertisers, with particular attention to reducing friction for growing businesses.
Advertisers who have been looking for more streamlined ways to import, optimize, and measure campaigns will likely find these tools helpful as they plan their next steps on Reddit.
Have you already tried out Reddit Ads? Will these updates make you lean towards testing a new platform next quarter?
Are you ready to master the art of earning high-quality editorial links?
Join us on April 9, 2025, for our exclusive webinar, “Win Higher-Quality Links: The PR Approach To SEO Success.” As Google’s AI-driven algorithms evolve, the importance of securing authoritative links has never been greater. Learn from the experts how to enhance your SEO strategy with proven Digital PR techniques.
Why This Webinar Is A Must-Attend Event
This session is designed for those looking to elevate their link-building strategies beyond traditional methods. Discover a data-driven approach that has helped large brands secure hundreds of top-tier links and media mentions.
What You Will Learn:
Effective Digital PR Processes: Uncover the step-by-step process and tools needed to earn contextual, editorial links from leading publications.
Strategic Insights: Learn how combining data insights with expert commentary can attract significant press coverage.
Success Stories: Examine real-world case studies from enterprise brands that have successfully implemented these digital PR strategies.
Expert Guidance From Kevin Rowe
Kevin Rowe, an expert in digital PR strategies for enterprise brands, will lead the webinar. He’ll share the methodologies he’s used to gain impactful links, helping you understand how to apply these strategies to your own efforts.
Interactive Q&A Session
The webinar will conclude with a LIVE Q&A session, offering you the chance to ask Kevin your detailed questions about digital PR and link building.
Don’t Miss Out!
If you’re aiming to earn links that truly reflect your brand’s expertise and drive SEO success, this webinar will provide you with the actionable strategies you need.
Reserve your spot now to start turning your brand’s data and expertise into authoritative media coverage.
E-A-T (expertise, authoritativeness, and trustworthiness) has played a role in search rankings for over a decade, since it was first introduced in Google’s Search Quality Rater Guidelines in 2014.
But with the rise of AI-generated content and AI-synthesized answers, E-E-A-T (now including experience) is no longer just a good idea. It has become the defining factor in determining which sources AI-driven search results consider authoritative enough to cite and include in their synthesized narratives and responses.
AI Overviews and other AI-generated search features don’t just favor sites that “align with E-E-A-T principles” – they favor recognized experts.
To be cited in AI-driven answers, a brand needs to demonstrate undeniable expertise and establish itself as the authority in its field.
This means consistently producing original research, providing real-world insights, and gaining industry-wide (or broader) recognition.
In this article, we’ll explore how E-E-A-T determines visibility in AI-driven search and AI-generated answers, what challenges brands face in maintaining credibility, and strategies for ensuring that AI models and search engines rely on your content as a trusted source.
The Intersection Of E-E-A-T And AI-Generated Answers
The rise of AI-generated search results presents both opportunities and challenges for brands.
AI-powered features like Google’s AI Overviews, ChatGPT search integrations, and Perplexity AI are synthesizing answers instead of just returning traditional blue links.
This means that appearing in AI-driven answers requires more than just good SEO – it requires E-E-A-T-backed authority.
Key considerations for ensuring visibility in AI search features:
Experience: AI models favor content backed by first-hand knowledge. Brands that demonstrate real-world expertise through case studies, original research, and hands-on experience have a greater chance of being cited.
Expertise: AI-generated answers prefer sources with clear subject matter expertise. Author bylines, credentials, and expert contributions all signal trustworthiness to AI-driven search.
Authoritativeness: AI Overviews and LLM-generated answers prioritize brands that own their knowledge graph, are widely referenced, and are recognized leaders in their industry.
Trustworthiness: AI-generated content is acceptable to use (in that it is not inherently “bad” or penalized) but must be factually accurate and verifiable. Content backed by reliable sources, citations, and transparent authorship is more likely to surface in AI-generated search features.
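One concrete way to surface author bylines and credentials to machines is schema.org structured data. The following is a minimal, hypothetical example of marking up an article’s author; the names, titles, and URLs are placeholders, and which properties actually influence any given system is not publicly documented.

```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "How Compound Interest Works",
  "datePublished": "2025-03-01",
  "author": {
    "@type": "Person",
    "name": "Jane Doe",
    "jobTitle": "Certified Financial Planner",
    "sameAs": [
      "https://www.linkedin.com/in/janedoe",
      "https://example.com/authors/jane-doe"
    ]
  }
}
```

Linking the `author` entity to real, verifiable profiles via `sameAs` is the part that supports the expertise and trust signals described above; markup pointing at empty or fabricated profiles does nothing.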
AI Overviews And E-E-A-T: What Google’s Latest Research Reveals
Google’s recent post on AI Overviews and AI Mode highlights how AI-generated search experiences are evolving and underscores the importance of E-E-A-T in shaping AI-driven responses.
Here are key takeaways that reinforce the role of E-E-A-T:
Google Integrates E-E-A-T Into AI Overviews
AI Overviews leverage Google’s ranking systems and Knowledge Graph to determine which sources are most authoritative. (Hint: Ensure your Knowledge Graph exists and is accurate!)
E-E-A-T signals directly influence which websites AI Overviews pull from, reinforcing the need for brands to establish themselves as leading authorities.
High-Quality Sources Are A Requirement
AI Overviews corroborate AI-generated summaries with top-ranked content, (theoretically) ensuring the information is reliable.
For Your Money or Your Life (YMYL) queries, the bar for trustworthiness is even higher, emphasizing the importance of expert-driven content. (This is why author biographies with CVs, other credentials, and proof of expertise are necessary.)
AI Overviews Increase Engagement With High-Quality Content
Google reports that users who interact with AI Overviews visit a greater diversity of websites and that click-throughs from AI Overviews are of higher quality.
This presents an opportunity for brands with strong E-E-A-T signals to attract engaged visitors who trust the AI-curated results (but click through to verify).
Manual And Algorithmic Safety Checks Reinforce E-E-A-T’s Importance
Google’s Search Quality Raters, adversarial testing, and fact-checking systems ensure AI Overviews prioritize reliable information.
Brands that lack E-E-A-T credentials (specifically Knowledge Graphs and other key indicators that your brand is considered authoritative) may struggle to appear in AI-generated search experiences.
Future AI Search Innovations Will Reward E-E-A-T Signals
Google’s experimental AI Mode in Search expands AI-generated responses using multimodal data and real-time corroboration with authoritative sources.
Brands with verified expertise, structured citations, and widespread recognition will have an advantage in AI-driven search.
This reinforces the need for brands to proactively establish E-E-A-T authority to maintain visibility in AI-driven search features.
Challenges In Applying E-E-A-T To AI-Generated Search
Despite its benefits, AI-driven search presents several challenges for brands trying to maintain visibility:
1. AI Prioritizes Recognized Authorities: Simply optimizing for E-E-A-T is not enough. Brands must become the trusted source that AI search engines consistently reference.
It’s easy to optimize for or align with E-E-A-T in principle, but much more difficult to achieve in reality because some of the requirements simply aren’t within your control.
2. Potential For Misinformation: AI-generated search results can fabricate statistics, misquote sources, or create misleading narratives. Brands must actively monitor AI-generated mentions for accuracy.
3. Duplicate And Unoriginal Content: AI often pulls from widely cited knowledge bases, meaning brands that don’t produce original insights and research risk being ignored.
4. Algorithmic Bias And Filtering: AI search models prioritize widely referenced sources, which can disadvantage emerging brands. Overcoming this requires strategic partnerships, citations, and broad industry engagement.
A recent study tested eight major AI search engines and found that chatbots collectively provided inaccurate answers more than 60% of the time, nearly always without acknowledging uncertainty.
Most interesting finding: Premium AI models were even more prone to confidently incorrect responses than their free counterparts, contradicting the assumption that paid AI services are more reliable.
ChatGPT, in particular, indicated uncertainty in its wrong answers only 7.5% of the time, which means that in the other 92.5% of cases where it was wrong, it was confident it was correct.
If ChatGPT’s success rate at indicating uncertainty were a batting average, it would be .075.
John Vukovich, known for recording the lowest MLB batting average ever (among non-pitchers with more than 500 at-bats), had a career BA of .161 – still more than double ChatGPT’s ability to acknowledge it might not be right.
The findings in this report only underscore the need for careful, attentive human oversight when producing content and active reputation management to ensure accuracy in AI-generated search environments.
E-E-A-T is the key to visibility in AI-driven search – but it’s not just about optimization.
Brands must become the expert sources AI models trust, reference, and cite.
Those who invest in credibility, expertise, and real-world authority will survive in AI-powered search landscapes, and those who don’t will fade into irrelevance.
Writing quality content should be a key aspect of every SEO strategy. But when is your content considered good or high-quality? And does quality mean the same for your users as for Google? In this article, we’ll discuss creating content and how you can make sure it hits the mark. It will require some creative writing skills. But don’t worry, you don’t have to become the next big author! By focusing on the right things, you can create high-ranking quality content that your users will happily read.
What is quality content?
That is the million-dollar question. Knowing how to write good content helps you get more visitors, higher conversions, and lower bounce rates. But who determines the quality of your content? The easy answer: your users. However, this also makes creating the right content more difficult. Because every user is different and has a different search intent. They have, however, one thing in common: every user knows what they want.
Although your users eventually determine the quality of your content, you can take a few steps to ensure you end up with well-thought-out, readable, and attractive content. In other words, content that’s eligible to be considered high-quality by your users and search engines. Luckily, a lot of the aspects that users will appreciate about your content are the same as the aspects search engines look for in quality content.
How search engines determine quality content
Search engines want to present their users with the exact content they seek. Content that is helpful, reliable and people-first and aligns with their current search intent. To help you create good content, Google has an acronym that you can consult: E-E-A-T.
Search engines decide on what is content quality by assessing a number of things – relevance, clarity and helpfulness, credibility and uniqueness. This all ties into the importance of E-E-A-T (Experience, Expertise, Authoritativeness, and Trustworthiness) in any strategy around brand or topical authority.
Alex Moss – Principal SEO at Yoast
The acronym E-E-A-T stands for Experience, Expertise, Authoritativeness and Trustworthiness. In their ongoing search for the best content, Google has added this acronym to their search quality raters guidelines. They use this to assess and judge the quality of online content. Although it’s especially important for so-called YMYL websites (“Your Money Your Life” – sites that are related to well-being, health, finances or safety), these guidelines apply to all content out there.
Why is quality content important?
Quality content is the foundation of a strong brand, helping you establish authority and expertise in your industry. Well-crafted content speaks directly to the needs of your audience, providing valuable insights that position your brand as a reliable source. Whether it’s through blog posts, social media, or in-depth guides, delivering high-quality content builds long-term relationships with customers, fosters engagement, and strengthens brand credibility.
Beyond its impact on branding, quality content plays a crucial role in SEO. As mentioned above, search engines prioritize helpful, well-structured, and informative content that truly benefits users. By focusing on producing valuable content that answers queries effectively, you can achieve higher rankings in the search results. This leads to increased visibility, organic traffic, and better engagement, which will help you grow your website sustainably. To scale content creation effectively, check out this guide on scaling content. Additionally, if you mainly write content for your clients’ website, make sure to check out our article on writing valuable content that your clients will love.
7 steps to start creating high-quality content
To ensure the quality of your content, there are 7 steps that you can follow. Let’s go into them in more detail.
1. Write for your readers, not yourself
If you have an ecommerce site, you want readers to know about the products or services you offer. If you’re a blogger, you want readers to get to know you and the topics that interest you. However, it’s also important to consider what your users want to read about. What interests do they have? What events or news do they follow that you can relate to your business? And what ‘problems’ are they trying to fix that have led them to your site?
The first step in creating high-quality content is ensuring it contains the information your audience is looking for. To find out what your users are looking for, you have to conduct proper keyword research. This will help you determine what subjects to write about and what words your audience uses. Keyword research also helps your rankings, as more visitors and lower bounce rates tell Google that your page is a good result to show in their search results.
2. Think about search intent and your goal
Search intent is the reason why someone conducts a specific search. It’s the term used to describe their purpose. For example, do they have a question they need answered? Or do they want to buy something online? Someone’s search intent makes a difference in how they consider the quality of your content. If it fits their need at that moment, then they will stay on your page longer. But if they need an answer to a question and the page they land on only tries to sell them products, they’ll be gone before you know it.
Match goals to different search intents
It’s important to consider search intent while creating content for a specific page. That’s why we advise you to match your goals to users’ different search intents. Is one of your goals to increase newsletter subscriptions? Then, you should add that subscription button to pages where users with an informational intent land. Does a visitor have a transactional intent (meaning: they want to buy something)? Make sure they land on a product or category page dedicated to the product they are looking for.
Of course, experience tells us it’s not always that black and white. Still, it’s good to consider your users’ search intent. It helps you determine the focus of your content and what call-to-actions you want to add. A great way to get started is by adopting a content design mindset. This mindset helps you produce user-centered content based on real needs. Also, we recommend looking at the search results for some input to create great content.
3. Make your content readable and engaging
Do you want to get your message across? And do you want people to read your entire blog post or page? Then, make your content easy to read. This means that you should:
Think about the structure of your text and the words you use. Too much text without any headings or paragraphs, also known as a wall of text, tends to scare people off. Use headings and whitespace to give your readers some air while reading.
Try to limit the use of difficult words and be cautious of the length of your sentences. Both can make your content harder to understand, which will slow down and frustrate your reader.
Variation in your text will make it engaging. Use synonyms and alternate longer sentences with shorter ones to mix it up.
Another important thing to focus on: Have fun! And be conversational in your writing. This helps you write high-quality content that is different from your competitors’ and helps users get to know you and your brand.
4. Use E-E-A-T to improve your content
Experience, expertise, authoritativeness and trustworthiness can all be used to improve your content. So how can you make sure to include these in your writing? We’ll go through them one by one and give you some pointers.
Share your experience
Although the acronym started as E-A-T, Google added a second E shortly after. This newly added E stands for experience: Google prefers content that showcases knowledge or skills gained through first-hand experience, whether through personal involvement or observations related to the topic at hand. For example, someone who has worked as an optician for many years is experienced in the topic of eyesight, and so is someone who has a prescription themselves.
Show your expertise
The second E in E-E-A-T stands for expertise. Although it makes sense that this would be an important factor in determining the quality of content, it is trickier to evaluate. So Google finds out what it can about the author: What is their reputation when it comes to the topic at hand? What is their background? And what other (reliable) sources do they refer to? For this criterion, it pays off to be clear online about your expertise and where it comes from.
Demonstrate your authoritativeness
Related to expertise, the next letter stands for authoritativeness. An authority can be defined as a person or organization with power or influence in a particular area. When you’re an authority on a topic, you usually have deep knowledge of it, which is why official websites have a higher chance of being perceived as the authority on a topic. Qualifications and associations with well-known organizations also count towards this. If this one is tricky for you, don’t worry: it’s just one of the aspects Google looks at when determining quality. If it doesn’t fit your blog or business, focus more on the other letters in the acronym.
Be trustworthy
The last one probably doesn’t come as a surprise, as it’s something we all look for when browsing online: the trustworthiness of the content in front of us. Whether you want to buy a product or find information, trust plays a big role in how seriously you take online content. If something doesn’t feel right, a user will hesitate in the best case and leave your website in the worst. Google’s guidelines are quite clear on how they determine the trustworthiness of a website: “An unsatisfying amount of any of the following is a reason to give a page a low-quality rating: customer service information, contact information, information about who is responsible for the website or information about who created the content.” So make sure to be clear on these and look for other opportunities to show your trustworthiness.
5. Keep your content up-to-date
Another key element of writing high-quality content is ensuring it’s up-to-date and relevant. This means you have to update your content occasionally to ensure people can find the right information. But why is this so important? It shows your users that you’re on top of recent developments and can always provide them with accurate information. In other words, it builds trust and keeps your audience returning to your site.
Keeping your website and blog posts updated is also important for SEO, as this shows Google that your site is ‘alive’ and relevant. So, make sure you schedule a time to update your content regularly.
6. Work on your site structure
The five steps we’ve discussed so far will help you write content that is easy to read and user-centered. Now, we’d like to highlight an equally important step: working on your site structure. It’s important because it will help users and search engines find your content.
Site structure refers to the way you organize your site’s content. When you structure your site well, search engines can index your URLs better. It helps Google determine the importance of your pages and which ones are related to each other. A good site structure allows users to find their way around your site more easily. It will help them find quality content in the search results and on your website. That’s why there’s much to gain from perfecting your site structure.
7. Use Yoast SEO to perfect your content
The last tip I want to share is the content analysis in our very own Yoast SEO plugin. This feature gives you real-time feedback on your content while you’re editing your page in the backend. It monitors whether you use your chosen keyword often enough and in the right places, looks at text length, and gives you feedback on readability. For example, it tells you when you use the passive voice too much, checks whether you’re using enough subheadings, and gives you feedback on word complexity and the use of transition words. All of this and more is available in the free version to help you improve the readability and quality of your content.
The content analysis in Yoast SEO Premium goes a bit further and also does the following:
Allows you to optimize your text for related keyphrases and synonyms
Recognizes different forms of your keyphrase, so you can focus on writing naturally
Recognizes singular and plural, and also tenses of verbs
Gives access to our AI features, like Yoast AI Optimize, suggesting changes in your content
Gives you access to all the Yoast SEO academy courses, including our SEO copywriting training!
A quick recap on high-quality content
Good, high-quality content will positively affect your SEO in the long run. So, before publishing post after post (or page after page), keep the following in mind: write for your readers, make your content readable, match search intent with your goals, be trustworthy, keep your content up to date, and work on your site structure.
The result? Good content that your readers will appreciate. This will positively affect your number of visitors, conversions and eventual revenue. If you want to learn more tips and tricks, make sure to read our guide to SEO copywriting!
Google’s John Mueller offered an overview of EEAT (Experience, Expertise, Authoritativeness, and Trustworthiness) at the Search Central Live NYC event and affirmed why it matters for some sites and why it’s not something SEOs can add to a website.
EEAT’s Relation To Quality Raters And YMYL Websites
John Mueller started this part of his discussion by explicitly tying the concept to its use as a way for third-party quality raters to provide a more objective judgment about the quality of the search results. He did not say that EEAT was created for SEOs to use as a ranking-factor guide; in fact, he expressly said that’s not how it works.
What is especially notable is that Mueller says that EEAT comes into play algorithmically for sites that are in topics that affect health or finance, what Google terms Your Money Or Your Life (YMYL) topics.
This is what he said, according to my notes, which contain some paraphrasing:
“EEAT is one of the ways that we look at page quality. EEAT is experience, expertise, authoritativeness and trustworthiness. And this is something that we tell the third party quality raters to watch out for when they’re doing page quality evaluation and something that we take into account when we think the query or a set of pages is on a specific topic where it’s more critical, where we call them your money, your life pages. Where we think that the user actually needs to have something that they can rely on and some signs that they can rely on the content that is present.”
EEAT Is Not Something You Add To Web Pages
In his follow-up statements, he dismissed the idea that an SEO can add EEAT to their web pages. EEAT is not something you can add to a website, so if adding EEAT is part of what you do for SEO, stop. That’s not SEO.
This is what Mueller said:
“Sometimes SEOs come to us or like mention that they’ve added EEAT to their web pages. That’s not how it works. Sorry, you can’t sprinkle some experiences on your web pages. It’s like, that’s that doesn’t make any sense.”
Photo From Google Search Central Live NYC
EEAT Is Not Needed On Non-YMYL Pages
Lastly, Mueller repeated the point that EEAT is not something Google looks for in run-of-the-mill websites. Obviously, it’s great if the content has expertise, trustworthiness, and so on, but he said it’s not something they’re algorithmically alert for on those kinds of sites, specifically naming recipe sites.
This is what he said:
“From a practical point of view, it’s important to look at this, especially if you’re publishing things on these critical topics, and to look at how you can highlight what it is that you’re already doing so that it’s clear for users.
But if you’re creating a recipe for cookies, you don’t need to have the sidebar with like, ‘this author has created cookies for 27 years.’ I think most people will be able to understand.”
Takeaways
EEAT’s Purpose and Scope
EEAT is used by third-party quality raters to assess search result quality. It was not created by Google as a list of ranking factors for an SEO checklist.
EEAT’s Role in YMYL Topics
Google algorithmically considers EEAT for pages that affect users’ finances or health, which the Quality Raters Guidelines refer to as Your Money or Your Life (YMYL) topics. These are the topic areas where reliability and trust are critical for user safety and confidence, and where Google is especially concerned that those qualities are expressed in some way, internally and/or externally, about those sites. Google doesn’t say what those signals are.
Misconceptions About EEAT in SEO
John Mueller emphasized that EEAT is not something SEOs can “add” to a website the way they might add keywords or internal links. Attempting to “add EEAT” is a misunderstanding of how the concept works within search.
EEAT and Non-YMYL Websites
EEAT is not something that is required in an algorithmic context for non-YMYL sites, such as recipe blogs or other kinds of non-critical content. While it’s useful in a general or marketing sense to reflect expertise and trust, it’s not a ranking focus for most topics.
EEAT was explicitly created for third-party quality raters to use as a more objective benchmark. That fact gets lost in many SEO conversations about the topic. It is also not particularly important for sites outside of YMYL topics. Lastly, EEAT is not something an SEO can add to a page. Creating a bio with an AI-generated image, linking it to a fake LinkedIn profile, and calling it EEAT is not a thing. Trustworthiness, for example, is earned and results in people making recommendations (which doesn’t mean SEOs should create fake social media profiles and start talking up an author at a website). Nobody really knows what the EEAT signals are.
Email has long been among the most profitable ecommerce marketing channels, with a relatively high return on investment, an owned audience, and controllable — schedulable — frequency.
Email delivers:
Transactional messages such as order or shipping notifications.
Promotions such as discounts and sales.
Editorial content for brand value and customer loyalty.
Retargeting offers using the email address or its hash as a first-party identifier for advertising.
Confirmations to identify and authorize shoppers on a site.
These email use cases assume a recipient will see the message and that senders can track her clicks. Unfortunately, neither assumption holds for every message in 2025.
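On the retargeting use case above: ad platforms generally match audiences on a normalized, hashed email address rather than the raw string. The exact normalization rules vary by platform, so treat this as an illustrative sketch of the common trim-lowercase-then-SHA-256 pattern, not any particular platform’s specification.

```python
import hashlib

def hash_email(email: str) -> str:
    """Normalize an email address and return its SHA-256 hex digest.

    Common normalization: trim whitespace and lowercase. Some platforms
    apply extra rules (e.g., stripping Gmail dots); check each platform's
    documentation before uploading match lists.
    """
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# The same person's address always yields the same identifier,
# regardless of how it was typed:
assert hash_email("shopper@example.com") == hash_email("  SHOPPER@Example.com ")
print(hash_email("shopper@example.com"))
```

Hashing this way lets a sender share a stable first-party identifier with an ad platform without transmitting the raw address.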
A long series of inbox changes from email clients and app extensions have made it more difficult to reach subscribers.
Here’s a rundown.
Gmail AI Search
The most recent change applies artificial intelligence to transform how email appears in inboxes, so much so that Adweek likened the AI-powered inbox to a social media feed, noting, “with a few exceptions, an email sent was [once] an email received. Now, that honeymoon period might be coming to an end.”
The Adweek commentary came shortly after Gmail announced a new feature this month, stating that “instead of just showing emails in chronological order based on keywords, Gmail search results now factor in elements like recency, most-clicked emails, and frequent contacts.”
Gmail’s announcement included an example search for the term “reunion,” wherein an ecommerce marketing message containing that term moved from the first position to the lowest in the visible search results despite being the most recent.
The example in Gmail’s announcement shows a marketing message at the bottom of a list despite being the most recent.
Spam Filters
The assumption of guaranteed inbox delivery has been false since at least 1994 when America Online introduced the first automated spam filter.
Unsavory types were bombarding AOL email inboxes with unsolicited, fraudulent, and malevolent messages. Phishing and advance fee scams were common, as were links to malware downloads.
Many more email clients followed AOL’s initiative. Governments enacted laws such as the U.S. CAN-SPAM Act of 2003, and today, spam scams are significantly less common.
No one wants to return to spam-filled inboxes.
Nonetheless, we’ve all experienced an important transactional message or two-factor authentication code delivered to our spam folder or deleted before it ever arrived.
Promotions Tab
Spam filters are not the only algorithms impacting email visibility.
In May 2013, Gmail began sorting some email messages into a “Promotions” tab, more or less hiding them from view. The company was trying to solve overloaded inboxes and help recipients organize inbound messages.
Within a few months, MailChimp reported that the tab had lowered email engagement by at least a percentage point. That same year, ClickZ reported that Gmail open rates fell more than 12% during the Black Friday promotional season.
Circa 2025, AI-powered priority inboxes take the tabs concept further, removing most email messages from view. For example, when someone opens the new Yahoo Mail, only a handful of messages are visible.
Hence marketers should now assume AI agents filter all promotional messages, relegating them to something other than the priority tab.
Opens
Apple launched Mail Privacy Protection in September 2021. Similar email firewalls and utilities followed.
MPP hides whether iOS users actually open emails by reporting every message as opened. The immediate impact was inflated open rates as reported by email service providers. Twilio SendGrid estimated that in its first month, MPP accounted for 22.9% of Apple Mail opens, as well as 9.1% of Yahoo Mail opens, 4.8% of Microsoft Outlook opens, and 4.6% of Gmail opens.
Many ESPs have since included features to estimate MPP’s effect on opens. Nonetheless, MPP, which now extends to macOS users, obscures precise reporting, including the full impact of AI algorithms such as Gmail search or a priority inbox.
Clicks
The second false assumption many marketers make concerns email clicks.
Marketers track clicks for sales attribution and to collect first-party behavioral data. But reported clicks in emails are sometimes unreliable.
For example, Apple Mail and Proton now remove URL parameters that identify specific subscribers, stripping the ability to associate a subscriber with a transaction or other behavior. (Campaign-wide IDs and UTMs typically remain, however.)
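To make the stripping concrete, here is a minimal Python sketch of what a privacy proxy might do: remove subscriber-level identifiers from a link while keeping campaign-wide UTM parameters. The parameter names (`subscriber_id`, `email_hash`, and so on) are hypothetical examples for illustration, not a documented list of what Apple Mail or Proton actually remove.

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Hypothetical per-subscriber parameters a privacy proxy might strip.
# Real clients apply their own (undocumented) rules.
SUBSCRIBER_PARAMS = {"subscriber_id", "email_hash", "recipient", "uid"}

def strip_subscriber_params(url: str) -> str:
    """Drop per-subscriber identifiers; keep campaign-wide params like UTMs."""
    parts = urlsplit(url)
    kept = [
        (k, v)
        for k, v in parse_qsl(parts.query, keep_blank_values=True)
        if k.lower() not in SUBSCRIBER_PARAMS
    ]
    return urlunsplit(parts._replace(query=urlencode(kept)))

cleaned = strip_subscriber_params(
    "https://shop.example.com/sale?utm_campaign=spring&subscriber_id=abc123"
)
print(cleaned)  # https://shop.example.com/sale?utm_campaign=spring
```

The result illustrates the reporting gap: the campaign is still attributable, but the individual subscriber no longer is.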
Moreover, email fraud-protection software often deploys bots to analyze and click links. The bots help protect recipients from phishing and malware attacks but complicate reporting. Leading ESPs now filter such bot clicks from performance reports, although many providers still include them.
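As a sketch of how such filtering can work, the heuristic below flags click patterns typical of security scanners: clicks arriving almost instantly after delivery, or every link clicked in a tight burst. The thresholds are illustrative assumptions, not any provider's documented logic.

```python
from datetime import datetime, timedelta

def is_likely_bot(click_times, delivered_at,
                  min_human_delay=timedelta(seconds=2)):
    """Flag a recipient whose clicks resemble a link-scanning bot.

    Illustrative heuristics: a first click faster than a human could
    plausibly react, or three or more links clicked within one second.
    """
    if not click_times:
        return False
    first = min(click_times)
    burst = max(click_times) - first
    return (first - delivered_at) < min_human_delay or (
        len(click_times) >= 3 and burst < timedelta(seconds=1)
    )

delivered = datetime(2025, 4, 1, 9, 0, 0)
scanner_clicks = [delivered + timedelta(seconds=1)]
print(is_likely_bot(scanner_clicks, delivered))  # True
```

Filtering flagged clicks before reporting keeps bot activity out of click-through rates, at the cost of occasionally discounting a very fast human.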
—
Despite AI, privacy, and fraud-protection hurdles, email as an ecommerce marketing channel remains top-notch. The honeymoon may be ending, but success is not.
On Friday, Google sent out a subtle but impactful policy update to advertisers, confirming changes to its long-standing “Unfair Advantage Policy”.
While the official enforcement date is April 14, 2025, the conversation has already started — and it’s anything but quiet.
The PPC community is buzzing with opinions, questions, and concerns. But this update didn’t come out of nowhere.
About a month ago, Google quietly laid the groundwork for this change without most people noticing.
Let’s unpack exactly what’s happening, why it matters, and how advertisers are reacting.
What Did Google Change?
The core of the update is about limiting how many ads a business, app, or site can show in a single ad location, and the new policy language is crucial to understand.
The focus isn’t on restricting brands from showing multiple ads across different placements—it’s about stopping advertisers from stacking multiple ads in the same slot, which would effectively block competition and inflate dominance.
It’s not a total ban on multiple ads from the same advertiser showing on a single page, but rather a limit within a specific ad location.
However, as with many Google Ads policies, the phrase “single ad location” is doing a lot of heavy lifting—and advertisers are left wondering how Google will interpret and enforce it in practice.
One notable detail: Google says violations won’t lead to instant account suspensions. Advertisers will receive a warning and at least seven days to address any violations before facing suspension.
This is important. Google seems to be trying to strike a balance between tightening policy and giving advertisers room to adapt.
The Breadcrumb Many Missed – February Auction Documentation Update
Interestingly, this isn’t the first time Google has hinted at this shift.
Back in February 2025, advertisers noticed that Google updated its documentation on “How the Google Ads Auction Works”.
The update clarified that Google runs separate auctions for each ad location, meaning that the auction for the first position is distinct from the auction for the second, third, and so on.
Ginny Marvin, Google Ads Liaison, even acknowledged the change directly in LinkedIn discussions. This detail flew under the radar for many but now seems like a foundational piece for this official Unfair Advantage update.
Effectively, Google was setting the table a month ago. This policy update simply formalizes how those auctions will now prevent advertisers from “double-serving” or stacking ads in the same position.
Why Google Is Doing This, And Why Now
Google’s goal here appears twofold:
Auction Fairness — Google wants to prevent scenarios where advertisers, affiliates, or large multi-account setups game the system by occupying multiple positions within a single auction.
Affiliate Abuse Control — This rule directly calls out affiliates who break affiliate program rules, a growing concern in Google’s search ecosystem.
Of course, some advertisers suspect there’s a third goal: protecting the user experience and, more directly, protecting Google’s own long-term revenue by encouraging more advertisers to compete rather than allowing the largest players to squeeze others out.
Advertisers Give Mixed Reactions to Google Update
The update was emailed to advertisers on Friday afternoon, and marketers didn’t waste time sharing their takes on it.
Andrea Atzori, who also received the email from Google, took to LinkedIn to provide his take on the update.
Atzori highlighted that this change is more about clarification than transformation, as he’d seen the same advertiser in multiple locations previously.
Navah Hopkins also took to LinkedIn with a briefer post, eager to hear fellow marketers’ thoughts on the Unfair Advantage policy.
Hopkins and others noted that while the update may sound fair in theory, the proof will come in how it affects impression share, Auction Insights, and real-world campaign performance.
From the comments on Hopkins’ post, early reactions lean toward skepticism and questions:
Chris Chambers commented:
This is going to be wild from a metric reporting standpoint since it seems like right now it counts as 2 impressions and also affects your impression share and position in Auction Insights (same with competitors). But it also seems like now the advertisers with the most to spend in each niche will get even more real estate and be able to show twice, potentially cutting out smaller competitors completely from the first page.
Steve Gerencser had a similar take to Chambers:
I wonder how they are going to count people that pogo from one ad right back to the next and then back to something else? I can see a lot of wasted ad spend, or an opportunity for someone with deep pockets to dominate.
Some worry that well-funded advertisers will still find ways to dominate, while smaller brands hope this levels the playing field.
What Advertisers Should Watch For
While the policy may not seem earth-shattering at first glance, it does come with a few things advertisers should actively monitor.
First, smaller and mid-sized advertisers may stand to benefit, at least in theory. By limiting how many ads a single business can show in one location, Google could slightly reduce the dominance of big-budget brands that have historically owned the top of the page through multiple placements.
This could open up space for other players to get visibility where previously they were pushed out.
But, as several PPC pros pointed out on LinkedIn, the big question is how Google defines and enforces a single ad location in practice.
Google clarified last month that each ad location runs its own auction, meaning it’s technically possible for a brand to show up in multiple places on the same page—just not in the exact same slot.
So, while the policy aims to limit dominance, it doesn’t necessarily mean fewer total appearances for advertisers with deep pockets.
This also has potential ripple effects on Auction Insights reports. If Google starts filtering or limiting how often multiple ads from the same business appear in a given location, expect impression share metrics and overlap rates to behave differently—maybe even unexpectedly.
Advertisers will need to watch Auction Insights and Impression Share trends closely post-April to see if any patterns emerge.
Additionally, affiliate marketers and businesses using aggressive multi-account or multi-site strategies should be especially careful. The updated policy makes it clear that affiliates must play by their program’s rules and can no longer try to sneak multiple ads for the same offer into the same auction.
While Google says you’ll get a warning before any suspension, it’s probably wise to get ahead of this now, rather than risk a compliance issue later.
And finally, there’s still some ambiguity about multi-brand or franchise setups. If you’re managing a brand with multiple sub-brands, sister companies, or franchisees, the question remains: will Google treat you as one business under this policy or multiple?
That detail could make a big difference, especially for large organizations or verticals like automotive, real estate, or hospitality.
Final Thoughts: Is This Really a Game-Changer?
Honestly? It’s hard to call this a monumental shift yet. The update feels more like a formalization of existing enforcement patterns than a radical new rulebook.
That said, the PPC community is right to question what this will look like in Auction Insights and daily performance reports. Whether this is a minor tweak or the start of stricter anti-duplication policing will become clearer as advertisers see real-world data throughout Q2 and beyond.
Either way, it’s worth watching—especially if you’ve ever benefitted from, or competed against, someone taking up too much SERP real estate.
Google’s John Mueller explained during a session of the Search Central Live NYC event that Google runs over 700,000 tests per year to keep up with user expectations. His explanation of why Google performs so many tests and launches thousands of changes should give SEOs an idea of the pace of change at Google, and should inspire publishers and SEOs to consider how they, too, can anticipate user expectations and roll out changes to satisfy them.
Updates Are Not Done In Isolation
The first thing Mueller said about updates is that they’re not done in isolation; rather, Google uses third-party raters, a fresh pair of eyes, to evaluate its tests and new updates to its algorithms.
Mueller explained:
“So there is a lot of activity happening on the web and we kind of have to keep up with that as well.
How we look at things when it comes to updates, I think this is maybe a bit of a jarring transition here, but essentially when we work on changes with regards to search, one of the things that is core to all of the changes that we do is that we don’t do them in isolation just because we think they’re good, but rather that we find ways to actually test to make sure that they are actually good.
And one of the ways that we do that is we work together with so-called quality raters.”
Number Of Tests And Launched Updates
Google conducts a staggering number of tests every year and launches thousands of changes (updates).
Photo Showing Number of Google Updates Per Year
John Mueller said (includes paraphrasing):
“When it comes to changes that we do this number is from 2023. I imagine the number from last year is similar. We’ve made over 4,700 launches. And these launches come from over 700,000 tests and we make tests all the time. You can try to calculate like how many tests are running every day. If you assume that a test maybe runs for two weeks over the course of the day, like there are lots of tests that are happening in parallel.”
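Mueller’s back-of-the-envelope calculation can be worked through directly. Assuming 700,000 tests per year and his “maybe two weeks” per test, the average number of tests running in parallel on any given day is:

```python
tests_per_year = 700_000
avg_test_duration_days = 14  # Mueller's "maybe two weeks" assumption

# Total test-days spread evenly across the year gives the
# average number of tests running concurrently on any given day.
concurrent_tests = tests_per_year * avg_test_duration_days / 365
print(round(concurrent_tests))  # roughly 26,849 tests in parallel
```

In other words, under those assumptions, tens of thousands of experiments are live at any moment, which is the scale Mueller was gesturing at.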
Google Says It’s All About User Expectations
Mueller offered an explanation of what motivates Google to do so many tests and launch thousands of updates to the search results. He said it’s all about meeting user expectations.
This is what he said:
“And that also means that when people look at the search results, they see things that are sometimes a bit different.
From our point of view it’s not so much that we’re doing all of this work to keep making changes to make it hard for people to keep up, but rather because we see that users have very high expectations of the web and we want to make sure that whatever expectations they have tomorrow we can still kind of fulfill.”
Takeaway For Publishers And SEOs
Google’s not running hundreds of thousands of tests a year to confuse SEOs and publishers. They’re doing it to stay ahead of what users want before users even know they want it.
SEO has historically been reactive: search marketers and publishers wait until Google announces an update, then rush back to their websites to “fix” whatever they think is broken. SEO eyes are always on Google when they should be thinking ahead, about how their consumers or site visitors are aging out, no longer reading blogs, or otherwise changing their habits. Do you have to wait until Google announces an accessibility update before you test whether your site is usable for visitors on screen readers? Are client sites usable for people who are color-blind, or are you going to wait for an update? That’s reactive.
One of the reasons Google is number one in many things is that it didn’t wait for someone else to do it first. Before Gmail, email providers gave users inbox space measured in megabytes. Google crushed the competition by offering users gigabytes of free space.
So maybe SEOs and publishers should scroll up and re-read the reasons that John Mueller gave to explain why Google does hundreds of thousands of tests and launches thousands of updates every year. If you’re not already being proactive then I really think that this is the year you start thinking about ways to do that.
Takeaways:
Google’s Testing Volume and Frequency
Google performs over 700,000 tests annually.
In 2023 alone, these tests led to over 4,700 changes to Search.
Tests often run in parallel, with many active at the same time.
This volume reflects a continuous, high-speed development cycle.
Why Google Runs So Many Tests
Google’s motivation for running so many tests is to anticipate user expectations.
Despite their setbacks with AI, the number of tests and changes is the reason why Google remains a formidable competitor.
Implications for SEOs and Publishers
Search marketers and publishers who want to keep up with Google should consider emulating Google’s approach to users and look for ways to anticipate user behavior, expectations, and trends.
Start testing and improving now rather than waiting for a Google update before accounting for shortcomings.
This week, MIT Technology Review published a piece on bodyoids—living bodies that cannot think or feel pain. In the piece, a trio of scientists argue that advances in biotechnology will soon allow us to create “spare” human bodies that could be used for research, or to provide organs for donation.
If you find your skin crawling at this point, you’re not the only one. It’s a creepy idea, straight from the more horrible corners of science fiction. But bodyoids could be used for good. And if they are truly unaware and unable to think, the use of bodyoids wouldn’t cross “most people’s ethical lines,” the authors argue. I’m not so sure.
Either way, there’s no doubt that developments in science and biotechnology are bringing us closer to the potential reality of bodyoids. And the idea is already stirring plenty of ethical debate and controversy.
One of the main arguments made for bodyoids is that they could provide spare human organs. There’s a huge shortage of organs for transplantation. More than 100,000 people in the US are waiting for a transplant, and 17 people on that waiting list die every day. Human bodyoids could serve as a new source.
Scientists are working on other potential solutions to this problem. One approach is the use of gene-edited animal organs. Animal organs don’t typically last inside human bodies—our immune systems will reject them as “foreign.” But a few companies are creating pigs with a series of gene edits that make their organs more acceptable to human bodies.
A handful of living people have received gene-edited pig organs. David Bennett Sr. was the first person to get a gene-edited pig heart, in 2022, and Richard Slayman was the first to get a kidney, in early 2024. Unfortunately, both men died around two months after their surgery.
But Towana Looney, the third living person to receive a gene-edited pig kidney, has been doing well. She had her transplant surgery in late November of last year. “I am full of energy. I got an appetite I’ve never had in eight years,” she said at the time. “I can put my hand on this kidney and feel it buzzing.” She returned home in February.
The bodyoid approach, by contrast, is based on advances in the development of “synthetic embryos.” (I’m putting that term in quotation marks because, while it’s the simplest descriptor of what they are, a lot of scientists hate the term.)
Scientists don’t know how far these embryo-like structures will be able to develop. But they’re already using them to try to get cows and monkeys pregnant.
And no one really knows how to think about synthetic human embryos. Scientists don’t even really know what to call them. Rules stipulate that typical human embryos may be grown in the lab for a maximum of 14 days. Should the same rules apply to synthetic ones?
The very existence of synthetic embryos is throwing into question our understanding of what a human embryo even is. “Is it the thing that is only generated from the fusion of a sperm and an egg?” Naomi Moris, a developmental biologist at the Crick Institute in London, said to me a couple of years ago. “Is it something to do with the cell types it possesses, or the [shape] of the structure?”
The authors of the new MIT Technology Review piece also point out that such bodyoids could help speed scientific and medical research.
Scientists have been working on solutions to these problems, too. Some are creating “organs on chips”—miniature collections of cells organized on a small piece of polymer that may resemble full-size organs and can be used to test the effects of drugs.
Others are creating digital representations of human organs for the same purpose. Such digital twins can be extensively modeled, and can potentially be used to run clinical trials in silico.
Both of these approaches seem somehow more palatable to me, personally, than running experiments on a human created without the capacity to think or feel pain. The idea reminds me of the recent novel Tender Is the Flesh by Agustina Bazterrica, in which humans are bred for consumption. In the book, their vocal cords are removed so that others do not have to hear them scream.
When it comes to real-world biotechnology, though, our feelings about what is “acceptable” tend to shift. In vitro fertilization was demonized when it was first developed, for instance, with opponents arguing that it was “unnatural,” a “perilous insult,” and “the biggest threat since the atom bomb.” It is estimated that more than 12 million people have been born through IVF since Louise Brown became the first “test tube baby” 46 years ago. I wonder how we’ll all feel about bodyoids 46 years from now.
This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
Anthropic can now track the bizarre inner workings of a large language model
The news: The AI firm Anthropic has developed a way to peer inside a large language model and watch what it does as it comes up with a response, revealing key new insights into how the technology works. The takeaway: LLMs are even stranger than we thought.
Why it matters: It’s no secret that large language models work in mysterious ways. Shedding some light on how they work would expose their weaknesses, revealing why they make stuff up and can be tricked into going off the rails. It would help resolve deep disputes about exactly what these models can and can’t do. And it would show how trustworthy (or not) they really are. Read the full story.
—Will Douglas Heaven
What is Signal? The messaging app, explained.
With the recent news that the Atlantic’s editor in chief was accidentally added to a group Signal chat for American leaders planning a bombing in Yemen, many people are wondering: What is Signal? Is it secure? If government officials aren’t supposed to use it for military planning, does that mean I shouldn’t use it either?
The answer is: Yes, you should use Signal, but government officials having top-secret conversations shouldn’t use Signal. Read the full story to find out why.
—Jack Cushman
This story is part of our MIT Technology Review Explains series, in which our writers untangle the complex, messy world of technology to help you understand what’s coming next. You can read more of them here.
“Spare” living human bodies might provide us with organs for transplantation
—Jessica Hamzelou
This week, MIT Technology Review published a piece on bodyoids—living bodies that cannot think or feel pain. In the piece, a trio of scientists argue that advances in biotechnology will soon allow us to create “spare” human bodies that could be used for research, or to provide organs for donation.
If you find your skin crawling at this point, you’re not the only one. It’s a creepy idea, straight from the more horrible corners of science fiction. But bodyoids could be used for good. And if they are truly unaware and unable to think, the use of bodyoids wouldn’t cross “most people’s ethical lines,” the authors argue.
This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.
The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 A judge has ordered Trump’s officials to preserve their secret Signal chat
While officials are required by law to keep chats detailing government business, Signal’s messages can be set to auto-disappear. (USA Today)
+ The conversation detailed an imminent attack against Houthi rebels in Yemen. (The Hill)
+ A government accountability group has sued the agencies involved. (Reuters)
+ The officials involved in the chat appear to have public Venmo accounts. (Wired $)

2 The White House is prepared to cut up to 50% of agency staff
But the final cuts could end up exceeding even that. (WP $)
+ The sweeping cuts could threaten vital US statistics, too. (FT $)
+ Can AI help DOGE slash government budgets? It’s complex. (MIT Technology Review)

3 OpenAI is struggling to keep up with demand for ChatGPT’s image generation
The fervor around its Studio Ghibli pictures has sent its GPUs into overdrive. (The Verge)
+ Ghibli’s founder is no fan of AI art. (404 Media)
+ Four ways to protect your art from AI. (MIT Technology Review)

4 Facebook is pivoting back towards friends and family
Less news, fewer posts from people you don’t know. (NYT $)
+ A new tab shows purely updates from friends, with no other recommendations. (Insider $)

5 Africa is set to build its first AI factory
A specialized powerhouse for AI computing, to be precise. (Rest of World)
+ What Africa needs to do to become a major AI player. (MIT Technology Review)

6 A TikTok network spread Spanish-language immigration misinformation
Including clips of the doctored voices of well-known journalists. (NBC News)

7 Your TV is desperate for your data
Streamers are scrambling around for new ways to make money off the information they gather on you. (Vox)

8 This startup extracts rare earth oxides from industrial magnets
It’s a less intrusive way of accessing minerals vital to EV and wind turbine production. (FT $)
+ The race to produce rare earth elements. (MIT Technology Review)

9 NASA hopes to launch its next Starliner flight as soon as later this year
After its latest mission stretched from a projected eight days to nine months. (Reuters)
+ Europe is finally getting serious about commercial rockets. (MIT Technology Review)

10 The Sims has been the world’s favorite life simulation game for 25 years
But a new Korean game is both more realistic and multicultural. (Bloomberg $)
Quote of the day
“It’s like, can you tell the difference between a person and a person-shaped sock puppet that is holding up a sign saying, ‘I am a sock puppet’?”
—Laura Edelson, a computer science professor at Northeastern University, tells the Wall Street Journal she is skeptical of brands’ ability to ensure their ads are shown to real humans and not bots.
The big story
The race to fix space-weather forecasting before next big solar storm hits
April 2024
As the number of satellites in space grows, and as we rely on them for increasing numbers of vital tasks on Earth, the need to better predict stormy space weather is becoming more and more urgent.
Scientists have long known that solar activity can change the density of the upper atmosphere. But it’s incredibly difficult to precisely predict the sorts of density changes that a given amount of solar activity would produce.
Now, experts are working on a model of the upper atmosphere to help scientists to improve their models of how solar activity affects the environment in low Earth orbit. If they succeed, they’ll be able to keep satellites safe even amid turbulent space weather, reducing the risk of potentially catastrophic orbital collisions. Read the full story.
—Tereza Pultarova
We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)
+ This is very cool—a nearly-infinite virtual museum entirely generated from Wikipedia.
+ How to let go of that grudge you’ve been harboring (you know the one).
+ If your social media feeds have been plagued by hot men making bad art, you’re not alone.
+ It’s Friday, so enjoy this 1992 recording of a very fresh-faced Pearl Jam.