The SEO Industry Is Teaching The Wrong Skills via @sejournal, @DuaneForrester

Generative AI-driven search isn’t a trend; it’s the new baseline. Tools like Gemini and ChatGPT have already replaced traditional queries for millions of users.

Your audience doesn’t just search anymore: They ask. They expect answers. And those answers are being assembled, ranked, and cited by AI systems that don’t care about title tags or keyword placement. They care about trust, structure, and retrievability.

Most SEO training programs haven’t caught up. They’re still built around tactics designed for a ranking algorithm, not a generative model. The gap isn’t closing; it’s widening.

And this isn’t speculation. Research from multiple firms now shows that conversational AI is becoming a dominant discovery interface.

Microsoft, Google, Meta, OpenAI, and Amazon are all restructuring their product ecosystems around AI-powered answers, not just ranked links.

The tipping point has already passed. If your training still revolves around keyword targeting and domain authority, you’re falling behind, not gradually but right now.

The uncomfortable reality is that many marketers are now trained in a playbook from the early 2010s, while the engines have moved on to an entirely different game.

At this point, are we even optimizing for “search engines” anymore – or have they become “discovery assistants” or “search assistants” built to curate, cite, and synthesize?

How SEO Fell Behind (Historical Context)

Traditional SEO has always adapted, from Google’s Panda and Penguin algorithms, which prioritized content quality and penalized low-quality links, to Hummingbird’s semantic understanding of user intent.

But today’s generative search landscape is an entirely new paradigm. Google Gemini, ChatGPT, and other conversational interfaces don’t simply rank pages; they synthesize answers from the most retrievable chunks of content available.

This is not a gradual shift. This is the biggest leap in SEO’s history, and most training programs haven’t caught up yet.

The Old Curriculum: What We’re Still Teaching (And Shouldn’t Be)

Traditional SEO curriculums typically emphasize:

  • Title Tags & Meta Descriptions: Despite Google rewriting around 60-75% of them (source: Zyppy SEO study), title tags and meta descriptions remain foundational to most SEO training programs.
  • Link Outreach & Link Building: Still focused on quantity and domain authority, even though AI-driven search systems focus more on contextual relevance and content (and author) trustworthiness.
  • Keyword-Focused Blogging & Content Calendars: Rigid editorial calendars and keyword-driven articles are becoming obsolete in an AI-driven search era.
  • Technical SEO: While still useful for traditional search engines, modern AI-based systems care far less about the technical structure of a webpage and more about the accessibility of the content and how clearly it expresses entities and relationships.

Example:

Take a common assignment from SEO training programs: “Write a blog post targeting the keyword ‘best hiking boots for 2025’.”

You’re taught to select a primary keyword, structure your headers around related phrases, and write a long-form post designed to rank in traditional SERPs.

That approach might still work for Google’s blue links, but in a generative AI context, it fails.

Ask Gemini or ChatGPT the same query, and your content likely won’t appear. Not because it’s low quality, but because it wasn’t structured to be retrieved.

It lacks semantic chunking, embedding alignment, and explicit trust signals.

The AI systems are selecting content blocks they can understand, rank by relevance, and cite. If your article is built to match human scan patterns instead of machine retrieval cues, it’s simply invisible.

What SEO Training Still Teaches vs. What Actually Works Now (Image credit: Duane Forrester)

The New SEO Work: What Actually Drives Results Now

Real SEO today revolves around structured, retrievable, semantically rich content:

1. Semantic Chunking

Creating content structured into clearly defined, self-contained chunks optimized for large language models (LLMs).

2. Vector Modeling & Embeddings

Placing content into semantic clusters inside vector databases, ensuring each piece of content is closely aligned with user intent and query vectors.
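
To make the idea concrete, here is a minimal sketch of a chunk-level embedding check, assuming the sentence-transformers library; the model name, chunks, and query are placeholders, and a production pipeline would store vectors in a proper vector database rather than comparing them in memory.

```python
# Rough sketch: embed self-contained content chunks and score them against a
# sample query to see which chunk a retriever would likely surface.
# Assumes the sentence-transformers package; model and text are placeholders.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

chunks = [
    "Waterproof hiking boots with ankle support suit rocky, wet trails.",
    "Our returns policy allows exchanges within 30 days of purchase.",
]
query = "What hiking boots work best for wet, rocky terrain?"

chunk_vectors = model.encode(chunks, convert_to_tensor=True)
query_vector = model.encode(query, convert_to_tensor=True)

# Cosine similarity approximates how well each chunk aligns with the query vector.
scores = util.cos_sim(query_vector, chunk_vectors)[0].tolist()
for chunk, score in zip(chunks, scores):
    print(f"{score:.3f}  {chunk}")
```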

3. Trust-Signal Engineering

Implementing structured citations, schema markup, clear attribution, and credibility signals that AI-driven models trust enough to cite explicitly.

4. Retrieval Simulation & Prediction

Using tools such as RankBee, SERPRecon, and Waikay.io to actively simulate how your content surfaces within AI-driven answers.

5. RRF Tuning & Model Optimization

Fine-tuning content performance across generative models like Perplexity, Gemini, and ChatGPT, ensuring maximum retrievability in various conversational contexts.
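
The article doesn’t define RRF, but the term most often refers to reciprocal rank fusion, a simple way to merge ranked lists produced by different retrievers (for example, a lexical ranking and an embedding ranking). A minimal sketch, using the conventional k = 60 constant:

```python
# Reciprocal rank fusion: each document's fused score is the sum of
# 1 / (k + rank) across every ranked list it appears in.
def reciprocal_rank_fusion(ranked_lists, k=60):
    scores = {}
    for ranking in ranked_lists:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Example: fuse a keyword-based ranking with an embedding-based ranking.
print(reciprocal_rank_fusion([
    ["page-a", "page-b", "page-c"],  # lexical results
    ["page-b", "page-a", "page-d"],  # vector results
]))
```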

6. Zero-Click Optimization

Optimizing content not just for clicks but to be featured directly in generative AI responses.

Backlinko’s guide on LLM Seeding introduces a practical framework for getting cited by large language models like ChatGPT and Gemini.

It emphasizes creating chunkable, trustworthy content designed to be surfaced in AI-generated answers – marking a fundamental shift from optimizing for rankings to optimizing for retrieval.

Consider leading brands engaging with AI-first discovery themes:

  • Zapier has published educational content on vector embeddings and how they underpin tools like ChatGPT and semantic search (source). While that article doesn’t detail their internal SEO strategies, it shows how marketing teams can start unpacking the concepts that underpin retrieval-based visibility.
    → Correction: An earlier version of this article suggested Zapier had implemented semantic chunking and retrieval optimization. That was an editing error on my part: there’s no public evidence to support that claim.
  • Shopify, meanwhile, uses its Shopify Magic tool to generate SEO-optimized product descriptions at scale, integrating generative workflows into day-to-day content ops (source).
    → Takeaway: Shopify ties generative tooling directly to scalable, structured content designed for discovery.

These examples don’t suggest perfect alignment – but they point to how modern teams are beginning to integrate AI thinking into real workflows. That’s the shift: from content creation to content retrieval architecture.

Why The Disconnect Exists (And Persists)

1. Educational Inertia

Updating curriculums is expensive, difficult, and risky for educators.

Many course creators and educational institutions are overwhelmed or ill-equipped to rapidly pivot their syllabi toward advanced semantic optimization and vector embeddings.

2. Hiring Practices & Organizational Habits

Job ads often still emphasize outdated skills, perpetuating the inertia by attracting talent trained in legacy SEO methods rather than future-oriented techniques.

3. Legacy Toolsets

Major SEO platforms like Moz, Semrush, and Ahrefs continue to emphasize metrics like domain authority, keyword volumes, and traditional backlink counts, reinforcing outdated optimization practices.

The Fix: An Outcome-Driven SEO Training Model

To address these problems, SEO training must now shift toward measurable KPIs, clear roles, and task-based learning:

New KPI-Driven Framework:

  • Embedding retrieval rate (AI-driven visibility).
  • GenAI attribution percentage (citations in AI outputs).
  • Vector presence and semantic alignment.
  • Trust-signal effectiveness (schema and structured data).
  • Re-ranking lift via reciprocal rank fusion (RRF).

New Roles And Responsibilities:

  • Digital GEOlogist: Optimizes content placement and semantic structure for retrieval. (I know, the title is a joke, but you get the point.)
  • Trust-Signal Strategist: Implements schema, citations, structured credibility signals.
  • Cheditor (Chunk Editor): Optimizes chunks of content specifically for LLM consumption and retrievability. If you’re an Editor, you need to be a Cheditor.

Task-Based SEO Education:

  • Simulate retrieval via ChatGPT/Perplexity prompt engineering (see the sketch after this list).
  • Perform semantic embedding audits to measure content similarity against successful retrieval outputs.
  • Conduct regular A/B tests on chunk structures and semantic signals, evaluating real-world retrievability.
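
For the retrieval simulation, one lightweight approach is to hand a model your candidate chunks along with a real user question and see which chunk it answers from. The sketch below assumes the openai Python client (v1+) with an API key in the environment; the model name, chunks, and question are placeholders, and it only simulates retrieval over content you supply rather than live web retrieval.

```python
# Rough retrieval simulation: ask a model to answer a user question using only
# the supplied chunks and to name the chunk it relied on.
# Assumes the openai package (v1+) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

chunks = {
    "chunk-1": "SEO results typically appear within three to six months for new websites.",
    "chunk-2": "Our agency was founded in 2012 and is based in Austin, Texas.",
}
question = "How long does SEO take to show results?"

prompt = (
    "Answer the question using only the numbered chunks below, "
    "and name the chunk you relied on.\n\n"
    + "\n".join(f"{cid}: {text}" for cid, text in chunks.items())
    + f"\n\nQuestion: {question}"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```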

How To Take Charge: You Are The Resource Now

The reality is stark but empowering: No one’s coming to save your career. Not your company, which may move slowly, nor traditional schools, nor third-party platforms with outdated content.

You won’t find this in a course catalog. If your company hasn’t caught up (and most haven’t), it’s on you to take the lead.

Here’s a practical roadmap to start building your own AI-SEO expertise from the ground up:

Month 1: Build Your Foundation

  • Complete foundational AI courses.
  • Share key learnings internally.

Month 2: Tactical Skill-Building

  • Complete practical, SEO-specific courses.
  • Start sharing actionable tips via Slack or internal newsletters.

Month 3: Community And Collaboration

  • Organize “Lunch & Learns” or internal SEO Labs focused on semantic chunking, embeddings, and trust-signal engineering.
  • Engage actively in external communities (Discord groups, LinkedIn SEO groups, online forums like Moz Q&A) to deepen your knowledge.

Month 4: Institutionalize Your Expertise

  • Formally propose and launch an internal “AI-SEO Center of Excellence.”
  • Run practical retrieval simulations, document results, and showcase tangible improvements to secure ongoing investment and visibility internally.

Turning Learning Into Leadership

Once you’ve built momentum with personal upskilling, don’t stop at silent improvement. Make your learning visible, and valuable, by creating change around you:

  • Host SEO-AI Micro Sessions: Run short, focused sessions (15-20 minutes) on topics like semantic chunking, retrieval testing, or schema design. Keep them informal, repeatable, and useful.
  • Run Retrieval Audits: Pick three to five high-priority URLs and test them in ChatGPT, Gemini, or Perplexity. Which content blocks surface? What gets ignored? Share your findings openly.
  • Build a Knowledge Hub: Use Notion, Google Docs, or Confluence to create a centralized space for SEO-AI strategies, test results, tools, and templates.
  • Create a Weekly AI Digest: Curate key updates from the field – citations appearing in generative answers, new tools, useful prompts – and circulate them internally.
  • Recruit Allies: Invite collaborators to contribute retrieval tests, co-host sessions, or flag examples of your content appearing in AI answers. Leadership scales faster with support.

This is how you shift from learner to leader. You’re no longer just upskilling, you’re operationalizing AI search inside your company.

You Are the Catalyst, Take Action Now

The roles of traditional SEO specialists will shift (or fade?), replaced by experts fluent in semantic optimization and retrievability.

Become the person who educates your company because you educated yourself first.

Your role isn’t just to keep up, it’s to lead. The responsibility, and the opportunity, sit with you right now.

Don’t wait for your company to catch up or for course platforms to get current. Take action. The new discovery systems are already here, and the people who learn to work with them will define the next era of visibility.

  • If you teach SEO, rewrite your courses around these new KPIs and roles.
  • If you hire SEO talent, demand modern optimization skills: semantic embeddings knowledge, chunk structuring experience, retrieval simulation approaches.
  • If you practice SEO, proactively shift your efforts toward retrieval testing, embedding audits, and semantic optimization immediately.

SEO isn’t dying, it’s evolving.

And you have an opportunity, right now, to be at the forefront of this evolution.


This post was originally published on Duane Forrester Decodes.


Featured Image: Rawpixel.com/Shutterstock

Why Generative AI Isn’t Killing SEO – It’s Creating New Opportunities via @sejournal, @AdamHeitzman

You’ve heard the predictions: AI will replace SEO, generative search will eliminate organic traffic, and marketers should start updating their resumes.

With 73% of marketing teams using generative AI, it’s easy to assume we’re witnessing SEO’s funeral.

Here’s what’s actually happening: AI isn’t replacing SEO. It’s expanding SEO into new territories with bigger opportunities.

While Google’s AI Overviews and tools like ChatGPT are changing how people find information, they’re also creating new ways for your content to get discovered, cited, and trusted by millions of searchers.

The game isn’t ending. You just need to learn the new rules.

How AI Search Actually Works (And Where Your Content Fits)

Generative search doesn’t eliminate the need for quality content; it amplifies it.

When someone asks ChatGPT about email marketing or searches with Google’s AI features, these systems scan thousands of webpages to synthesize comprehensive answers.

Your content isn’t competing for traditional rankings anymore. You’re competing to become the authoritative source that AI systems pull from when generating responses.

The Citation Game

Here’s what most marketers miss: AI systems still cite their sources.

Google’s AI Overviews include links to referenced websites, and ChatGPT and Perplexity provide source citations.

Getting featured as a cited source can drive more qualified traffic than a traditional No. 1 ranking because users already know your content contributed to the answer they received.

Google AIO Citation Example:

Screenshot from search for [email marketing courses beginners must try], Google, July 2025

ChatGPT Citation Example:

Screenshot from ChatGPT, July 2025

What AI systems look for in sources:

  • Factual accuracy and reliability (they cross-reference information).
  • Authority signals like domain expertise and credentials.
  • Fresh, up-to-date information on current topics.
  • Comprehensive coverage that adds unique value.

Your action plan:

  • Back up claims with specific data and examples.
  • Use consistent terminology across all content.
  • Update older content with recent statistics and insights.
  • Structure information in clear, scannable sections.

From Rankings To Retrieval

Traditional SEO targeted specific keyword rankings. AI search introduces “retrieval” – your content gets pulled into responses for queries you never directly optimized for.

Your comprehensive project management guide might get cited when someone asks, “How can I keep my remote team organized without micromanaging?” even though you never targeted that exact phrase.

AI systems understand context and relationships between concepts better than traditional algorithms.

To really understand how ChatGPT and other large language models work, I highly recommend reading Stephen Wolfram’s “What is ChatGPT Doing … and Why Does It Work?”.

Optimizing for retrieval requires a different mindset than traditional keyword targeting.

Create content that covers topics from multiple angles rather than focusing on single keyword phrases.

Structure your articles around the actual questions your audience asks, using headings that mirror real user queries.

Build comprehensive topic clusters that demonstrate your expertise across related subjects, showing AI systems that you’re a reliable source for broad topic coverage.

The SEO Fundamentals That Still Matter (With New Twists)

Don’t throw out your SEO playbook. The core principles still apply, but how you apply them has changed.

Technical SEO Is More Important, Not Less

AI systems are far less forgiving than Google’s crawlers.

While Google’s bots can render JavaScript, handle errors gracefully, and work around technical issues, most AI agents simply fetch raw HTML and move on.

If they find an empty page, wrong HTTP status, or tangled markup, they won’t see your content at all.

This makes technical SEO non-negotiable for AI visibility. Server-side rendering becomes absolutely critical since AI agents won’t execute JavaScript or wait for client-side rendering.

Your content must be immediately visible in raw HTML.

Clean, semantic markup with valid HTML and proper heading hierarchy helps AI systems parse content accurately, while efficient delivery ensures AI agents don’t abandon slow or bloated sites.

AI bot requirements:

  • Allow AI crawlers (GPTBot, ClaudeBot, PerplexityBot, etc.) through robots.txt (a sample file follows this list).
  • Whitelist AI bot IP ranges rather than blocking with firewalls.
  • Ensure critical content loads without JavaScript dependencies.
  • Avoid “noindex” and “nosnippet” tags on valuable content.
  • Optimize server response times for efficient content retrieval.
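
As a reference point, here is a minimal robots.txt sketch that allows the AI crawlers named above while keeping everything else under your default rules; the user agents and paths are examples to adapt to your own access policy.

```
# Allow common AI crawlers (adjust to your own policy).
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

# Default rules for all other crawlers.
User-agent: *
Disallow: /private/
```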

There have been differing opinions on LLMs.txt files, but they could provide additional guidance for AI systems.

Such a file could direct AI models to your best content during inference.

Place this plain text file at your domain root using proper markdown structure, including only your highest-value, well-structured content that answers specific questions.
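
Because LLMs.txt is still an informal proposal, formats vary, but most published examples follow a simple markdown layout: a title, a one-line summary, and short sections linking to your best pages. A hypothetical sketch (the site name and URLs are placeholders):

```
# Example Company

> Plain-language summary of what the site covers and who it serves.

## Key guides
- [How long SEO takes](https://www.example.com/seo-timeline): Typical timelines and the factors that change them.
- [Technical SEO checklist](https://www.example.com/technical-seo): Server-side rendering, markup, and crawlability basics.
```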

Content Strategy For AI Citations

Your content strategy needs a fundamental shift. Instead of writing for search engine rankings, you’re creating content that feeds AI knowledge bases.

The key to successful retrieval optimization is leading with clear, definitive answers to specific questions.

When addressing common queries like [how long do SEO results take?], start immediately with “SEO results typically appear within three to six months for new websites.”

Break complex topics into digestible, extractable sections that include comprehensive explanations with supporting context.

AI systems favor content that provides complete answers rather than surface-level information, so include relevant data and statistics that can be easily identified and cited.

AI systems don’t retrieve entire pages; they break content into passages or “chunks” and extract the most relevant segments.

This means each section of your content should work as a standalone snippet that’s independently understandable.

Keep one focused idea per section, staying tightly concentrated on single concepts.

Use structured HTML with clear H2 and H3 subheadings for every subtopic, making passages semantically tight and self-contained.

Start each section with direct, concise sentences that immediately address the core point.
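
To make that concrete, here is a hypothetical section built to stand alone as a retrievable passage: a question-style subheading, a direct answer in the first sentence, then supporting context.

```html
<h2>How long do SEO results take?</h2>
<p>SEO results typically appear within three to six months for new websites.</p>
<p>Competitive niches, existing site authority, and publishing cadence can
stretch or shorten that window, so treat the range as a planning baseline
rather than a guarantee.</p>
```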

Building topical authority requires understanding how Google’s AI uses “query fan-out” techniques.

Complex queries get automatically broken into multiple related subqueries and executed in parallel, rewarding sites with both topical breadth and depth.

Create comprehensive pillar pages that summarize main topics with strategic links to deeper cluster content.

Develop cluster pages targeting specific facets of your expertise, then cross-link between related cluster pages to establish semantic relationships.

Cover diverse angles and intents to increase your content’s surface area for AI retrieval across multiple query variations.

Working With AI Systems, Not Against Them

The most successful marketers are learning to optimize for AI inclusion rather than fighting against machine-generated answers.

Optimizing For AI Summaries

Structure your content so AI systems can’t ignore it by leading with clear answers and using scannable formatting.

Include concrete data and statistics that make content citation-worthy, and implement schema markup like FAQ, how-to, and article schemas to help AI understand your content structure.
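
As one example, a minimal FAQPage schema block might look like the following; the question and answer text are placeholders, and any markup you deploy should be validated before launch.

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "How long do SEO results take?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "SEO results typically appear within three to six months for new websites."
    }
  }]
}
</script>
```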

Key formatting elements that AI systems prefer:

  • Bullet points and numbered lists for easy parsing.
  • Clear subheadings that mirror actual user questions.
  • Natural language Q&A format throughout the content.

Building citation-worthy authority requires meeting higher trust and clarity standards than basic content inclusion.

AI systems prioritize content perceived as factually accurate, up-to-date, and authoritative. Include specific, verifiable claims with source citations that link to studies and expert sources.

Show clear authorship and credentials for E-E-A-T (which stands for experience, expertise, authoritativeness, and trustworthiness) signals, and use author and organization structured data for brand entity recognition.
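
One way to express those signals in markup is with author and organization structured data on the article itself; a minimal sketch, where the names, dates, and URLs are placeholders:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "How long do SEO results take?",
  "dateModified": "2025-07-01",
  "author": {
    "@type": "Person",
    "name": "Jane Doe",
    "url": "https://www.example.com/about/jane-doe"
  },
  "publisher": {
    "@type": "Organization",
    "name": "Example Company",
    "url": "https://www.example.com"
  }
}
</script>
```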

Refresh key content regularly with timestamps to signal updated information, and consider publishing original research, surveys, or industry studies that journalists and bloggers reference.

AI search systems increasingly retrieve and synthesize content beyond text, including images, charts, tables, and videos. This creates opportunities for more engaging, scannable answers.

Ensure images and videos are crawlable by avoiding JavaScript-only rendering, and use descriptive alt text that includes topic context for all images.

Add explanatory captions directly below or beside visual elements, and use proper HTML markup, such as <table> elements, instead of images of tables to support AI bot parsing.
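
For example, a small data table marked up in HTML stays parseable in a way an image of the same table does not (the values here are purely illustrative):

```html
<table>
  <caption>Typical SEO timelines by site age (illustrative values)</caption>
  <tr><th>Site age</th><th>Typical time to results</th></tr>
  <tr><td>New site</td><td>3 to 6 months</td></tr>
  <tr><td>Established site</td><td>1 to 3 months</td></tr>
</table>
```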

Monitor Your AI Presence

Traditional rank tracking won’t show your full search visibility anymore. You need to track how AI platforms reference your content across different systems.

Set up Google Alerts for your brand and key topics you cover to catch when AI systems cite your content in their responses.

Regularly check Perplexity AI, ChatGPT, and Google’s AI Overviews for appearances of your content, and screenshot these citations since they’re becoming your new success metrics.

Don’t just monitor your brand presence. Track how competitors appear in AI summaries to understand what type of content AI engines prefer.

This competitive intelligence helps you adjust your strategy based on what’s actually getting cited.

Pay attention to the context around your citations, too, since AI engines sometimes present information differently than you intended, providing valuable feedback for refining how you present information in future content.

The Future Of SEO Is Bigger, Not Smaller

SEO isn’t shrinking. It’s expanding into a multi-platform opportunity. Your content can now appear in traditional search results, AI Overviews, chatbot responses, and voice search answers.

Skills That Matter Most

The SEOs thriving in this new landscape are developing expertise in data analysis to understand how different AI systems crawl and categorize content.

Multi-platform optimization has become essential, requiring the ability to write for Google, ChatGPT, Perplexity, and emerging AI tools simultaneously.

Advanced technical skills around implementing schema markup that actually helps AI understanding are increasingly valuable, along with content strategy integration that aligns SEO with broader content marketing and brand positioning efforts.

As AI makes search more complex, companies need expert guidance to navigate multiple platforms and opportunities.

The brands trying to handle this evolution internally often get left behind while their competitors appear across every AI-powered search experience.

SEO leaders today aren’t just optimizing websites; they’re building strategies that work across traditional and generative search platforms, tracking brand mentions in AI search, and ensuring their companies stay visible as search continues evolving.

Your Next Steps

The shift to AI-powered search isn’t a threat; it’s a call to expand your reach.

Start by auditing your current content for AI citation potential, asking whether it answers specific questions clearly and directly.

Implement comprehensive schema markup on your most important pages to help AI systems understand and categorize your content effectively.

Immediate action items:

  • Create topic clusters that demonstrate deep expertise in your field.
  • Monitor AI platforms for mentions of your brand and competitors.
  • Update older content with fresh data and improved structure for AI retrieval.

The brands dominating tomorrow’s search landscape are adapting now.

Your SEO skills aren’t becoming obsolete; they’re becoming more valuable as companies need experts who can navigate both traditional rankings and AI-generated responses.

The game hasn’t ended. It just got more interesting.


Featured Image: SvetaZi/Shutterstock

Performance Max: I Was A Skeptic & Now I’m Devout (Even In Bing) via @sejournal, @jonkagan

When Google first announced the existence of Performance Max back in 2020, to say I was skeptical of this ad unit would’ve been an understatement.

When it rolled out to everyone in 2021, I described my thoughts about it as “loud, angry, and distrusting”.

In my defense, look at it from a 2021 Jon perspective: Google gave you an ad unit that would opt you into areas you may not want to be in (Display, Partner Network, YouTube), which you couldn’t opt out of.

You also couldn’t target one network; if you didn’t add a YouTube video, it would make its own. There were no exclusions; there were no negatives. There was negligible reporting.

Additionally, it would show in all the ad placements you were already in, and potentially, cannibalize them. There was limited control over the budget.

All you knew was that you would give Google your money and hope it did right by you.

On top of it all, it was described as a supplementary function, but if you wanted to use Local Search Ads or Smart Shopping, you were forced to do this.

This was then followed by Google representatives recommending that we stop running Shopping campaigns because “PMax will handle it” (which contradicted the original descriptions).

Needless to say, I wasn’t thrilled about it. Then, when Bing (because I refuse to call them Microsoft Ads) announced it was going to be rolling out PMax back in 2024, I almost lost it.

My loyal, consistent, trustworthy little buddy, Bing, was going down the evil rabbit hole of non-transparent advertising, and I was angry. That was all then (I know that was just over a year ago, but give me credit).

Fast forward to June 2025 Jon (maybe it is the early summer heat in New England): I am no longer that belligerently angry at PMax for existing (still angry about a lot of other things, though).

Now, for different reasons, I am afraid to say it: I am a Performance Max loyalist. Not just in Google, either, but also in Bing – I love the PMax function in both of them.

Why Was I Anti-PMax?

A little bit of background: I’ve been in the digital space for over 20 years. I’ve seen the evolution of search platforms many times over. Some changes were good. Several were terrible (a la “Enhanced Campaigns” or mandatory “MSAN”).

So, needless to say, I am a firm believer in “If it ain’t broke, don’t fix it.” But PMax was a fix for something that wasn’t broken (or at least that’s what I believed at the time).

More importantly, this ad unit went against a lot of Google’s claims of “trust and transparency.” This ad unit provided, at the time, almost no transparency whatsoever, so it sure didn’t give us a reason to trust it.

A little awkward (Screenshot from Google Transparency Center, July 2025)

This was essentially having the fox watch the hen house.

What if I didn’t want to trigger for a specific search? What if I didn’t have video assets and I couldn’t let Google create them? What do you mean I can’t get a full placement report of where my ad was showing?

Not to mention, initial data and results yielded little to no noticeable growth in a positive direction. But, there was a lot of burning cash somewhere.

But that was just Google. When Bing rolled out its PMax, the Audience Network had just become mandatory for search. The search syndication network was producing garbage, there was no video ad unit, and the documentation on the Bing PMax capability was negligible and hidden (shout out to Milton for helping me find it).

Why was this so hard to find?! (Screenshot from Microsoft Advertising, July 2025)

Why should anyone have been in a pro-PMax mindset at all?

And if you scroll through the old X (Twitter) hashtag of #PPCCHAT (which by the way is the best global paid search community there is), you will realize that few – if any – were, in fact, pro-PMax.

What Changed My Mind About Google?

I should first clarify that I now heavily use Performance Max. It is a necessity (think a necessary evil) in most direct response/performance-driven paid media initiatives.

I maintain several reservations about it. However, other reservations have eroded away over time.

When I first tested out Performance Max, it was a test effort for a consumer packaged goods (CPG) ecommerce company and a couple of quick service restaurant (QSR) brands.

For CPG, we were testing it as a supplement to shopping, and we were honestly ignorant of what it was doing.

For the QSR brands, we tested it out as an alternative to local search campaigns, as those were being “sunsetted” by Google.

If we wanted to continue our digital marketing push to hundreds of brick-and-mortar locations on maps, then our only option was to do PMax (net-net, we were forced to).

In both cases, the initial results were “dog water” (a phrase my 10-year-old son keeps using when describing the Jets’ season).

Why were they bad? There are multiple reasons, including but not limited to: lack of education, probably a poor setup on our part, multiple technical flaws on it via Google, and what seemed like a rush to market/incomplete system.

The CPG ecommerce brand abandoned the effort within a few months (at my recommendation, I should note). But the QSR brands – that was different. We started seeing the data.

For both brands, we had been using local search, YouTube, Search, Discovery (may it rest in peace), and every now and then, GDN – all for different needs.

So, getting them to work together for a single function made sense on paper, but was a novel concept to us.

The QSR brands were optimizing for conversions (we had six types), but one of the six types was more valuable than the others (Store Visits).

Once we moved to a conversion value strategy on PMax, we were off to the races. More so, we started seeing deliveries that exceeded prior deliveries in regular search or local search.

I miss local search (Screenshot from author, July 2025)

This shift in performance forced me to accept that I could compromise my lack of transparency for strong performance.

Something that was eating at me, though, was the impact on search.

For those who remember, PMax search was briefly mobile-only. Then, it expanded to all devices. We did a study to prove it was cannibalizing regular search.

But ultimately, the study made me realize something: I may not be in control of the target and the function, but if the performance was there, my argument against it was going to have to diminish as quickly and quietly as Google Glass.

Ok, Then Why Did You Change Your Mind About Bing PMax?

My perception of Bing PMax changed for a different reason than Google’s.

If you’ve read my past articles, you know I am very much pro-Bing, but in very specific categories, such as healthcare. I am not huge on it in other categories.

So, entering into Bing PMax was going to have to be done either by force or because I heard a rumor.

Needless to say, I got backed into a corner that forced my hand on it (twice), and the first instance happened to coincide with a rumor.

First, note this: I am adamantly against the forced usage of the Bing audience network (MSAN) in search, and not being able to opt out of it completely infuriates me.

Now, cue the rumor: I had been informed by a former Bing employee that if I wanted greater control of the audience network, I needed to go one of two routes:

  1. Run audience network-specific ads, or
  2. Run PMax ads.

I elected the PMax route (and, by the way, that part of the rumor turned out not to be accurate).

I went this route because, at the same time, I had a health insurance brand that was crushing it in efficiency in Bing search, but we couldn’t really scale it anymore.

But, we had a test budget earmarked for direct response/performance tactics, and time was running out to use it (or I would lose it).

So, I threw out the idea of trying PMax in Bing. It had been negligibly attempted within the agency in various verticals with underwhelming performance.

We said, “Why not, let’s give it the old college try and prove that this was not going to work for us,” and we tested it against search.

Well … needless to say, I was wrong. It was beating out search. The only thing it couldn’t do – that Google could – was drive click-to-call leads.

Then What Happened?

A number of things:

  • I somehow got selected to sit on a focus group panel for PMax with Google, and selfishly directed as much feedback as possible to bring on basics that should’ve been around since Day one (search query insight, demographic control, product distribution, keyword targets, negatives, etc.) Note: As of press time, some of this actually came to fruition, but I can almost guarantee I had little to no impact in making it happen.
  • I worked with some brands that were Down For Testing (or “DTF”), and said, “This isn’t going away like Broad Match Modified did, so we need to test it out. If you let me do it, I’ll buy you a sandwich, we’ll plan it out as zero return, and celebrate if it works out.”
  • I tested out different scenarios: target return on ad spend (tROAS), target cost-per-acquisition (tCPA), max conversions, max value, with a Google Business Profile (GBP), without a product feed, etc. – all to see what the right approach would be.
  • With ecommerce brands, we tested it as a supplement to shopping ads, and in scenarios where it replaced shopping ads.
  • I repeated scenarios where I could in Bing.
    • Bing for ecommerce quickly became a rising star for me in PMax.
    • If you’re willing to wait for the longer ramp-up period, it pays off.
  • Most importantly, I stopped fighting PMax adoption. I decided that I could learn to work with less transparency if the returns came back as legitimate.

There Is, However, Some Stuff That Still Really Gets To Me

Don’t take this come around thought train as total acceptance. There are still several things that grind my gears, and tips I recommend for dealing with them:

  • In Google, the moment you get access to the channel report, pore over it in detail. It cannibalizes Search and Shopping, which could mean you need to up your game on other entities, or even reallocate funds as needed.
  • If you have the GBP connected, the distribution of spend on Maps is obscene. It makes me long for the days of local search ads, and when this happens, it comes at the expense of search distribution.
  • Even with the Google Channel Distribution reports, the actual detailed reporting is pretty terrible. Bing doesn’t even have a channel report.
  • If you thought you could use PMax as a way to get into Gmail ad units, think again. Less than 10% of the clients I work with who have PMax and channel reporting have actually shown in Gmail. If you want that placement, go to Demand Gen.
  • Upload a video. Whatever you do (for the love of all that is sane), don’t let Google create a video for you. I’ve seen them; you definitely do not want them.

Not-So Pro-Tips For The World Of PMax

  • Like my therapist wife says: You need to be comfortable with being uncomfortable, and PMax definitely makes you uncomfortable.
  • Have a video ready to go. Don’t let Google make it. Shoot it with your cellphone if you need to.
  • Do not launch without using search themes. You don’t have a lot of controls, but that is one to definitely use.
  • Bing actually has a good search query report, and Google has recently started rolling out a comprehensive search query report. Both are helpful to understand where you’re mapping, and now with Google, you can use it to expand negative keywords.
  • Brand exclusion is a go-to for avoiding competitor bidding.
  • The audience signals are key for thriving. Build them niche, but view them more as a look-alike audience than a pure target.
  • Use every extension under the sun, because why not?
  • In Bing, not all placements are pretty, and you can actually exclude certain placements by creative there. Utilize it.

The Takeaway

Performance Max, whether it is on Google or Bing, is an ad unit that makes you feel somewhat powerless, but honestly, that isn’t a bad thing.

There are a few verticals/scenarios where PMax isn’t usable (specifically, if it is “remarketing only” audiences or legal compliance restrictions).

You will likely be comfortable with the results, but uncomfortable with the method. You aren’t alone; this is a continuously evolving ad unit.

While you’re at it, especially in Google, don’t sleep on Demand Gen; it’s basically a PMax “lite.”


Featured Image: Master1305/Shutterstock

Google On Balancing Needs Of Users And The Web Ecosystem via @sejournal, @martinibuster

At the recent Search Central Live Deep Dive 2025, Kenichi Suzuki asked Google’s Gary Illyes how Google measures high quality and user satisfaction of traffic from AI Overviews. Illyes’ response, published by Suzuki on LinkedIn, covered multiple points.

Kenichi asked for specific data, and Gary’s answer offered an overview of how Google gathers external data to form internal opinions on how AI Overviews is perceived by users in terms of satisfaction. He said that the data informs public statements by Google, including those made by CEO Sundar Pichai.

Illyes began his answer by saying that he couldn’t share specifics about the user satisfaction data, but he still continued to offer his overview.

User Satisfaction Surveys

The first data point that Illyes mentioned was user satisfaction surveys to understand how people feel about AI Overviews. Kenichi wrote that Illyes said:

“The public statements made by company leaders, such as Sundar Pichai, are validated by this internal data before being made public.”

Observed User Behavior

The second user satisfaction data point that Illyes mentioned was inferring from the broader market. Kenichi wrote:

“Gary suggested that one can infer user preference by looking at the broader market. He pointed out that the rapidly growing user base for other AI tools (like ChatGPT and Copilot) likely consists of the same demographic that enjoys and finds value in AI Overviews.”

Motivated By User-Focus

This part means putting the user first as the motivation for introducing a new feature. Illyes specifically said that causing a disruption is not Google’s motivation for AI search features.

Acknowledged The Web Ecosystem

The last point he made was to explain that Google’s still figuring out how to balance their user-focused approach with the need to maintain a healthy web ecosystem.

Kenichi wrote that Illyes said:

“He finished by acknowledging that they are still figuring out how to balance this user-focused approach with the need to continue supporting the wider web ecosystem.”

Balancing The Needs Of The Web Ecosystem

At the dawn of modern SEO, Google did something extraordinary: they reached out to web publishers through the most popular SEO forum at the time, WebmasterWorld. Gary Illyes himself, before he joined Google, was a WebmasterWorld member. This outreach by Google was the initiative of one Googler, Matt Cutts. Other Googlers provided interviews, but Matt Cutts, under the WebmasterWorld nickname of GoogleGuy, held two-way conversations with the search and publisher community.

This is no longer the case at Google, which is largely back to one-way communication accompanied by intermittent social media outreach.

The SEO community may share in the blame for this situation, as some SEOs post abusive responses on social media. Fortunately, those people are in the minority, but that behavior nonetheless puts a chill on the few opportunities provided to have a constructive dialogue.

It’s encouraging to hear Illyes mention the web ecosystem, and it would be even further encouraging to hear Googlers, including the CEO, focus more on how they intend to balance the needs of the users with those of the creators who publish content, because many feel that Google’s current direction is not sustainable for publishers.

Featured Image by Shutterstock/1000 Words

5 Predictions for 2025 Holiday Shopping

Could it be that Americans are heading into the holiday shopping season with confidence?

From faster delivery and cross-border buying to small business growth and AI-powered shopping tools, the coming Christmas season promises to be both bold and efficient — or at least that’s what I predict.

Near Instant Gratification

Fast, free delivery has become so common that consumers will pick up or receive at least 35% of orders placed in November and December within 24 hours.

I foresee a couple of factors driving speedy deliveries.

First, Amazon’s infrastructure prioritizes rapid delivery. In urban areas, Amazon delivers approximately 60% of Prime orders the next day. Rural delivery lowers the average, but Fulfillment by Amazon shipments will provide nearly instant shopping gratification.

Second, buy online, pick up in-store purchasing has grown rapidly and could soon represent 10% of U.S. ecommerce sales, according to Capital One Shopping.

Canadian-American Relations

Canadian shoppers are among the most active cross-border consumers worldwide. In a given year, about half of folks north of the border shop with a U.S. ecommerce business.

Holiday shopping has become a digital ritual — convenient and quiet.

Despite tariff disputes, I believe these shopping habits are both resilient and beneficial. Canadian buyers are accustomed to shopping at U.S. stores online owing to value and variety. And, the nations have been friends for too long to experience lasting trade disruptions.

With this in mind, expect at least 55% of Canadian shoppers to make at least one holiday purchase from U.S. ecommerce stores in 2025.

Small Business Growth

I expect small, independent online retailers will grow by approximately 10% in 2025, outperforming overall ecommerce performance and reaching roughly $15.5 billion in U.S. holiday revenue.

In comparison, Shopify merchants alone generated about $11.5 billion during the 2024 holiday peak sale period. Etsy sellers added about $2 billion.

The growth should come from small brands that sell craft or U.S.-made products.

AI Shopping

During the peak gift-giving season, at least 50% of North American shoppers will use artificial intelligence for shopping. Consumers will chat, search, seek recommendations, and even make purchases with the help of AI tools.

Last year, reportedly fewer than 15% of U.S. shoppers consulted AI for holiday gift giving, but much has changed in a year. AI is present in nearly every tool, including Google.

Hence AI product discovery will likely be the top ecommerce traffic source in 2025.

Consumer Confidence

I was pessimistic last year about U.S. holiday ecommerce growth, and it showed in my failed predictions, listed below. If I am going to err this year, it will be on the side of being too optimistic.

The U.S. stock market has performed well of late. For example, the S&P 500 and the Nasdaq Composite index recently hit record highs. The driver for this boom may be trade optimism and solid corporate earnings.

I suspect this enthusiasm will carry over into holiday gift-giving in 2025. The key factor will be whether shoppers believe they can afford to spend.

Last Year’s Predictions

Since 2013 I have predicted ecommerce trends and sales for the coming holiday season. In 2024, I was incorrect in four of my five predictions, making last year’s forecasting my worst yet. Here are the embarrassing specifics.

Mobile commerce will represent 54% of holiday ecommerce sales: correct. Adobe reported that U.S. holiday sales on mobile devices reached $131.5 billion, accounting for 54.4% of the overall online total.

Ecommerce holiday sales grow 5% year-over-year: wrong. I was too pessimistic last year. I wrote that early holiday predictions, including one suggesting 23% growth in 2024, were “too optimistic, given the contentious U.S. election, inflation, and other economic woes.” Most sources put the actual growth at 8.7%.

Email volume grows 25% during the 2024 holiday season: wrong. This one was more difficult to measure, but nonetheless, I likely missed the mark. Global email volume grew about 4.3% year-over-year during the fourth quarter, according to multiple sources.

40% of Gen Zs use social commerce this holiday season: wrong. Most estimates place the actual number at 32% for Gen Zs (ages 13 to 28), while an estimated 12% of all U.S. consumers shopped social in 2024.

BNPL accounts for 9% of online holiday sales: wrong. About 7.7% of U.S. holiday purchases in November and December 2024 were buy-now, pay-later, representing $18.2 billion, per Adobe.

What role should oil and gas companies play in climate tech?

This week, I have a new story out about Quaise, a geothermal startup that’s trying to commercialize new drilling technology. Using a device called a gyrotron, the company wants to drill deeper, cheaper, in an effort to unlock geothermal power anywhere on the planet. (For all the details, check it out here.) 

For the story, I visited Quaise’s headquarters in Houston. I also took a trip across town to Nabors Industries, Quaise’s investor and tech partner and one of the biggest drilling companies in the world. 

Standing on top of a drilling rig in the backyard of Nabors’s headquarters, I couldn’t stop thinking about the role oil and gas companies are playing in the energy transition. This industry has resources and energy expertise—but also a vested interest in fossil fuels. Can it really be part of addressing climate change?

The relationship between Quaise and Nabors is one that we see increasingly often in climate tech—a startup partnering up with an established company in a similar field. (Another one that comes to mind is in the cement industry, where Sublime Systems has seen a lot of support from legacy players including Holcim, one of the biggest cement companies in the world.) 

Quaise got an early investment from Nabors in 2021, to the tune of $12 million. Now the company also serves as a technical partner for the startup. 

“We are agnostic to what hole we’re drilling,” says Cameron Maresh, a project engineer on the energy transition team at Nabors Industries. The company is working on other investments and projects in the geothermal industry, Maresh says, and the work with Quaise is the culmination of a yearslong collaboration: “We’re just truly excited to see what Quaise can do.”

From the outside, this sort of partnership makes a lot of sense for Quaise. It gets resources and expertise. Meanwhile, Nabors is getting involved with an innovative company that could represent a new direction for geothermal. And maybe more to the point, if fossil fuels are to be phased out, this deal gives the company a stake in next-generation energy production.

There is so much potential for oil and gas companies to play a productive role in addressing climate change. One report from the International Energy Agency examined the role these legacy players could take:  “Energy transitions can happen without the engagement of the oil and gas industry, but the journey to net zero will be more costly and difficult to navigate if they are not on board,” the authors wrote. 

In the agency’s blueprint for what a net-zero emissions energy system could look like in 2050, about 30% of energy could come from sources where the oil and gas industry’s knowledge and resources are useful. That includes hydrogen, liquid biofuels, biomethane, carbon capture, and geothermal. 

But so far, the industry has hardly lived up to its potential as a positive force for the climate. Also in that report, the IEA pointed out that oil and gas producers made up only about 1% of global investment in climate tech in 2022. Investment has ticked up a bit since then, but still, it’s tough to argue that the industry is committed. 

And now that climate tech is falling out of fashion with the government in the US, I’d venture to guess that we’re going to see oil and gas companies increasingly pulling back on their investments and promises. 

BP recently backtracked on previous commitments to cut oil and gas production and invest in clean energy. And last year the company announced that it had written off $1.1 billion in offshore wind investments in 2023 and wanted to sell other wind assets. Shell closed down all its hydrogen fueling stations for vehicles in California last year. (This might not be all that big a loss, since EVs are beating hydrogen by a huge margin in the US, but it’s still worth noting.) 

So oil and gas companies are investing what amounts to pennies and often backtrack when the political winds change direction. And, let’s not forget, fossil-fuel companies have a long history of behaving badly. 

In perhaps the most notorious example, scientists at Exxon modeled climate change in the 1970s, and their forecasts turned out to be quite accurate. Rather than publish that research, the company downplayed how climate change might affect the planet. (For what it’s worth, company representatives have argued that this was less of a coverup and more of an internal discussion that wasn’t fit to be shared outside the company.) 

While fossil fuels are still part of our near-term future, oil and gas companies, and particularly producers, would need to make drastic changes to align with climate goals—changes that wouldn’t be in their financial interest. Few seem inclined to really take the turn needed. 

As the IEA report puts it:  “In practice, no one committed to change should wait for someone else to move first.”

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

The deadly saga of the controversial gene therapy Elevidys

It has been a grim few months for the Duchenne muscular dystrophy (DMD) community. There had been some excitement when, a couple of years ago, a gene therapy for the disorder was approved by the US Food and Drug Administration for the first time. That drug, Elevidys, has now been implicated in the deaths of two teenage boys.

The drug’s approval was always controversial—there was a lack of evidence that it actually worked, for starters. But the agency that once rubber-stamped the drug has now turned on its manufacturer, Sarepta Therapeutics. In a remarkable chain of events, the FDA asked the company to stop shipping the drug on July 18. Sarepta refused to comply.

In the days since, the company has acquiesced. But its reputation has already been hit. And the events have dealt a devastating blow to people desperate for treatments that might help them, their children, or other family members with DMD.

DMD is a rare genetic disorder that causes muscles to degenerate over time. It’s caused by a mutation in a gene that codes for a protein called dystrophin. That protein is essential for muscles—without it, muscles weaken and waste away. The disease mostly affects boys, and symptoms usually start in early childhood.

At first, affected children usually start to find it hard to jump or climb stairs. But as the disease progresses, other movements become difficult too. Eventually, the condition might affect the heart and lungs. The life expectancy of a person with DMD has recently improved, but it is still only around 30 or 40 years. There is no cure. It’s a devastating diagnosis.

Elevidys was designed to replace missing dystrophin with a shortened, engineered version of the protein. In June 2023, the FDA approved the therapy for eligible four- and five-year-olds. It came with a $3.2 million price tag.

The approval was celebrated by people affected by DMD, says Debra Miller, founder of CureDuchenne, an organization that funds research into the condition and offers support to those affected by it. “We’ve not had much in the way of meaningful therapies,” she says. “The excitement was great.”

But the approval was controversial. It came under an “accelerated approval” program that essentially lowers the bar of evidence for drugs designed to treat “serious or life-threatening diseases where there is an unmet medical need.”

Elevidys was approved because it appeared to increase levels of the engineered protein in patients’ muscles. But it had not been shown to improve patient outcomes: It had failed a randomized clinical trial.

The FDA approval was granted on the condition that Sarepta complete another clinical trial. The topline results of that trial were described in October 2023 and were published in detail a year later. Again, the drug failed to meet its “primary endpoint”—in other words, it didn’t work as well as hoped.

In June 2024, the FDA expanded the approval of Elevidys. It granted traditional approval for the drug to treat people with DMD who are over the age of four and can walk independently, and another accelerated approval for those who can’t.

Some experts were appalled at the FDA’s decision—even some within the FDA disagreed with it. But things weren’t so simple for people living with DMD. I spoke to some parents of such children a couple of years ago. They pointed out that drug approvals can help bring interest and investment to DMD research. And, above all, they were desperate for any drug that might help their children. They were desperate for hope.

Unfortunately, the treatment does not appear to be delivering on that hope. There have always been questions over whether it works. But now there are serious questions over how safe it is. 

In March 2025, a 16-year-old boy died after being treated with Elevidys. He had developed acute liver failure (ALF) after having the treatment, Sarepta said in a statement. On June 15, the company announced a second death—a 15-year-old who also developed ALF following Elevidys treatment. The company said it would pause shipments of the drug, but only for patients who are not able to walk.

The following day, Sarepta held an online presentation in which CEO Doug Ingram said that the company was exploring ways to make the treatment safer, perhaps by treating recipients with another drug that dampens their immune systems. But that same day, the company announced that it was laying off 500 employees—36% of its workforce. Sarepta did not respond to a request for comment.

On June 24, the FDA announced that it was investigating the risks of serious outcomes “including hospitalization and death” associated with Elevidys, and “evaluating the need for further regulatory action.”

There was more tragic news on July 18, when there were reports that a third patient had died following a Sarepta treatment. This patient, a 51-year-old, hadn’t been taking Elevidys but was enrolled in a clinical trial for a different Sarepta gene therapy designed to treat limb-girdle muscular dystrophy. The same day, the FDA asked Sarepta to voluntarily pause all shipments of Elevidys. Sarepta refused to do so.

The refusal was surprising, says Michael Kelly, chief scientific officer at CureDuchenne: “It was an unusual step to take.”

After significant media coverage, including reporting that the FDA was “deeply troubled” by the decision and would use its “full regulatory authority,” Sarepta backed down a few days later. On July 21, the company announced its decision to “voluntarily and temporarily” pause all shipments of Elevidys in the US.

Sarepta says it will now work with the FDA to address safety and labeling concerns. But in the meantime, the saga has left the DMD community grappling with “a mix of disappointment and concern,” says Kelly. Many are worried about the risks of taking the treatment. Others are devastated that they are no longer able to access it.

Miller says she knows of families who have been working with their insurance providers to get authorization for the drug. “It’s like the rug has been pulled out from under them,” she says. Many families have no other treatment options. “And we know what happens when you do nothing with Duchenne,” she says. Others, particularly those with teenage children with DMD, are deciding against trying the drug, she adds.

The decision over whether to take Elevidys was already a personal one based on several factors, he says. People with DMD and their families deserve clear and transparent information about the treatment in order to make that decision.

The FDA’s decision to approve Elevidys was made on limited data, says Kelly. But as things stand today, over 900 people have been treated with Elevidys. “That gives the FDA… an opportunity to look at real data and make informed decisions,” he says.

“Families facing Duchenne do not have time to waste,” Kelly says. “They must navigate a landscape where hope is tempered by the realities of medical complexity.”

A version of this article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

How nonprofits and academia are stepping up to salvage US climate programs

Nonprofits are striving to preserve a US effort to modernize greenhouse-gas measurements, amid growing fears that the Trump administration’s dismantling of federal programs will obscure the nation’s contributions to climate change.

The Data Foundation, a Washington, DC, nonprofit that advocates for open data, is fundraising for an initiative that will coordinate efforts among nonprofits, technical experts, and companies to improve the accuracy and accessibility of climate emissions information. It will build on an effort to improve the collection of emissions data that former president Joe Biden launched in 2023—and which President Trump nullified on his first day in office. 

The initiative will help prioritize responses to changes in federal greenhouse-gas monitoring and measurement programs, but the Data Foundation stresses that it will primarily serve a “long-standing need for coordination” of such efforts outside of government agencies.

The new greenhouse-gas coalition is one of a growing number of nonprofit and academic groups that have spun up or shifted focus to keep essential climate monitoring and research efforts going amid the Trump administration’s assault on environmental funding, staffing, and regulations. Those include efforts to ensure that US scientists can continue to contribute to the UN’s major climate report and publish assessments of the rising domestic risks of climate change. Otherwise, the loss of these programs will make it increasingly difficult for communities to understand how more frequent or severe wildfires, droughts, heat waves, and floods will harm them—and how dire the dangers could become. 

Few believe that nonprofits or private industry can come close to filling the funding holes that the Trump administration is digging. But observers say it’s essential to try to sustain efforts to understand the risks of climate change that the federal government has historically overseen, even if the attempts are merely stopgap measures. 

If we give up these sources of emissions data, “we’re flying blind,” says Rachel Cleetus, senior policy director with the climate and energy program at the Union of Concerned Scientists. “We’re deliberately taking away the very information that would help us understand the problem and how to address it best.”

Improving emissions estimates

The Environmental Protection Agency, the National Oceanic and Atmospheric Administration, the US Forest Service, and other agencies have long collected information about greenhouse gases in a variety of ways. These include self-reporting by industry; shipboard, balloon, and aircraft readings of gas concentrations in the atmosphere; satellite measurements of the carbon dioxide and methane released by wildfires; and on-the-ground measurements of trees. The EPA, in turn, collects and publishes the data from these disparate sources as the Inventory of US Greenhouse Gas Emissions and Sinks.

But that report comes out on a two-year lag, and studies show that some of the estimates it relies on could be way off—particularly the self-reported ones.

A recent analysis using satellites to measure methane pollution from four large landfills found they produce, on average, six times more emissions than the facilities had reported to the EPA. Likewise, a 2018 study in Science found that the actual methane leaks from oil and gas infrastructure were about 60% higher than the self-reported estimates in the agency’s inventory.

The Biden administration’s initiative—the National Strategy to Advance an Integrated US Greenhouse Gas Measurement, Monitoring, and Information System—aimed to adopt state-of-the-art tools and methods to improve the accuracy of these estimates, including satellites and other monitoring technologies that can replace or check self-reported information.

The administration specifically sought to achieve these improvements through partnerships between government, industry, and nonprofits. The initiative called for the data collected across groups to be published to an online portal in formats that would be accessible to policymakers and the public.

Moving toward a system that produces more current and reliable data is essential for understanding the rising risks of climate change and tracking whether industries are abiding by government regulations and voluntary climate commitments, says Ben Poulter, a former NASA scientist who coordinated the Biden administration effort as a deputy director in the Office of Science and Technology Policy.

“Once you have this operational system, you can provide near-real-time information that can help drive climate action,” Poulter says. He is now a senior scientist at Spark Climate Solutions, a nonprofit focused on accelerating emerging methods of combating climate change, and he is advising the Data Foundation’s Climate Data Collaborative, which is overseeing the new greenhouse-gas initiative. 

Slashed staffing and funding  

But the momentum behind the federal strategy deflated when Trump returned to office. On his first day, he signed an executive order that effectively halted it. The White House has since slashed staffing across the agencies at the heart of the effort, sought to shut down specific programs that generate emissions data, and raised uncertainties about the fate of numerous other program components. 

In April, the administration missed a deadline to share the updated greenhouse-gas inventory with the United Nations, for the first time in three decades, as E&E News reported. It eventually did release the report in May, but only after the Environmental Defense Fund filed a Freedom of Information Act request.

There are also indications that the collection of emissions data might be in jeopardy. In March, the EPA said it would “reconsider” the Greenhouse Gas Reporting Program, which requires thousands of power plants, refineries, and other industrial facilities to report emissions each year.

In addition, the tax and spending bill that Trump signed into law earlier this month rescinds provisions in Biden’s Inflation Reduction Act that provided incentives or funding for corporate greenhouse-gas reporting and methane monitoring. 

Meanwhile, the White House has also proposed slashing funding for the National Oceanic and Atmospheric Administration and shuttering a number of its labs. Those include the facility that supports the Mauna Loa Observatory in Hawaii, the world’s longest-running carbon dioxide measuring program, as well as the Global Monitoring Laboratory, which operates a global network of collection flasks that capture air samples used to measure concentrations of nitrous oxide, chlorofluorocarbons, and other greenhouse gases.

Under the latest appropriations negotiations, Congress seems set to spare NOAA and other agencies the full cuts pushed by the Trump administration, but that may or may not protect various climate programs within them. As observers have noted, the loss of experts throughout the federal government, coupled with the priorities set by Trump-appointed leaders of those agencies, could still prevent crucial emissions data from being collected, analyzed, and published.

“That’s a huge concern,” says David Hayes, a professor at the Stanford Doerr School of Sustainability, who previously worked on the effort to upgrade the nation’s emissions measurement and monitoring as special assistant to President Biden for climate policy. It’s not clear “whether they’re going to continue and whether the data availability will drop off.”

‘A natural disaster’

Amid all these cutbacks and uncertainties, those still hoping to make progress toward an improved system for measuring greenhouse gases have had to adjust their expectations: It’s now at least as important to simply preserve or replace existing federal programs as it is to move toward more modern tools and methods.

But Ryan Alexander, executive director of the Data Foundation’s Climate Data Collaborative, is optimistic that there will be opportunities to do both. 

She says the new greenhouse-gas coalition will strive to identify the highest-priority needs and help other nonprofits or companies accelerate the development of new tools or methods. It will also aim to ensure that these organizations avoid replicating one another’s efforts and deliver data with high scientific standards, in open and interoperable formats. 

The Data Foundation declines to say what other nonprofits will be members of the coalition or how much money it hopes to raise, but it plans to make a formal announcement in the coming weeks. 

Nonprofits and companies are already playing a larger role in monitoring emissions, including organizations like Carbon Mapper, which operates satellites and aircraft that detect and measure methane emissions from particular facilities. The EDF also launched a satellite last year, known as MethaneSAT, that could spot large and small sources of emissions—though it lost power earlier this month and probably cannot be recovered. 

Alexander notes that shifting from self-reported figures to observational technology like satellites could not just replace but perhaps also improve on the EPA reporting program that the Trump administration has moved to shut down.

Given the “dramatic changes” brought about by this administration, “the future will not be the past,” she says. “This is like a natural disaster. We can’t think about rebuilding in the way that things have been in the past. We have to look ahead and say, ‘What is needed? What can people afford?’”

Organizations can also use this moment to test and develop emerging technologies that could improve greenhouse-gas measurements, including novel sensors or artificial intelligence tools, Hayes says. 

“We are at a time when we have these new tools, new technologies for measurement, measuring, and monitoring,” he says. “To some extent it’s a new era anyway, so it’s a great time to do some pilot testing here and to demonstrate how we can create new data sets in the climate area.”

Saving scientific contributions

It’s not just the collection of emissions data that nonprofits and academic groups are hoping to save. Notably, the American Geophysical Union and its partners have taken on two additional climate responsibilities that traditionally fell to the federal government.

The US State Department’s Office of Global Change historically coordinated the nation’s contributions to the UN Intergovernmental Panel on Climate Change’s major reports on climate risks, soliciting and nominating US scientists to help write, oversee, or edit sections of the assessments. The US Global Change Research Program, an interagency group that ran much of the process, also covered the cost of trips to a series of in-person meetings with international collaborators. 

But the US government seems to have relinquished any involvement as the IPCC kicks off the process for the Seventh Assessment Report. In late February, the administration blocked federal scientists including NASA’s Katherine Calvin, who was previously selected as a cochair for one of the working groups, from attending an early planning meeting in China. (Calvin was the agency’s chief scientist at the time but was no longer serving in that role as of April, according to NASA’s website.)

The agency didn’t respond to inquiries from interested scientists after the UN panel issued a call for nominations in March, and it failed to present a list of nominations by the deadline in April, scientists involved in the process say. The Trump administration also canceled funding for the Global Change Research Program and, earlier this month, fired the last remaining staffers working at the Office of Global Change.

In response, 10 universities came together in March to form the US Academic Alliance for the IPCC, in partnership with the AGU, to request and evaluate applications from US researchers. The universities—which include Yale, Princeton, and the University of California, San Diego—together nominated nearly 300 scientists, some of whom the IPCC has since officially selected. The AGU is now conducting a fundraising campaign to help pay for travel expenses.

Pamela McElwee, a professor at Rutgers who helped establish the academic coalition, says it’s crucial for US scientists to continue participating in the IPCC process.

“It is our flagship global assessment report on the state of climate, and it plays a really important role in influencing country policies,” she says. “To not be part of it makes it much more difficult for US scientists to be at the cutting edge and advance the things we need to do.” 

The AGU also stepped in two months later, after the White House dismissed hundreds of researchers working on the National Climate Assessment, a congressionally mandated report, produced roughly every four years, that analyzes the rising dangers of climate change across the country. The AGU and American Meteorological Society together announced plans to publish a “special collection” to sustain the momentum of that effort.

“It’s incumbent on us to ensure our communities, our neighbors, our children are all protected and prepared for the mounting risks of climate change,” said Brandon Jones, president of the AGU, in an earlier statement.

The AGU declined to discuss the status of the project.

Stopgap solution

The sheer number of programs the White House is going after will require organizations to make hard choices about what they attempt to save and how they go about it. Moreover, relying entirely on nonprofits and companies to take over these federal tasks is not viable over the long term. 

Given the costs of these federal programs, it could prove prohibitive to even keep a minimum viable version of some essential monitoring systems and research programs up and running. Dispersing across various organizations the responsibility of calculating the nation’s emissions sources and sinks also creates concerns about the scientific standards applied and the accessibility of that data, Cleetus says. Plus, moving away from the records that NOAA, NASA, and other agencies have collected for decades would break the continuity of that data, undermining the ability to detect or project trends.

More fundamentally, publishing national emissions data should be a federal responsibility, particularly for the government of the world’s second-largest climate polluter, Cleetus adds. Failing to calculate and share its contributions to climate change sidesteps the nation’s global responsibilities and sends a terrible signal to other countries.

Poulter stresses that nonprofits and the private sector can do only so much, for so long, to keep these systems up and running.

“We don’t want to give the impression that this greenhouse-gas coalition, if it gets off the ground, is a long-term solution,” he says. “But we can’t afford to have gaps in these data sets, so somebody needs to step in and help sustain those measurements.”

Why A Site Deindexed By Google For Programmatic SEO Bounced Back via @sejournal, @martinibuster

A company founder shared their experience with programmatic SEO, which they credited for initial success until it was deindexed by Google, calling it a big mistake they won’t repeat. The post, shared on LinkedIn, received scores of supportive comments.

The website didn’t receive a manual action; rather, Google deindexed the pages because of their poor content quality.

Programmatic SEO (pSEO)

Programmatic SEO (aka pSEO) is a phrase that covers a wide range of tactics with automation at their core. Some of that automation can be genuinely useful, such as generating sitewide meta descriptions, titles, and image alt text from structured data.
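To make that distinction concrete, here is a minimal sketch of the “useful” kind of automation: filling metadata templates from structured data rather than generating whole pages. It’s written in Python purely for illustration; the file name, column names, and templates are assumptions, not anything described in the post.

```python
import csv

# Hypothetical product feed; the file name and its columns
# ("name", "category", "color", "short_benefit") are invented for illustration.
TITLE_TEMPLATE = "{name} – {category} | Example Store"
DESCRIPTION_TEMPLATE = "Shop the {name}: {short_benefit}. Free shipping on orders over $50."
ALT_TEXT_TEMPLATE = "{name}, {color} {category}"

def build_metadata(row: dict) -> dict:
    """Fill sitewide metadata templates from one row of structured data."""
    return {
        "title": TITLE_TEMPLATE.format(**row)[:60],            # stay near typical display limits
        "meta_description": DESCRIPTION_TEMPLATE.format(**row)[:155],
        "image_alt": ALT_TEXT_TEMPLATE.format(**row),
    }

if __name__ == "__main__":
    with open("products.csv", newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            print(build_metadata(row))
```

Automation of this kind only describes content that already exists on the page; it doesn’t manufacture new pages.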

pSEO is also the practice of using AI automation to scale content creation sitewide, which is what the founder did. They created fifty thousand pages targeting long-tail phrases, queries that are individually searched only rarely. The site initially received hundreds of clicks and millions of impressions, but the success was not long-lived.
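The riskier pattern looks more like the sketch below: expanding seed keyword lists through a fixed page template so that thousands of near-identical pages can be published with almost no unique substance per page. The seed lists, template, and output directory are invented for illustration; the point is how quickly the combinations multiply and how thin each resulting page is.

```python
from itertools import product
from pathlib import Path

# Invented seed lists; real programmatic setups often combine far larger ones.
modifiers = ["best", "cheapest", "lightweight"]
topics = ["hiking boots", "trail runners", "rain jackets"]
cities = ["Austin", "Denver", "Seattle"]

PAGE_TEMPLATE = """<h1>{modifier} {topic} in {city}</h1>
<p>Looking for the {modifier} {topic} in {city}? Here is our roundup.</p>
"""

out_dir = Path("generated_pages")
out_dir.mkdir(exist_ok=True)

# 3 x 3 x 3 seeds already yield 27 pages; scale the lists up and you quickly
# reach tens of thousands of pages that differ only by the substituted words.
for modifier, topic, city in product(modifiers, topics, cities):
    slug = f"{modifier}-{topic}-{city}".lower().replace(" ", "-")
    html = PAGE_TEMPLATE.format(modifier=modifier.title(), topic=topic, city=city)
    (out_dir / f"{slug}.html").write_text(html, encoding="utf-8")
```

Every generated page carries the same boilerplate with a few words swapped, which is exactly the kind of thin, duplicated content described below.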

According to the post by Miquel Palet (LinkedIn profile):

“Google flagged our domain. Pages started getting deindexed. Traffic plummeted overnight.

We learned the hard way that shortcuts don’t scale sustainably.

It was a huge mistake, but also a great lesson.

And it’s one of the reasons we rebranded to Tailride.”

Thin AI Content Was The Culprit

A follow-up post explained that they believe the AI-generated content backfired because it was thin content, which makes sense. Thin content, regardless of how it was authored, can be problematic.

One of the posts by Palet explained:

“We’re not sure, but probably not because AI. It was thin content and probably duplicated.”

Rasmus Sørensen (LinkedIn profile), an experienced digital marketer, shared his opinion that he has seen some marketers pushing shady practices under the banner of pSEO:

“Thanks for sharing and putting some real live experiences forward. Programmatic SEO had been touted as the next best thing in SEO. It’s not and I’ve seen soo much garbage published the last few months and agencies claiming that their pSEO is the silver bullet.
It very rarely is.”

Joe Youngblood (LinkedIn profile) shared that SEO trends can be abused and implied that it is a viable strategy if done correctly:

“I would always do something like pSEO under the supervision of a seasoned SEO consultant. This tale happens all too frequently with an SEO trend…”

What They Did To Fix The Site

The company founder shared that they rebranded the website on a new domain, redirected the old domain to the new one, and refocused the site on higher-quality content that’s relevant to users.

They explained:

“Less pages + more quality”

A site: search for their domain shows that Google is now indexing their content, indicating that they are back on track.

Takeaways

Programmatic SEO can be useful if approached with an understanding of where the line is between good quality and “not-quality” content.

Featured Image by Shutterstock/Cast Of Thousands

Why Is SureRank WordPress SEO Plugin So Popular? via @sejournal, @martinibuster

A new SEO plugin called SureRank, by Brainstorm Force, makers of the popular Astra theme, is rapidly growing in popularity. In beta for a few months, it was announced in July and has amassed over twenty thousand installations. That’s a pretty good start for an SEO plugin that has only been out of beta for a few weeks.

One possible reason that SureRank is quickly becoming popular is that it’s created by a trusted brand, much loved for its Astra WordPress theme.

SureRank By Brainstorm Force

SureRank is the creation of the publishers of many highly popular plugins and themes installed on millions of websites, such as the Astra theme, Ultimate Addons for Elementor, Spectra Gutenberg Blocks – Website Builder for the Block Editor, and Starter Templates – AI-Powered Templates for Elementor & Gutenberg, to name a few.

Why Another SEO Plugin?

The goal of SureRank is to provide an easy-to-use SEO solution that includes only the features every site actually needs, avoiding feature bloat. It positions itself as an SEO assistant that guides the user through an intuitive interface.

What Does SureRank Do?

SureRank has an onboarding process that walks a user through the initial optimizations and setup. It then performs an analysis and offers suggestions for site-level improvements.

It currently enables users to handle the basics like:

  • Edit titles and meta descriptions
  • Write custom social media titles, descriptions, and featured images
  • Tweak home page and archive page metadata
  • Manage meta robots directives, canonicals, and sitemaps
  • Add Schema structured data (see the illustrative sketch after this list)
  • Run site- and page-level SEO analysis
  • Generate image alt text automatically
  • Integrate with Google Search Console
  • Integrate with WooCommerce
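As a concrete illustration of the “Schema structured data” item above, the sketch below builds the kind of JSON-LD block an SEO plugin typically injects into a page’s head. The field values are placeholders, and this is not SureRank’s actual output.

```python
import json

# Placeholder Article markup; none of these values come from a real site.
article_schema = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Example Post Title",
    "author": {"@type": "Person", "name": "Example Author"},
    "datePublished": "2025-07-01",
    "image": "https://example.com/featured-image.jpg",
}

# A plugin would emit a tag like this inside the page's <head>.
print(f'<script type="application/ld+json">{json.dumps(article_schema, indent=2)}</script>')
```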

SureRank also provides a built-in tool for migrating settings from other popular SEO plugins like Rank Math, Yoast, and AIOSEO.

Check out the SureRank SEO plugin at the official WordPress.org repository:

SureRank – SEO Assistant with Meta Tags, Social Preview, XML Sitemap, and Schema

Featured Image by Shutterstock/Roman Samborskyi