OpenAI Expresses Interest In Buying Chrome Browser via @sejournal, @martinibuster

Nick Turley, Head of Product at ChatGPT, testified that OpenAI would be interested in acquiring the Chrome browser should a judge decide to break it off from Alphabet, Google’s parent company.

According to a report in Reuters:

“ChatGPT head of product Nick Turley made the statement while testifying at trial in Washington where U.S. Department of Justice seeks to require Google to undertake far-reaching measures to restore competition in online search.”

Perplexity Comes Out Against Chrome Divestiture

On Monday, Perplexity CEO Aravind Srinivas wrote a post on X (formerly Twitter) stating that he intends to testify in support of Google at the U.S. government’s antitrust trial.

Perplexity simultaneously published an article explaining that its position isn’t so much about supporting Google as it is about supporting the future of web browsers and a more open Android ecosystem, two things Srinivas argues will preserve a high level of browser quality and create more opportunity and innovation on mobile devices, a win-win for consumers and businesses.

The United States Department of Justice wants to split Chrome off from Google as a way to minimize Google’s monopoly position across multiple industries, which it asserts is having a negative effect on competition. Srinivas argues that separating Chrome from Google would have the opposite effect.

Srinivas laid out his two key concerns:

1. “Google should not be broken up. Chrome should remain within and continue to be run by Google. Google deserves a lot of credit for open-sourcing Chromium, which powers Microsoft’s Edge and will also power Perplexity’s Comet. Chrome has become the dominant browser due to incredible execution quality at the scale of billions of users.

2. Android should become more open to consumer choice. There shouldn’t be a tight coupling to the default apps set by Google, and the permission for OEMs to have the Play Store and Maps. Consumers should have the choice to pick who they want as a default search and default voice assistant, and OEMs should be able to offer consumers this choice without having to be blocked by Google on the ability to have the Play Store and other Google apps (Maps, YouTube).”

Takeaways

OpenAI Expresses Interest In Chrome Browser

  • Nick Turley, Head of Product at ChatGPT, stated OpenAI would be interested in purchasing Chrome if a court orders Google to divest it.
  • His statement was made during testimony in the U.S. Department of Justice’s antitrust trial against Google.

Perplexity AI’s Position Against Chrome Divestiture

  • Perplexity CEO Aravind Srinivas publicly opposed the idea of separating Chrome from Google.
  • He announced plans to testify in support of Google in the antitrust case.
  • Perplexity emphasized that their stance is focused on preserving innovation.

Call for a More Open Android Ecosystem

  • Srinivas advocated for a more open Android ecosystem.
  • He proposed that consumers should freely choose their default search engine and voice assistant.
  • He criticized Google’s practice of requiring OEMs to bundle Google services like the Play Store and Maps.
  • He urged regulators to focus on increasing consumer choice on Android rather than breaking up Chrome.

Featured Image by Shutterstock/Prathmesh T

DOJ’s Google Search Trial: What If Google Must Sell Chrome? via @sejournal, @MattGSouthern

The next phase of the DOJ’s antitrust case against Google started Monday. Both sides presented different views on the future of search and AI.

This follows Judge Amit Mehta’s ruling last year that Google illegally kept its dominance by making exclusive deals with device makers.

DOJ Wants Major Changes to Break Google’s Control

Assistant Attorney General Gail Slater made the government’s position clear:

“Each generation has called for the DOJ to challenge a behemoth that crushed competition. In the past, it was Standard Oil and AT&T. Today’s behemoth is Google.”

The Justice Department wants several changes, including:

  • Making Google sell the Chrome browser
  • Ending exclusive search deals with Apple and Samsung
  • Forcing Google to share search results with competitors
  • Limiting Google’s AI deals
  • Possibly selling off Android if other changes don’t work

DOJ attorney David Dahlquist stated that the court needs to look ahead to prevent Google from expanding its search power into AI. He revealed that Google pays Samsung a monthly sum to install Gemini AI on its devices.

Dahlquist said:

“Now is the time to tell Google and all other monopolists that there are consequences when you break the antitrust laws.”

Google Says These Ideas Would Hurt Innovation

Google disagrees with the DOJ’s plans. Attorney John Schmidtlein called them “a wishlist for competitors looking to get the benefits of Google’s extraordinary innovations.”

In a blog post before the trial, Google VP Lee-Anne Mulholland warned:

“DOJ’s proposal would also hamstring how we develop AI and have a government committee regulate our products. That would hold back American innovation when we’re in a race with China for technology leadership.”

Google also claims that sharing search data would risk user privacy. They say ending distribution deals would make devices more expensive and hurt companies like Mozilla.

Perplexity Suggests “Choice” as Better Solution

AI search startup Perplexity offers a middle-ground approach.

CEO Aravind Srinivas doesn’t support forcing Google to sell Chrome, posting:

“We don’t believe anyone else can run a browser at that scale without a hit on quality.”

Instead, Perplexity focuses on Android’s restrictive environment. In a blog post called “Choice is the Remedy,” the company argues:

“Google stays dominant by paying to force a subpar experience on consumers–not by building better products.”

Perplexity wants to separate Android from the requirements to include all Google apps. They also want to end penalties for carriers that offer alternatives.

AI Competition Takes Center Stage

The trial shows how important AI has become to search competition. OpenAI’s ChatGPT product head, Nick Turley, will testify Tuesday, highlighting how traditional search and AI are now connected.

The DOJ argues that Google’s search monopoly enhances its AI products, which then direct users back to Google search, creating a cycle that stifles competition.

What’s Next?

The trial is expected to last several weeks, with testimony from representatives of Mozilla, Verizon, and Apple. Google plans to appeal after the final judgment.

This case represents the most significant tech antitrust action since Microsoft in the late 1990s. It shows that both political parties are serious about addressing the market power of Big Tech. Slater notes that the case was “filed during President Trump’s first term and litigated across three administrations.”


Featured Image: Muhammad khoidir/Shutterstock

Google Ads 2024 Safety Report Unveils AI Protections via @sejournal, @brookeosmundson

Google has released its 2024 Ads Safety Report, and the message is clear: accountability is scaling fast thanks to AI.

With billions of ads removed and millions of accounts suspended, the report paints a picture of an advertising ecosystem under tighter scrutiny than ever.

For marketers, especially those managing significant media budgets, these shifts aren’t just background noise.

They directly impact strategy, spend efficiency, and brand safety. Here’s a closer look at the biggest takeaways and how marketers should respond.

A Record-Setting Year in Ad Removals and Account Suspensions

Google removed 5.1 billion ads in 2024, up slightly from the previous year.

The real eye-opener was the surge in account suspensions. Over 39 million advertiser accounts were shut down, more than triple the number from 2023.

That figure tells us two things:

  • Enforcement is no longer just about the ads themselves.
  • Google is focusing upstream, stopping abuse at the account level before it can scale.

In addition to individual ad removals, 9.1 billion ads were restricted (meaning they were limited in where and how they could serve). Google also took action on over 1.3 billion publisher pages and issued site-level enforcements across 220,000 sites in the ad network.

Whether you’re running Search, Display, or YouTube campaigns, this scale of enforcement can influence delivery, reach, and trust signals in subtle ways.

AI is Doing the Heavy Lifting

The scale of these removals wouldn’t be possible without automation. In 2024, Google leaned heavily on AI, introducing over 50 improvements to its large language models (LLMs) for ad safety.

One notable example: Google is now using AI to detect patterns in illegitimate payment information during account setup. This enables enforcement to occur before an ad even goes live.

And as concerns around deepfakes and impersonation scams continue to grow, Google formed a specialized team to target AI-generated fraud. They focused on content that mimicked public figures, brands, and voices.

The result? Over 700,000 advertiser accounts were permanently disabled under updated misrepresentation rules, and reports of impersonation scams dropped by 90%.

AI isn’t just a marketing tool anymore. It’s a core part of how ad platforms decide what gets to run.

A Shift in Ad Policy That Marketers Shouldn’t Overlook

One of the more under-the-radar updates was a policy change made in April 2025 to Google’s long-standing Unfair Advantage rules.

Previously, the policy limited a single advertiser from having more than one ad appear in a given results page auction. But the update now allows the same brand to serve multiple ads on the same search page, as long as they appear in different placements.

This creates both opportunity and risk. Larger brands with multiple Google Ads accounts or aggressive agency strategies can now gain more real estate.

For smaller brands or advertisers with limited budgets, this may lead to increased competition for top spots and inflated CPCs.

Even though this change is meant to address transparency and competition, it could cause performance swings in high-intent keyword auctions.

It’s the kind of change that may not be immediately obvious in your dashboard but can quietly reshape performance over time.

What Advertisers Should Keep in Mind Moving Forward

Staying compliant isn’t just about avoiding policy violations.

It’s now about being proactive with AI and understanding how enforcement impacts delivery.

Here are a few ways to stay ahead:

1. Know your ad strength tools, but don’t rely on them blindly

AI is behind many of Google’s enforcement and performance scoring systems, including Ad Strength and Asset Diagnostics. These are helpful tools, but they’re not guarantees of policy compliance.

Always cross-check new ad formats or copy variants against the most recent policy updates.

2. Double-check account structures if you’re running multiple brands or regions

With the rise in multi-account suspensions, it’s more important than ever to document relationships between brands, resellers, and advertisers.

Google’s systems are increasingly adept at pattern recognition, and even unintentional overlap could flag your account.

3. Be careful with impersonation-style creative or influencer tie-ins

If you’re featuring people in ads (especially public figures), ensure that the usage rights are clear.

AI-generated content that resembles celebrities or influencers, even if satirical, could trip enforcement filters.

When in doubt, opt for original or clearly branded creative.

4. Review how recent policy changes could affect your real estate in search results

Marketers should test how often their brand appears on a single search page now that the Unfair Advantage update allows more flexibility.

Use tools like Ad Preview and multi-account diagnostics to understand if your visibility is shifting.

Wrapping It Up

Google’s latest Ads Safety Report is a reminder that digital advertising is becoming more regulated, more automated, and more tied to platform-defined trust.

Google’s tolerance for risk is dropping fast. And enforcement isn’t just about bad actors anymore. It’s about building an ecosystem where consumers trust what they see.

Marketers who pay attention to these shifts, stay flexible, and put transparency front and center will be in a stronger position. Those who assume “business as usual” are more likely to be caught off guard.

Don’t wait for a suspension notice to rethink your ads strategy.

Have you noticed any account changes as a result of Google’s ad safety updates?

How Is Answer Engine Optimization Different From SEO? via @sejournal, @Kevin_Indig

Boost your skills with Growth Memo’s weekly expert insights. Subscribe for free!

Is doing SEO enough for AI chatbot visibility?

The industry is divided down the middle: Half believe that optimization for large language models (LLMs) requires new strategies, while the other half insists good SEO already handles it.

This division has spawned new acronyms like GEO (Generative Engine Optimization) and AEO (Answer Engine Optimization) – terms that are equally loved and hated.

To settle this debate, I analyzed Similarweb data comparing Google organic traffic with ChatGPT brand visibility across four product categories.

The result?

The top search players are not always the top ChatGPT players, and the overlap varies between product categories.

This is important to understand because:

  1. There are real optimization opportunities for some categories, and
  2. You might miss them if you dismiss optimization for ChatGPT as “just doing good SEO.”

At the same time, we can apply the same tactics to ChatGPT as to Google.

I like to describe the difference between SEO and GEO/AEO like this:

SEO and GEO/AEO are like pianos and guitars.

They’re both instruments. They both make music. And they both share fundamental principles (notes, scales, harmony) that a musician must master to properly play both.

About The Data

Big thanks to Similarweb, especially Adelle and Sam, for sharing the data with me.

Here’s what I reviewed for this particular analysis:

  • Organic search traffic vs. AI chatbot visibility across four product categories:
    • Credit cards (finance).
    • Earbuds (tech).
    • CRM (software).
    • Handbags (fashion).
  • Methodology: Similarweb categorizes ChatGPT conversations based on their content and identifies the most common brands in ChatGPT’s response.
  • In total, the data covers 69.9 million clicks.

SEO Vs. GEO/AEO: Same, Same, But Different

If GEO/AEO and SEO were the same, the same sites getting organic traffic would also get the most citations/mentions in LLMs.

That’s only true in a few cases, but not for the overall picture.

Credit Cards

Top credit card domains by organic search clicks (Image Credit: Kevin Indig):
  • chase.com — 6.6 million clicks | 13.6% ChatGPT visibility
  • reddit.com — 5.2 million clicks | 0% ChatGPT visibility
  • capitalone.com — 4.1 million clicks | 10.3% ChatGPT visibility
  • citibankonline.com — 3.2 million clicks | 4.4% ChatGPT visibility
  • comenity.net — 2.9 million clicks | 0% ChatGPT visibility
Top credit card domains by ChatGPT visibility (Image Credit: Kevin Indig):
  • google.com — 337,000 clicks | 20.3% ChatGPT visibility
  • paypal.com — 209,000 clicks | 19.7% ChatGPT visibility
  • americanexpress.com — 1.3 million clicks | 16.9% ChatGPT visibility
  • visa.com — 116,000 clicks | 15.7% ChatGPT visibility
  • chase.com — 6.6 million clicks | 13.6% ChatGPT visibility

Handbags

Top handbag domains by organic search clicks (Image Credit: Kevin Indig):
  • reddit.com — 241,000 clicks | 0% ChatGPT visibility
  • youtube.com — 152,000 clicks | 5.3% ChatGPT visibility
  • amazon.com — 77,000 clicks | 9.8% ChatGPT visibility
  • nordstrom.com — 51,000 clicks | 0% ChatGPT visibility
  • coach.com — 48,000 clicks | 6.1% ChatGPT visibility
Top handbag domains by ChatGPT visibility (Image Credit: Kevin Indig):
  • target.com — 7,000 clicks | 24.2% ChatGPT visibility
  • instagram.com — 7,000 clicks | 13.8% ChatGPT visibility
  • louisvuitton.com — 27,000 clicks | 10.0% ChatGPT visibility
  • gucci.com — 15,000 clicks | 9.9% ChatGPT visibility
  • amazon.com — 77,000 clicks | 9.8% ChatGPT visibility

Earbuds

Top earbud domains by organic search clicks (Image Credit: Kevin Indig):
  • reddit.com — 1.2 million clicks | 0% ChatGPT visibility
  • youtube.com — 868,000 clicks | 7.1% ChatGPT visibility
  • cnet.com — 512,000 clicks | 0% ChatGPT visibility
  • amazon.com — 474,000 clicks | 15.1% ChatGPT visibility
  • bose.com — 407,000 clicks | 10.2% ChatGPT visibility
Top earbud domains by ChatGPT visibility (Image Credit: Kevin Indig):
  • apple.com — 152,000 clicks | 16.8% ChatGPT visibility
  • amazon.com — 474,000 clicks | 15.1% ChatGPT visibility
  • bose.com — 407,000 clicks | 10.2% ChatGPT visibility
  • wired.com — 120,000 clicks | 9.5% ChatGPT visibility
  • google.com — 31,000 clicks | 9.5% ChatGPT visibility

CRM

Top CRM domains by organic search clicks (Image Credit: Kevin Indig):
  • zoho.com — 314,000 clicks | 8.7% ChatGPT visibility
  • salesforce.com — 225,000 clicks | 33.8% ChatGPT visibility
  • sfgcrm.com — 188,000 clicks | 0% ChatGPT visibility
  • yahoo.com — 179,000 clicks | 0% ChatGPT visibility
  • youtube.com — 167,000 clicks | 4.1% ChatGPT visibility
Top CRM domains by ChatGPT visibility (Image Credit: Kevin Indig):
  • salesforce.com — 225,000 clicks | 33.8% ChatGPT visibility
  • google.com — 30,000 clicks | 25.8% ChatGPT visibility
  • hubspot.com — 104,000 clicks | 22.5% ChatGPT visibility
  • linkedin.com — 36,000 clicks | 20.7% ChatGPT visibility
  • facebook.com — 7,000 clicks | 10.1% ChatGPT visibility

The data shows that the top organic domains (by clicks) are not the ones getting the most mentions in ChatGPT.

As a result, just doing good SEO is not enough for LLM visibility when we look at specific domains.

Broad relationships between organic clicks and ChatGPT mentions tell a more nuanced story.

Whether or not “just doing good SEO” will be successful for LLM visibility can depend on the vertical or category.

Image Credit: Kevin Indig

In some verticals, AI chatbot optimization can really move the needle. In others, it might not help much.

Earbuds and CRM have a strong correlation between clicks and ChatGPT visibility.

Credit cards and handbags have a weak one.

In other words, credit cards and handbags are a much more open playing field for LLM optimization.

So clearly, that’s where optimizing for LLMs has the biggest payoff.
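
To gauge where your own category sits, here is a minimal sketch of the check I’m describing: correlate domain-level organic clicks with ChatGPT visibility for one category. The domains and numbers below are placeholders, not real data, so swap in your own Similarweb (or comparable) export before drawing any conclusions.

```python
# Minimal sketch: how closely do organic clicks track ChatGPT visibility
# in one product category? The rows below are placeholders; replace them
# with your own exported domain-level data.
from statistics import correlation  # Pearson's r, Python 3.10+

# domain -> (organic clicks, ChatGPT visibility %). Placeholder values.
my_category = {
    "examplebrand.com": (250_000, 12.0),
    "exampleretailer.com": (480_000, 9.5),
    "examplereview.com": (900_000, 3.0),
    "exampleforum.com": (1_100_000, 0.0),
}

clicks = [c for c, _ in my_category.values()]
visibility = [v for _, v in my_category.values()]

r = correlation(clicks, visibility)
print(f"Pearson r between organic clicks and ChatGPT visibility: {r:.2f}")
# A high r suggests SEO wins already translate into ChatGPT visibility;
# a low (or negative) r suggests a more open playing field for LLM optimization.
```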

What Makes A Category Worthy Of Visibility Optimization?

The differentiator is unclear.

The factors that likely play a role are:

  • Product specs.
  • Reviews.
  • Developer docs.
  • Regulatory language, and/or
  • Ad spend.

But ultimately, we need more data to understand when product categories have a high or low overlap between AI visibility and organic search.

Image Credit: Kevin Indig

Besides the high correlation between organic and AI traffic, some categories have a higher degree of winner-takes-all dynamics than others.

In the CRM category, for example, three brands get almost 50% of visibility: Salesforce, HubSpot, and Google.

These dynamics seem to reflect market share – the CRM space is heavily dominated by Salesforce, HubSpot, and Google. (Google even thought about buying HubSpot, remember?)

In “What content works well in LLMs?”, I found that brand popularity has the strongest relationship with LLM visibility:

After matching many metrics with AI chatbot visibility, I found one factor that stands out more than anything else: brand search volume. The number of AI chatbot mentions and brand search volume have a correlation of .334 – pretty good in this field. In other words, the popularity of a brand broadly decides how visible it is in AI chatbots.

This effect is reflected here as well, and contextualized by market share.

Plainly, the more fragmented a category is, the higher the chance of gaining ChatGPT visibility.

This is great news for organizations or brands in emerging industries or products where there is plenty of room for competition.

However, categories that are dominated by a few brands are harder to optimize for LLM visibility, probably because there is already so much content on the web about these incumbents.

If you’re thinking, “Well, that’s not new, Kevin. That’s true of SEO, too,” I get it.

This information might feel fairly intuitive, but I’ve seen smaller brands or startups that heavily invest in high-quality SEO practices find their way to the top of search results.

What the data I’m discussing today shows us is that it’s going to be even more challenging to optimize for LLM visibility in verticals or industries that are well-established and have long-time trusted incumbents dominating the vertical.

And depending on what vertical your site sits in, you’ll need to develop your organic visibility strategy accordingly.

So, here are the main takeaways from my findings:

  • It’s risky to dismiss AEO/GEO altogether. You could assume that no action is needed when you’re winning in “classic SEO,” but that would open the door to competitors taking your spot in ChatGPT.
  • Don’t pivot or panic if you’re already winning. It’s also not helpful to reflexively change tactics or practices in attempts to optimize for ChatGPT when you’re already doing well. Start brainstorming plans for changes (algorithms do change, after all), but no need to reinvent the wheel just yet.
  • Prioritize content and PR investments for ChatGPT when the overlap with organic search is low across your most important prompts. Now’s the time to get the ball rolling on this. Record your actions and your results, and find out what works in your vertical.

The Biggest Differences Between SEO And GEO/AEO

Half of the community wants to put a new label on SEO; half says it’s the same.

Here’s where I think the disconnect stems from:

The fundamental principles overlap, but the implementation and context differ significantly.

Both SEO and GEO/AEO rely on these core elements:

  • Technical accessibility: Both require content to be easily crawlable and indexable (with JavaScript often creating challenges for both, though currently more problematic for LLM crawlers).
  • Content quality: High-quality, comprehensive, and accurate content performs better in both environments.
  • Authority signals: While implemented differently, both systems rely on signals that indicate trustworthiness and expertise.

Despite these shared foundations, how you optimize is different:

  1. User intent and query patterns: AI chatbots handle longer prompts where users express detailed intent, which requires more specific content that addresses nuanced questions. Google is moving in this direction with AI Overviews, but it still primarily serves shorter queries.
  2. Signal weighting and ranking factors: AI chatbots give significantly more weight to overall brand popularity and volume of mentions. Google has more robust ways to measure and incorporate user satisfaction (Chrome data, click patterns, return-to-search rates). In another study I’m working on, trends indicate search results are more stable and the emphasis on content freshness is higher.
  3. Quality and safety guardrails: Google has developed specific criteria for YMYL (Your Money Your Life) content that AI chatbots haven’t fully replicated. LLMs currently lack sophisticated spam detection and penalty systems.
  4. Rich results: Google uses a variety of SERP features to format different content types. ChatGPT only incorporates rich formatting for some content (maps, videos).

And like I mentioned at the start, SEO and GEO/AEO are like pianos and guitars.

They share fundamental musical principles, but require different techniques and additional knowledge to play both effectively.

And essentially, classic SEO professionals will need to train as multi-instrumentalists over time.

Strategic Adaptation, Not Reinvention – Yet

Despite the different dynamics, SEO and GEO/AEO call for the same optimizations:

  • Create better content.
  • Provide unique perspectives.
  • Increase your brand strength.
  • Ensure your site is properly crawled and indexed.

The difference lies in how much attention you should pay to certain content categories and how resource allocation works.

Rather than creating an entirely new practice, it’s about understanding when and how to prioritize your efforts.

By the way, I also think it’s too early to coin a new acronym.

The AI and chatbot landscape is evolving rapidly, and so is search. We haven’t reached the final form of AI yet.

In some verticals with low correlation between search and AI visibility, there’s a significant opportunity to stand out.

In others, your SEO efforts may already be giving you the visibility you need across both channels.

But I do expect GEO/AEO to differ more from SEO over time.

Why? The signals OpenAI gets from interaction with its models and from the richness of prompts should allow it to develop its own weighting signals for brands and answers.

OpenAI gets much better inputs to train its models.

As a result, it should be able to either:

  1. Develop its own web index that it can use to ground answers in facts, or
  2. Develop a whole new system of grounding rules.

What Should You Do Right Now?

Focus on understanding your category’s specific dynamics.

Are the SEO leaders in your category also dominating prompts on ChatGPT?

If so, focus on becoming a leader in search results.

If not, focus on becoming a leader in search and invest in monitoring and optimizing your visibility across relevant ChatGPT prompts with targeted content, PR campaigns, content syndication, and content repurposing across different formats.

And until we all see this technology evolve and distinguish itself further from traditional organic search, I say we just all stick with SEO as our agreed-upon acronym for what we do…

…at least for now.


Featured Image: Paulo Bobita/Search Engine Journal

How To Build Consensus Online To Gain Visibility In AI Search via @sejournal, @_kevinrowe

Just like with SEO, it can be tempting to use clever hacks to optimize for AI search.

But the problem with hacks is that, as soon as they’re discovered, changes will be made that make those hacks ineffective.

Consider the Rank Or Go Home Challenge, where Kyle Roof managed to get his website a top ranking for the string “Rhinoplasty Plano,” despite 98% of the site being “lorem ipsum” text.

Within 24 hours of Google hearing about this, the site was de-indexed.

The same holds true for AI search, but here, the system is changing at a breakneck pace. What works today may well not work a month from now.

Understanding GEO

Generative Engine Optimization (GEO) is the emerging field of optimization for AI search. This includes optimizing to appear in Google’s AI Overviews, Gemini, ChatGPT, Grok, and others.

This field is evolving rapidly, meaning that tactics used today may not work in a year.

Here are a few examples of how quickly generative AI evolves, according to a key benchmark analysis by Ithy about OpenAI’s o1 to o3 models.

  • Mathematical reasoning: AIME 2024 benchmark accuracy rose from 83.3% to 96.7%, a 13.4-percentage-point improvement.
  • Scientific reasoning: Using the GPQA Diamond Benchmark, ChatGPT’s “o3 scored 87.7% accuracy compared to o1’s 78.0%, demonstrating a stronger capacity to handle complex, PhD-level science questions with greater precision and depth.”
  • Coding: ChatGPT has significantly improved from o1 to o3, with o3 “achieving a 71.7% accuracy rate, a significant increase from the o1 model’s 48.9%.”

This means that, in the long term, hacking the system simply won’t be cost-effective. Any hack you uncover will have a very limited shelf life.

Instead, we should turn to a tried and true tactic from SEO: aligning consensus.

What Is Consensus, And How Do You Align With It?

Put simply, consensus is when a variety of high-quality sources align on a topic.

For example, if you ask Google if the earth is round or flat, the resulting snippet will tell you it is round because the vast majority of high-quality sources agree on this fact.

Screenshot from search for [is the earth round or flat], Google, February 2025

The highest-ranking results will be sites that agree with this consensus, while results that don’t align rank poorly.

AI search works in much the same way. It can identify the general consensus on a topic and use this consensus to decide which information is most relevant to a user’s search intent.

Building Consensus Through PR

So, then, building consensus is key for GEO. But, how can you help build consensus?

The answer is through the use of experts.

How Experts Build Consensus

Let’s take an example from Mark Cuban, a financial expert and Florida resident.

When discussing the topic of the housing crisis in Florida on the platform Bluesky, he stated that a major issue is the affordability of home insurance.

This was then cited by a variety of articles on sites like GoBankingRates.

Further articles may then also cite this article, perhaps bringing in other experts to comment.

Soon, a consensus forms: Florida’s housing crisis is due at least in part to homeowners’ insurance rates. And if we ask Google this question, the AI snippet reflects just that.

Screenshot from search for [what are the factors in florida’s housing crisis], Google, February 2025

 Even a single expert’s opinion can have a major impact on consensus, especially for smaller, more niche topics.

Positioning Expertise To Build Consensus

The important thing to keep in mind is consensus cannot be faked.

Building consensus requires convincing people. And to convince people, you’ll need to establish your expertise and credibility and get a conversation going to establish consensus on a topic.

In other words, you’ll need:

  • Credible expertise.
  • High-quality data or insights.
  • Enough coverage or references across the web to establish that your viewpoint is widely accepted (or at least seriously considered) by other experts.

Say you want to build consensus around the idea that the best way to pay off debts is to prioritize debts with the highest interest rates.

By publishing original research that shows this to be true, backed by the voice of an established expert, you can start a conversation on this topic.

As further blog posts and online conversations reference your data, your position will gain greater reach. Then, more experts may comment on it and agree with it, over time building that consensus on the topic.

Then, when somebody goes to research the topic with AI search, the AI will find that consensus you’ve built.

Consider the case of blue light.

In 2015, the Journal of Clinical Sleep Medicine published a study:

“Evening use of light-emitting eReaders negatively affects sleep, circadian timing, and next-morning alertness.”

This study showed that exposure to blue light suppresses melatonin production, leading to delayed sleep onset and reduced sleep quality.

This research was then cited by experts and major outlets before gaining traction on social media and blogs.

Now, if you ask AI search “Does blue light affect sleep?”, you’ll be given this information (that blue light affects sleep), and it will cite this original research and the websites and experts who wrote about it.

Screenshot by author from Perplexity, February 2025

Collaboration For Building Consensus

Of course, you don’t have to simply wait for a conversation to find its way to other experts. By collaborating directly, you can amplify the establishment of a consensus.

Let’s take the same example as before. But, this time, we make a small change: Instead of authoring studies or guest posts solo, we do so in collaboration with another established expert.

In doing so, you can essentially “hijack” your collaborator’s authority and audience:

  • Their followers will become aware of your research.
  • Their peers and fellow experts are more likely to consider your findings.
  • Media outlets also view collaborations as more credible than a single, lesser-known source, further boosting your reach.

Take the example of David Grossman’s article The Cost of Poor Communications.

Inclusion in an article published on Provoke Media’s The Holmes Report allowed Grossman to present his ideas to a wider audience.

This information went on to be referenced in a variety of other articles, including on sites such as Harvard Business School.

Then, over time, these ideas form part of the consensus on business communication, appearing in AI search results for platforms such as Perplexity.

Screenshot by author from Perplexity, February 2025

Even With The Best Methods, Building Consensus Is A Process

This is, of course, a simplification of the process.

Authoring a study, or collaborating with another expert, is no guarantee that you will build a new consensus.

Your study or collaboration may simply go unnoticed, even if you do everything right.

Or, it may go against the existing consensus. In this case, you’ll face a serious uphill battle to try and change that consensus.

Even if you are successful, it may take some time for a new consensus to emerge.

These things don’t happen overnight. But that doesn’t mean you should give up; every time you publish a study or collaborate with another expert, your reach and authority grow.

And as you continue the conversation through further studies and guest posts, a new consensus can begin to form.

At the heart of that new consensus are your ideas and expertise.

In turn, when providing sources for its search results, AI search will surface your ideas and drive traffic to your website.

Long-Term Success Over Short-Term Hacks

While new hacks are always being found, tried and tested methods will always be the better choice.

Instead of chasing an ever-moving target trying to outsmart constantly evolving generative AI tools, your time is much better spent building consensus around a topic.

This means:

  • Establishing expertise through studies and guest posts.
  • Collaborating with other experts to boost your reach and authority.
  • Continuing this process of authority building over time.

Building consensus takes time, but the payoff is lasting influence, which sees AI search surfacing your content and treating you as a trusted source of information.



Featured Image: mentalmind/Shutterstock

AI Overviews Glitch May Hint at Google’s Algorithm via @sejournal, @martinibuster

A glitch in Google’s AI Overviews may inadvertently expose how Google’s algorithm understands search queries and chooses answers. Bugs in Google Search are useful to examine because they may expose parts of Google’s algorithms that are normally unseen.

AI-Splaining?

Lily Ray re-posted a tweet showing that typing nonsense phrases into Google can produce an AI Overview that essentially makes up an answer. She called it AI-Splaining.

User Darth Autocrat (Lyndon NA) responded:

“It shows how G have broken from “search”.

It’s not “finding relevant” or “finding similar”, it’s literally making stuff up, which means G are not

a) A search engine
b) An answer engine
c) A recommendation engine

They are now

d) A potentially harmful joke”

Google has a long history of search bugs, but this is different because there’s an LLM summarizing answers based on grounding data (web, knowledge graph, etc.) and the LLM itself. So the search marketer known as Darth Autocrat has a point: this Google search bug is on an entirely different level than anything that has been seen before.

Yet one thing remains the same: search bugs represent an opportunity to see something going on behind the search box that isn’t normally viewable.

AI Bug Is Not Limited To Google AIO

What I think is happening is that Google’s systems are parsing the words to understand what the user means. So when a user query is vague, I think the LLM decides what the user is asking based on several likely meanings, like a decision tree in machine learning, where a machine maps out likely meanings, prunes the branches that are least likely, and predicts the most likely meaning.

I was reading a patent that Google recently filed that’s on a related theme, where an AI tries to guess what a user means by guiding a user through a decision tree and then storing that information for future interactions with them or with others. This patent, Real-Time Micro-Profile Generation Using a Dynamic Tree Structure, is for AI voice assistants, but it gives an idea of how an AI will try to guess what a user means and then proceed.
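
To make that decision-tree idea concrete, here is a purely hypothetical sketch of the pattern: enumerate candidate interpretations of a vague query, prune the least likely branches, and either commit to the best remaining guess or, if nothing clears the bar, ask the user to clarify. Every interpretation, score, and threshold here is invented for illustration; this is not Google’s, OpenAI’s, or Anthropic’s actual system.

```python
# Hypothetical illustration of the decision-tree pattern described above.
# Not any vendor's real system: the interpretations, scores, and threshold
# are invented for the example.
from dataclasses import dataclass

@dataclass
class Interpretation:
    meaning: str
    plausibility: float  # assumed to come from a language model or usage data

def resolve(query: str, candidates: list[Interpretation], threshold: float = 0.5) -> str:
    # Prune branches that fall below the plausibility threshold.
    viable = [c for c in candidates if c.plausibility >= threshold]
    if not viable:
        # Admit uncertainty and offer choices, as Claude and Gemini do in the tests below.
        options = "; ".join(c.meaning for c in candidates)
        return f'No confident reading of "{query}". Did you mean: {options}?'
    # Commit to the most plausible branch, as AIO and ChatGPT appear to do below.
    best = max(viable, key=lambda c: c.plausibility)
    return f"Answering as if the user means: {best.meaning}"

print(resolve(
    "parallel puppy fishing technique for striped bass",
    [
        Interpretation("the 'walking the dog' topwater retrieve", 0.3),
        Interpretation("casting parallel to the shoreline", 0.3),
        Interpretation("a blend of several real tactics", 0.25),
    ],
))
```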

So I tested Google, ChatGPT and Claude and discovered that all three make similar mistakes of inferring what a user’s query means and confidently providing a wrong answer.

The sample question was:

What is the parallel puppy fishing technique for striped bass?

There is no such thing as a “parallel puppy fishing technique” but there is a technique called “walking the dog” and there is another technique where an angler in a kayak or a boat will cast parallel to the shore or some other structure.

AI Overviews (AIO)

AIO confidently offered a hallucinated response that is incorrect because it assumed that the user was confusing multiple actual fishing tactics so it blended multiple tactics and fishing lures to invent a non-existent tactic.

AI Overviews gave the following wrong answer:

“The “parallel puppy” technique for striped bass involves a specific retrieve method for topwater plugs, often referred to as “walking the dog”. It’s characterized by a zig-zagging motion imparted to the lure by sweeping the rod side to side, often while maintaining it parallel to the water. This action mimics a wounded baitfish, which can be highly attractive to striped bass.”

Screenshot Of AIO’s Hallucinated Answer

ChatGPT 4o

ChatGPT made the same mistake that Google’s AIO did and hallucinated a complex fishing tactic.

ChatGPT gave the following wrong answer:

“The parallel puppy fishing technique for striped bass is a specialized casting and retrieval method often used when targeting striped bass in shallow water near shoreline structure like jetties, sod banks, or rocky points.

Here’s a breakdown of what it typically involves:

What it is:
“Parallel” refers to casting your lure or bait parallel to the shoreline, rather than out into deeper water.

“Puppy” is often a nickname for small soft plastics, swimbaits, or small-profile plugs, typically in the 3″ to 5″ range—mimicking juvenile baitfish”.

That last sentence is interesting because it’s completely wrong: there is no style of lure nicknamed “puppy,” but there is a technique called walking the dog.

Screenshot Of ChatGPT’s Incorrect Answer

Anthropic Claude

Anthropic Claude, using the latest 3.7 Sonnet model, provided a correct answer. It said it didn’t recognize a “legitimate fishing technique” with the provided name and then, presuming the user wants to learn striped bass fishing tactics, provided a list of techniques from which the user can select a topic as a follow-up question.

Screenshot Of Anthropic Claude’s Correct Answer

Google Gemini Pro 2.5

Lastly, I queried Google Gemini, using the latest Pro 2.5 model. Gemini also offered a correct answer, plus a decision tree output that lets the user clarify whether they are:

A. Misunderstanding fishing tactics

B. Referring to a highly localized tactic

C. Combining multiple fishing tactics

D. Confusing a tactic for another species of fish.

Screenshot of Correct Gemini Pro 2.5 Answer

What’s interesting about that decision tree, which resembles the decision tree approach in the unrelated Google patent, is that those possibilities roughly reflect what Google’s AI Overviews LLM and ChatGPT may have considered when trying to answer the question. Both may have worked through a similar decision tree, chosen option C (that the user is combining fishing tactics), and based their answers on that.

Claude and Gemini, by contrast, were confident enough to conclude that the user doesn’t know what they’re talking about, and each fell back on a decision-tree-style response to guide the user toward the right answer.

What Does This Mean About AI Overviews (AIO)?

Google recently announced it’s rolling out Gemini 2.0 for advanced math, coding, and multimodal queries, but the hallucinations in AIO suggest that the model Google is using to answer text queries may be inferior to Gemini 2.5.

That’s probably what is happening with gibberish queries, and as I said, it offers an interesting insight into how Google AIO actually works.

Featured Image by Shutterstock/Slladkaya

The quest to build islands with ocean currents in the Maldives

In satellite images, the 20-odd coral atolls of the Maldives look something like skeletal remains or chalk lines at a crime scene. But these landforms, which circle the peaks of a mountain range that has vanished under the Indian Ocean, are far from inert. They’re the products of living processes—places where coral has grown toward the surface over hundreds of thousands of years. Shifting ocean currents have gradually pushed sand—made from broken-up bits of this same coral—into more than 1,000 other islands that poke above the surface. 

But these currents can also be remarkably transient, constructing new sandbanks or washing them away in a matter of weeks. In the coming decades, the daily lives of the half-million people who live on this archipelago—the world’s lowest-lying nation—will depend on finding ways to keep a solid foothold amid these shifting sands. More than 90% of the islands have experienced severe erosion, and climate change could make much of the country uninhabitable by the middle of the century.

Off one atoll, just south of the Maldives’ capital, Malé, researchers are testing one way to capture sand in strategic locations—to grow islands, rebuild beaches, and protect coastal communities from sea-level rise. Swim 10 minutes out into the En’boodhoofinolhu Lagoon and you’ll find the Ramp Ring, an unusual structure made up of six tough-skinned geotextile bladders. These submerged bags, part of a recent effort called the Growing Islands project, form a pair of parentheses separated by 90 meters (around 300 feet).

The bags, each about two meters tall, were deployed in December 2024, and by February, underwater images showed that sand had climbed about a meter and a half up the surface of each one, demonstrating how passive structures can quickly replenish beaches and, in time, build a solid foundation for new land. “There’s just a ton of sand in there. It’s really looking good,” says Skylar Tibbits, an architect and founder of the MIT Self-Assembly Lab, which is developing the project in partnership with the Malé-based climate tech company Invena.

The Self-Assembly Lab designs material technologies that can be programmed to transform or “self-assemble” in the air or underwater, exploiting natural forces like gravity, wind, waves, and sunlight. Its creations include sheets of wood fiber that form into three-dimensional structures when splashed with water, which the researchers hope could be used for tool-free flat-pack furniture. 

Growing Islands is their largest-scale undertaking yet. Since 2017, the project has deployed 10 experiments in the Maldives, testing different materials, locations, and strategies, including inflatable structures and mesh nets. The Ramp Ring is many times larger than previous deployments and aims to overcome their biggest limitation. 

In the Maldives, the direction of the currents changes with the seasons. Past experiments have been able to capture only one seasonal flow, meaning they lie dormant for months of the year. By contrast, the Ramp Ring is “omnidirectional,” capturing sand year-round. “It’s basically a big ring, a big loop, and no matter which monsoon season and which wave direction, it accumulates sand in the same area,” Tibbits says.

The approach points to a more sustainable way to protect the archipelago, whose growing population is supported by an economy that caters to 2 million annual tourists drawn by its white beaches and teeming coral reefs. Most of the country’s 187 inhabited islands have already had some form of human intervention to reclaim land or defend against erosion, such as concrete blocks, jetties, and breakwaters. Since the 1990s, dredging has become by far the most significant strategy. Boats equipped with high-power pumping systems vacuum up sand from one part of the seabed and spray it into a pile somewhere else. This temporary process allows resort developers and densely populated islands like Malé to quickly replenish beaches and build limitlessly customizable islands. But it also leaves behind dead zones where sand has been extracted—and plumes of sediment that cloud the water with a sort of choking marine smog. Last year, the government placed a temporary ban on dredging to prevent damage to reef ecosystems, which were already struggling amid spiking ocean temperatures.

Holly East, a geographer at the University of Northumbria, says Growing Islands’ structures offer an exciting alternative to dredging. But East, who is not involved in the project, warns that they must be sited carefully to avoid interrupting sand flows that already build up islands’ coastlines. 

To do this, Tibbits and Invena cofounder Sarah Dole are conducting long-term satellite analysis of the En’boodhoofinolhu Lagoon to understand how sediment flows move around atolls. On the basis of this work, the team is currently spinning out a predictive coastal intelligence platform called Littoral. The aim is for it to be “a global health monitoring system for sediment transport,” Dole says. It’s meant not only to show where beaches are losing sand but to “tell us where erosion is going to happen,” allowing government agencies and developers to know where new structures like Ramp Rings can best be placed.

Growing Islands has been supported by the National Geographic Society, MIT, the Sri Lankan engineering group Sanken, and tourist resort developers. In 2023, it got a big bump from the US Agency for International Development: a $250,000 grant that funded the construction of the Ramp Ring deployment and would have provided opportunities to scale up the approach. But the termination of nearly all USAID contracts following the inauguration of President Trump means the project is looking for new partners.

Matthew Ponsford is a freelance reporter based in London.

$8 billion of US climate tech projects have been canceled so far in 2025

This year has been rough for climate technology: Companies have canceled, downsized, or shut down at least 16 large-scale projects worth $8 billion in total in the first quarter of 2025, according to a new report.

That’s far more cancellations than have typically occurred in recent years, according to a new report from E2, a nonpartisan policy group. The trend is due to a variety of reasons, including drastically revised federal policies.

In recent months, the White House has worked to claw back federal investments, including some of those promised under the Inflation Reduction Act. New tariffs on imported goods, including those from China (which dominates supply chains for batteries and other energy technologies), are also contributing to the precarious environment. And demand for some technologies, like EVs, is lagging behind expectations. 

E2, which has been tracking new investments in manufacturing and large-scale energy projects, is now expanding its regular reports to include project cancellations, shutdowns, and downsizings as well.  From August 2022 to the end of 2024, 18 projects were canceled, closed, or downsized, according to E2’s data. The first three months of 2025 have already seen 16 projects canceled.

“I wasn’t sure it was going to be this clear,” says Michael Timberlake, communications director of E2. “What you’re really seeing is that there’s a lot of market uncertainty.”

Despite the big number, it is not comprehensive. The group only tracks large-scale investments, not smaller announcements that can be more difficult to follow. The list also leaves out projects that companies have paused.

“The incredible uncertainty in the clean energy sector is leading to a lot of projects being canceled or downsized, or just slowed down,” says Jay Turner, a professor of environmental studies at Wellesley College. Turner leads a team that also tracks the supply chain for clean energy in the US in a database called the Big Green Machine.

Some turnover is normal, and there have been a lot of projects announced since the Inflation Reduction Act was passed in 2022—so there are more in the pipeline to potentially be canceled, Turner says. So many battery and EV projects were announced that supply would have exceeded demand “even in a best-case scenario,” Turner says. So some of the project cancellations are a result of right-sizing, or getting supply and demand in sync.

Other projects are still moving forward, with hundreds of manufacturing facilities under construction or operational. But it’s not as many as we’d see in a more stable policy landscape, Turner says.

The cancellations include a factory in Georgia from Aspen Aerogels, which received a $670 million loan commitment from the US Department of Energy in October. The facility would have made materials that can help prevent or slow fires in battery packs. In a February earnings call, executives said the company plans to focus on an existing Rhode Island facility and projects in other countries, including China and Mexico. Aspen Aerogels didn’t respond to a request for further comment. 

Hundreds of projects that have been announced in just the last few years are under construction or operational despite the wave of cancellations. But it is an early sign of growing uncertainty for climate technology. 

 “You’re seeing a business environment that’s just unsure what’s next and is hesitant to commit one way or another,” Timberlake says.

Yahoo will give millions to a settlement fund for Chinese dissidents, decades after exposing user data

A lawsuit to hold Yahoo responsible for “willfully turning a blind eye” to the mismanagement of a human rights fund for Chinese dissidents was settled for $5.425 million last week, after an eight-year court battle. At least $3 million will go toward a new fund; settlement documents say it will “provide humanitarian assistance to persons in or from the [People’s Republic of China] who have been imprisoned in the PRC for exercising their freedom of speech.” 

This ends a long fight for accountability stemming from decisions by Yahoo, starting in the early 2000s, to turn over information on Chinese internet users to state security, leading to their imprisonment and torture. After the actions were exposed and the company was publicly chastised, Yahoo created the Yahoo Human Rights Fund (YHRF), endowed with $17.3 million, to support individuals imprisoned for exercising free speech rights online. 

But in the years that followed, its chosen nonprofit partner, the Laogai Research Foundation, badly mismanaged the fund, spending less than $650,000—or 4%—on direct support for the dissidents. Most of the money was, instead, spent by the late Harry Wu, the politically connected former Chinese dissident who led Laogai, on his own projects and interests. A group of dissidents sued in 2017, naming not just Laogai and its leadership but also Yahoo and senior members from its leadership team during the time in question; at least one person from Yahoo always sat on YHRF’s board and had oversight of its budget and activities.  

The defendants—which, in addition to Yahoo and Laogai, included the Impresa Legal Group, the law firm that worked with Laogai—agreed to pay the six formerly imprisoned Chinese dissidents who filed the suit, with five of them slated to receive $50,000 each and the lead plaintiff receiving $55,000. 

The remainder, after legal fees and other expense reimbursements, will go toward a new fund to continue YHRF’s original mission of supporting individuals in China imprisoned for their speech. The fund will be managed by a small nonprofit organization, Humanitarian China, founded in 2004 by three participants in the 1989 Chinese democracy movement. Humanitarian China has given away $2 million in cash assistance to Chinese dissidents and their families, funded primarily by individual donors. 

This assistance is often vital; political prisoners are frequently released only after years or decades in prison, sometimes with health problems and without the skills to find steady work in the modern job market. They continue to be monitored, visited, and penalized by state security, leaving local employers even more unwilling to hire them. It’s a “difficult situation,” Xu Wanping, one of the plaintiffs, previously told MIT Technology Review—“the sense of isolation and that kind of helplessness we feel … if this lawsuit can be more effective, if it could help restart this program, it is really meaningful.” As we wrote in our original story,

“Xu lives in low-income housing in his hometown of Chongqing, in western China. He Depu, another plaintiff, his wife, and an adult son survive primarily on a small monthly hardship allowance of 1,500 RMB ($210) provided by the local government as collateral to ensure that he keeps his opinions to himself. But he knows that even if he is silent, this money could disappear at any point.” 

The terms of the settlement bar the parties from providing more than a cursory statement to the media, but Times Wang, the plaintiffs’ lawyer, previously told MIT Technology Review about the importance of the fund. In addition to the crucial financial support, “it is a source of comfort to them [the dissidents] to know that there are people outside of China who stand with them,” he said. 

MIT Technology Review took an in-depth look at the case and the mismanagement at YHRF, which you can read here.