Google Found Guilty of Illegal Ad Tech Monopoly in Court Ruling via @sejournal, @MattGSouthern

A federal judge has ruled that Google maintained illegal monopolies in the digital advertising technology market.

In a landmark case brought by the Department of Justice and 17 states, the court found Google liable for antitrust violations.

Federal Court Finds Google Violated Sherman Act

U.S. District Judge Leonie Brinkema ruled that Google illegally monopolized two key markets in digital advertising:

  • The publisher ad server market
  • The ad exchange market

The 115-page ruling (PDF link) states Google violated Section 2 of the Sherman Antitrust Act by “willfully acquiring and maintaining monopoly power.”

It also found that Google unlawfully tied its publisher ad server (DFP) and ad exchange (AdX) together.

Judge Brinkema wrote in the ruling:

“Plaintiffs have proven that Google possesses monopoly power in the publisher ad server for open-web display advertising market. Google’s publisher ad server DFP has a durable and ‘predominant share of the market’ that is protected by high barriers both to entry and expansion.”

Google’s Dominant Market Position

The court found that Google controlled approximately 91% of the worldwide publisher ad server market for open-web display advertising from 2018 to 2022.

In the ad exchange market, Google’s AdX handled between 54% and 65% of total transactions, roughly nine times larger than its closest competitor.

The judge cited Google’s pricing power as evidence of its monopoly. Google maintained a 20% take rate for its ad exchange services for over a decade, despite competitors charging only 10%.

The ruling states:

“Google’s ability to maintain AdX’s 20% take rate under these market conditions is further direct evidence of the firm’s sustained and substantial power.”

Illegal Tying of Services Found

A key part of the ruling focused on Google’s practice of tying its publisher ad server (DFP) to its ad exchange (AdX).

The court determined that Google effectively forced publishers to use DFP if they wanted access to real-time bidding with AdWords advertisers, a crucial feature of AdX.

Judge Brinkema wrote, quoting internal Google communications:

“By tying DFP to AdX, Google took advantage of its ‘owning the platform, the exchange, and a huge network’ of advertising demand.”

This was compared to “Goldman or Citibank own[ing] the NYSE [i.e., the New York Stock Exchange].”

Case History & State Involvement

The Department of Justice filed the lawsuit in January 2023 alongside eight states. Nine more states later joined, bringing the total to 17 states challenging Google’s practices.

Michigan Attorney General Dana Nessel explained why states joined the case:

“The power that Google wields in the digital advertising space has had the effect of either pushing smaller companies out of the market or making them beholden to Google ads.”

Google has consistently denied wrongdoing. Dan Taylor, Vice President of Global Ads, stated that the DOJ’s lawsuit would “reverse years of innovation, harming the broader advertising sector.”

What This Means for Digital Marketers

This ruling has implications for the digital marketing world:

  1. For publishers: If Google must restructure its ad tech business, the decision could give publishers more control over ad inventory and potentially higher revenue shares.
  2. For advertisers: Changes to Google’s ad tech stack may lead to more transparent bidding and lower costs over time.
  3. For marketing agencies: Using a variety of ad tech providers may become more important as Google faces these challenges.

What’s Next?

Judge Brinkema has yet to decide on penalties for Google’s violations. Soon, the court will “set a briefing schedule and hearing date to determine the appropriate remedies.”

Possible penalties include forcing Google to sell parts of its ad tech business. This would dramatically change the digital advertising landscape.

This ruling signals that changes may be coming for marketers relying on Google’s integrated advertising system.

Google intends to appeal the decision, extending the legal battle for years.

Featured Image: sirtravelalot/Shutterstock

TikTok Launches Footnotes: Its Answer To X’s Community Notes via @sejournal, @MattGSouthern

TikTok is testing a new feature called “Footnotes” that adds extra information to videos on the platform.

The test will start today in the United States.

What Are TikTok Footnotes?

Footnotes let approved TikTok users add information to videos. This feature aims to make content more trustworthy.

TikTok calls this a “community-based approach” where many users help improve information quality.

Who Can Contribute Footnotes?

TikTok has rules for who can add footnotes. US users can apply now, and TikTok will also invite eligible users.

To qualify, you must:

  • Have used TikTok for more than six months
  • Be at least 18 years old
  • Have a clean record with no recent Community Guidelines violations

TikTok will slowly give more people access over the coming months. Approved users can both add footnotes and rate others’ contributions.

How The System Works

TikTok’s announcement explains that Footnotes uses a special ranking system to help people with different viewpoints find common ground.

The system lets contributors add footnotes and vote on how helpful others’ additions are. Only footnotes that enough people find helpful will be shown to everyone.

As more people write and rate footnotes on different topics, the system will get better at displaying the most valuable information.
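The display logic TikTok describes can be illustrated with a small sketch: a footnote is shown only when enough raters, drawn from more than one viewpoint group, find it helpful. The function name, grouping scheme, and thresholds below are illustrative assumptions, not TikTok’s actual system.

```python
# Hypothetical sketch of a community-rating display rule like the one
# TikTok describes. Thresholds and the viewpoint-group model are
# illustrative assumptions only.

def should_show_footnote(ratings, min_ratings=5, min_helpful_share=0.7):
    """ratings: list of (viewpoint_group, is_helpful) tuples."""
    if len(ratings) < min_ratings:
        return False  # not enough signal yet
    groups = {g for g, _ in ratings}
    if len(groups) < 2:
        return False  # require agreement across different viewpoints
    helpful = sum(1 for _, h in ratings if h)
    return helpful / len(ratings) >= min_helpful_share

ratings = [("a", True), ("a", True), ("b", True), ("b", True), ("b", False)]
print(should_show_footnote(ratings))  # True: both groups rated, 80% helpful
```

The cross-group requirement is the key idea borrowed from bridging-based ranking: volume of helpful votes alone is not enough if they all come from one side.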

Similar to X’s Community Notes

TikTok’s Footnotes is similar to Community Notes on X. TikTok mentions that Footnotes is “inspired by the open-sourced system that other platforms use,” which appears to reference Community Notes.

Both systems:

  • Let users add context to posts
  • Use a rating system where people with different viewpoints need to agree
  • Require contributors to meet specific standards
  • Only show notes that many users find helpful
  • Aim to improve content quality through community input rather than just relying on platform moderators

This approach to content checking is becoming popular across social media as platforms look for better ways to handle misinformation without being accused of bias.

Part of a Broader Industry Shift

TikTok’s Footnotes launch comes amid a trend in social media content moderation. Following X’s Community Notes system, Meta announced in March that it would replace its third-party fact-checking program with its own Community Notes feature.

This shift toward community-based moderation represents a major change in how platforms handle potentially misleading content. Rather than relying on centralized fact-checkers, these platforms now empower users to provide context.

The timing of these changes is notable, as they follow President Trump’s return to office and come amid ongoing regulatory scrutiny. For TikTok specifically, this move comes at a sensitive time. The company faces a June 19 deadline for its parent company, ByteDance, to divest its U.S. operations, following a 75-day extension granted by the Trump administration.

Looking Ahead

TikTok says Footnotes is still in testing. The company will gather feedback from users, contributors, and creators to improve the feature. Marketers should watch how this develops before making big strategy changes.


Featured Image: ShutterStockies/Shutterstock

Google’s New Domain Structure: What’s Next For Hreflang? via @sejournal, @MattGSouthern

Google is making a big change to its domain structure. Soon, all country-specific Google domains will redirect to Google.com.

This change ties into earlier hints that Google may rely less on hreflang markup, showing how Google is changing its approach to international search.

Google Consolidates Domain Structure

Google announced plans to phase out country-specific domains like google.fr (France), google.ca (Canada), and google.co.jp (Japan). All these will eventually redirect to Google.com.

Google says in its announcement:

“Over the years, our ability to provide a local experience has improved. In 2017, we began providing the same experience with local results for everyone using Search, whether they were using google.com or their country’s ccTLD.”

Google explained that country-level domains are no longer needed because they can now deliver locally relevant results no matter which domain you use.

Implementation Timeline

Google will roll out this change slowly over the coming months, giving users time to adjust to the new system.

While the URL in your browser will change, Google says search will still work the same way.

Google stressed that the update “won’t affect the way Search works, nor will it change how we handle obligations under national laws.”

Connection to Hreflang Evolution

This domain change seems to be part of a bigger shift in how Google handles international content.

In July, Google’s Gary Illyes hinted that they might rely less on manual hreflang tags and more on automatic language detection.

Illyes stated in a podcast:

“Ultimately, I would want less and less annotations, site annotations, and more automatically learned things.”

SEO professional Montse Cano pointed out this connection in a social media post, noting that “hreflang might actually change too due to improvements in AI.”

While no changes are confirmed, it’s something to watch for in the future.

Implications For SEO Professionals

This change affects search marketers in several ways, especially those working on international SEO:

  • Your analytics will show different referral patterns as traffic moves from country-specific domains to Google.com.
  • Along with less reliance on hreflang, website managers may have fewer technical tasks for international targeting.
  • Google seems more confident in automatically detecting the right content versions for users.
  • Users should get a more uniform experience across regions while still seeing localized results.

Next Steps

While Google is getting better at automatic detection, SEO pros should still:

  • Keep using hreflang tags until Google officially says otherwise
  • Make sure your site clearly signals language and regional targeting
  • Watch your analytics for traffic pattern changes during the transition
  • Think about how this affects SEO strategies that relied on country-specific domains
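For the first two items above, a minimal sketch of emitting hreflang annotations may help; the URL pattern, locale codes, and helper name here are hypothetical examples, not a prescribed implementation.

```python
# A minimal sketch of generating hreflang link tags for a page's
# language/region variants. All URLs and locale codes are hypothetical.

def hreflang_tags(variants, x_default):
    """variants: dict mapping hreflang codes to absolute URLs."""
    tags = [f'<link rel="alternate" hreflang="{code}" href="{url}" />'
            for code, url in sorted(variants.items())]
    # x-default covers users who match none of the listed locales
    tags.append(f'<link rel="alternate" hreflang="x-default" href="{x_default}" />')
    return "\n".join(tags)

print(hreflang_tags(
    {"en-us": "https://example.com/en-us/",
     "fr-fr": "https://example.com/fr-fr/"},
    "https://example.com/",
))
```

Each variant page should carry the full set of annotations, including a self-reference, so the signals stay reciprocal.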

Key Takeaway

This change shows Google is more confident in understanding context, language, and user intent without needing explicit signals like separate domains.

Combined with discussions about automatic language detection, Google’s AI seems ready to handle work that once required manual setup.

SEO professionals should see this as part of search technology’s natural evolution. Stay alert to how these changes affect your international search visibility and traffic.


Featured Image: JHVEPhoto/Shutterstock

LinkedIn Study Finds Adding Links Boosts Engagement By 13% via @sejournal, @MattGSouthern

A new study of over 577,000 LinkedIn posts challenges common marketing advice. It finds that posts with links get 13.57% more interactions and 4.90% more views than posts without links.

The LinkedIn study by Metricool analyzed nearly 48,000 company pages over three years. The findings give marketers solid data to rethink their LinkedIn strategies.

Link Performance Contradicts Common Advice

For years, social media experts have warned against adding links in LinkedIn posts.

Many claimed the platform would show these posts to fewer people to keep users on LinkedIn.

This new research says that’s wrong.

The data shows that about 31% of LinkedIn posts contained links to other websites. These posts consistently did better than posts without links.

Image Credit: Metricool LinkedIn Study 2025.

Content Format Performance Reveals Unexpected Winners

The study also found big differences in how content types perform.

Carousels (document posts) work best for engagement, with the highest engagement rate (45.85%) of any format. People on LinkedIn are willing to spend time clicking through multiple slides.

Polls are a missed opportunity. They make up only 0.00034% of all posts analyzed but got 206.33% more reach than average posts. Almost no one uses them, but they perform well.

Text-only posts performed worse than visual content across all metrics. Despite being common, they received the fewest interactions.

Video Content Shows Remarkable Growth

LinkedIn video content grew by 53% last year, with engagement up by 87.32%. This growth is faster than on TikTok, Reels, and YouTube.

The report states:

“Video posting may have increased by 13.77%, but the real story is in the rise of impressions (+73.39%) and views (+52.17%). Users are engaging more with video content, which indicates that LinkedIn is prioritizing this format in its algorithm.”

Industry-Specific Insights

The research broke down performance by industry. Surprisingly, sectors with smaller followings often get better engagement.

Manufacturing and utilities companies had fewer followers than education or retail companies, yet they received more engagement per post.

This challenges the idea that having more followers automatically means better results.

Practical Tips for Marketers

Based on these findings, here’s what LinkedIn marketers should do:

  • Don’t avoid links: Include links when they add value. They help, not hurt, your posts.
  • Mix up your content: Use more carousels and polls. They perform much better than other formats.
  • Send more traffic through LinkedIn: With clicks up 28.13% year-over-year, LinkedIn is better than many think for driving website traffic.
  • Be realistic about follower growth: Only 17.68% of accounts gained followers in 2024. Growing a LinkedIn following is harder than on other platforms.

Looking Ahead

The Metricool report challenges fundamental LinkedIn marketing beliefs with solid data. The most useful finding for SEO and content marketers is that adding links helps rather than hurts your posts.

Marketers should regularly test old advice against real performance data. What worked on LinkedIn in the past might not work in 2025.


Featured Image: Jartee/Shutterstock

Microsoft Monetize Gets A Major AI Upgrade via @sejournal, @brookeosmundson

Microsoft’s Monetize platform just received one of its biggest updates to date, and this one is all about working smarter, not harder.

Launched April 14, the new Monetize experience introduces AI-powered tools, a revamped homepage, and much-needed platform enhancements that give both publishers and advertisers more visibility and control.

This isn’t just a design refresh. With Microsoft Copilot now integrated, a new centralized dashboard, and a detailed history log, the platform is being positioned as a smarter command center for digital monetization.

Here’s what’s new and how it impacts your bottom line.

Copilot Is Now Built Into Monetize

Microsoft’s Copilot is now officially integrated into Monetize and available to all clients.

Copilot acts like a real-time AI assistant built directly into your monetization workflow. Instead of sifting through reports and data tables to figure out what’s wrong, Copilot surfaces insights automatically.

Think: “Why is my fill rate down?” or “Which line items are underperforming this week?”

Now, you’re able to ask and get answers without leaving the platform.

It’s designed to proactively alert users to revenue-impacting issues, like creatives that haven’t served, line items that didn’t deliver as expected, or unexpected dips in CPM.

For publishers who manage large volumes of inventory and multiple demand sources, this type of AI support can dramatically reduce troubleshooting time and help get campaigns back on track faster.

This allows monetization teams to shift their focus to revenue strategy, not just diagnostics.

A Smarter, Centralized Homepage

The new Monetize homepage is more than just a cosmetic update; it’s now the nerve center of the platform, built around clarity and action.

Instead of bouncing between multiple tabs or reports, users now land on a central dashboard that shows performance highlights, revenue trends, system notifications, and even troubleshooting insights.

It’s designed to cut down the time spent navigating the platform and ramp up how quickly you can make revenue-driving decisions.

Microsoft Monetize homepage performance highlights example. Image credit: Microsoft Ads blog, April 2025

Some of the key features of the new homepage include:

  • Performance highlights: Get a high-level summary of revenue trends and your most important KPIs at the top of the screen.
  • Revenue and troubleshooting insights: What was originally in the Monetize Insights tool is now integrated into the homepage.
  • Brand unblock and authorized sellers insights: Brings visibility to commonly overlooked revenue blocks.

In short: you no longer need to click into five different tabs to piece together what’s going on. The homepage is designed to give a high-level pulse on your monetization performance, with quick pathways to dig deeper when needed.

It’s particularly helpful for teams managing multiple properties, as you can prioritize where to intervene based on the highest revenue impact.

A Simplified Navigation Experience

Another welcome change is the platform’s redesigned navigation. Microsoft has moved to a cleaner left-hand panel layout, consistent with its broader product ecosystem.

It may seem like a small thing, but this update removes a lot of the friction users previously experienced when trying to find specific tools or data. Now, when you hover over a section like “Line Items” or “Reporting,” all related sub-navigation options appear instantly, helping users get where they need to go faster.

For publishers who jump between Microsoft Ads, Monetize, and other tools like Microsoft’s Analytics offerings, this consistency in layout creates a smoother experience overall.

History Log Adds Transparency

One of the more functional (but underrated) updates is the new history change log.

This feature gives users the ability to view a running history of platform changes, whether it’s edits to ad units, campaign-level changes, or adjustments made by different team members.

You can now:

  • Filter changes by user, object type, or date range
  • View a summary of all edits made to a specific item over time
  • Compare and search up to five different objects at once
  • Spot which changes may have inadvertently affected revenue or delivery

This is a significant time-saver for teams managing complex account structures or operating across multiple internal stakeholders.

Why Advertisers and Brands Should Care

While most of these updates are tailored to publishers, advertisers and brands also stand to benefit – especially those buying programmatically within Microsoft’s ecosystem.

Here are a few examples of how brands and advertisers can benefit:

  • Cleaner inventory = better delivery. Copilot helps publishers resolve issues like broken creatives or poor match rates faster. That means your ads are more likely to show where and when they should.
  • More consistent pricing. With publishers better able to manage and optimize their inventory, the fluctuations in floor pricing and bid dynamics can become more predictable.
  • Better campaign outcomes. When ad operations run more smoothly, campaign metrics tend to improve.
  • Reduced latency. The homepage’s new alert system flags latency issues immediately, helping prevent delayed or missed ad requests that impact advertiser performance.

In short: a more efficient supply side leads to fewer wasted impressions and stronger results for advertisers across Microsoft inventory.

Looking Ahead

With this revamp, Microsoft is signaling that Monetize is no longer just an ad server: it’s becoming an intelligence hub for publishers.

Between the Copilot integration, the centralized homepage, and detailed change logs, the platform gives monetization teams tools to act faster, stay informed, and optimize proactively.

By improving the infrastructure on the publisher side, Microsoft is also improving the health and quality of its programmatic marketplace. That’s a win for everyone involved, whether you’re selling impressions or buying them.

If you’re a publisher already using Monetize, now’s the time to explore these new features. If you’re an advertiser, these updates may mean more reliable inventory and smarter campaign performance across Microsoft’s supply chain.

Google AI Overview Study: 90% Of B2B Buyers Click On Citations via @sejournal, @MattGSouthern

Google’s AI Overviews have changed how search works. A TrustRadius report shows that 72% of B2B buyers see AI Overviews during research.

The study found something interesting: 90% of its respondents said they click on the cited sources to check information.

This finding differs from previous reports about declining click rates.

AI Overviews Are Affecting Search Patterns in Complex Ways

When AI summaries first appeared in search results, many publishers worried about “zero-click searches” reducing traffic. Many still see evidence of fewer clicks across different industries.

This research suggests B2B tech searches work differently. The study shows that while traffic patterns are changing, many users in their sample don’t fully trust AI content. They often check sources to verify what they read.

The report states:

“These overviews cite sources, and 90% of buyers surveyed said that they click through the sources cited in AI Overviews for fact-checking purposes. Buyers are clearly wanting to fact-check. They also want to consult with their peers, which we’ll get into later.”

If this pattern holds beyond this study, being cited in these overviews might offer meaningful visibility for specific queries.

From Traffic Goals to Citation Considerations

While marketers should still optimize for organic clicks, becoming a citation source for AI Overviews is also valuable.

The report notes:

“Vendors can fill the gap in these tools’ capabilities by providing buyers with content that answers their later-stage buying questions, including use case-specific content or detailed pricing information.”

This might mean creating clear, authoritative content that AI systems could cite. This applies especially to category-level searches where AI Overviews often appear.

The Ungated Content Advantage in AI Training

The research identified a common misconception about how AI works. Some vendors think AI models can access their gated content (behind forms) for training.

They can’t. AI models generally only use publicly available content.

The report suggests:

“Vendors must find the right balance between gated and ungated content to maintain discoverability in the age of AI.”

This creates a challenge for B2B marketers who put valuable content behind forms. Making more quality information public could influence AI systems. You can still keep some premium content gated for lead generation.

Potential Implications For SEO Professionals

For search marketers, consider these points:

  • Google’s emphasis on Experience, Expertise, Authoritativeness, and Trustworthiness seems even more critical for AI evaluation.
  • The research notes that “AI tools aren’t just training on vendor sites… Many AI Overviews cite third-party technology sites as sources.”
  • As organic traffic patterns change, “AI Overviews are reshaping brand discoverability” and possibly “increasing the use of paid search.”

Evolving SEO Success Metrics

Traditional SEO metrics like organic traffic still matter. But this research suggests we should also monitor other factors, like how often AI Overviews cite you and the quality of that traffic.

Kevin Indig is quoted in the report stating:

“The era of volume traffic is over… What’s going away are clicks from the super early stage of the buyer journey. But people will click through to visit sites eventually.”

He adds:

“I think we’ll see a lot less traffic, but the traffic that still arrives will be of higher quality.”

This offers search marketers one view on handling the changing landscape. Like with all significant changes, the best approach likely involves:

  • Testing different strategies
  • Measuring what works for your specific audience
  • Adapting as you learn more

This research doesn’t suggest AI is making SEO obsolete. Instead, it invites us to consider how SEO might change as search behaviors evolve.


Featured Image: PeopleImages.com – Yuri A/Shutterstock

Google Confirms That Structured Data Won’t Make A Site Rank Better via @sejournal, @martinibuster

Google’s John Mueller answered a question on Bluesky about whether structured data helps with SEO, which may change how some people think about it.

Schema.org Structured Data

When SEOs talk about structured data, they’re talking about Schema.org structured data. There are many kinds of structured data, but for SEO purposes only Schema.org structured data matters.

Does Google Use Structured Data For Ranking Purposes?

The person starting the discussion first posted that they were adding structured data to see if it helps with SEO.

Mueller’s first post was a comment about the value of preparation:

“Yes, and also no. I love seeing folks stumble into the world of online marketing, search engines, and all that, but reading up on how things technically work will save you time & help you focus.”

The original poster responded with a question:

“In your experience, how has it helped?”

That’s when Mueller gave his answer:

“(All of the following isn’t new, hence the meme.) Structured data won’t make your site rank better. It’s used for displaying the search features listed in developers.google.com/search/docs/… . Use it if your pages map to & are appropriate for any of those features.”

Google Only Uses Structured Data For Rich Results

It might seem confusing that structured data doesn’t help a site rank better, but it makes more sense to think of it as something that makes a site eligible for rich results. In the context of AI search results, Google uses regularly indexed data from websites, and because AI search results are a search feature, it may rely on the documented structured data for search-related features (read more about that here: Google Confirms: Structured Data Still Essential In AI Search Era).

The main points about structured data in the context of AI search, according to what was shared at a recent Search Central Live (hat tip to Aleyda Solis), are:

“Structured data is critical for modern search features

Check the documentation for supported types

Structured data is efficient, for computers easy to read, and very precise”

In a nutshell, for the context of AI Search: structured data supports search features, and AI Search is a search feature. AI Search also relies on the regular search index apart from the Schema.org structured data.

How Google Uses Structured Data In Search Features

Google uses only a fraction of the available Schema.org structured data. There are currently over 800 Schema.org structured data types, and Google uses only around 30 of them, publishing documentation for each that covers required properties, guidelines, and other requirements.

The only use Google has for structured data is to collect information in a machine-readable format so that it can display rich results, which can be seen for recipes, reviews, website information in carousel format, and even features that let users buy books directly from the search results.

Adding Schema.org structured data doesn’t guarantee that Google will display the site with a rich results feature in search; it only makes a site eligible to be displayed in rich results. Adding non-documented forms of Schema.org structured data won’t affect search optimization, because Google ignores all but the roughly 30 documented types.
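As an illustration of what one of those documented types looks like, here is a short sketch that builds Recipe markup in JSON-LD, the format Google recommends. The field values are hypothetical; Google’s documentation lists the exact required and recommended properties.

```python
import json

# A minimal sketch of Schema.org structured data in JSON-LD form for
# Recipe, one of the roughly 30 types Google documents. All field
# values below are illustrative examples, not real data.

recipe = {
    "@context": "https://schema.org",
    "@type": "Recipe",
    "name": "Simple Banana Bread",
    "author": {"@type": "Person", "name": "Example Author"},
    "recipeIngredient": ["3 ripe bananas", "2 cups flour"],
}

# On a page this would be embedded as:
# <script type="application/ld+json"> ... </script>
print(json.dumps(recipe, indent=2))
```

Valid markup of a documented type like this makes the page eligible for the recipe rich result; it does not change where the page ranks.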

Read the original discussion on Bluesky:

Adding structured data to see if it helps with SEO

Featured Image by Shutterstock/ViDI Studio

LinkedIn Launches New Creator Hub With Content Strategy Tips via @sejournal, @MattGSouthern

LinkedIn has launched a new “Create on LinkedIn” hub that helps professionals create better content, understand their stats, and use different post types.

The new hub is organized into three main sections: Create, Optimize, and Grow. It also includes a Creator Tools section with specific advice for each post format.

This resource offers helpful tips straight from LinkedIn for people using it to grow their business, build their brand, or share industry expertise.

Screenshot from: https://members.linkedin.com/create, April 2025.

Content Creation Best Practices

The “Create” section explains what makes a good LinkedIn post. It highlights four key parts:

  • A catchy opening that grabs attention
  • Clear, simple messaging
  • Your personal view or unique angle
  • Questions that start conversations

LinkedIn suggests posting 2-5 times weekly to build your audience, noting that “consistency helps you build community.”

The guide recommends these popular content topics:

  • Career advice and personal lessons
  • Industry knowledge and expertise
  • Behind-the-scenes workplace stories
  • Thoughts on industry trends
  • Stories about overcoming challenges

Analytics-Driven Content Optimization

The “Optimize” section shows how to use LinkedIn’s analytics to improve your strategy. It suggests these four steps:

  1. Regularly check how many people see and engage with your posts
  2. Adjust when you post based on when your audience is most active
  3. Set goals using your average performance numbers
  4. Make more content similar to your best-performing posts

Format-Specific Creator Tools

One of the most useful parts for marketers is the breakdown of LinkedIn’s different content types. Each comes with specific tips and technical requirements:

Video Content

LinkedIn says “videos build trust faster” and reveals that “85% of videos watched on LinkedIn are viewed on mute.” This makes subtitles a must.

The guide suggests keeping videos short (60-90 seconds) and posting them directly on LinkedIn instead of sharing links.

Text and Images

For regular posts, LinkedIn stresses being real:

“People want to learn from those they feel a connection to, so it’s best to be yourself.”

It suggests focusing on specific topics rather than broad ones.

Screenshot from: members.linkedin.com/create-tools, April 2025.

Newsletters

You can create newsletters if you have over 150 followers and have posted original content in the last 90 days.

LinkedIn recommends posting on a regular schedule and using eye-catching cover videos.

Screenshot from: members.linkedin.com/create-tools, April 2025.

Live Events

LinkedIn Live lets you stream to your audience using third-party broadcasting tools if you qualify. To help you get the best results, LinkedIn offers tips before, during, and after your event.

Screenshot from: members.linkedin.com/create-tools, April 2025.

Why This Matters

While organic reach has dropped on many social platforms, LinkedIn still offers good visibility opportunities.

The content strategy advice matches what many marketers already do on other platforms. However, it provides specific insights into how LinkedIn’s algorithm works and what its users prefer.

Next Steps for Marketers

LinkedIn’s focus on analytics and testing different content types shows it wants users to be more strategic.

Check out this new resource to update your LinkedIn strategies. The format details are especially helpful for optimizing your content.

With over 1 billion professionals on LinkedIn, the platform is essential for B2B marketing, promoting professional services, and building thought leadership.

Smart marketers will include these approaches in their social media plans.


Featured Image: Fanta Media/Shutterstock

OpenAI CEO Sam Altman Confirms Planning Open Source AI Model via @sejournal, @martinibuster

OpenAI CEO Sam Altman recently said the company plans to release an open source model more capable than any currently available. While he acknowledged the likelihood of it being used in ways some may not approve of, he emphasized that highly capable open systems have an important role to play. He described the shift as a response to greater collective understanding of AI risks, implying that the timing is right for OpenAI to re-engage with open source models.

The statement came during a Live at TED2025 interview in which interviewer Chris Anderson asked Altman whether the Chinese open source model DeepSeek had “shaken” him up.

Screenshot Of Sam Altman At Live at TED2025

Altman said that OpenAI is preparing to release a powerful open source model that approaches the capabilities of today’s most advanced AI models.

He responded:

“I think open source has an important place. We actually just last night hosted our first like community session to kind of decide the parameters of our open source model and how we want to shape it.

We’re going to do a very powerful open source model. I think this is important. We’re going to do something near the frontier, I think better than any current open source model out there.
This will not be all… like, there will be people who use this in ways that some people in this room, maybe you or I, don’t like. But there is going to be an important place for open source models as part of the constellation here…”

Altman then admitted that OpenAI was slow to act on open source but now plans to contribute meaningfully to the movement.

He continued his answer:

“You know, I think we were late to act on that, but we’re going to do it really well.”

About thirty minutes later in the interview, Altman circled back to the topic of open source, lightheartedly remarking that in a year the interviewer might yell at him for open sourcing an AI model. There are trade-offs in everything, he said, and he feels OpenAI has done a good job of bringing AI technology into the world in a responsible way.

He explained:

“I do think it’s fair that we should be open sourcing more. I think it was reasonable for all of the reasons that you asked earlier, as we weren’t sure about the impact these systems were going to have and how to make them safe, that we acted with precaution.

I think a lot of your questions earlier would suggest at least some sympathy to the fact that we’ve operated that way. But now I think we have a better understanding as a world and it is time for us to put very capable open systems out into the world.

“If you invite me back next year, you will probably yell at me for somebody who has misused these open source systems and say, why did you do that? That was bad. You know, you should have not gone back to your open roots. But you know, we’re not going to get… there’s trade-offs in everything we do. And we are one player in this, one voice in this AI revolution, trying to do the best we can and kind of steward this technology into the world in a responsible way.

I think we have over the last almost decade …we have mostly done the thing we’ve set out to do. We have a long way to go in front of us, our tactics will shift more in the future, but adherence to this sort of mission and what we’re trying to do I think, very strong.”

OpenAI’s Open Source Model

Sam Altman acknowledged OpenAI was “late to act” on open source but now aims to release a model “better than any current open source model.” His decision to release an open source AI model is significant because it will introduce additional competition and improvement to the open source side of AI technology.

OpenAI was established in 2015 as a non-profit organization but transitioned in 2019 to a closed source approach over concerns about potential misuse. Altman’s use of the word “steward” to describe OpenAI’s role in releasing AI technologies into the world reflects that same concern.

2025 is a vastly different world from 2019: many highly capable open source models are now available, DeepSeek among them. Was OpenAI’s hand forced by DeepSeek’s popularity? Altman didn’t say, instead framing the decision as an evolution from a position of responsible development.

Sam Altman’s remarks at the TED interview suggest that OpenAI’s new open source model will be powerful but not representative of its best model. Nevertheless, he affirmed that open source models have a legitimate place in the “constellation” of AI as a strategically important and technically capable part of the broader technological ecosystem.

Featured image screenshot by author

AI Search Study: Product Content Makes Up 70% Of Citations via @sejournal, @MattGSouthern

A new study tracking 768,000 citations across AI search engines shows that product-related content tops AI citations, making up 46% to 70% of all sources referenced.

This finding offers guidance on how marketers should approach content creation amid the growth of AI search.

The research, conducted over 12 weeks by XFunnel, looked at which types of content ChatGPT, Google (AI Overviews), and Perplexity most often cite when answering user questions.

Here’s what you need to know about the findings.

Product Content Visible Across Queries

The study shows AI platforms prefer product-focused content. Content with product specs, comparisons, “best of” lists, and vendor details consistently got the highest citation rates.

The study notes:

“This preference appears consistent with how AI engines handle factual or technical questions, using official pages that offer reliable specifications, FAQs, or how-to guides.”

Other content types struggled to get cited as often:

  • News and research articles each got only 5-16% of citations.
  • Affiliate content typically stayed below 10%.
  • User reviews (including forums and Q&A sites) ranged between 3-10%.
  • Blog content received just 3-6% of citations.
  • PR materials barely appeared, typically less than 2% of citations.

Citation Patterns Vary By Funnel Stage

AI platforms cite different content types depending on where customers are in their buying journey:

  • Top of funnel (unbranded): Product content led at 56%, with news and research each at 13-15%. This challenges the idea that early-stage content should focus mainly on education rather than products.
  • Middle of funnel (branded): Product citations dropped slightly to 46%. User reviews and affiliate content each rose to about 14%. This shows how AI engines include more outside opinions for comparison searches.
  • Bottom of funnel: Product content peaked at over 70% of citations for decision-stage queries. All other content types fell below 10%.

B2B vs. B2C Citation Differences

The study found big differences between business and consumer queries:

For B2B queries, product pages (especially from company websites) made up nearly 56% of citations. Affiliate content (13%) and user reviews (11%) followed.

For B2C queries, there was more variety. Product content dropped to about 35% of citations. Affiliate content (18%), user reviews (15%), and news (15%) all saw higher numbers.

What This Means For SEO

For SEO professionals and content creators, here’s what to take away from this study:

  • Adding detailed product information improves citation chances even for awareness-stage content.
  • Blogs, PR content, and educational materials are cited less often. You may need to change how you create these.
  • Check your content mix to make sure you have enough product-focused material at all funnel stages.
  • B2B marketers should prioritize solid product information on their own websites. B2C marketers need strategies that also encourage quality third-party reviews.

The study concludes:

“These observations suggest that large language models prioritize trustworthy, in-depth pages, especially for technical or final-stage information… factually robust, authoritative content remains at the heart of AI-generated citations.”

As AI transforms online searches, marketers who understand citation patterns can gain a competitive edge in visibility.


Featured Image: wenich_mit/Shutterstock