Archive

Generative AI

2026: When AI Assistants Become The First Layer via @sejournal, @DuaneForrester

What I’m about to say will feel uncomfortable to a lot of SEOs, and maybe even some CEOs. I’m not writing this to be sensational, and I know some of my peers will still look sideways at me for it. That’s fine. I’m sharing what the data suggests to me, and I want you to look at the same numbers and decide for yourself.
Too many people in our industry have slipped into the habit of quoting whatever guidance comes out of a search engine or AI vendor as if it were gospel. That’s like a soda company telling you, “Our drink is refreshing, you should drink more.” Maybe it really is refreshing. Maybe it just drives their margins. Either way, you’re letting the seller define what’s “best.”
SEO used to be a discipline that verified everything. We tested. We dug as deep as we could. We demanded evidence. Lately, I see less of that. This article is a call-back to that mindset. The changes coming in 2026 are not hype. They’re visible in the adoption curves, and those curves don’t care if we believe them or not. These curves aren’t about what I say, what you say, or what 40 other “SEO experts” say. These curves are about consumers, habits, and our combined future.
ChatGPT is reaching mass adoption in 4 years. Google took 9. Tech adoption is accelerating.
The Shocking Ramp: Google Vs. ChatGPT
Confession: I nearly called this section things like “Ramp-ocalypse 2026” or “The Adoption Curve That Will Melt Your Rank-Tracking Dashboard.” I had a whole list of ridiculous options that would have looked at home on a crypto shill blog. I finally dialed it back to the calmer “The Shocking Ramp: Google Vs. ChatGPT” because that, at least, sounds like something an adult would publish. But you get the idea: The curve really is that dramatic, but I just refuse to dress it up like a doomsday tabloid headline.
Image Credit: Duane Forrester
And before we really get into the details, let’s be clear that this is not comparing totals of daily active users today. This is a look at time-to-mass-adoption. Google achieved that a long time ago, whereas ChatGPT is going to do that, it seems, in 2026. This is about the vector. The ramp, and the speed. It’s about how consumer behavior is changing, and is about to be changed. That’s what the chart represents. Of course, when we reference ChatGPT-Class Assistants, we’re including Gemini here, so Google is front and center as these changes happen.
And Google’s pivot into this space isn’t accidental. If you believe Google was reacting to OpenAI’s appearance and sudden growth, guess again. Both companies have essentially been neck and neck in a thoroughbred horse race to be the leading next-gen information-parsing layer for humanity since day one. ChatGPT may have grabbed the headlines when it launched, but Google very quickly became its equal, and the gap at the top that these companies are chasing is vanishing quickly. Consumers soon won’t be able to say which is “the best” in any meaningful way.
What’s most important here is that as consumers adopt, behavior changes. I cannot recommend enough that folks read Charles Duhigg’s “The Power of Habit” (non-aff link). I first read it over a decade ago, and it still brings home the impact that a single moment of habit formation can have on a product’s success and growth. And that is what the chart above is speaking to. New habits are about to be formed by consumers globally.
Let’s rewind to the search revolution most of us built our careers on.

Google launched in 1998.
By late 1999, it was handling about 3.5 million searches per day (Market.us, September 1999 data).
By 2001, Google crossed roughly 100 million searches a day (The Guardian, 2001).
It didn’t pass 50% U.S. market share until 2007, about nine years after launch (Los Angeles Times, August 2007).

Now compare that to the modern AI assistant curve:

ChatGPT launched in November 2022.
It reached 100 million monthly active users in just two months (UBS analysis via Reuters, February 2023).
According to OpenAI’s usage study published Sept. 15, 2025, in the NBER working-paper series, by July 2025, ChatGPT had ~700 million users (about 10% of the world’s adults) sending ~18 billion messages per week.
Barclays Research projects ChatGPT-class assistants will reach ~1 billion daily active users by 2026 (Barclays note, December 2024).

In other words: Google took ~9 years to reach its mass-adoption threshold. ChatGPT is on pace to do it in ~4.
That slope is a wake-up call.

Four converging forces explain why 2026 is the inflection year:

Consumer scale: Barclays’ projection of 1 billion daily active users by 2026 means assistants are no longer a novelty; they’re a mainstream habit (Barclays).
Enterprise distribution: Gartner forecasts that about 40% of enterprise applications will ship with task-doing AI agents by 2026. Assistants will appear inside the software your customers already use at work (Gartner Hype Cycle report cited by CIO&Leader, August 2025).
Infrastructure rails: Citi projects ≈ $490 billion in AI-related capital spending in 2026, building the GPUs and data-center footprint that drop latency and per-interaction cost (Citi Research note summarized by Reuters, September 2025).
Capability step-change: Sam Altman has described 2026 as a “turning-point year” when models start “figuring out novel insights” and by 2027, become reliable task-doing agents (Sam Altman blog, June 2025). And yes, this is the soda salesman telling us what’s right here, but still, you get the point, I hope.

This isn’t a calendar-day switch-flip. It’s the slope of a curve that gets steep enough that, by late 2026, most consumers will encounter an assistant every day, often without realizing it.
What Mass Adoption Feels Like For Consumers
If the projections hold, the assistant experience by late 2026 will feel less like opening a separate chatbot app and more like ambient computing:

Everywhere-by-default: built into your phone’s OS, browser sidebars, TVs, cars, banking, and retail apps.
From Q&A to “do-for-me”: booking travel, filling forms, disputing charges, summarizing calls, even running small projects end-to-end.
Cheaper and faster: thanks to the $490 billion infrastructure build-out, response times drop and the habit loop tightens.

Consumers won’t think of themselves as “using an AI chatbot.” They’ll just be getting things done, and that subtle shift is where the search industry’s challenge begins. And when 1 billion daily users prefer assistants for [specific high-value queries your audience cares about], that’s not just a UX shift, it’s a revenue channel migration that will impact your work.
The SEO & Visibility Reckoning
Mass adoption of assistants doesn’t kill search; it moves it upstream.
When the first answer or action happens inside an assistant, our old SERP tactics start to lose leverage. Three shifts matter most:
1. Zero-Click Surfaces Intensify
Assistants answer in the chat window, the sidebar, the voice interface. Fewer users click through to the page that supplied the answer.
2. Chunk Retrievability Outranks Page Rank
Assistants lift the clearest, most verifiable chunks, not necessarily the highest-ranked page. OpenAI’s usage paper shows that three-quarters of consumer interactions already focus on practical guidance, information, and writing help (NBER working paper, September 2025). That means assistants favor well-structured task-led sections over generic blog posts. Instead of optimizing “Best Project Management Software 2026” as a 3,000-word listicle, for example, you need “How to set up automated task dependencies” as a 200-word chunk with a code sample and schema markup.
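To make that concrete, here is a minimal sketch of the schema side of such a chunk: a schema.org HowTo block for that hypothetical “How to set up automated task dependencies” section, generated with Python. Every value below is a placeholder (names, URLs, dates), not a prescription; the point is that each step is a small, quotable, anchored unit an assistant can lift and attribute.

```python
import json

# Illustrative only: a schema.org HowTo block for the hypothetical chunk above.
# All names, URLs, and dates are placeholders; swap in your real details.
howto_chunk = {
    "@context": "https://schema.org",
    "@type": "HowTo",
    "name": "How to set up automated task dependencies",
    "dateModified": "2026-01-15",
    "author": {"@type": "Person", "name": "Jane Example"},
    "step": [
        {
            "@type": "HowToStep",
            "name": "Open the project settings",
            "text": "Go to Project > Settings > Dependencies.",
            "url": "https://example.com/docs/task-dependencies#step-1",
        },
        {
            "@type": "HowToStep",
            "name": "Link the dependent tasks",
            "text": "Select a task, choose 'Blocked by', and pick the prerequisite task.",
            "url": "https://example.com/docs/task-dependencies#step-2",
        },
    ],
}

# The JSON-LD payload you would embed in a <script type="application/ld+json"> tag.
print(json.dumps(howto_chunk, indent=2))
```

The stable #step-1 and #step-2 anchors, the timestamp, and the named author are the same ideas called out in the next point: things a system can quote, timestamp, and verify.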
3. Machine-Validated Authority Wins
Systems prefer sources they can quote, timestamp, and verify: schema-rich pages, canonical PDFs/HTML with stable anchors, authorship credentials, inline citations.

The consumer adoption numbers grab headlines, but the enterprise shift may hit harder and faster.
When Gartner forecasts that 40% of workplace applications will ship with embedded agents by 2026, that’s not about adding a chatbot to your product; it’s about your buyer’s daily tools becoming information gatekeepers.
Picture this: A procurement manager asks their Salesforce agent, “What’s the best solution for automated compliance reporting?” The agent surfaces an answer by pulling from its training data, your competitor’s well-structured API documentation, and a case study PDF it can easily parse. Your marketing site with its video hero sections and gated whitepapers never enters the equation.
This isn’t hypothetical. Microsoft 365 Copilot, Salesforce Einstein, SAP Joule, these aren’t research tools. They’re decision environments. If your product docs, integration guides, and technical specifications aren’t structured for machine retrieval, you’re invisible at the moment of consideration.
The enterprise buying journey is moving upstream to the data layer before buyers ever land on your domain. Your visibility strategy needs to meet them there.
A 2026-Ready Approach For SEOs And Brands
Preparing for this shift isn’t about chasing a new algorithm update. It’s about becoming assistant-ready:

Restructure content into assistant-grade chunks: 150-300-word sections with a clear claim > supporting evidence > inline citation, plus stable anchors so the assistant can quote cleanly.
Tighten provenance and trust signals: rich schema (FAQ, HowTo, TechArticle, Product), canonical HTML + PDF versions, explicit authorship and last-updated stamps.
Mirror canonical chunks in your help center, product manuals, developer docs to meet the assistants where they crawl.
Expose APIs, sample data, and working examples so agents can act on your info, not just read it.
Track attribution inside assistants to watch for brand or domain citations across ChatGPT, Gemini, Perplexity, etc., then double down on the content that’s already surfacing (a minimal do-it-yourself sketch follows this list).
Get used to new tools that can surface new metrics and monitor areas your existing tools aren’t focused on (SERPRecon, Rankbee, Profound, Waikay, ZipTie.dev, etc.).
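As promised above, here is a minimal sketch of a do-it-yourself starting point for attribution tracking, assuming you manually export assistant answers (from ChatGPT, Gemini, Perplexity, and so on) into plain-text files. The directory layout and brand terms are hypothetical, and the dedicated tools listed above go much further; this just shows how little it takes to start counting whether you show up at all.

```python
import re
from collections import Counter
from pathlib import Path

# Hypothetical brand names and domains to watch for in saved assistant answers.
BRAND_TERMS = ["Acme Analytics", "acme-analytics.com"]

def count_citations(answer_dir: str) -> Counter:
    """Count brand-term mentions across exported assistant answers.

    Assumes one .txt file per saved answer, e.g. assistant_answers/chatgpt/2026-01-15.txt.
    """
    counts = Counter()
    for path in Path(answer_dir).rglob("*.txt"):
        text = path.read_text(encoding="utf-8").lower()
        for term in BRAND_TERMS:
            # Non-overlapping, case-insensitive occurrences of the term.
            counts[term] += len(re.findall(re.escape(term.lower()), text))
    return counts

if __name__ == "__main__":
    # Hypothetical folder of exported answers, grouped by assistant.
    print(count_citations("assistant_answers"))
```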

Back To Verification
The mass-adoption moment in 2026 won’t erase SEO, but it will change what it means to be discoverable.
We can keep taking guidance at face value from the platforms that profit when we follow it, or we can go back to questioning why advice is given, testing what the machines actually retrieve, and trusting only what we can verify. We used to have to learn these things for ourselves, and we seem to have slipped into easy-button mode over the last 20 years.
Search is moving upstream to the data layer. If you want to stay visible when assistants become the first touch-point, start adapting now, because this time the curve isn’t giving you nine years to catch up.

This post was originally published on Duane Forrester Decodes.

Featured Image: Roman Samborskyi/Shutterstock

Enterprise SEO Column

Preparing C-Level For The Agentic Web via @sejournal, @TaylorDanRW

Artificial intelligence is changing how the web works. Search engines, voice assistants, and generative platforms are altering how people find information and make decisions.
The internet is no longer built only for human visitors. Brands now operate in an environment where both people and intelligent systems interact with their content, reshaping how websites are designed, found, and measured.
Dual Audiences
The modern web now serves two audiences.
Websites are designed not only for people to read and navigate, but also for AI systems that interpret and act on information on behalf of users. This change is as significant as the move to mobile-first design.
Traditional search practices that focused on keyword visibility, human readability, and click-through rates are becoming less effective. AI-generated summaries in search results, along with tools like ChatGPT, Perplexity, and Gemini, surface information directly to users without them visiting a site. Website traffic and engagement data are becoming less reliable measures of success.
Brands need content that performs two functions. It must provide value and clarity for human visitors while also being structured in a way that can be understood and used by AI systems. This calls for new thinking around design, content structure, and data transparency.
Redefining Visibility
Visibility is no longer only about ranking highly on a search results page. It now depends on how often a brand’s information is cited or used by AI systems.
Brands with well-organized data, clear product details, and content that machines can interpret are more likely to appear in AI-driven environments. Websites should utilize modular, structured frameworks that separate content from design, allowing AI agents to easily process the information.
Modern SEO now extends beyond technical optimization and backlinks. It includes preparing data for language models and voice assistants, product feeds, and FAQ content to help make brand information accessible both to people and to machines.
Content strategies also need to evolve. Pages should be written to answer user questions directly, not just target keywords. AI systems prioritize clarity, authority, and logical structure. Brands that provide straightforward, useful information are more likely to appear in AI summaries and responses.
Personalization At Scale
AI is expanding how brands personalize content and recommendations. Machine learning and first-party data allow for tailored experiences at a scale that was not previously possible.
The challenge is maintaining a consistent brand identity while using automated personalization. Without strong frameworks, brand messaging can become inconsistent or lose tone.
To avoid this, organizations should build clear structures, tone-of-voice guidance, and defined data governance. Modular content systems make it possible to create personalized messages without losing consistency. Each variation should feel part of the same brand experience.
A strong data strategy is essential. Customer Data Platforms and analytics tools help brands understand context and behavior, enabling more relevant and timely communication. Human oversight remains important to ensure brand values and tone are respected across automated outputs.
Measuring Success In The AI Era
As AI reduces clicks and sessions, traditional marketing metrics are less meaningful. C-level leaders are focusing more on results than activity. The key question has become how effectively a brand’s content or product is being chosen or recommended by intelligent systems.
Brands can measure performance in three areas:
1. Agent Visibility And Selection
This reflects how often AI systems reference or prioritize a brand’s content. Tracking brand mentions and inclusion across AI platforms is becoming an important new visibility metric.
2. AI-Driven Traffic Referrals
Although click-throughs are fewer, visitors who arrive via AI recommendations often convert more quickly. Measuring how these users behave can reveal intent and content quality.
3. Brand Sentiment And Experience Quality
In personalized environments, success is not only about visibility but also how users feel. Measuring satisfaction, accuracy, and tone across AI interactions is key.
To do this effectively, brands need updated analytics. Tools that assess visibility in generative systems and track AI-driven referrals are beginning to emerge. Integrating these into broader measurement frameworks will be essential.
Preparing For The Open Agentic Web
The next phase of web development is the open agentic web, where AI systems can browse, interpret, and act across sites on behalf of users. These agents can make bookings, complete purchases, and retrieve information without direct user input.
New web standards are supporting this transition. Protocols such as NLWeb are helping make content easier for AI systems to access, with the aim of creating smoother interaction between users, brands, and intelligent systems.
Businesses should start adapting their digital infrastructure now. Content management systems, APIs, and data models should serve both human users and AI agents. Making information accessible in a structured, secure way will determine how effectively brands participate in this environment.
This shift also brings new decisions. Some brands may allow AI systems to use their content to improve visibility, while others may prefer to limit access. Each approach affects how visible and discoverable the brand becomes.
Leaders should see this as a major transition. Those who act early to build structured, machine-readable foundations will have an advantage. Those who delay risk losing visibility as AI systems become key gateways to information.
What C-Level Needs To Know
Executives should focus on three main areas as the open agentic web develops:
1. Build A Flexible Digital Infrastructure
Invest in structured, modular systems that can evolve with AI standards. APIs, data models, and schemas should be consistent and accessible.
2. Update Performance Metrics
Shift away from traffic and CTRs. Focus on agent selection, task completion, and performance outcomes that reflect both human and machine interactions.
3. Align Teams Around Data And Content
AI integration spans marketing, technology, and product functions. Shared frameworks are needed to ensure tone, data, and strategy stay consistent.
What Brand Teams Need To Do
Marketing teams should turn these strategies into practical action.
They need to create content that answers questions clearly, maintain clean data structures, and design experiences that both humans and machines can interpret. Testing structured formats such as conversational FAQs, knowledge hubs, and metadata-rich content will help future-proof visibility.
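As one small illustration, a conversational FAQ becomes far easier for machines to reuse when it is also exposed as structured data. The sketch below (Python, with placeholder questions and answers) builds a schema.org FAQPage block; the wording is hypothetical, but the shape is what lets an AI system lift a question and its answer as a self-contained unit.

```python
import json

# Illustrative only: a schema.org FAQPage block with placeholder Q&A content.
faq_page = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "Does the service support single sign-on?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "Yes. SAML and OIDC single sign-on are included on all business plans.",
            },
        },
        {
            "@type": "Question",
            "name": "Can I export my data?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "Yes. Full CSV and JSON exports are available from the account settings page.",
            },
        },
    ],
}

# JSON-LD payload to embed alongside the human-readable FAQ.
print(json.dumps(faq_page, indent=2))
```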
Measurement practices must also evolve. Teams should begin testing tools that monitor how often AI platforms reference their content and how structured data contributes to discoverability.
A New Web For Humans And Machines
The web is moving towards closer interaction between people and intelligent systems. Success will depend on how well brands design experiences that are understandable and trustworthy for both parties.
For business leaders, the goal is to build digital systems that operate clearly and efficiently. For brands, it means creating content and structures that work with AI rather than against it.
The open agentic web will reward brands that connect visibility, personalization, and measurement into a single strategy. Those that act early will help shape how this new phase of the internet develops.

Featured Image: Anton Vierietin/Shutterstock

SEO

30-Year SEO Expert: Why AI Search Isn’t Overhyped & What To Focus On Right Now via @sejournal, @theshelleywalsh

Out of the many direct conversations I’ve had in the industry, there’s a mixed reaction to how much AI might impact SEO and search. How catastrophic the effect feels depends on your business model: how many clicks LLM platforms have taken away and, more importantly, how much your end business outcomes have suffered.
Google still remains the dominant search engine, and right now it is still referring the majority of traffic, although traffic volumes are significantly reduced, especially for news publishers.
From my conversations, many SEOs believe that, despite this, Google is not going anywhere and it’s business as usual.
To dig into this topic, I spoke to Carolyn Shelby, who co-founded an ISP in 1994 and has worked in the search industry for the 30 years since, with major brands such as Disney, ESPN, and Tribune Publishing.
Over three decades, Carolyn has seen disruption in the industry many times over, so I asked for her IMHO: Is AI search overhyped?
Her opinion is that focusing on just 1% of a huge share is a good strategy, that we should be focused on technical accessibility, and that no one should be ignoring AI search. She also thinks that Google is purposely throttling its own progression right now.
The Blogging Economy Is Imploding
Right now, AI and LLMs are dramatically changing search business models and how you can make money online. The biggest impact is on blogging-for-dollars and page-views-for-AdSense business models.
As Carolyn said, “It’s not viable going forward as a sustainable business strategy to spin up garbage content sites and slap AdSense all over them and then make enough money to live. Hobby creators or people that are creating out of love will continue to create because they’re doing it for themselves, not for the money. And the amount of money they will make will be enough to maybe buy them coffee every month, but it is not going to be enough to pay their mortgage.
So, the people that are looking for the money to pay their mortgage or buy them a Lamborghini are going to go where there is money to be made, which is over to TikTok and over to YouTube and over to the video platforms.”
This isn’t a temporary disruption. Right now, we’re experiencing a fundamental restructuring of how value is created and captured on the internet.
The influence of TikTok has been building for a few years, and it is one platform that could resist, and even flourish in the face of, the changes happening in search.
SEO experts I have spoken to cited TikTok as a space where a startup could break into a niche.
1% Of A Trillion Is Traffic Worth Taking
Recently, in a podcast, Carolyn said that less than 1% of traffic comes from AI tools/platforms. On the surface, 1% might seem to be insignificant, but if you consider that 1% of a trillion is 10 billion, that’s a huge amount of traffic.
“If you told me today that if I focused on nothing but ChatGPT and I could guarantee I would monopolize the 1% of traffic, I would jump on that because that is so much traffic,” Carolyn said.
As marketers, we can easily get swept away by the big ‘trillion’ numbers, but we should remember that it can be far easier to gain traction in a smaller niche with less competition than to drown in a crowded space.
For example, SEOs have all been focused on Google because it has so much traffic potential. However, Bing is less competitive and could convert better, so it could be far more beneficial to invest in Bing.
Carolyn believes that the same logic applies to AI platforms. “It’s better to have the traffic from the people that convert, and it’s better to have people coming to your website that are going to convert in general. If you can increase that, increase that.”
Carolyn was clear that in her opinion AI is not overhyped. “I think if you ignore these other opportunities with the LLMs and with AI, then you’re doing yourself a disservice. I wouldn’t call this overhyped. I would call this a shifting mindset, a shift in a paradigm.”
Google Is Holding Back As A Strategic Play
I asked Carolyn if she thought that Google could claw back its dominance, and she has an interesting theory centered on how Google’s Department of Justice battles might be influencing its competitive behavior.
Carolyn explained that during the appeals process, Google needs to prove it’s not a monopoly, which creates an incentive structure.
“They need to prove that they don’t hold absolute control over absolutely everything that happens. Which means they’re going to be inclined to allow other people to encroach on their position because that reinforces their point that they’re not a monopoly.”
Think of it like a driver spotting a speed trap; you slow down until you’re out of range, then floor it again. Google is playing the long game.
Carolyn also identified Chrome data as a critical factor, as it’s Google’s biggest competitive advantage. User signals and behavioral data from Chrome give Google insights that drive innovation and performance; forcing the search engine to share this data would fundamentally alter the competitive landscape.
“You take the Chrome data away, that’s a different story. And I think that would be taking the gas out of their engine,” Carolyn commented.
AI Mode Is Here To Stay
We moved the conversation on to AI Mode, and I asked what she thought of the Google AI-generated search results.
Carolyn’s opinion is that Google is not going to roll it back, and it’s here to stay. “I think they’re going to take steps to make sure that we all get used to it and that we all start using it the way they want us to use it to get the best results.”
Carolyn acknowledged that AI Mode creates friction for users conditioned to traditional keyword searches.
“I feel weird asking Google questions like I would ask ChatGPT,” she admitted. “I’m conditioned to interface with ChatGPT in one way and I’m conditioned to interface with Google in a different way and my habits just haven’t changed yet.”
Her belief is that adaptation is inevitable. Google’s dominance means it can guide users toward new interaction patterns.
“They’ll just keep giving us bad answers and we’ll keep trying again because that’s what we do until we figure out how to get the answers that we want out of the machine … together we’ll all keep iterating.”
Google has maintained a position at the forefront of industry development for the last 25 years with constant iteration, and it has wanted to be a personal assistant for years. AI is enabling that to happen.
“It would be ridiculous for Google to say, ‘We’re going to not evolve and we’re going to stay the way we’ve been doing things for 20 years while everyone else is doing AI.’” Carolyn commented. “There’s too much investment in the infrastructure. It’s to everyone’s benefit to learn how to operate within this new environment.”
What SEOs Should Focus On Right Now
My final question to Carolyn was to ask what she thought SEOs should focus on right now.
For me, actual marketing strategy has long been overlooked in SEO, and Carolyn echoed this in her response, saying there are a lot of marketing aspects that have been ignored.
In her opinion, though, the main focus should be on the technical aspects of SEO, not just for search engines but also for LLMs. She emphasized ensuring content accessibility at the machine level.
“I think focusing on the technical fundamentals,” Carolyn explained. “Can the machines [LLMs] traverse your site and retrieve the content, and is the content retrievable in the way you need it to be retrievable?”
SEOs should be aware that different LLMs access content differently. Carolyn noted that some platforms, like Anthropic, only capture first-view content, missing anything in toggles or tabs.
“Your job is to figure out what is being found and to make sure that the message you need to convey is in the stuff that is being read. If it’s not, if it’s hidden in something, you have to unhide it.
“There are a lot of different things to do to get to that point, which is what constitutes SEO. Making sure that it’s accessible and it’s the message that you want seen, that if you boil it all down, that is your job.”
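Carolyn’s point about first-view content is easy to spot-check yourself. Here is a minimal sketch (Python, with a placeholder URL and phrase; it is not how any particular LLM crawler actually works) that simply asks whether a key message appears in the raw HTML a basic fetcher receives. If the phrase only appears after JavaScript renders a tab or toggle, it will be missing here.

```python
# Rough spot-check: does a key message appear in the raw, unrendered HTML?
# Assumes the `requests` package is installed; the URL and phrase are placeholders.
import requests

def message_in_raw_html(url: str, phrase: str) -> bool:
    """Return True if the phrase is present in the HTML as served (no JavaScript rendering)."""
    html = requests.get(url, timeout=10).text
    return phrase.lower() in html.lower()

if message_in_raw_html("https://example.com/pricing", "volume discounts available"):
    print("The phrase is in the initial HTML, so simple fetchers can see it.")
else:
    print("The phrase is missing from the raw HTML; it may be hidden behind JavaScript, tabs, or toggles.")
```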
The Future Belongs To Those Who Adapt & Adopt
Rather than dismissing AI search as hype, Carolyn thinks we’re witnessing a fundamental transformation that requires strategic adaptation. Business models are changing, and success demands understanding how machines access and interpret content.
“If you ignore these opportunities with the LLMs and with AI, then you’re doing yourself a disservice.”
The future belongs to those who understand that 1% of a trillion is a huge market, who ensure their content is truly accessible to every machine that matters, and who can adopt real marketing.
The professionals who embrace AI will define the next era of SEO.
Watch the full video interview with Carolyn Shelby here:
[embedded content]
Thank you to Carolyn Shelby for offering her insights and being my guest on IMHO.

Featured Image: Shelley Walsh

News

WP Engine Vs Automattic & Mullenweg Is Back In Play via @sejournal, @martinibuster

WP Engine filed a Second Amended Complaint against Automattic and Matt Mullenweg in response to the September 2025 court order that dismissed several counts but gave WP Engine an opportunity to amend and fix issues in its earlier filing. Although Mullenweg blogged last month that the ruling was a “significant milestone,” that’s somewhat of an overstatement because the court had, in fact, dismissed the counts related to antitrust and monopolization with leave to amend, allowing WP Engine to amend and refile its complaint, which it has now done.
WP Engine Versus Automattic Is Far From Over
In last month’s court order, two claims were dismissed outright because of technical issues, not because they lacked merit.
Two Claims That Were Dismissed

Count 4, Attempted Extortion: WP Engine’s lawyers cited a section of the California Penal Code for Attempted Extortion. The Penal Code is criminal law intended for use by prosecutors and cannot serve as the basis for a civil claim.
Count 16, Trademark Misuse, was also dismissed on the technical ground that trademark misuse can only be raised as a defense.

The remaining counts that were dismissed last month were dismissed with leave to amend, meaning WP Engine could correct the identified flaws and refile. WP Engine’s amended complaint shows that Automattic and Matt Mullenweg still have to respond to WP Engine’s claims and that the lawsuit is far from over.
Six Counts Refiled
WP Engine refiled six counts to cure the flaws the judge identified in the September 2025 court order, including its Computer Fraud and Abuse Act claim (Count 3).

Count 3: Computer Fraud and Abuse Act (CFAA)
Count 12: Attempted Monopolization (Sherman Act)
Count 13: Illegal Tying (Sherman Act)
Count 14: Illegal Tying (Cartwright Act)
Count 15: Lanham Act Unfair Competition
Count 16: Lanham Act False Advertising

Note: In the amended complaint, Count 16 is newly numbered; the previous Count 16 (Trademark Misuse) was dismissed without leave to amend.
How Second Amended Complaint Fixes Issues
The refiled complaint adds further allegations and examples to address the shortcomings identified by the judge in the previous ruling. One major change is the inclusion of clearer market definitions and more detailed allegations of monopoly power.
Clearer Market Definition
The September 2025 order found that WP Engine’s earlier complaint did not adequately define the relevant markets, and the judge gave WP Engine an opportunity to amend. The amended complaint dedicates about 27 pages to defining and describing multiple relevant markets.
WP Engine’s filing now identifies four markets:

Web Content Management Systems (CMS) Market: Encompassing both open-source and proprietary CMS platforms for website creation and management, with alleged monopoly power concentrated in the WordPress ecosystem.
WordPress Web Hosting Services Market: Consisting of hosting providers that specialize in WordPress websites, where Automattic is alleged to influence competition through its control of WordPress.org and trademark enforcement.
WordPress Plugin Distribution Market: Focused on the distribution of plugins through the WordPress.org repository, which WP Engine alleges Automattic controls as an essential and exclusive channel for visibility and access.
WordPress Custom Field Plugin Market: A narrower segment centered on Advanced Custom Fields (ACF) and similar plugins that provide custom field functionality, where WP Engine claims Automattic’s actions directly suppressed competition.

By defining these markets in greater detail over 27 pages, WP Engine addresses the court’s earlier finding that its market definitions were inadequately supported and insufficiently specific.
New Allegations Of Monopoly Power
The September 2025 court order found that WP Engine had not plausibly alleged Automattic’s monopoly power or exclusionary conduct, and allowed WP Engine to amend its complaint.
The amended filing adds detailed assertions intended to show Automattic’s dominance:

Automattic allegedly controls access to the official WordPress plugin and theme repositories, which are essential for visibility and functionality within the WordPress ecosystem.
Matt Mullenweg’s dual roles as Automattic’s CEO and his control over WordPress.org’s operations are alleged to enable coordinated market exclusion.
The complaint cites WordPress’s scale, powering more than 40 percent of global websites, and argues that Automattic exercises significant influence over this ecosystem through its control of WordPress.org and related trademarks.

These new assertions are meant to show that Automattic’s influence over WordPress.org translates into measurable market power, addressing the court’s finding that WP Engine had not yet made that connection.
Expanded Exclusionary Conduct Examples
The court found that WP Engine framed Automattic’s control of WordPress.org and the WordPress trademarks too vaguely to plausibly show exclusionary conduct or resulting antitrust injury.
The amended complaint addresses this by detailing how Automattic and Matt Mullenweg allegedly used threats and actions involving WordPress.org access and distribution to:

Block or restrict WP Engine’s access to WordPress.org resources and community channels.
Impose conditions on access to WordPress trademarks and resources through alleged threats and leverage.
Pressure plugin developers and partners not to collaborate or integrate with WP Engine’s products.
Establish an alleged de facto tying arrangement, linking participation in the WordPress.org ecosystem to compliance with Automattic’s control over governance and distribution.

Together, these examples illustrate how WP Engine is attempting to turn previously vague claims of control into specific allegations of exclusionary conduct.
Abundance Of Evidence
Mullenweg sounded upbeat in his response to the September 2025 ruling:
“Just got word that the court dismissed several of WP Engine and Silver Lake’s most serious claims — antitrust, monopolization, and extortion have been knocked out!”
But WP Engine’s Second Amended Complaint makes it clear that those “serious claims” were dismissed with leave to amend, have since been refiled, and are not yet knocked out.
The amended complaint is 175 pages long, perhaps reflecting the comprehensive scope necessary to address the issues the court identified in the September 2025 order. None of this means WP Engine is winning; it simply means the ball is back in play. That outcome directly contradicts Mullenweg’s earlier claim that the antitrust, monopolization, and extortion counts had been “knocked out.”
Featured Image by Shutterstock/Nithid

Content Marketing

You’re Writing a Book. Now What?

Having decided to add “author” to your résumé, your first task is setting the book up for success. Knowing the subject, audience, and goal is only the starting point. Consider how you’ll prioritize time, quality, speed, and budget. Assess your strengths and skills, and where you might need help.
Then envision the next steps.
This article is the second of my two-part series on publishing a book to benefit your company. Part one, “Can Writing a Book Grow Your Business?,” appeared last month.
Publishing Paths
The three main publishing paths are do-it-yourself, traditional, and hybrid. Each has pros and cons.

Self-publishing. If speed is important and budget is tight, DIY publishing in digital formats is the clear choice. Moreover, selling direct means you’ll know the buyers, which is unlikely through a publisher, distributor, or third-party website.

Traditional. If the goal is significant print sales, you’ll need an agent and a traditional publisher, though smaller publishers and university presses may accept un-agented book proposals.

Hybrid. Generally, with a hybrid publisher, the author pays some or all of the publishing expenses upfront (e.g., editorial, design, marketing) and, in turn, receives a larger share of book sales than with a standard royalty.

It’s unlikely your efforts alone — as a side hustle while running a business — will result in the best possible outcome, regardless of your expertise or writing skills. Casual writers such as your nephew the English major can help in the early stages. But like doctors, plumbers, mechanics, web designers, and digital marketers, editorial pros have much to offer.
Yes, AI tools are terrific aids for research, refining ideas, and organizing notes, but they lack the context, nuance, and judgment of experienced and connected humans.

Roles
Luckily, there are plenty of expert humans! Here are typical book development roles:

Researchers and fact-checkers can find information such as case studies, historical trends, and economic data, as well as verify references and quotations.

Writing coaches and groups can encourage and motivate, and provide useful, ongoing feedback.

Ghostwriters take on most of the composition, working closely to capture your voice, hone ideas, and organize the presentation. Partnering with a public co-author is another way to share the heavy lifting (and profits, if any).

Developmental editors and coaches help shape a book’s structure and flow, refine repetitive or unclear sections, and build on your strengths as a writer.

Copy editors and proofreaders check for errors and suggest corrections. A good copy editor will detect repetition or confusion and recommend alternatives, as well as fix grammar, spelling, and punctuation. Proofreaders focus on remaining errors as the final step before printing.

You, as the author, have the final say over the manuscript when working with any of these editorial professionals, and you are ultimately responsible for the book’s content. You may not require a team of cover designers, illustrators, indexers, agents, publishers, publicists, and audiobook narrators, but one or more will almost certainly improve the finished product.
Freelance marketplaces such as Upwork and Reedsy include editorial experts, as do professional membership organizations. The Chartered Institute of Editing and Proofreading, the Association of Ghostwriters, ACES, the Editorial Freelancers Association, and Editors Canada have directories searchable by service, skills, location, experience, subject, and more. The sites also provide how-to guidance on assessing needs and qualifications. The EFA (I’m a member) offers tips on hiring an editor, as well as descriptions and costs of the various editorial services.
Other helpful resources include publishing veteran Jane Friedman, the Alliance of Independent Authors, and the Authors Guild. Writer Beware alerts authors to potential scams.

Generative AI

Microsoft Explains How To Optimize Content For AI Search Visibility via @sejournal, @MattGSouthern

Microsoft has shared guidance on structuring content to increase its likelihood of being selected for AI-generated answers across Bing-powered surfaces.
Much of the advice reiterates established SEO and UX practices such as clear titles and headings, structured layout, and appropriate schema.
The new emphasis is on how content is selected for answers. Microsoft stresses there is “no secret sauce” that guarantees selection, but says structure, clarity, and “snippability” improve eligibility.
As Microsoft puts it:
“In traditional search, visibility meant appearing in a ranked list of links. In AI search, ranking still happens, but it’s less about ordering entire pages and more about which pieces of content earn a place in the final answer.”
Key Differences In AI Search
AI assistants break down pages into manageable parts, carefully assessing each for authority and relevance, then craft responses by blending information from multiple sources.
Microsoft says fundamentals such as crawlability, metadata, internal links, and backlinks still matter, but they are the starting point. Selection increasingly depends on how well-structured and clear each section is.
Best Practices Microsoft Recommends
To help improve the chances of AI selecting your content, Microsoft recommends these best practices:

Align the title, meta description, and H1 to clearly communicate the page purpose (a rough self-check sketch follows this list).
Use descriptive H2/H3 headings that each cover one idea per section.
Write self-contained Q&A blocks and concise paragraphs that can be quoted on their own.
Use short lists, steps, and comparison tables when they improve clarity (without overusing them).
Add JSON-LD schema that matches the page type.
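For the first item above, a quick self-audit doesn’t require special tooling. The sketch below (Python, using the requests and beautifulsoup4 packages, with a placeholder URL) fetches a page and reports which key terms its title, meta description, and first H1 have in common; it is a rough heuristic, not Microsoft’s selection logic.

```python
# Rough alignment check: do the <title>, meta description, and first <h1> share key terms?
# Assumes `requests` and `beautifulsoup4` are installed; the URL is a placeholder.
import re
import requests
from bs4 import BeautifulSoup

def key_terms(text: str) -> set:
    """Lowercase words of four or more letters, as a crude proxy for a string's key terms."""
    return set(re.findall(r"[a-z]{4,}", text.lower()))

def check_alignment(url: str) -> None:
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    title = soup.title.get_text(strip=True) if soup.title else ""
    meta = soup.find("meta", attrs={"name": "description"})
    description = meta.get("content", "") if meta else ""
    h1 = soup.find("h1")
    h1_text = h1.get_text(strip=True) if h1 else ""

    shared = key_terms(title) & key_terms(description) & key_terms(h1_text)
    print(f"Title: {title!r}")
    print(f"H1: {h1_text!r}")
    print(f"Shared key terms: {sorted(shared) if shared else 'none'}")

check_alignment("https://example.com/")  # placeholder URL
```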

What To Avoid
Microsoft recommends avoiding these practices to improve the chances of your content appearing in AI search results:

Writing long walls of text that blur ideas together.
Hiding key content in tabs, accordions, or other elements that may not render.
Relying on PDFs for core information.
Putting important information only in images without alt text or HTML alternatives.
Making vague claims without providing specific details.
Overusing decorative symbols or long punctuation strings; keep punctuation simple.

Why This Matters
The key takeaway is that structure helps selection. When your titles, headings, and schema are aligned, Copilot and other Bing-powered tools can extract a complete idea from your page.
This connects traditional SEO principles to how AI assistants generate responses. For marketers, it’s more of an operational checklist than a new strategy.
Looking Ahead
Microsoft acknowledges there’s no guaranteed way to ensure inclusion in AI responses, but suggests that these practices can make content more accessible for its AI systems.

Featured Image: gguy/Shutterstock

News

Google AdSense Replaces Ad Networks With Authorized Buyers via @sejournal, @MattGSouthern

Google is updating its demand source management by replacing the Ad Networks blocking control with a new Authorized Buyers control in AdSense.
This change affects how you control which demand sources can bid on your inventory. The transition begins on November 6. Existing blocks will remain in place, and new authorized buyers will be enabled by default.
What’s Changing
Google is discontinuing the Ad Networks blocking control within Brand Safety and introducing a new Authorized Buyers control.
As part of this update, the “Automatically allow new Google-certified ad networks” option is being eliminated. Instead, new authorized buyers will be permitted by default.
The Authorized Buyers list excludes inactive ad networks, test ad networks, and Display & Video 360 (DV360) networks.
Google states that the new page allows you to permit or block authorized buyers and offers improved visibility into parent–child relationships among buyers. However, DV360 accounts are not managed within the new control.
Timeline & Transition
Before launching, you can preview the view-only Authorized Buyers page in AdSense by navigating to Brand Safety → Content → Blocking controls → Authorized Buyers.
These controls will be active after November 6. Any modifications made to Ad Networks prior to launch will be saved and reflected in the Authorized Buyers section.
Once the change is live, control access by navigating to Brand Safety → Content → Blocking controls → Authorized Buyers. Here, you can permit or restrict specific authorized buyers and utilize search or filters to locate particular entries.
Google’s detailed “Allow and block authorized buyers” guide illustrates this process.
Ad Review Center & DV360
You’ll no longer be managing authorized buyers through the Ad Review Center. You can still allow or block Google ad accounts in the Advertiser section, including DV360 accounts, which stay outside the new Authorized Buyers system.
Looking Ahead
This update changes the default setting to permit new buyers, so publishers with tighter configurations may need a regular review process to catch and block unwanted buyers.
Preview the interface now to familiarize your team with control locations, then schedule a post-launch review to verify your existing blocks and any new entries. Maintain DV360 workflows in the Ad Review Center, and utilize the parent–child view to see how related buyers influence bidding and revenue.
