Archive

CMS Software

Is Your Website Ready for AI Search? A Practical Audit for CMOs via @sejournal, @lorenbaker

AI-driven discovery is reshaping how brands earn visibility and conversions. Most CMS stacks weren’t built for this shift. Is your CMS structured for AI-powered search and answer engines? Can your content be interpreted, reused, and surfaced by machine-driven systems? Is your current tech stack quietly limiting performance in search?
Discoverability depends on structured data, flexible architecture, and systems that adapt quickly.
Watch the on-demand webinar to see how to evaluate whether your Drupal site, or other CMS, is built for what’s next.
How To Audit Your CMS for AI-Driven Search & Conversion Performance
In this practical, marketer-focused on-demand session, we’ll walk through how CMOs and marketing leaders can assess whether their current CMS and digital stack support modern search behavior or restrict it.
You’ll leave with a clear understanding of what AI readiness means at the platform level, and how to identify risk areas before they impact growth.
You’ll Learn:

Where enterprise Drupal implementations most often fall short in AI-driven discovery
How AI search changes SEO strategy, content modeling, and conversion optimization
What defines an AI-ready CMS stack, including structured content, composable architecture, and open-source flexibility

Check out the slides below or watch the full presentation, on demand!

Cross-border Selling

How Foreign Brands Test the U.S. Market

You have a product. You’ve done the research. The U.S. market feels like the obvious next step, but you haven’t launched there yet. You’ve wondered, “What if it doesn’t work?”
That voice is right to ask. Most products fail not because the item is bad, but because of inadequate preparation and misjudged demand.
I’m the founder of OT Growth Labs, a Los Angeles-based agency helping international brands launch and scale in the U.S. Since 2008, I’ve served in executive ecommerce marketing roles worldwide for leading consumer companies.
The U.S. is the world’s largest consumer market. But for brands coming from Europe, Asia, or Latin America, it’s often where products die quietly. Consumers are different, compliance is different, and your domestic playbook won’t travel.
So before spending big money, test the demand in two ways:

Virtual testing measures interest before inventory exists.
Physical testing sells a real product in small quantities.

Virtual Testing
Virtual product demonstrations are low-cost, fast to launch, and require no inventory.
Virtual testing gauges whether consumers want your product — before you make it. It’s ideal for early-stage brands, limited budgets, or high-risk products. It won’t replace physical sales, but it’s a smart first filter.
Start with a landing page.
Explain your product thoroughly and the problem it solves. Disclose packaging, format, ingredients, claims, and label design. Give visitors an action step, such as joining a waitlist, requesting early access, or opting in to receive launch notifications.
Drive traffic through ads, social media, and influencers. It’s an encouraging signal if visitors sign up.
Brands with a platform or app already generating traffic can avoid a separate landing page by upselling to existing users. It saves time and money.
Don’t test a single concept. Run two or three variations and compare results. In my experience, the version that wins in the U.S. is rarely the one that worked at home. U.S. consumers respond to numbers and to bold, specific language: “clinically tested,” “formulated by veterinarians,” “organic.” They want proof up front.
Virtual testing:

Pros. Low cost, fast to launch, no inventory.
Cons. Measures interest only, not product or purchase intent.

Physical Testing
Selling a physical test product offers real data, reviews, and market validation.
The most reliable way to validate demand is to sell a product. Ship a small batch to the U.S. from your current manufacturer, or produce in the U.S. with a minimum run.
The latter option, manufacturing in the U.S., takes longer and costs more, but in my experience it’s often worth it. “Made in the U.S.A.” on the label is frequently a strong selling point.
Physical testing answers questions that a landing page cannot: Does the product perform? Does the packaging hold up? Is the formula good? Is the price right? What do customers say?
Sales will tell you more than months of research, as will reviews, which are critical. An overwhelming percentage of U.S. consumers rely on reviews before buying.
Brands in adjacent categories often use physical testing as a learning loop. They launch a small batch, collect reviews, improve the formula or positioning, and then scale. The final version wins because of findings from the tests.
Physical testing:

Pros: Real sales data, reviews, market validation.
Cons: Expensive and slow. A small batch can take a year from start to shelf. It requires compliance prep, label and design creation, and formula testing. Finding a manufacturer willing to run small batches is a challenge.

Test, Then Scale
Entering the U.S. market is getting harder. Tariffs are rising, and regulations are tightening. Imports valued at less than $800 are no longer exempt from duties — a direct hit on international companies shipping small quantities.
Foreign brands succeed in the U.S. through testing and information-gathering, not just superior products.
Start small; the market will tell you the rest.

News

Google Begins Rolling Out The March 2026 Spam Update via @sejournal, @MattGSouthern

Google started rolling out the March 2026 spam update today, according to the Google Search Status Dashboard. The update is global and applies to all languages, with a rollout that may take a few days.
What’s New
The Search Status Dashboard listed the update as an incident affecting ranking at 12:00 PM PDT on March 24, with the release note posted at 12:18 PM PDT.
Google’s description reads:
“Released the March 2026 spam update, which applies globally and to all languages. The rollout may take a few days to complete.”
Google hasn’t published a blog post or announced new spam policies with this rollout. So far, it seems to be a standard spam update, not a broader policy change like the March 2024 update, which added categories such as content abuse, expired domain abuse, and site reputation abuse.
How Spam Updates Work
Google describes spam updates as improvements to spam-prevention systems like SpamBrain, targeting sites violating spam policies, which can lead to lower rankings or removal from search results.
Spam updates differ from core updates, which re-assess content quality. Spam updates enforce policies against violations like cloaking, link spam, and content abuse.
Sites affected by a spam update can recover, but recovery takes time. Google states improvements may only appear once automated systems detect compliance over months.
Context
This is Google’s first spam update since the August 2025 spam update, which ran from August 26 to September 22 and took nearly 27 days to complete. That update was characterized by SISTRIX as penalty-only, with affected spammy domains losing visibility but no broad ranking changes.
Google’s estimated timeline of “a few days” for the March 2026 update suggests a shorter rollout than recent spam updates, though timelines can stretch. The December 2024 spam update completed in seven days. The August 2025 update took nearly four weeks.
The March 2026 spam update comes about three weeks after the February Discover update finished rolling out.
Why This Matters
Ranking changes during spam update rollouts can happen quickly. Monitoring Search Console data over the next few days will help distinguish spam-related drops from normal fluctuation.
Google hasn’t announced new spam policy categories with this update, so the existing spam policies remain the relevant framework for evaluating any impact.
Looking Ahead
Google will update the Search Status Dashboard when the rollout is complete. Search Engine Journal will report on the completion and any observed effects.

Featured Image: Hurunaga Yuuka/Shutterstock

News

Google Adds AI & Bot Labels To Forum, Q&A Structured Data via @sejournal, @MattGSouthern

Google updated its Discussion Forum and Q&A Page structured data documentation, adding several new supported properties to both markup types. The most notable addition is digitalSourceType, a property that lets forum and Q&A sites indicate when content was created by a trained AI model or another automated system.
Content Source Labeling Comes To Forum Markup
The new digitalSourceType property uses IPTC digital source enumeration values to indicate how content was created. Google supports two values:

TrainedAlgorithmicMediaDigitalSource for content created by a trained model, such as an LLM.
AlgorithmicMediaDigitalSource for content created by a simpler algorithmic process, such as an automatic reply bot.

The property is listed as recommended, not required, for both the DiscussionForumPosting and Comment types in the Discussion Forum docs, and for Question, Answer, and Comment types in the Q&A Page docs.
Google already uses similar IPTC source type values in its image metadata documentation to identify how images were created. The update extends that concept to text-based forum and Q&A content.
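As a rough sketch, a bot-generated forum comment might carry the new property like this. This is an illustrative example, not Google’s own: the values and names are invented, and the digitalSourceType string is the enumeration value described above, so check Google’s documentation for the exact expected format.

```json
{
  "@context": "https://schema.org",
  "@type": "Comment",
  "text": "Thanks for your report. A moderator will review it shortly.",
  "author": {
    "@type": "Organization",
    "name": "ExampleForum Auto-Reply Bot"
  },
  "datePublished": "2026-03-24T09:15:00-07:00",
  "digitalSourceType": "AlgorithmicMediaDigitalSource"
}
```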
New Comment Count Property
Google added commentCount as a recommended property across both documentation pages. It lets sites declare the total number of comments on a post or answer, even when not all comments appear in the markup.
The Q&A Page documentation includes a new formula: answerCount + commentCount should equal the total number of replies of any type. This gives Google a clearer picture of thread activity on pages where comments are paginated or truncated.
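The formula can be illustrated with a hypothetical Q&A page that has three answers and five comments, for eight total replies. All values here are invented for illustration:

```json
{
  "@context": "https://schema.org",
  "@type": "QAPage",
  "mainEntity": {
    "@type": "Question",
    "name": "Does answerCount include comments?",
    "answerCount": 3,
    "commentCount": 5,
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "No. Answers and comments are counted separately; their sum is the total number of replies.",
      "upvoteCount": 12
    }
  }
}
```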
Expanded Shared Content Support
The Discussion Forum documentation expanded its sharedContent property. Previously, sharedContent accepted a generic CreativeWork type. The updated docs now explicitly list four supported subtypes:

WebPage for shared links.
ImageObject for posts where an image is the primary content.
VideoObject for posts where a video is the primary content.
DiscussionForumPosting or Comment for quoted or reposted content from other threads.

The addition of DiscussionForumPosting and Comment as accepted types is new. Google’s updated documentation includes a code example showing how to mark up a referenced comment with its URL, author, date, and text.
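In the spirit of that documented example, a post quoting a comment from another thread might look like the following hypothetical sketch. This is not Google’s actual example; all URLs, names, and values are invented:

```json
{
  "@context": "https://schema.org",
  "@type": "DiscussionForumPosting",
  "headline": "Re: Best settings for low-light photos",
  "author": { "@type": "Person", "name": "Jane Doe" },
  "datePublished": "2026-03-24",
  "text": "Quoting this because it answers the question above.",
  "sharedContent": {
    "@type": "Comment",
    "url": "https://example.com/threads/low-light#comment-42",
    "author": { "@type": "Person", "name": "Alex Lee" },
    "datePublished": "2026-03-20",
    "text": "Shoot at ISO 800 and use a tripod."
  }
}
```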
The image property description was also updated across both docs with a note about link preview images. Google now recommends placing link preview images inside the sharedContent field’s attached WebPage rather than in the post’s image field.
Why This Matters
For sites that publish a mix of human and machine-generated content, the digitalSourceType addition provides a structured way to communicate that to Google. The new properties are optional, and no existing implementations will break.
Google has not said how it will use the digitalSourceType data in its ranking or display systems. The documentation only describes it as a way to indicate content origin.
Looking Ahead
The update does not include changes to required properties, so existing forum and Q&A structured data implementations remain valid. Sites that want to adopt the new properties can add them incrementally.

Generative AI

The Agency Playbook for Surviving the Agentic AI Era

Search is moving from queries typed into a box to conversations held with systems that understand intent, context, and outcomes. People no longer look for pages. They look for solutions, guidance, and confidence that they are making the right choice. Agentic AI pushes this shift further. Instead of waiting for instructions, agents act on goals. They discover information, compare options, trigger workflows, and adjust based on feedback. For digital leaders, this means visibility is no longer only a ranking problem. It becomes a problem of influence inside AI systems.
SEO now touches product, data, knowledge management, and experience design. This playbook explains how to prepare for that shift, build capability, and lead change.
Search Is Becoming AI-Mediated
AI systems have become the layer between users and the web. They read content on behalf of users, make selections instead of requiring users to browse, and influence decisions in ways that search pages once did.
This shift changes how people interact with information. Users now ask broader, more complex questions, expecting systems to understand nuance and intent. The traditional act of navigating through links is giving way to direct answers and immediate actions.
Content can no longer be designed solely for human readers. It must also be structured in ways that AI systems can interpret accurately and confidently. In this environment, trust and evidence carry more weight than keywords or search optimization tactics.
Winning in search today means becoming part of the models that shape decisions, not just appearing in the results.
What Agentic AI Means For SEO And Digital
Agentic AI is changing how people discover and choose brands. Discovery now depends on how well models learn from your content, the paths users take on your site, and the external signals that establish credibility. These systems decide when your brand is relevant, based on what they understand and trust.
During evaluation, AI compares your product, price, quality, reviews, and suitability for a given user against other options. It looks for proof, tests claims, and weighs real signals over marketing language.
When supporting decisions, AI doesn’t just provide information. It actively guides users toward what it considers the best fit. Your brand might be brought forward or quietly passed over, depending on how well it matches user needs.
In this landscape, SEO is no longer just about publishing content. It’s about shaping how AI systems perceive your brand and when they choose to recommend it.
New Operating Model For SEO
The future of search brings marketing, product, and data teams into a shared effort. Success depends on how well these areas work together to shape how AI systems perceive and present your brand.
The key is building structured knowledge that AI can easily process and apply. Instead of designing for clicks and views, focus on creating journeys that help users complete tasks through the systems guiding them. It’s also critical to train these systems with the right brand messages, supported by clear evidence and consistent proof points.
Ongoing visibility requires monitoring how models reference your brand, how they rank it, and how they reason about its relevance. This means continuously refining the signals you send, improving your content, updating product data, and reinforcing trust in every interaction.
The goal is clear, and it hasn’t really changed from the traditional technical goals of SEO: make it easy for AI agents to understand, trust, and ultimately recommend your brand.
Maturity Model

Level 0: Manual SEO. Basic optimization and manual workflows. Key indicators: keyword focus, isolated content execution, minimal data alignment.

Level 1: Assisted SEO. AI supports research and content creation. Key indicators: AI-assisted briefs, content suggestions, faster execution, manual oversight.

Level 2: Integrated AI workflows. Core SEO tasks automated and structured. Key indicators: content pipelines, structured data adoption, automated QA, analytics integration.

Level 3: Agent-driven operations. Agents monitor, trigger, and refine SEO. Key indicators: automated reporting, performance triggers, self-adjusting content modules.

Level 4: Autonomous acquisition systems. Self-improving systems tied to revenue. Key indicators: continuous testing, adaptive journeys, revenue-linked triggers, real-time optimization.

The goal is not automation alone. It is intelligence and improvement at scale.
Technical And Data Foundations
To prepare for agentic SEO, organizations need more than traditional content systems built for publishing. They need strong foundations that help AI systems understand, evaluate, and act with confidence.
This starts with clarity, which means crafting messaging that is consistent, accurate, and easy for machines to interpret. Structure is also essential, requiring content, data, and signals to be organized in ways that align with how AI systems process and reason through information.
Key components of this are:

Structured data that turns content into machine‑readable knowledge.
Knowledge graphs that explain relationships between products, categories, and needs.
Taxonomy and naming standards to ensure consistency across pages, feeds, and assets.
APIs and automation for publishing and optimization, so agents can trigger updates.
Clean product and service data, including specifications, pricing, and availability.
Evaluation systems to audit AI outputs and detect hallucinations or misalignment.
Identity and trust signals, including reviews, authority, certifications, and product proof.
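As a small, hypothetical illustration of how several of these components come together, clean product data and trust signals can be expressed as JSON-LD structured data. All names and values below are invented:

```json
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Trail Shoe",
  "sku": "TS-100",
  "description": "Lightweight trail running shoe with a waterproof upper.",
  "offers": {
    "@type": "Offer",
    "price": "129.00",
    "priceCurrency": "USD",
    "availability": "https://schema.org/InStock"
  },
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.6",
    "reviewCount": "212"
  }
}
```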

This calls for a shift from simply building web pages to creating a well-organized information architecture. The goal is to structure information in a way that AI systems can easily navigate, understand, and apply.
In practice, this means bringing together product data, content metadata, and customer intent into a single, connected system. It involves defining the key entities your business represents, such as products or services, and mapping how they relate to what users are trying to accomplish. Content feeds and structured data should reflect the actual state of the business rather than just marketing language.
Equally important is creating feedback loops that show how AI systems interpret and reference your brand. These insights help you see where your content is being used, how it is being understood, and whether it is guiding users toward your brand. With this information, you can keep refining what you share to improve how systems recognize and recommend you.
Instead of asking, “How do we rank for this query?” leaders will ask, “How do systems understand us, trust us, and act on our information?”
KPI And Measurement Model
Traditional key performance indicators still hold value, but they no longer capture the full picture. Rankings and session metrics continue to provide insight, yet they now exist within a broader framework shaped by how AI systems retrieve, interpret, and act on information. Ranking reports will sit alongside AI retrieval dashboards, and session counts will be evaluated alongside metrics focused on task completion and user outcomes.
In my opinion, you should also monitor:

Share of voice in AI assistants.
Retrieval and inclusion rate in AI answers.
Brand alignment and brand safety in model outputs.
Presence in multi‑step reasoning chains.
Task completion and conversion paths from AI systems.
Cost per automated workflow and cost per agent‑driven action.
Model education, data freshness, and trust scores.

As measurement evolves, the focus moves from tracking visitor numbers to understanding how AI systems shape decisions. To navigate this shift, leaders should design metrics that reflect influence within these systems. Visibility will measure whether the brand is appearing in AI-generated responses and assistant-led interactions.
Accuracy will assess whether the brand is being represented correctly and safely across touchpoints. Trust will reflect whether AI systems choose your content and signals over others when making recommendations. Action will capture whether AI-driven experiences result in tangible outcomes like leads, bookings, or purchases. Efficiency will show whether AI agents are reducing manual effort, improving speed, and delivering better user experiences.
Success will no longer be defined by visibility alone but by a brand’s ability to perform across discovery, decision support, and operational impact.
Talent And Capability Model
Agentic SEO is not a standalone skill set; it draws from a mix of disciplines that span marketing, data, and product. Success in this space requires a collaborative approach, where expertise is integrated rather than siloed.
Future-facing teams bring together SEO and content strategy, data and automation engineering, product and user experience thinking, as well as governance and prompt development. Legal and compliance awareness also play a critical role, ensuring that outputs remain responsible and aligned with brand and regulatory standards.
These teams operate in cross-functional pods, organized around delivering customer outcomes rather than managing individual channels. This structure allows them to move faster, adapt to change, and create more cohesive experiences across AI-driven platforms.
Modern SEO teams include several key roles. The SEO strategist focuses on how AI systems search, retrieve, and rank content. The data engineer manages the integrity of structured content, metadata, and live data feeds. The automation specialist builds the workflows and agents that connect information to user actions. The AI evaluator audits model outputs to ensure accuracy, brand alignment, and safety. The product partner bridges SEO efforts with real user journeys, making sure that discovery leads to meaningful interaction and conversion.
As this approach matures, teams will spend less time producing content manually and more time designing the systems, signals, and experiences that guide AI behavior and improve how users discover and engage with the brand.
The First 90 Days
Days 1 To 30: Foundation And Alignment

Audit content, data, and search performance.
Map where AI already touches customer journeys.
Identify gaps in structure, trust signals, and data quality.
Set goals for AI visibility and agent‑driven workflows.

Days 31 To 60: Build And Test Pilots

Launch structured data and knowledge base improvements.
Test AI‑assisted content and QA pipelines.
Introduce early agent monitoring for SEO signals.
Create evaluation benchmarks for AI accuracy and brand safety.

Days 61 To 90: Scale And Govern

Deploy automation in high‑impact workflows.
Formalize model governance and feedback loops.
Train cross‑functional teams on AI‑ready processes.
Build dashboards for AI visibility, trust, and conversion.

Future Outlook
Search will not disappear. It will merge into tasks, journeys, and decisions across devices and interfaces. Brands that train AI systems, structure knowledge, and build agent‑ready operations will lead.
The winners will not be those who automate content. They will be those who help users and systems make better decisions at speed and scale.

Featured Image: Collagery/Shutterstock
