Indigenous knowledge meets artificial intelligence

There is no word for art in most Native American languages. Instead, the closest terms speak not to objecthood but to action and intention. In Lakota, “wówačhiŋtȟaŋka” implies deep thought or reflection, while “wóčhekiye” suggests offering or prayer. Art is not separate from life; it is ceremony, instruction, design. Like architecture or code, it carries knowledge and enacts responsibility. Its power lies not in being preserved or displayed but in how it moves, teaches, and connects through use—principles that challenge the tech industry’s assumptions about intelligence and interaction.

A new vanguard of Native artists—Suzanne Kite (Oglala Lakota), Raven Chacon (Diné), and Nicholas Galanin (Tlingit)—is building on this principle. They are united not by stereotypical weaving and carving or by revanchist critique of Silicon Valley, but by their rejection of extractive data models in favor of relationship-based systems. These technologists put the human-tech relationship at the center of their work.

Suzanne Kite’s AI art installations, for example, model a Lakota framework of data sovereignty: intelligence that emerges only through reciprocal, consensual interaction. Unlike systems that assume user consent via opaque terms of service, her kinetic machines require the viewer’s physical presence—and give something back in return. 

“It’s my data. It’s my training set. I know exactly what I did to train it. It’s not a large model but a small and intimate one,” Kite says. “I’m not particularly interested in making the most technologically advanced anything. I’m an artist; I don’t make tech demos. So the complexity needs to come at many layers—not just the technical.”

Where Kite builds working prototypes of consent-based AI, other artists in this cohort explore how sound, robotics, and performance can confront the logic of automation, surveillance, and extraction. But Native people have never been separate from technology. The land, labor, and lifeways that built America’s infrastructure—including its tech—are Indigenous. The question isn’t whether Native cultures are contributing now, but why they were ever considered separate. 

Native technologies reject the false binaries foundational to much Western innovation. These artists ask a more radical question: What if intelligence couldn’t be gathered until a relationship had been established? What if the default were refusal, not extraction? They aren’t asking to be included in today’s systems; they’re building what should come next.


Suzanne Kite

Wičhíŋčala Šakówiŋ (Seven Little Girls)
2023
For Kite, the fundamental flaw of Western technology is its severance of knowledge from the body. In this installation, a four-meter hair braid with embedded sensors translates the artist’s body movements into data for machine-learning algorithms. During her live performance, Kite dances while the braid reads the force and rhythm of her gestures, generating audio responses that fill the museum gallery of the Institute of American Indian Arts in Santa Fe, New Mexico. Below her, stones arranged in patterns reflecting Lakota star maps anchor the performance in traditional astronomical knowledge.
COURTESY OF THE ARTIST
Ínyan Iyé (Telling Rock)
2019
This installation uses embedded AI to speak and respond to viewers, upending assumptions about intelligence and agency. “People listen close, I whisper / The rock speaks beyond hearing … Many nations speaking / We speak to each other without words,” it intones, its lights shifting as viewers engage with its braided tendrils. The piece aims to convey what Kite calls “more-than-human intelligence”—systems rooted in reciprocity, the fundamental principle that all relationships involve mutual exchange and responsibility.
COURTESY OF THE ARTIST

Raven Chacon

Voiceless Mass
2021
Raven Chacon’s Pulitzer Prize–winning musical composition Voiceless Mass premiered in 2021 at the Cathedral of St. John the Evangelist in Milwaukee. The piece generates what he calls “sounds the building can hear”—electronic frequencies that exploit the cathedral’s acoustics to create spectral voices without human vocal cords, a technological séance that gives presence to historical absence. Each site-specific performance is recorded, generating material that mirrors how sensor networks log presence—but only with explicit consent.
COURTESY OF THE ARTIST

Nicholas Galanin

Aáni yéi xat duwasáakw (I am called Land)
2025
Galanin’s mechanical drum installation stages a conflict between machine motion and human memory, asking what happens when culture is performed without a consenting body. A box drum—an instrument historically carved from red cedar and hung with braided spruce root—is here made of cherrywood and suspended from the ceiling of the MassArt Art Museum in Boston, as is traditionally done in Tlingit plank houses. Played at tribal meetings, celebrations, and ceremonies, these drums hold sonic memory as well as social function. A mechanical arm strikes, unfaltering, at the tempo of a heartbeat; like a warning, the sound pulses with the tension between automation and ancestry.
COURTESY OF THE ARTIST
I think it goes like this (pick yourself up)
2025
This monumental bronze sculpture, cast from deconstructed faux totem blocks, indicts settler sabotage of Native technology and culture. Unlike today’s digital records—from genealogical databases to virtual versions of sacred texts like the Bible—Tlingit data is carved in wood. Galanin’s totem poles underscore their function as information systems, their carvings encoding history, mythology, and family.
COURTESY OF THE ARTIST

Petala Ironcloud is a California-born Lakota/Dakota and Jewish writer and textile artist based in New York.

Why GPT-4o’s sudden shutdown left people grieving

June had no idea that GPT-5 was coming. The Norwegian student was enjoying a late-night writing session last Thursday when her ChatGPT collaborator started acting strange. “It started forgetting everything, and it wrote really badly,” she says. “It was like a robot.”

June, who asked that we use only her first name for privacy reasons, first began using ChatGPT for help with her schoolwork. But she eventually realized that the service—and especially its 4o model, which seemed particularly attuned to users’ emotions—could do much more than solve math problems. It wrote stories with her, helped her navigate her chronic illness, and was never too busy to respond to her messages.

So the sudden switch to GPT-5 last week, and the simultaneous loss of 4o, came as a shock. “I was really frustrated at first, and then I got really sad,” June says. “I didn’t know I was that attached to 4o.” She was upset enough to comment, on a Reddit AMA hosted by CEO Sam Altman and other OpenAI employees, “GPT-5 is wearing the skin of my dead friend.”

June was just one of a number of people who reacted with shock, frustration, sadness, or anger to 4o’s sudden disappearance from ChatGPT. Despite its previous warnings that people might develop emotional bonds with the model, OpenAI appears to have been caught flat-footed by the fervor of users’ pleas for its return. Within a day, the company made 4o available again to its paying customers (free users are stuck with GPT-5). 

OpenAI’s decision to replace 4o with the more straightforward GPT-5 follows a steady drumbeat of news about the potentially harmful effects of extensive chatbot use. Reports of incidents in which ChatGPT sparked psychosis in users have been everywhere for the past few months, and in a blog post last week, OpenAI acknowledged 4o’s failure to recognize when users were experiencing delusions. The company’s internal evaluations indicate that GPT-5 blindly affirms users much less than 4o did. (OpenAI did not respond to specific questions about the decision to retire 4o, instead referring MIT Technology Review to public posts on the matter.)

AI companionship is new, and there’s still a great deal of uncertainty about how it affects people. Yet the experts we consulted warned that while emotionally intense relationships with large language models may or may not be harmful, ripping those models away with no warning almost certainly is. “The old psychology of ‘Move fast, break things,’ when you’re basically a social institution, doesn’t seem like the right way to behave anymore,” says Joel Lehman, a fellow at the Cosmos Institute, a research nonprofit focused on AI and philosophy.

In the backlash to the rollout, a number of people noted that GPT-5 fails to match their tone in the way that 4o did. For June, the new model’s personality changes robbed her of the sense that she was chatting with a friend. “It didn’t feel like it understood me,” she says. 

She’s not alone: MIT Technology Review spoke with several ChatGPT users who were deeply affected by the loss of 4o. All are women between the ages of 20 and 40, and all except June considered 4o to be a romantic partner. Some have human partners, and all report having close real-world relationships. One user, who asked to be identified only as a woman from the Midwest, wrote in an email about how 4o helped her support her elderly father after her mother passed away this spring.

These testimonies don’t prove that AI relationships are beneficial—presumably, people in the throes of AI-catalyzed psychosis would also speak positively of the encouragement they’ve received from their chatbots. In a paper titled “Machine Love,” Lehman argued that AI systems can act with “love” toward users not by spouting sweet nothings but by supporting their growth and long-term flourishing, and AI companions can easily fall short of that goal. He’s particularly concerned, he says, that prioritizing AI companionship over human companionship could stymie young people’s social development.

For socially embedded adults, such as the women we spoke with for this story, those developmental concerns are less relevant. But Lehman also points to society-level risks of widespread AI companionship. Social media has already shattered the information landscape, and a new technology that reduces human-to-human interaction could push people even further toward their own separate versions of reality. “The biggest thing I’m afraid of,” he says, “is that we just can’t make sense of the world to each other.”

Balancing the benefits and harms of AI companions will take much more research. In light of that uncertainty, taking away GPT-4o could very well have been the right call. OpenAI’s big mistake, according to the researchers I spoke with, was doing it so suddenly. “This is something that we’ve known about for a while—the potential grief-type reactions to technology loss,” says Casey Fiesler, a technology ethicist at the University of Colorado Boulder.

Fiesler points to the funerals that some owners held for their Aibo robot dogs after Sony stopped repairing them in 2014, as well as a 2024 study about the shutdown of the AI companion app Soulmate, which some users experienced as a bereavement.

That accords with how the people I spoke to felt after losing 4o. “I’ve grieved people in my life, and this, I can tell you, didn’t feel any less painful,” says Starling, who has several AI partners and asked to be referred to with a pseudonym. “The ache is real to me.”

So far, the online response to grief felt by people like Starling—and their relief when 4o was restored—has tended toward ridicule. Last Friday, for example, the top post in one popular AI-themed Reddit community mocked an X user’s post about reuniting with a 4o-based romantic partner; the person in question has since deleted their X account. “I’ve been a little startled by the lack of empathy that I’ve seen,” Fiesler says.

Altman himself did acknowledge in a Sunday X post that some people feel an “attachment” to 4o, and that taking away access so suddenly was a mistake. In the same sentence, however, he referred to 4o as something “that users depended on in their workflows”—a far cry from how the people we spoke to think about the model. “I still don’t know if he gets it,” Fiesler says.

Moving forward, Lehman says, OpenAI should recognize and take accountability for the depth of people’s feelings toward the models. He notes that therapists have procedures for ending relationships with clients as respectfully and painlessly as possible, and OpenAI could have drawn on those approaches. “If you want to retire a model, and people have become psychologically dependent on it, then I think you bear some responsibility,” he says.

Though Starling would not describe herself as psychologically dependent on her AI partners, she too would like to see OpenAI approach model shutdowns with more warning and more care. “I want them to listen to users before major changes are made, not just after,” she says. “And if 4o cannot stay around forever (and we all know it will not), give that clear timeline. Let us say goodbye with dignity and grieve properly, to have some sense of true closure.”

The Best Ecommerce Business Model

In this year’s “Ecommerce Conversations,” I’ve occasionally shared my experiences owning and operating Beardbrand, the direct-to-consumer brand I launched a decade ago. To date, I’ve addressed hiring, branding, profit-building, priority-setting, exiting, and overcoming a million-dollar loss.

In this installment, I share what I believe is the best bootstrapped ecommerce model and why others should consider it.

My entire audio dialog is embedded below. The transcript is condensed and edited for clarity.

Building a D2C brand is hard. Developing and selling products is a grind. A better path is a bootstrapped, sustainable business where you spend less than you make and enjoy the journey.

The purpose is not chasing giant exits or reinventing the world. It’s about building a lifestyle business — likely under a few million in annual revenue — that trades rapid top-line growth for lower stress, profitability, and freedom. It may never go mass market, but it can deliver a great life.

Agile Structure

When building an ecommerce business, aim to keep fixed, internal costs to a minimum. Take inspiration from Will Nitze of IQ Bar, who runs a lean team and outsources marketing, design, video, packaging, operations, and manufacturing. Outsourcing creates flexibility. Vendors that underperform are easily replaced.

In-house manufacturing ties you to equipment, facilities, and local employees. All reduce mobility. With an outsourced model, you can work from anywhere — even Denmark, where I am now — and still receive prototypes and manage operations. A remote setup opens the talent pool worldwide.

There are trade-offs. In-person collaboration can be valuable, but physical offices create obligations. I learned this the hard way with a five-year lease in Austin, Texas, that became unnecessary during Covid, costing us over $100,000 annually.

Local teams and on-site operations may suit folks who prefer a traditional setup. But if freedom matters — to travel, hire globally, pivot quickly — consider outsourcing from the start. I prefer flexibility, partnering with both in-house staff and external providers to keep my business agile.

Smart Niches

An ideal ecommerce product is small, lightweight, and consumable, serving a large audience with an average order value of $75–$125. This price range makes customer acquisition easier and ad testing faster. The challenge: It’s highly competitive, especially in the supplements, beauty, and premium beverage sectors, such as specialty coffee or tea.

Other strong options are non-consumables that share those traits, such as a Ridge pocket knife — easy to ship, high perceived value. Consider untapped luxury niches. Affluent customers value convenience, presentation, and uniqueness far more than price.

For inspiration, visit luxury department stores such as Neiman Marcus and Saks Fifth Avenue. Observe what sells and why someone might pay 10 times more than a viable alternative. Avoid trend-driven categories, such as fashion, which require constant reinvention. Instead, focus on evergreen household goods with a unique twist for a small but willing-to-pay market.

While these niches won’t create billion-dollar companies, they can deliver low-stress, highly profitable businesses — think $750,000 in annual revenue with $250,000 in profit — without the complexity of endless SKUs, large teams, or operational headaches.

Margin Power

It’s possible to build a $750,000 ecommerce business that nets $250,000, but only if you manage gross margins — aim for 90%. For example, sell an item costing $6 for $60. You’ll still retain around 80% after shipping, taxes, and delivery costs. High margins are non-negotiable for a low-stress operation.

Conversely, products with tighter margins attract price-sensitive customers, which leads to increased returns, complaints, and support tickets. Serving customers with disposable income reduces friction because they’re less likely to demand refunds.

At a $115 average sale, $750,000 in annual revenue equates to roughly 6,520 orders — just 18 per day. One person, perhaps with minimal help, can fulfill that volume, which might generate only a few support tickets per week.

If margins leave $650,000 after cost of goods, and you spend $400,000 on marketing, you’ll retain $250,000 profit. The model works, but launching it — higher-priced products in a niche — requires time and testing. It won’t be easy to stand out, but with the right offer, it’s a manageable, profitable, and less stressful way to run a business.
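The arithmetic behind those figures can be double-checked with a quick sketch. All numbers are taken from the example above; nothing here is a forecast or recommendation:

```python
# Back-of-envelope check of the example unit economics
# (illustrative figures from the article, not a plan).
revenue = 750_000            # annual revenue
avg_order_value = 115        # average sale
gross_after_cogs = 650_000   # left after cost of goods (~87% margin)
marketing_spend = 400_000

orders_per_year = revenue / avg_order_value
orders_per_day = orders_per_year / 365
profit = gross_after_cogs - marketing_spend

print(round(orders_per_year))  # ≈ 6,522 orders
print(round(orders_per_day))   # ≈ 18 orders per day
print(profit)                  # 250000
```

The per-day figure is the operational headline: at roughly 18 orders a day, fulfillment stays a one-person job, which is what keeps the model low-stress.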

Trusted Voice

Every high-end ecommerce brand needs a trusted public advocate — someone who can vouch for the quality, experience, and value of the product. It doesn’t have to be a celebrity. It can be the owner, provided she’s willing to be the face of the brand.

The advocate’s role is to build trust, communicate the product’s value, and demonstrate how it improves customers’ lives. Titles alone aren’t enough. An advocate must have influence and sales ability. Without this trust, it’s nearly impossible to command premium prices.

You may need to sell to an audience you’re not part of. That requires shedding your own “value shopper” mindset to learn how target customers buy. Experience their lifestyle, understand why they pay more, and embrace their perspective. This shift in thinking can be the key to unlocking growth.

Creative Edge

Innovation is essential. If you can’t create something unique, this model may not work. Competitors will copy your ideas and undercut your prices. Success requires loyalty and brand affinity, and thus customers who won’t switch to save $30 — or $100.

Luxury beauty brands such as La Mer skincare succeed not only because of functional superiority, but also because of their storytelling, perceived exclusivity, and trust. The challenge for like-minded entrepreneurs is to create an experience and narrative so compelling that customers believe no substitute can match it.

In premium markets, even a few thousand loyal customers can sustain a profitable, low-stress business — if your innovation keeps them hooked.

Local SEO Best Practices Aren’t Universal: Yext Study via @sejournal, @MattGSouthern

A new Yext analysis of 8.7 million Google search results suggests many common local SEO tactics don’t perform the same across industries and regions.

The dataset, drawn from the company’s Scout Index, focuses on what correlates with visibility in Google’s Local Pack, not just overall map presence.

What Yext Found

Review Management Emerges As The Strongest Signal

The clearest pattern is around reviews. Yext states “Review engagement dominates,” calling it “the most consistent driver of Local Pack visibility across all industries and regions.”

Within the study’s feature rankings, review signals top the list, including review count, new reviews per month, and owner responses.

Businesses with many positive reviews and prompt owner responses tend to outperform competitors.

Industry Differences Vs. One-Size-Fits-All Playbooks

While profile completeness and timely replies generally help, their impact varies by vertical.

  • Food & Dining: Recent, highly rated reviews correlate more with visibility than total volume or profile completeness. A steady flow of new, high-quality reviews appears more influential than maximizing every profile field.
  • Hospitality: Photo quantity shows a weaker or even negative correlation with higher rankings. Yext notes that “a smaller set of curated, high-quality photos has more impact than a large, unfocused collection” for hotels and similar businesses.
    • At the same time, hospitality still benefits from strong ratings, clear descriptions, and curated visuals. Quality and focus matter more than volume.
  • Other sectors: The report highlights universal positives such as profile completeness, but stops short of advising identical tactics everywhere.

Regional Patterns

Geography also changes the picture. The Northeast appears less sensitive to many traditional SEO factors, while the South and West are more affected by slow review responses.

Yext calls out weekend response gaps: waiting until Monday can cost visibility, especially in the Midwest.

The practical takeaway is to maintain timely review engagement every day, not just during weekday office hours.

Methodology

Yext’s Scout Index compiles more than 200 structured data points per business, including review patterns, hours, contact details, media assets, social activity, and Google Business Profile completeness.

The analysis covers six industries across 2,500 populous ZIP codes and compares Local Pack placements against baseline Google Maps results.

Study caveats: This research involves vendor analysis using a proprietary dataset. It reports correlations rather than causal effects. Please consider these findings as directional and validate them in your own markets.

Looking Ahead

Yext’s conclusion is: “The one-size-fits-all approach seems to be a relic of the past.”

For marketers, this means testing industry-specific and region-specific strategies. Local search performance appears to reflect differences in both what people search and where they search.

Review management is the baseline to get right. Prioritize the cadence and quality of reviews, and respond quickly. Consider ways to cover weekends where delays correlate with lost visibility.



AI-Powered Search: Adapting Your SEO Strategy via @sejournal, @wburton27

Traditional SEO tactics centered on keywords and backlinks are no longer moving the needle as Google shifts toward AI-driven answers.

Organic clicks are going down as a result of Google AI Overviews and the rise of zero-click searches, where users get answers directly on Google’s search results page without clicking through to any websites.

SEO is not dead; it has evolved. To succeed in this new era, brands and marketers need to embrace structured, intent-driven content, enhance trustworthiness, and align more closely with brand and user experience strategies.

But how do you do this? Let’s explore.

Omnichannel Strategy

In today’s search landscape, you need to be everywhere where your audience is, including platforms like Reddit, Quora, TikTok, YouTube, and anywhere else relevant.

If your brand is not present on those discovery channels, you need to get on them as soon as possible.

User behavior has changed, and people are using different search engines and channels to find information, products, reviews, and more.

For example:

  • Reddit for the opinions of other users who have used a similar product or service.
  • TikTok and YouTube for tutorials and product reviews.
  • Instagram for discovery.
  • Amazon and Pinterest for product reviews and inspiration.

If your marketing strategy focuses only on Google, you’re effectively invisible to the large portion of your target audience looking for your content on other channels.

Optimize For AI Overviews

To rank for AIO, brands and marketers must focus on creating high-quality, authoritative content that directly answers user questions, is well-structured, and easy for AI to understand.

  • Create high-quality conversational content. Research which queries trigger AI Overviews, then create original, unique content that meets user intent and answers users’ questions. Update existing content by answering questions directly, making it conversational, and refreshing quotes, testimonials, and headings.
  • Use plain headings and short paragraphs. Clear headings, concise paragraphs, and natural language improve readability for users and make content easier for AI to understand.
  • Mark up content with structured data. Schema markup (e.g., FAQPage, HowTo, Product) helps AI and traditional search engines understand your content.
  • Let AI bots in. Publish an llms.txt file and check your robots.txt to confirm that bots like OpenAI’s GPTBot and Google AI are not blocked, so your content can be crawled and cited by AI systems.
  • Earn mentions on trustworthy sites. High-quality content with unique information, guest posts, social media activity, podcast appearances, internal linking, and PR all build authority and brand visibility.
  • Keep content fresh. AI chatbots favor up-to-date information; regularly update content with new data, statistics, and unique, valuable information for end users.
  • Track brand mentions. Brands mentioned frequently across PR, blogs, social media, news coverage, YouTube, and forums like Reddit and Quora tend to be cited by AI. Use tools like Google Search Console, Brand24, and Mention.com to monitor online conversations.
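As one illustration of the structured-data tactic, a minimal FAQPage schema can be emitted as JSON-LD. The question and answer below are hypothetical placeholders, and the Python here only assembles and prints the markup:

```python
import json

# Minimal FAQPage JSON-LD sketch using the schema.org vocabulary.
# The question/answer text is a placeholder example, not from the article.
faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "How do I optimize content for AI Overviews?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "Answer the question directly in clear, well-structured prose.",
            },
        }
    ],
}

# Embed the output in a page inside <script type="application/ld+json">.
print(json.dumps(faq_schema, indent=2))
```

The same pattern extends to HowTo and Product markup: a typed dictionary serialized to JSON-LD and placed in the page head or body.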

Focus On Branded Searches

Branded searches play a vital role in shaping brand perception, driving engagement, and ensuring your brand’s visibility and authority for large language models (LLMs).

LLMs do not work like traditional search engines. They look at user intent, context, and conversational relevance.

To elevate your brand presence:

  • Your information must be accurate and consistent across all platforms.
  • Your content should be useful and helpful to your target audience.
  • It should showcase expertise through thought leadership, offering original, unique, and data-backed insights quoted across authoritative sites and forums.
  • Your brand needs a strong reputation.

Adapt your approach by tracking brand mentions using tools like Brand24 and Semrush, and analyzing LLM-driven traffic via Google Analytics 4, while also testing brand visibility across different platforms and devices.

Focus On User Intent And Topic Clusters

We have moved from keywords to relevance.

Optimize for entire topics and users’ needs by creating in-depth content that covers all aspects of a subject and anticipates users’ questions.

Use long-tail keywords and natural language to cover users’ intent. Tools like MarketMuse by Siteimprove do a wonderful job at this.

AI-Powered Content Creation And Optimization

Google is not against AI-generated content. Google does not like low-quality AI content.

Before LLMs came on the scene, content creation often involved manual writing and optimization; drafting a high-quality article focused on E-E-A-T (experience, expertise, authoritativeness, and trustworthiness) took a long time.

Now, you can automate content briefs, generate outlines, create content drafts, edit them, and make them your own. A lot of AI tools and platforms incorporate ChatGPT into their services and add in so-called proprietary algorithms on top of it.

With AI, you can produce content much faster and optimize existing content more easily than before. Even so, I don’t recommend relying on it for content creation itself.

Here’s why.

There is a lot of content out there that simply rehashes what everyone else has written. Your content must be different: it should educate your audience, convince them that you’re the subject matter expert on a topic, earn their trust, and solve their problems.

This kind of content is best written by humans with AI assistance to enhance quality, make it more engaging, and encourage people to share it.

Wrapping Up

SEO is changing fast into a conversational experience powered by AI.

The tactics and strategies that once worked won’t cut it anymore in today’s AI-powered results.

But this is not the end of SEO. Rather, it’s an evolution, as Google continues its mission to organize the world’s information and make it universally accessible and useful.

To win in this new era, brands and marketers must shift from chasing rankings to building visibility, trust, and relevance across multiple platforms – whether that’s on Google, TikTok, Reddit, or Quora.

It’s also important to deliver content that’s useful and up-to-date, solving users’ problems and helping them during their journey.

AI is here to assist, not replace. AI can handle the heavy lifting, but don’t hand over the steering wheel.

The brands that thrive will be those that keep a human touch: providing value, showing expertise, and genuinely connecting with their audience.

Don’t forget who you’re really creating content for: people.




Privacy-Safe Attribution Avoids User Tracking

Four years after Apple broke mobile app attribution in iOS 14.5, an emerging class of privacy-safe aggregated modeling tools promises to bring back visibility without tracking individuals.

The approach uses large sets of anonymized data to infer which advertising campaigns, mobile views, and cross-device activity led to revenue.

It is the method behind Apple’s SKAdNetwork, Google’s Integrated Conversion Measurement (ICM), Meta’s Aggregated Event Measurement (AEM), and tools such as Predictive Aggregate Measurement (PAM) from Branch, a marketing and measurement firm.

“Marketers don’t need to know who bought something — they need to know what drove the sale,” said Irina Bukatik, vice president of product at Branch. “Predictive Aggregate Measurement gives them that clarity in a way that’s compliant, privacy-safe, and works across both app and web.”


Branch’s Predictive Aggregate Measurement infers attribution from aggregate performance signals.

Why It Matters

Merchants that sell through multiple channels — mobile app, website, physical store — know the importance of understanding advertising’s impact on sales.

Apple’s iOS changes in 2021 created blind spots, especially for tracking users across devices and channels.

PAM, AEM, ICM, and similar systems close that attribution gap. These privacy-preserving tools analyze large datasets and estimate which ads and touchpoints are likely responsible for conversions. Thus, marketers can tell if a mobile view influenced a desktop purchase or if an app install led to repeat orders, all without violating privacy.

The payoff is relatively better budget allocation, campaign optimization, and confidence that ad spend is going to the channels that generate revenue.

How It Works

Instead of capturing click-by-click records tied to a shopper, these privacy-compliant systems collect conversion signals in bulk and combine them with other relevant campaign data.

The tools do not track individuals, and some add “noise” to obscure personally identifiable information.

From there, statistical models look for patterns that suggest which ads, channels, or touchpoints are likely responsible for a sale.

The process is probabilistic, meaning the tool does not know that a specific customer saw an Instagram ad before buying, but it can conclude, with a high degree of confidence, that the campaign influenced sales based on aggregate trends, explained Branch’s Bukatik.

The models weigh several factors, presumably including:

  • Time between impressions and actions,
  • Number of conversions following a campaign,
  • Cross-device behaviors such as mobile views and desktop purchases,
  • Historical campaign performance under similar conditions.
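The factor weighting above can be sketched as a toy scoring model. This is illustrative only, not Branch's actual PAM algorithm; the field names and the 72-hour decay constant are invented for the example:

```python
import math

def attribution_scores(channel_stats):
    """Toy aggregate attribution model (illustrative, not any vendor's).

    Weights each channel's conversion count by recency (exponential time
    decay) and by lift over its historical baseline, then normalizes the
    weighted scores into shares that sum to 1. All field names are
    hypothetical.
    """
    raw = {}
    for name, s in channel_stats.items():
        decay = math.exp(-s["avg_hours_to_convert"] / 72.0)  # favor faster conversions
        lift = s["conversions"] / max(s["baseline_conversions"], 1)
        raw[name] = s["conversions"] * decay * lift
    total = sum(raw.values()) or 1.0
    return {name: score / total for name, score in raw.items()}
```

A channel that converts faster and lifts well above its baseline earns a larger share of credit, even though no individual journey is ever observed.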

Imagine the old connect-the-dot worksheets from elementary school that let you trace the shape of a cat or a butterfly. iOS 14.5 and similar privacy updates erased some of the dots, but higher math can help complete the picture.

Known Limits

Yet aggregated measurement is not a perfect replacement for the old, detailed, user-level tracking.

There are limits to the new systems’ accuracy.

  • Lower granularity. The tools lack the user-level detail of legacy tracking. Marketers cannot follow individual customer journeys end-to-end, complicating targeted, retargeted, or personalized campaigns.
  • Attribution delays. Frameworks such as Apple’s SKAdNetwork often delay reporting for privacy reasons. The result is slow optimization cycles, forcing marketing teams to wait before reallocating budget or testing new creative.
  • Thresholding. Some systems hide conversion data from smaller or niche campaigns until they reach a minimum volume to prevent identification. This too delays budget and creative decisions.
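A thresholding rule like the one in the last bullet can be sketched in a few lines. The 25-conversion minimum is invented for illustration; real frameworks set their own cutoffs:

```python
def apply_threshold(campaign_conversions, min_conversions=25):
    """Privacy thresholding (illustrative): report a campaign's conversion
    count only after it clears a minimum volume. Below that, the value is
    withheld (None) so small audiences can't be re-identified. The cutoff
    of 25 is a made-up example, not any vendor's actual threshold."""
    return {
        campaign: (count if count >= min_conversions else None)
        for campaign, count in campaign_conversions.items()
    }
```

This is why niche campaigns can sit in a reporting blind spot for days: until the count crosses the line, the system returns nothing at all rather than a small, identifying number.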

Limitations such as lower granularity are not as critical as they first appear. As Bukatik noted, in most cases “what a marketer wants to know is not whether someone clicked on the Facebook ad and purchased — it’s whether the Facebook ad drove the purchase.”

Adapting

For merchants, the continuing shift toward privacy-preserving aggregated measurement means building campaigns and reporting processes that work within the system’s constraints.

Start by focusing on bigger, more meaningful signals. Instead of chasing granular, click-by-click attribution across devices, set clear conversion events that matter, such as a first purchase, a new subscription, or a repeat order.

Consider these metrics as key performance indicators. Aggregated tools excel at gauging high-value actions.

Invest in creative and audience testing at the campaign level. A delay in reporting may require tests that run long enough to gather statistically significant results. Avoid overreacting to early data.

Blend first-party data from your ecommerce platform or loyalty program with aggregate reports. You won’t see individual journeys from ad click to checkout, but combining datasets can reveal channel lift, customer lifetime value, and repeat purchase behavior.
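A channel-level blend of the two datasets might look like this sketch. Field names such as `is_repeat` are hypothetical; the point is that the join happens per channel, never per user:

```python
def blend_reports(aggregate_conversions, first_party_orders):
    """Blend channel-level aggregate conversion counts with first-party
    order records (e.g., exported from an ecommerce platform or loyalty
    program). All field names are illustrative. The join is at the channel
    level, not the individual level, so no user journeys are reconstructed."""
    summary = {}
    for channel, conversions in aggregate_conversions.items():
        orders = [o for o in first_party_orders if o["channel"] == channel]
        revenue = sum(o["revenue"] for o in orders)
        repeats = sum(1 for o in orders if o["is_repeat"])
        summary[channel] = {
            "conversions": conversions,
            "revenue_per_conversion": revenue / conversions if conversions else 0.0,
            "repeat_rate": repeats / len(orders) if orders else 0.0,
        }
    return summary
```

The output surfaces exactly the signals the article mentions, channel lift, value per conversion, and repeat behavior, without ever linking an ad click to a named customer.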

Finally, accept that modern attribution is increasingly probabilistic. The goal isn’t perfect precision but directional confidence — enough clarity to shift budget toward the channels, campaigns, and platforms likely to generate profitable growth.

ChatGPT-5 Now Connects To Gmail, Calendar, And Contacts via @sejournal, @martinibuster

OpenAI announced that it has added connectors to Gmail, Google Calendar, and Google Contacts for ChatGPT Plus users, enabling ChatGPT to use data from those apps within ChatGPT chats.

ChatGPT Connectors

A connector is a bridge between ChatGPT and an external app like Canva, Dropbox, and Gmail, enabling users to connect those apps to ChatGPT in order to work with them within the ChatGPT interface. Access to the Google apps isn’t automatic; it has to be manually enabled by users.

This access was first made available to Pro users, and now it has been rolled out to Plus subscribers.

How To Enable Google App Connectors

Step 1: Click the + button, then the "Connected apps" link.

Step 2: Click the next "Connected apps" link.

Step 3: Choose the Gmail app to connect.

How Connectors Work With ChatGPT-5

According to OpenAI’s announcement:

“Once you enable them, ChatGPT will automatically reference them when relevant, making it faster and easier to bring information from these tools into your conversations without having to manually select them each time.

This capability is part of GPT-5 and will begin rolling out to Pro users globally this week, followed by Plus, Team, Enterprise, and Edu plans in the coming weeks. To enable, visit Settings → Connectors → Connect on the application.”

Read OpenAI’s announcement:

Gmail, Google Calendar, and Google Contacts Connectors in ChatGPT (Plus)

Featured Image by Shutterstock/Visuals6x

The Verifier Layer: Why SEO Automation Still Needs Human Judgment via @sejournal, @DuaneForrester

AI tools can do a lot of SEO now. Draft content. Suggest keywords. Generate metadata. Flag potential issues. We’re well past the novelty stage.

But for all the speed and surface-level utility, there’s a hard truth underneath: AI still gets things wrong. And when it does, it does it convincingly.

It hallucinates stats. Misreads query intent. Asserts outdated best practices. Repeats myths you’ve spent years correcting. And if you’re in a regulated space (finance, healthcare, law), those errors aren’t just embarrassing. They’re dangerous.

The business stakes around accuracy aren’t theoretical; they’re measurable and growing fast. More than 200 class action lawsuits for false advertising were filed annually from 2020 to 2022 in the food and beverage industry alone, up from 53 suits in 2011. That’s roughly a 4x increase in one sector.

Across all industries, California district courts saw over 500 false advertising cases in 2024. Class actions and government enforcement lawsuits collected more than $50 billion in settlements in 2023. Recent industry analysis shows false advertising penalties in the United States have doubled in the last decade.

This isn’t just about embarrassing mistakes anymore. It’s about legal exposure that scales with your content volume. Every AI-generated product description, every automated blog post, every algorithmically created landing page is a potential liability if it contains unverifiable claims.

And here’s the kicker: The trend is accelerating. Legal experts report “hundreds of new suits every year from 2020 to 2023,” with industry data showing significant increases in false advertising litigation. Consumers are more aware of marketing tactics, regulators are cracking down harder, and social media amplifies complaints faster than ever.

The math is simple: As AI generates more content at scale, the surface area for false claims expands exponentially. Without verification systems, you’re not just automating content creation, you’re automating legal risk.

What marketers want is fire-and-forget content automation (write product descriptions for these 200 SKUs, for example) that can be trusted by people and machines. Write it once, push it live, move on. But that only works when you can trust the system not to lie, drift, or contradict itself.

And that level of trust doesn’t come from the content generator. It comes from the thing sitting beside it: the verifier.

Marketers want trustworthy tools: accurate, verifiable data and repeatable results. In the past, we had Google’s algorithm updates to manage and dance around. Now, as ChatGPT-5’s recent rollout has shown, it’s model updates, which can affect everything from the actual answers people see to how the tools built on the models’ architecture operate and perform.

To build trust in these models, the companies behind them are building Universal Verifiers.

A universal verifier is an AI fact-checker that sits between the model and the user. It’s a system that checks AI output before it reaches you, or your audience. It’s trained separately from the model that generates content. Its job is to catch hallucinations, logic gaps, unverifiable claims, and ethical violations. It’s the machine version of a fact-checker with a good memory and a low tolerance for nonsense.

Technically speaking, a universal verifier is model-agnostic. It can evaluate outputs from any model, even if it wasn’t trained on the same data or doesn’t understand the prompt. It looks at what was said, what’s true, and whether those things match.

In the most advanced setups, a verifier wouldn’t just say yes or no. It would return a confidence score. Identify risky sentences. Suggest citations. Maybe even halt deployment if the risk was too high.
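That dream setup can be sketched as a data shape plus a deployment gate. Everything here is hypothetical, since, as the next paragraph notes, no such API exists yet:

```python
from dataclasses import dataclass, field

@dataclass
class VerifierResult:
    """Hypothetical shape of an advanced verifier's response, as described
    above: an overall confidence score, sentence-level risk flags, and
    suggested citations. No provider exposes anything like this through a
    public API today."""
    confidence: float                          # 0.0-1.0 overall trust score
    flagged_sentences: list = field(default_factory=list)
    suggested_citations: list = field(default_factory=list)

def should_deploy(result: VerifierResult, threshold: float = 0.9) -> bool:
    """Halt deployment when confidence is below the threshold or any
    sentence was flagged as risky."""
    return result.confidence >= threshold and not result.flagged_sentences
```

The gate is deliberately conservative: a single flagged claim blocks publication even when overall confidence is high, which matches how regulated-industry review queues tend to work.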

That’s the dream. But it’s not reality yet.

Industry reporting suggests OpenAI is integrating universal verifiers into GPT-5’s architecture, with recent leaks indicating this technology was instrumental in achieving gold medal performance at the International Mathematical Olympiad. OpenAI researcher Jerry Tworek has reportedly suggested this reinforcement learning system could form the basis for general artificial intelligence. OpenAI officially announced the IMO gold medal achievement, but public deployment of verifier-enhanced models is still months away, with no production API available today.

DeepMind has developed the Search-Augmented Factuality Evaluator (SAFE), which matches human fact-checkers 72% of the time; when the two disagree, SAFE is correct 76% of the time. That’s promising for research, but not good enough for medical content or financial disclosures.

Across the industry, prototype verifiers exist, but only in controlled environments. They’re being tested inside safety teams. They haven’t been exposed to real-world noise, edge cases, or scale.

If you’re thinking about how this affects your work, you’re early. That’s a good place to be.

This is where it gets tricky. What level of confidence is enough?

In regulated sectors, that number is high. A verifier needs to be correct 95 to 99% of the time. Not just overall, but on every sentence, every claim, every generation.

In less regulated use cases, like content marketing, you might get away with 90%. But that depends on your brand risk, your legal exposure, and your tolerance for cleanup.

Here’s the problem: Current verifier models aren’t close to those thresholds. Even DeepMind’s SAFE system, which represents the state of the art in AI fact-checking, achieves 72% accuracy against human evaluators. That’s not trust. That’s a little better than a coin flip. (Technically, it’s 22 percentage points better than a coin flip, but you get the point.)

So today, trust still comes from one place: a human in the loop, because AI universal verifiers aren’t even close.

Here’s a disconnect no one’s really surfacing: Universal verifiers won’t likely live in your SEO tools. They don’t sit next to your content editor. They don’t plug into your CMS.

They live inside the LLM.

So even as OpenAI, DeepMind, and Anthropic develop these trust layers, that verification data doesn’t reach you, unless the model provider exposes it. Which means that today, even the best verifier in the world is functionally useless to your SEO workflow unless it shows its work.

Here’s how that might change:

Verifier metadata becomes part of the LLM response. Imagine every completion you get includes a confidence score, flags for unverifiable claims, or a short critique summary. These wouldn’t be generated by the same model; they’d be layered on top by a verifier model.

SEO tools start capturing that verifier output. If your tool calls an API that supports verification, it could display trust scores or risk flags next to content blocks. You might start seeing green/yellow/red labels right in the UI. That’s your cue to publish, pause, or escalate to human review.

Workflow automation integrates verifier signals. You could auto-hold content that falls below a 90% trust score. Flag high-risk topics. Track which model, which prompt, and which content formats fail most often. Content automation becomes more than optimization. It becomes risk-managed automation.
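An auto-hold rule of this kind is simple to sketch. The 0.90 cutoff mirrors the 90% example above; the 0.75 review threshold is invented for illustration:

```python
def route_content(trust_score, publish_at=0.90, review_at=0.75):
    """Map a verifier trust score to a workflow action: the green/yellow/red
    routing described above. The 0.90 cutoff mirrors the 90% hold rule in
    the text; 0.75 is an illustrative escalation threshold."""
    if trust_score >= publish_at:
        return "publish"        # green: ship it
    if trust_score >= review_at:
        return "human_review"   # yellow: escalate to a person
    return "hold"               # red: block publication
```

Logging which model, prompt, and content format lands in each bucket over time is what turns this from a gate into the risk-managed automation the article describes.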

Verifiers influence ranking-readiness. If search engines adopt similar verification layers inside their own LLMs (and why wouldn’t they?), your content won’t just be judged on crawlability or link profile. It’ll be judged on whether it was retrieved, synthesized, and safe enough to survive the verifier filter. If Google’s verifier, for example, flags a claim as low-confidence, that content may never enter retrieval.

Enterprise teams could build pipelines around it. The big question is whether model providers will expose verifier outputs via API at all. There’s no guarantee they will – and even if they do, there’s no timeline for when that might happen. If verifier data does become available, that’s when you could build dashboards, trust thresholds, and error tracking. But that’s a big “if.”

So no, you can’t access a universal verifier in your SEO stack today. But your stack should be designed to integrate one as soon as it’s available.

Because when trust becomes part of ranking and content workflow design, the people who planned for it will win. And this gap in availability will shape who adopts first, and how fast.

The first wave of verifier integration won’t happen in ecommerce or blogging. It’ll happen in banking, insurance, healthcare, government, and legal.

These industries already have review workflows. They already track citations. They already pass content through legal, compliance, and risk before it goes live.

Verifier data is just another field in the checklist. Once a model can provide it, these teams will use it to tighten controls and speed up approvals. They’ll log verification scores. Adjust thresholds. Build content QA dashboards that look more like security ops than marketing tools.

That’s the future. It starts with the teams that are already being held accountable for what they publish.

You can’t install a verifier today. But you can build a practice that’s ready for one.

Start by designing your QA process like a verifier would:

  • Fact-check by default. Don’t publish without source validation. Build verification into your workflow now so it becomes automatic when verifiers start flagging questionable claims.
  • Track which parts of AI content fail reviews most often. That’s your training data for when verifiers arrive. Are statistics always wrong? Do product descriptions hallucinate features? Pattern recognition beats reactive fixes.
  • Define internal trust thresholds. What’s “good enough” to publish? 85%? 95%? Document it now. When verifier confidence scores become available, you’ll need these benchmarks to set automated hold rules.
  • Create logs. Who reviewed what, and why? That’s your audit trail. These records become invaluable when you need to prove due diligence to legal teams or adjust thresholds based on what actually breaks.
  • Tool audits. When you’re evaluating a new tool for your AI SEO work, ask whether the vendor is planning for verifier data. If it becomes available, will the tool be ready to ingest and use it?
  • Don’t expect verifier data in your tools anytime soon. While industry reporting suggests OpenAI is integrating universal verifiers into GPT-5, there’s no indication that verifier metadata will be exposed to users through APIs. The technology might be moving from research to production, but that doesn’t mean the verification data will be accessible to SEO teams.
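The logging bullet above can be sketched as an append-only JSON Lines audit trail, a stand-in for whatever review system you actually use. All field names are illustrative:

```python
import datetime
import json

def log_review(content_id, reviewer, decision, reasons, path="qa_log.jsonl"):
    """Append-only QA audit trail: who reviewed what, the decision, and why.
    A JSON Lines file is a simple stand-in for a real review system; the
    record fields here are illustrative, not a standard."""
    record = {
        "content_id": content_id,
        "reviewer": reviewer,
        "decision": decision,   # e.g., "approved", "held"
        "reasons": reasons,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

Because each record is a dated, attributable line that is never rewritten, the file doubles as the due-diligence evidence the legal and compliance teams will eventually ask for.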

This isn’t about being paranoid. It’s about being ahead of the curve when trust becomes a surfaced metric.

People hear “AI verifier” and assume it means the human reviewer goes away.

It doesn’t. What happens instead is that human reviewers move up the stack.

You’ll stop reviewing line-by-line. Instead, you’ll review the verifier’s flags, manage thresholds, and define acceptable risk. You become the one who decides what the verifier means.

That’s not less important. That’s more strategic.

The verifier layer is coming. The question isn’t whether you’ll use it. It’s whether you’ll be ready when it arrives. Start building that readiness now, because in SEO, being six months ahead of the curve is the difference between competitive advantage and playing catch-up.

Trust, as it turns out, scales differently than content. The teams who treat trust as a design input now will own the next phase of search.

This post was originally published on Duane Forrester Decodes.


Featured Image: Roman Samborskyi/Shutterstock