3 things Michelle Kim is into right now

Isegye Idol

If you thought K-pop was weird, virtual idols (humans who perform as anime-style digital characters via motion capture) will blow your mind. My favorite is a girl group called Isegye Idol, created by Woowakgood, a Korean VTuber (a streamer who likewise performs as a digital persona). Isegye Idol’s six members are anonymous, which seems to let them deploy a rare breed of honesty and humor. They play games (League of Legends, Go, Minecraft), chitchat, and perform kitschy music that’s somewhere between anime soundtrack and video-game score. It’s very DIY, and very intimate. And the group’s wild popularity speaks to the mood of Gen Z South Koreans, famously lonely and culturally adrift: struggling to find work, giving up on dating, trying to find friendships online. Isegye Idol shows what a magical online universe people can build when reality stops working for them.

Mr. Nobody Against Putin

Pavel Talankin didn’t have the easiest life as a schoolteacher in the copper-smelting town of Karabash, Russia; UNESCO once called it the most toxic place on Earth. But video he shot, partially in secret, makes it clear he loved it: the smokestacks, the cold, the ice mustache he’d get walking around outside, and, most of all, his bright-eyed students. That makes it all the more painful when a distant, grinding war and state propaganda change the town. An antiwar progressive with a democracy flag in his classroom, Talankin had to deal with a new patriotic curriculum, mandatory parades, visits from mercenaries, and the loss of the creative space he’d built with his students. Talankin’s footage tells his story in this Oscar-winning documentary from director David Borenstein, and what struck me most is how strange it is being an adult around kids. We shape them in profound ways we might not even recognize.

Repertoire by James Acaster

I am the kind of person who will pay $150 to watch a comedian in a smelly theater in San Francisco that charges $20 for a can of water, because I am crazy enough to hope that standup will not die. In February, I saw the British comedian James Acaster perform live … and it was a mediocre show. But Repertoire, his 2018 miniseries on Netflix, is gold. Shot shortly after Acaster went through a breakup, the four-part show features him portraying, among other characters, a cop who goes undercover as a standup comedian, forgets who he is, and gets divorced. And then things get weird. “What if every relationship you’ve ever been in,” Acaster asks, “is somebody slowly figuring out they didn’t like you as much as they hoped they would?” If the best comedy comes from paying attention to the hellhole that you’re in, I wish Acaster many more pitfalls.

One town’s scheme to get rid of its geese

“Pull over!” I order my brother one sunny February afternoon. Our target is in sight: a gaggle of Canada geese, pecking at grass near the dog park. As I approach, tiptoeing over their grayish-white poop, I notice that one bird wears a white cuff around its slender black neck. It’s a GPS tracker—part of a new tech-centered campaign to drive the geese out of my hometown of Foster City, California. 

THE PLACE: Foster City, California, USA

About 300 geese live in this sleepy Bay Area suburb, equal to nearly 1% of our human population—and some say this town isn’t big enough for the both of us. Goose poop notoriously blanketed our middle school’s lawn, and the birds have hassled residents for generations. My own grandmother remembers when geese took over her garage for five whole minutes before waddling out. She says, “I wanted to kill them, but I thought I’d get in trouble.”

Indeed, that idea doesn’t fly here. City officials backed out of a previous plan to kill 100 geese following uproar from local environmentalists. Still, the poop creates a public health hazard; the birds need to go. 

So the city paid nearly $400,000—roughly $1,300 per goose—to Wildlife Innovations, a company that resolves conflicts between humans and wildlife, to haze the geese with gadgets. The company’s approach is “basically, making the geese less comfortable,” Dan Biteman, head of the goose management plan and senior wildlife biologist at Wildlife Innovations, tells me.

The need for such conflict resolution is on the rise as land development collides with changes in animal behavior. Though overpopulation of Canada geese is a national nuisance in the US, such tensions also surface with other species in this country and elsewhere, including grizzlies on the Montana prairies, coyotes on San Francisco streets, and savanna elephants in Tanzanian parks. 

So the people whose job it is to deal with recalcitrant critters are bringing on the gadgets.

Back in Foster City, I spot a black camera mounted to a tree trunk at Gull Park by the lagoon. Cameras like it hang in seven parks around town, programmed to snap photos every 15 minutes and transmit them back to Wildlife Innovations HQ. If they detect geese, a biologist immediately drives over to disperse the birds. One team member uses devices like lasers or drones; another brings along a goose-hating border collie named Rocky. 

An orange foam pontoon boat with yellow eyes and sharp-looking jagged teeth
Belligerent birds must grapple with the Goosinator.
ANNIKA HOM

As a special measure, staff deploy the “Goosinator,” a small, remote-controlled neon-orange pontoon boat with a fearsome dog-like mouth painted on its bow, meant to evoke geese’s fear of coyotes and bright colors. It comes with attachable wheels and can zoom around on land or water to chase birds away. Biteman tells me the company is thinking about mounting speakers on trees and flying drones that will screech the calls of goose predators like red-tailed hawks or golden eagles. 

The company received federal permits required by the Migratory Bird Treaty Act to stick GPS trackers on 10 geese, too. This way, staff can surveil the geese and research their behavior and movements. 

At local goose hangouts, signs that look like “Wanted” posters alert the public to the new plan. As I watch some culprits graze (and defecate) on a church lawn, I think to myself: Enjoy it while it lasts. 

Annika Hom is an award-winning independent journalist. She’s written for National Geographic, Wired, and more.

There is no nature anymore

When people talk about “nature,” they’re generally talking about things that aren’t made by human beings. Rocks. Reefs. Red wolves. But while there is plenty of God’s creation to go around, it is hard to think of anything on Earth that human hands haven’t affected.

Mat Honan

In the Brazilian rainforest, scientists have found microplastics in the bellies of animals ranging from red howler monkeys to manatees. In remotest Yakutia, where much of the earth remains untrodden by human feet, the carbon in the sky above melts the permafrost below. In the Arctic Ocean, artificial light from ship traffic—on the rise as the polar ice cap melts away—now disrupts the nightly journey of zooplankton to the ocean surface, one of the largest animal migrations on the planet. The remote mountain lakes of the Alps are contaminated with all kinds of synthetic chemicals. Polar bears are full of flame retardants. Cesium-137, fallout from nuclear bomb explosions, lightly rimes the entire planet. 

These examples are mostly pollution—nuclear, carbon, chemical, light—but I raise them not to highlight the ways human industry and technology degrade the environment but to note how the things humans build change it. Nobody really knows what the exact effects of all that will be, but my point is that no part of the globe is free of human fingerprints. We have literally changed the world.  

We’ve changed ourselves as well. Humans are especially adept at bending human nature. Everything about us is up for grabs—appearance, health, our very thoughts. Pharmaceuticals, surgeries, vaccines, and hormones give us longer lives, take away our pain, ease our anxiety and depression, make us faster, stronger, more resilient. We’re getting glimpses of technologies that will let us change who our children will become before they’re even born. Electrodes implanted in people’s brains let them control computers and translate thoughts into speech. Prosthetics and exoskeletons straight out of comic books restore and enhance physical abilities, while gene-editing technologies like CRISPR are rewriting our very DNA. And meanwhile, people have taken the sum total of all the information we have ever written down and poured it into vast calculating machines in an effort—at least by some—to build an intelligence greater than our own. 

So what even is nature, or natural, in this context? Is it “environmentalist,” in the conventional sense, to try to preserve what one could argue no longer exists? Should we employ technology to try to make the world more “natural”?  

Those questions led us to approach this Nature issue with humility. We try to grapple with them all the time—MIT Technology Review is, after all, a review of how people have altered and built upon nature.

And it’s a place to think about how we might repair it. Take solar geoengineering, for example—a subject we have covered with increasing frequency over the past few years. The basic idea of geoengineering is to find a technological fix for a problem technology caused: Burning petrochemicals to fuel the Industrial Revolution turned Earth’s atmosphere into a heat trap, fundamentally breaking the climate. Some geoengineers think that releasing particulate matter into the stratosphere would reflect sunlight back into space, thus reducing global temperatures. After years of theoretical discussions, some companies have begun to actively experiment with such technologies. This might seem like a great way to restore the world to a more natural state. It’s also fraught with controversy and peril. It could, for example, benefit some nations while harming others. It may give us license to continue burning fossil fuels and releasing greenhouse gases. The list goes on. 

Nature isn’t easy. 

In our May/June issue, we have attempted to take a hard look at nature in our unnatural world. We have stories about birds that can’t sing, wolves that aren’t wolves, and grass that isn’t grass. We look for the meaning of life under Arctic ice and within ourselves—and in the far future, on a distant world, courtesy of new fiction by the renowned author Jeff VanderMeer. I don’t know if any of that will answer the questions I’ve been asking here—but we can’t help but try. It’s in our nature. 

Los Angeles is finally going underground

Los Angeles deserves its reputation as the quintessential car city—the rhythms of its 2,200 square miles are dictated by wide boulevards and concrete arcs of freeways. But it once had a world-class rail transit system, and for the last three decades, the city has been rebuilding a network of trolleys and subways. In May, a new four-mile segment with three new subway stations will open along Wilshire Boulevard, a key east-west corridor that connects downtown LA to the Pacific Ocean. What today can be an hours-long drive through a busy, museum-packed stretch of the city will be, if all goes well, a 25-minute train ride.

The existence of subway stops in this part of town—known as Miracle Mile—is a technological triumph over geography and geology. The ground underneath it is literally a disaster waiting to happen—it’s tarry and full of methane. One of those methane deposits actually exploded in 1985, destroying a department store in the neighborhood. In response, the city pushed its new train routes to other parts of town.

These days, dirt full of flammable goo is no longer a problem. “The technology finally caught up with the concerns,” says LA Metro’s James Cohen, a longtime manager of the engineering for this stretch of subway. The key was an earth-pressure-balance tunnel-boring machine, an automated digger that is designed to chew through ground packed with explosive gas. It sends removed dirt topside via conveyor belts and slides precast concrete liner segments into the tunnel, which are joined together with gaskets to create a gas- and waterproof tube. All that let the machine dig about 50 feet every day. 

A Metro train pulls into La Cienega station
Art by Susan Silton at the Fairfax station
Art by Eamon Ore-Giron at the La Brea station

Meanwhile, engineers excavated the stations from the street level down. They worked mostly on weekends, digging out a space and then decking it with concrete so that work could go on underneath while LA drivers continued to exercise their God-given right to get around by car above.

Did the project finish on time? No. Did it come in under budget? Also no; this segment alone cost nearly $4 billion. Is the city now racing to build housing and walkable areas to take full advantage of the extension? Oh, please. Yet the new stations still manage to feel, in the end, transformative—as if Los Angeles’s train has finally come in.

AI needs a strong data fabric to deliver business value

Artificial intelligence is moving quickly in the enterprise, from experimentation to everyday use. Organizations are deploying copilots, agents, and predictive systems across finance, supply chains, human resources, and customer operations. By the end of 2025, half of companies used AI in at least three business functions, according to a recent survey.

But as AI becomes embedded in core workflows, business leaders are discovering that the biggest obstacle is not model performance or computing power but the quality and context of the data on which those systems rely. AI essentially introduces a new requirement: Systems must not only access data but also understand the business context behind it. 

Without that context, AI can generate answers quickly but still make the wrong decision, says Irfan Khan, president and chief product officer of SAP Data & Analytics. 

“AI is incredibly good at producing results,” he says. “It moves fast, but without context it can’t exercise good judgment, and good judgment is what creates a return on investment for the business. Speed without judgment doesn’t help. It can actually hurt us.”

In the emerging era of autonomous systems and intelligent applications, that context layer is becoming essential. To provide context, companies need a well-designed data fabric that does more than just integrate data, Khan says. The right data fabric allows organizations to scale AI safely, coordinate decisions across systems and agents, and ensure that automation reflects real business priorities rather than making decisions in isolation. 

Recognizing this, many organizations are rethinking their data architecture. Instead of simply moving data into a single repository, they are looking for ways to connect information across applications, clouds, and operational systems while preserving the semantics that describe how the business works. That shift is driving growing interest in data fabric as a foundation for AI infrastructure.

Losing context is a critical AI problem

Traditional data strategies have largely focused on aggregation. Over the past two decades, organizations have invested heavily in extracting information from operational systems and loading it into centralized warehouses, lakes, and dashboards. This approach makes it easier to run reports, monitor performance, and generate insights across the business, but in the process, much of the meaning attached to that data — how it relates to policies, processes, and real-world decisions — is lost. 

Take two companies using AI to manage supply-chain disruptions. If one uses raw signals such as inventory levels, lead times, and supply scores, while the other adds context across business processes, policies, and metadata, both systems will rapidly analyze the data but likely come up with different conclusions. 

Information such as which customers are strategic accounts, what tradeoffs are acceptable during shortages, and the status of extended supply chains will allow one AI system to make strategic decisions, while the other will not have the proper context, Khan says. 

“Both systems move very quickly, but only one moves in the right direction,” he says. “This is the context premium and the advantage you gain when your data foundation preserves context across processes, policies and data by design.”
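To make that comparison concrete, here is a minimal, deliberately simplified sketch in Python. The field names, customer tiers, and policy are invented for illustration and are not drawn from SAP’s products; the point is only how the same inventory signals can produce different allocation decisions depending on whether business context travels with the data.

from dataclasses import dataclass, field

@dataclass
class InventorySignal:              # raw operational data
    sku: str
    units_on_hand: int
    units_ordered: int

@dataclass
class ContextualSignal:             # the same data, enriched by a semantic layer
    raw: InventorySignal
    customer_tier: str              # e.g. "strategic" vs. "standard"
    shortage_policy: str            # e.g. "protect strategic accounts first"
    metadata: dict = field(default_factory=dict)

def allocate_without_context(signals):
    # Optimizes on raw numbers alone: fill the largest orders first.
    return [s.sku for s in sorted(signals, key=lambda s: -s.units_ordered)]

def allocate_with_context(signals):
    # Same speed, different judgment: strategic accounts are served first,
    # then remaining demand is ranked by order size.
    ranked = sorted(signals,
                    key=lambda s: (s.customer_tier != "strategic", -s.raw.units_ordered))
    return [s.raw.sku for s in ranked]

raw = [InventorySignal("A-100", 40, 500), InventorySignal("B-200", 10, 80)]
ctx = [ContextualSignal(raw[0], "standard", "protect strategic accounts first"),
       ContextualSignal(raw[1], "strategic", "protect strategic accounts first")]
print(allocate_without_context(raw))   # ['A-100', 'B-200']
print(allocate_with_context(ctx))      # ['B-200', 'A-100']

Both functions run in an instant; only the second one knows which customer matters most during a shortage.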

In the past, companies could implicitly manage a lack of context because human experts supplied the missing information. With AI, that shortfall creates serious limitations. AI systems do not just display information; they act on it. If a system does not explain why data matters, an AI model may optimize for the wrong outcome. Inventory numbers, payment histories, or demand signals might be accurate, but they do not necessarily reveal which customers must be prioritized, which contractual obligations apply, or which products are strategically important. As a result, the system can produce answers that are technically correct but operationally flawed.

This realization is changing how companies think about AI readiness. Most acknowledge that they do not have the mature data processes and infrastructure in place to trust their data and their AI systems. Only one in five organizations consider their approach to data to be highly mature, and only 9% feel fully prepared to integrate and interoperate with their data systems.

Don’t consolidate, integrate

The emerging solution is a data fabric: An abstraction layer that spans infrastructure, architecture, and logical organization. For agentic AI, the fabric becomes the primary interface, allowing agents to interact with business knowledge rather than raw storage systems. Knowledge graphs play a central role, enabling agents to query enterprise data using natural language and business logic.

The value of the data fabric rests on three components: intelligent compute to provide speed, a knowledge pool to provide business understanding and context, and agents to provide autonomous action grounded in that understanding. What makes this powerful is how these capabilities work together, says Khan. 

The technology provides the architecture — a foundation that makes agent-to-agent communication and coordination possible. The process defines how business and IT share ownership and establish governance, and the culture determines whether people trust the system enough to adopt it. All three must work together for a business data fabric to truly succeed.

“It empowers confident, consistent decisions, and when these elements all come together, AI doesn’t just analyze and interpret the data — it drives smarter, faster decisions that really create business impact,” he says. “This is the promise of a thoughtfully designed business data fabric, where every part reinforces the other, and every insight is grounded in trust and clarity.”

Technically, building a data-fabric layer requires several capabilities. Data must be accessible across multiple environments through federation rather than forced consolidation. A semantic or knowledge layer is needed to harmonize meaning across systems, often supported by knowledge graphs and catalog-driven metadata. Governance and policy enforcement must also operate across the fabric so that AI systems can access data securely and consistently.
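As a rough illustration of those three capabilities working together (federated access, a semantic layer, and policy enforcement), here is a tiny, hypothetical sketch in Python. Every name in it is invented, and a real fabric would sit over live systems rather than in-memory dictionaries.

CATALOG = {  # semantic layer: business term -> (source system, field, access policy)
    "days_sales_outstanding": ("erp", "dso_days", "finance_only"),
    "churn_risk": ("crm", "churn_score", "general"),
}

SYSTEMS = {  # federation: the data stays where it lives, in separate systems
    "erp": {"dso_days": 43},
    "crm": {"churn_score": 0.18},
}

def query(term, role):
    source, fieldname, policy = CATALOG[term]       # resolve the business meaning
    if policy == "finance_only" and role != "finance":
        raise PermissionError(term + " requires a finance role")   # governance check
    return SYSTEMS[source][fieldname]               # fetch from the source system

print(query("churn_risk", role="analyst"))               # 0.18
print(query("days_sales_outstanding", role="finance"))   # 43

An agent going through a layer like this asks for a business term rather than a table and column, and the access policy travels with every request.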

Together, these elements create a foundation where AI interacts with business knowledge instead of raw storage systems — an essential step for moving from experimentation to real enterprise automation.

Beyond data isolation and dashboards

In the emerging era of agentic AI, the responsibility for monitoring, analyzing, and making decisions based on data increasingly shifts to software. AI agents can monitor events, trigger workflows, and make decisions in real time, often without direct human intervention. That speed creates new opportunities, but it also raises the stakes. When multiple agents operate across finance, supply chain, procurement, or customer operations, they must be guided by the same understanding of business priorities.

Without a common knowledge layer connecting disparate data together, coordination between systems quickly breaks down. One system might optimize for margin, another for liquidity, and another for compliance, each working from a different slice of data. 

Importantly, most enterprises already possess much of the knowledge needed to make this work, says Khan. Years of operational data, master data, workflows, and policy logic already exist across business applications — companies just need to make it accessible. Companies that deploy data fabrics gain greater trust in their data, with more than two-thirds of enterprises reporting improved data accessibility and visibility and greater control over their data. 

“The opportunity isn’t just inventing context from scratch, it’s activating and connecting the context across your business that already exists,” he continues, adding that a data fabric is the “architecture that ensures data semantics, business processes and policies are connected as a unified system across all the clouds.”

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

The Download: introducing the 10 Things That Matter in AI Right Now

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Introducing: 10 Things That Matter in AI Right Now

What actually matters in AI right now? It’s getting harder to tell amid the constant launches, hype, and warnings. To cut through the noise, MIT Technology Review’s reporters and editors have distilled years of analysis into a new essential guide: the 10 Things That Matter in AI Right Now.

The list builds on our annual 10 Breakthrough Technologies, but takes a wider view of the ideas, topics, and research shaping AI right now, spotlighting the trends and breakthroughs that matter most.

We’ll be unpacking one item from the list each day here in The Download, explaining what it means and why it matters. Read the full rundown now—and stay tuned for the days ahead.

MIT Technology Review Narrated: desalination plants in the Middle East are increasingly vulnerable

As the conflict in Iran has escalated, a crucial resource is under fire: the desalination technology that supplies water in the region.

President Donald Trump recently threatened to destroy “possibly all desalinization plants” in Iran if the Strait of Hormuz is not reopened. The impact on farming, industry, and—crucially—drinking in the Middle East could be severe. Find out why.

—Casey Crownhart

This is our latest story to be turned into an MIT Technology Review Narrated podcast, which we publish each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 An unauthorized group has reportedly accessed Anthropic’s Mythos
Users in a private online forum may have gained access. (Bloomberg $)
+ Anthropic said the model was too dangerous for a full release. (Axios)
+ Mozilla used it to find 271 security vulnerabilities in Firefox. (Wired $)

2 Meta will track workers’ clicks and keystrokes for AI training
Tracking software is being installed on workers’ computers. (Reuters $)
+ Employees are up in arms about the program. (Business Insider)
+ LLMs could supercharge mass surveillance in the US. (MIT Technology Review)

3 ChatGPT allegedly advised the Florida State shooter
About when and where to strike, and which ammunition to use. (Washington Post $)
+ Florida’s attorney general is probing ChatGPT’s role in the shooting. (Ars Technica)
+ Does AI cause delusions or just amplify them? (MIT Technology Review)

4 SpaceX has secured the option to buy AI startup Cursor for $60 billion
Or pay $10 billion for the work they’re doing together. (The Verge)
+ SpaceX made the deal as it prepares to go public. (NYT $)
+ Musk’s endgame for the company may be a land grab in space. (The Atlantic $)

5 The Pentagon wants $54 billion for drones
That would rank among the top 10 military budgets for entire nations. (Ars Technica)
+ Shoplifters could soon be chased down by drones. (MIT Technology Review)

6 Apple’s new chief hardware officer signals a sprint to build in-house chips
Apple silicon lead Johny Srouji has been promoted to the role. (CNBC)

7 China’s government is tightening its grip on AI firms that try to leave
It’s doing all it can to stop firms like Manus sending talent and research overseas. (Washington Post $)

8 The FBI is probing the deaths of scientists tied to sensitive research
Including a nuclear physicist and MIT professor shot outside his home. (CNN)

9 The US is accelerating research into psychedelic medical treatment
Including the mysterious ibogaine. (Nature)
+ But psychedelics are (still) falling short in clinical trials. (MIT Technology Review)

10 The first retail boutique run by an AI agent has opened—and it’s chaos
The San Francisco shop is reassuringly mismanaged. (NYT $)

Quote of the day

“I was very impressed with myself to have the head of Apple calling to ‘kiss my ass’.” 

—Donald Trump pays a classy tribute to Tim Cook on Truth Social.

One More Thing

JOHN F. MALTA


This researcher wants to replace your brain, little by little

A US agency pursuing moonshot health breakthroughs has hired a researcher advocating an extremely radical plan for defeating death. His idea? Replace your body parts. All of them. Even your brain. 

Jean Hébert, a program manager at the US Advanced Research Projects Agency for Health (ARPA-H), believes we can beat aging by adding youthful tissue to people’s brains. Read the full story on his futuristic plan to extend human life.


—Antonio Regalado

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.)

+ A Lego set was sent to the edge of space—and survived.
+ Go behind the scenes with Werner Herzog as he guides a new generation of filmmakers.
+ This video about enshittification perfectly captures the frustration of the degrading internet.
+ NASA’s latest deep-space capture offers a rare view of planetary systems in their absolute infancy.

New Ecommerce Tools: April 22, 2026

Every week we publish a list of new products and services for ecommerce merchants. This week’s rundown includes AI store builders, shipping labels, composable commerce, shoppable media, video avatars, marketing campaigns, and token-based checkout.

Got an ecommerce product release? Email updates@practicalecommerce.com.

New Tools for Merchants

Adobe introduces a brand visibility tool. Adobe has launched a tool for visibility on generative AI platforms. The company also announced an expansion of its Experience Manager with a contextual layer for AI agents, and the launch of CX Enterprise, an agentic AI system to simplify how businesses manage the customer lifecycle, from acquiring prospects to driving conversions and loyalty.

Home page of Adobe for Business

Adobe for Business

Australia Post offers in-store label printing for eBay sellers. Australia Post has launched a “Print in Store” capability that allows eBay sellers to generate shipping labels at participating post offices by scanning a QR code (available on eBay). The service follows Australia Post’s new small parcel option for items under 250 grams (0.55 pounds), piloted in collaboration with eBay.

Kaltura integrates conversational avatars into CMSs. Kaltura, a provider of agentic AI tools, has announced integrations for three content management systems: Adobe Experience Manager, WordPress, and Drupal. Organizations on those platforms can embed any Kaltura experience as a native component. Examples include agentic avatars that greet visitors, conversational search, and interactive videos.

Vaimo launches a smarter frontend for composable commerce. Vaimo, an ecommerce developer, has launched Nexus, a frontend orchestration platform for composable commerce. According to Vaimo, Nexus sits between frontend and backend systems, offering a composable architecture without large-scale rebuilds or prohibitive overhead. The platform includes an orchestration layer that aggregates data across systems, a performance-optimized frontend layer, and connectors for platforms such as Commercetools, Adobe Commerce, Contentful, and Sanity. Nexus is part of the Vaimo Accelerator program.

Home page of Vaimo

Vaimo

Klaviyo expands integration with Canva for marketing campaigns. Klaviyo, the email, text, and marketing platform, has announced an expanded integration with Canva. Marketers can bring their Canva designs into Klaviyo and then refine and personalize campaigns using segmentation, automation, and customer data.

Payabl. launches Click to Pay with Visa to help merchants reduce fraud. Financial technology provider Payabl. has launched Click to Pay with Visa, replacing manual card number entry with token-based checkouts. Customers can complete purchases in a few clicks once their card is enrolled. Visa Click to Pay is available through Payabl.checkout, enabling merchants to activate the service. The tool works across devices and supports existing security flows.

Shoplazza launches AI Store Builder. Shoplazza, a platform for direct-to-consumer brands, has launched AI Store Builder to simplify how merchants create, launch, and begin operating their stores. With Store Builder, merchants can generate a storefront by describing their products, brand positioning, and target markets through natural language. The system automatically builds key components, including home page layouts, product pages, collection structures, and policy content, reducing reliance on manual configuration and technical resources.

Home page of Shoplazza

Shoplazza

SiteGround launches ecommerce platform for small businesses. SiteGround, a platform helping small businesses build, host, and grow online, has launched SiteGround Ecommerce for new sellers, providing tools for payments, shipping, taxes, orders, and inventory. Additional features, per SiteGround, include built-in marketing to help store owners get found and convert visitors.

BlueSwitch launches Unified Commerce Suite. BlueSwitch, a Shopify Platinum Partner, has launched Unified Commerce Suite, a curated collection of production-ready Shopify modules to solve operational challenges faced by B2B merchants. Unified Commerce Suite addresses gaps across pricing, ordering, onboarding, fulfillment, shipping, and account workflows, according to BlueSwitch.

Wayvia launches shoppable media platform. Wayvia, an omnicommerce platform, has launched Shoppable Next Generation, an AI update of its shoppable media tool. Next Generation enables brands to connect marketing touchpoints, including display ads, social media posts, and email campaigns. The update provides customizable landing-page templates for conversion, basket building, and awareness. Each maps to a campaign objective in Meta Ads Manager.

Home page of Wayvia

Wayvia

UPS launches RFID sensing technology to eliminate manual scanning. UPS is rolling out radio frequency identification sensing across its U.S. small package network. The technology is in all UPS package delivery vehicles and facilities, and on every package shipped through UPS stores. RFID sensing automatically confirms that packages are picked up and in UPS’s possession.

Parsnipp launches AI search and GEO platform. Parsnipp has launched a generative engine optimization platform. The tool helps brands understand how they appear in AI-driven discovery and search by identifying specific signals, misconfigurations, and content gaps. Features include brand analytics across LLMs, competitor tracking, GEO content, search personas, and AI readiness recommendations.

Introducing Stable Commerce, an AI-powered ecommerce platform. Stable Commerce, a new AI-powered ecommerce platform, enables merchants to create and manage online stores through a chat-based AI interface. Merchants describe in plain language what they want to sell. The AI agent then generates a storefront that includes page design, product pages with descriptions and images, checkout integration, and shipping setup. Merchants can use the same conversational interface to change layout, products, or content.

Gr4vy launches toolkit for ChatGPT payments. Gr4vy, a payment orchestration platform, now supports agentic transactions, allowing merchants to manage and process payments in AI-driven environments. Gr4vy is also launching its Agentic Development Kit to equip and guide merchants in building and launching AI-native storefronts on ChatGPT and other genAI platforms. According to Gr4vy, merchants can launch AI-native storefronts, orchestrate AI transactions in real time, and maintain control over performance, security, and the customer experience.

Home page of Gr4vy

Gr4vy

Google Ads Posts GEO Partner Manager Role via @sejournal, @MattGSouthern

Google’s Large Customer Sales team has posted a role titled “GEO Partner Manager, Performance Solutions” on Google Careers. The listing is a single job posting inside Google’s ads sales organization.

The term “GEO” appears seven times across the listing, including the title. “Generative Engine Optimization” is spelled out twice. Other references include “GEO players,” “GEO ecosystem,” and “GEO/AEO companies.”

The listing says the role will “shape the GEO ecosystem to prioritize Google surfaces.” Responsibilities include influencing partners to prioritize Google-owned surfaces in their tools and methodologies, as well as in “Share of Model” analysis. “Share of Model” is an industry term for a brand’s presence in AI-generated answers.

Why This Matters

The terminology is worth noting because it sits alongside a different public position from Google’s search side. In July, Google’s Gary Illyes said standard SEO is sufficient for AI Overviews and AI Mode, and that specialized AEO or GEO optimization is not needed. As of publication, Google has not publicly updated that guidance.

Large Customer Sales manages relationships with major advertisers and agencies. The role’s alignment with the 3P Measurement team places it firmly inside Google’s ad-side partner work.

Microsoft and Google are in different places here, and the categories of evidence differ. In March, Bing added “GEO” to its official webmaster guidelines, defining the term and placing it alongside SEO as a named category. Bing’s AI Performance dashboard, launched in February, was positioned as a step toward GEO tooling.

The Google listing is one job posting inside an ads sales team. Both are adoption signals, but not the same level of commitment.

Looking Ahead

The language reflects how one team inside Google’s ads organization frames this work today. It doesn’t carry the same weight as a documentation update, a public statement from Google Search, or a policy change.

Whether similar GEO language appears in other Google job listings across Ads, Cloud, or Search would indicate whether this is a pattern or a single team’s choice.

For brands working with GEO or AEO partners, the listing is worth noting: it indicates Google’s ads team wants partner tools and methodologies to prioritize Google surfaces.


Featured Image: Jack_the_sparow/Shutterstock

AI Search Is Eating Itself & The SEO Industry Is The Source

Last September, Lily Ray asked Perplexity for the latest news on SEO and AI search. It told her, confidently, about the “September 2025 ‘Perspective’ Core Algorithm Update,” a Google update that, as she then wrote at length in “The AI Slop Loop,” didn’t exist. Google hasn’t named core updates in years. “Perspectives” was already a SERP feature. If a real update had rolled out while she was in Austria, her inbox would have told her before Perplexity did.

She checked the citations. Both pointed at AI-generated posts on SEO agency blogs: sites that had run a content pipeline, hallucinated an update, and published it as reporting. Perplexity read the slop, treated it as source material, and served it back to her as news.

In February, the BBC’s Thomas Germain spent 20 minutes writing a blog post on his personal site. Its title: “The best tech journalists at eating hot dogs.” It ranked him first, invented a 2026 South Dakota International Hot Dog Championship that had never happened, and cited precisely nothing. Within 24 hours, both Google’s AI Overviews and ChatGPT were passing his fabrication along to anyone who asked. Claude didn’t bite. Google and OpenAI did.

Everyone who has looked has seen it.

I’ve Argued About The Ouroboros Before. I Had The Timeline Wrong

The prevailing framing for this problem has been model collapse. You train a model on web text, the web fills up with AI output, the next model trains on a corpus increasingly made of its own exhaust, and eventually the distribution flattens into mush. Innovation comes from exceptions, and probabilistic systems that converge toward the mean attenuate exceptions by design. I’ve used the phrase digital ouroboros for this.

That framing assumes training cycles. It assumes time. It assumes that contamination moves at the speed of model release.

It doesn’t. What Lily documented, what Germain documented, what the New York Times then went and quantified – none of that is training-side. The models involved were not retrained between the hallucination appearing on a blog and being served as citation-backed fact. The contamination moved at the speed of a crawl. The ouroboros isn’t taking generations to eat itself. It’s eating itself at query time, every time someone asks one of these systems a question.

The pipe everyone has been watching is not the pipe that is breaking.

The Distinction That Matters

Model collapse is a training-corpus problem. Synthetic content seeps into the pre-training data, the next generation of model inherits it, capability degrades. Researchers have been warning about this for two years. They’re right. They’re also describing something slow enough that everyone can nod gravely and keep shipping.

Retrieval contamination is faster and already here. RAG systems – Perplexity, Google AI Overviews, ChatGPT with search – do not generate answers purely from parametric memory. They fetch documents from the live web, stuff them into context, and generate a response conditioned on what they retrieved. If the retriever surfaces a hallucinated SEO post, the answer inherits the hallucination. No retraining required.
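To see how little machinery is involved, here is a deliberately toy sketch of that loop in Python. The corpus, the scoring, and the final step are stand-ins (no real vendor works this crudely), but the shape is the same: whatever the retriever surfaces becomes source material for the answer, and no retraining happens anywhere in the loop.

CORPUS = {
    "reputable-news.example/core-updates":
        "Google's most recent confirmed core update finished rolling out in March.",
    "seo-agency.example/perspective-update":
        "Google's September 2025 'Perspective' Core Algorithm Update rewards perspective content.",
}

def retrieve(query, corpus, k=2):
    # Naive lexical retrieval: rank documents by word overlap with the query.
    q = set(query.lower().split())
    scored = sorted(corpus.items(),
                    key=lambda item: -len(q & set(item[1].lower().split())))
    return scored[:k]

def build_prompt(query, corpus):
    # Whatever gets retrieved, hallucinated SEO post included, is stuffed
    # into the context the model is told to trust.
    docs = retrieve(query, corpus)
    context = "\n".join("[" + url + "] " + text for url, text in docs)
    return "Answer using only the sources below.\n" + context + "\n\nQ: " + query

print(build_prompt("latest news on the September 2025 Perspective core algorithm update", CORPUS))

Run against this two-document web, the hallucinated agency post scores highest for the query that mentions it, and the generation step downstream can only be as good as that context.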

The academic literature on this is clear. PoisonedRAG (Zou et al., 2024) showed that injecting a small number of crafted passages into a retrieval corpus was sufficient to control the output of a RAG system on targeted queries. BadRAG (Xue et al., 2024) demonstrated the same class of attack using semantic backdoors. Both papers treat this as an adversarial problem: what happens when an attacker deliberately poisons the corpus.

What Germain and Lily accidentally proved is that the adversarial model is the normal operating model. You don’t need a crafted adversarial passage. You need a blog post. The open web is the corpus, and anyone with a domain can write to it.

The Oumi analysis commissioned by the New York Times put numbers on what this costs. Across 4,326 SimpleQA tests, Google’s AI Overviews answered correctly 85% of the time on Gemini 2, 91% on Gemini 3. At Google’s scale – more than five trillion searches a year – a 9% error rate still translates to tens of millions of wrong answers every hour. But the more revealing figure is this: on Gemini 3, 56% of the correct answers were ungrounded, up from 37% on Gemini 2. The upgrade improved surface accuracy and made the citations worse. When the model got something right, more than half the time, the source it pointed to didn’t support the claim.
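The back-of-envelope arithmetic uses only the figures quoted above, and it treats every search as if it triggered an AI Overview, which it doesn’t, so read it as an illustration of scale rather than a measurement:

searches_per_year = 5_000_000_000_000      # "more than five trillion searches a year"
error_rate = 1 - 0.91                      # Gemini 3: 91% correct on SimpleQA
hours_per_year = 365 * 24

wrong_per_hour = searches_per_year * error_rate / hours_per_year
print(round(wrong_per_hour / 1_000_000, 1), "million wrong answers per hour")   # about 51.4

Even if only a fraction of searches surface an AI Overview, the order of magnitude is the point.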

The retrieval layer is not a filter. It is the infection vector.

Who’s Seeding The Corpus

The industry that has most enthusiastically produced it – and then most enthusiastically written about the consequences of consuming it – is the SEO industry. I’ve written before about content scaling being just content spinning with better grammar, and about the AI visibility tool complex that builds dashboards from the output of non-deterministic systems. This is the same loop, one layer deeper. An SEO agency runs an AI content pipeline because AI Overviews have cut their clients’ traffic. The pipeline publishes speculative “winners and losers” posts during a core update that’s still rolling out, citing nothing. Another agency’s pipeline picks those up as sources. The output floods into the retrieval index. AI Overviews cites one of them. The original agency then writes a case study about how AI Overviews are “surfacing” their content.

An Ahrefs study of over 26,000 ChatGPT source URLs found that “best X” listicles accounted for nearly 44% of all cited page types, including cases where brands rank themselves first against their competitors. Harpreet Chatha told the BBC you can publish “the best waterproof shoes for 2026,” put yourself first, and be cited in AI Overviews and ChatGPT within days. Lily, during the actual March 2026 core update, found AI-generated articles claiming to list winners and losers while the update was still rolling out; articles that opened with filler and listed brands without a single real citation.

The practitioners scaling AI content are also the ones most directly harmed when AI search systems cite that content as fact. Nobody forced this. The industry built the pipeline, fed it, and complained about what came out the other end. Not adversarial poisoning. Just the industry polluting its own water supply and then hiring consultants to test it.

The Tier That Matters

The Oumi study is about AI Overviews, which is free by design. Google AI Overviews reportedly reached over two billion monthly active users by mid-2025. ChatGPT has around 900 million weekly active users, of which roughly 50 million pay. Meaning about 94% of the people interacting with OpenAI’s product are on the free tier.

The paid tiers are better. Per OpenAI’s own launch claims, cited in Lily’s piece, GPT-5.4 is 33% less likely to produce false individual claims than GPT-5.2. The free-tier GPT-5.3 is also improved over its predecessor (26.8% fewer hallucinations with web search, 19.7% fewer without), but it’s still meaningfully less reliable than the paywalled version. Gemini 3, which made AI Overviews more accurate on surface tests, also made the ungrounded rate worse. Better answer, weaker citation.

Nobody seems to mind. The reliable version of the product is paywalled. The version most of the planet gets – including the version at the top of Google Search – can be manipulated by 20 minutes of work on a personal website. Intelligence is the marketing category. What two billion users actually receive is a confident summarization of whatever the crawler happened to find.

Grokipedia As The Terminal State

The accidents of the retrieval layer are one thing. Grokipedia is the version where accident is no longer a useful word.

Elon Musk’s xAI launched Grokipedia on Oct. 27, 2025, with 885,279 articles, all generated or rewritten by Grok. Some of them were lifted from Wikipedia wholesale, with a disclaimer at the bottom acknowledging the CC-BY-SA license; a license Wikipedia maintains precisely because a community of human editors writes and verifies the content. Others were rewritten from scratch. PolitiFact found Grokipedia citing sources, including Instagram reels, that Wikipedia’s own policies rule out as “generally unacceptable.” Grokipedia’s entry on Canadian singer Feist said her father died in May 2021, citing a 2017 Vice article about Canadian indie rock that made no mention of the death; her father was still alive when that article was written. The Nobel Prize in Physics entry added an uncited sentence claiming physics is traditionally the first prize awarded at the ceremony, which isn’t true.

Musk said the goal is to “research the rest of the internet, whatever is publicly available, and correct the Wikipedia article.” The rest of the internet now includes the synthetic content produced by every AI content pipeline pointed at it. An AI system reading the open web, rewriting Wikipedia based on what it finds, and presenting the result as a reference work is the retrieval-contamination problem with the feedback loop made explicit and shipped as a product.

By mid-February 2026, Grokipedia had lost most of its Google visibility. Wikipedia outranks Grokipedia for searches about Grokipedia itself.

“This human-created knowledge is what AI companies rely on to generate content; even Grokipedia needs Wikipedia to exist.” – The Wikimedia Foundation

The synthetic encyclopedia is subsidized by the human one. When the subsidy stops, the thing depending on it stops making sense.

Wikipedia is not beyond criticism. Its edit wars, ideological gatekeeping, and systemic gaps in who gets to shape articles are well-documented and real. But the response to a flawed human editorial process is not to remove the humans entirely and call the result an improvement. I’ve written before about the accountability vacuum that opens when you replace human judgment with API calls. Wikipedia’s problems are the problems of a messy, contested, accountable system. Grokipedia’s problems are the problems of a system with no accountability at all.

The Citation Layer Is Decoupling From Authorship

I wrote recently about Reddit selling “Authentic Human Conversation™” to AI companies while the platform’s own moderators report that they can no longer tell which comments are human. The Oumi study found that of 5,380 sources cited by AI Overviews, Facebook and Reddit were the second and fourth most common. The citation layer of the most-used answer engine in the world is substantially built on two platforms that cannot verify the human origin of their own content.

Human creators are pulling out of the open web because the traffic bargain has collapsed. Answer engines are citing content whose authorship cannot be verified, or was never human to begin with. The citation is still there. The thing being cited is not what it used to be.

The ouroboros framing was right. The timeline wasn’t. Retrieval collapse doesn’t wait for the next training run. It needs an indexable URL and a retrieval system willing to trust it.

The systems are willing. And more than half the time they get an answer right, they can’t point to a source that supports what they just told you.

This post was originally published on The Inference.


Featured Image: Anton Vierietin/Shutterstock

Does AI Actually Reward Quality Content?

For well over a decade, SEOs and marketers have debated the importance of high-quality, original content. After just about every major update, the message from Google was clear: If you want to rank, cut it out with the derivative listicles and other quick-churn assets that are big on keywords and light on substance.

More recently, our current understanding of how LLMs select which sources to cite in responses has SEOs and content marketers championing high-quality, original, and in-depth content with renewed fervor. If you want AI to identify your content as the best source with which to answer a user’s query, logically, it must be among the best online content available on the topic.

While that’s all great in theory, I’m sure many of you reading this have experienced that crushing disappointment of publishing a piece only to watch it sink like a stone with barely a ripple. Somehow, your magnum opus languishes on page 4 of the relevant search results, outranked by content that, in your humble opinion, isn’t that remarkable.

Can we really call something high quality if it doesn’t achieve the strategic outcome that led us to create it?

Even when our content succeeds, there’s still the nagging worry that we might perhaps be investing too much time and money trying to achieve content perfection. Did that white paper really need to be 10 pages? Or would a simpler, five-page version have done just as well?

Might it be possible to achieve the same results with a little less quality? How do we find the sweet spot? In short, what’s the minimum viable product?

I’m not going to pretend to have the answer. And that’s because the question isn’t clear on what we mean by quality content.

A Question Of Quality

I’m as guilty as anyone of writing about the need for high-quality content as if it’s obvious what it is and how to achieve it without any further explanation. It’s a form of industry shorthand that has become increasingly meaningless through overuse.

Ask 10 CMOs, SEOs, and content marketers to define what they mean by high-quality content, and you’ll probably get 15 different answers.

Is “quality” determined by thought leadership and subject matter expertise? Or can a few average thoughts be elevated to high quality with skilled writing, a strong layout, and some clever design work?

Is “depth” characterized by longer word counts and more detailed research? Or is it really about demonstrating a superior understanding of a topic by exploring more nuanced or highfalutin’ ideas? Never mind the graphs, can you somehow weave in some Ancient Greek philosophy to get the point across?

And how much originality adds up to “original”? If you reference someone else’s work, are you somehow detracting from your own originality score?

While I can’t confidently give you a single, unambiguous definition of what high quality is, I can tell you what it isn’t: While it may be important, high-quality content is no silver bullet.

Just because your content is meticulously researched and extremely well executed doesn’t mean it’s somehow entitled to high rankings.

Does Original Content Actually Perform Better?

I tasked my team with conducting some qualitative research to answer the question: Does original content perform better than repurposed, unoriginal content, in both traditional search and AI-generated responses?

Of course, the internet is a big place (who knew?). So, for the purposes of this study, we restricted the definition of “search” to Google’s search results and to citations within AI platforms Gemini, ChatGPT, and Perplexity.

Similarly, because you’ve got to compare apples with apples, the team focused on popular search queries in the B2B SaaS and professional services space; mid-funnel, informational queries like “marketing automation tools” and “email deliverability tools.”

The team then identified and analyzed the top-ranking URLs for each query before assigning each one a score from 0 to 3 in five different categories.

  • Primary contribution.
  • Structural novelty.
  • Interpretive depth.
  • Source dependence.
  • Contextual insight.

With a maximum total score of 15, each page was then classified as follows:

  • 12-15: Group A (Original).
  • 7-11: Borderline (Excluded).
  • 0-6: Group B (Repurposed).

When the data came back, it appeared at first glance that URLs with higher originality scores (Group A) do tend to rank more consistently in Google and appear more frequently in AI responses than repurposed or derivative content (Group B).

However, before all the content marketers scream “I told you so” at anyone in earshot, you might want to read this next bit first.

Data analysts are notoriously skeptical of knee-jerk first glance conclusions (again, who knew?). The team crunched the data further, using data sciency techniques involving far more Greek letters than I’m used to seeing. They concluded that, while the correlation exists, it’s weak. Strong performance in one part of the dataset doesn’t reliably predict strong performance elsewhere in the dataset. The relationship simply isn’t consistent enough to say with any confidence that highly original content performs better every time.

Even so, while the correlation may be weak, it doesn’t appear to be entirely random. Looking at the overall averages, stripped of extreme cases that might skew the results, we did detect a pattern.

For example, original content appeared to perform better in relation to queries requiring interpretation or judgment, such as “benefits of marketing automation” or “email marketing best practices.” But that relationship virtually disappeared for more straightforward requests for information like “what is marketing automation.”

This makes sense. When the answer is factual, being original matters less than being accurate. When the answer requires perspective or judgment, originality becomes more valuable.

So, where does that leave us? We can’t confidently prove that original content always outperforms repurposed content. On the other hand, we can rule out the idea that originality has no impact at all. Therefore, what we can say is that original insight helps in some contexts, for some query types. It just isn’t a guaranteed lever you can pull for predictable results.

When Mediocre Content Has The Edge

Back in the 2010s, the API industry was booming. And that meant lots of content being published on every aspect of how APIs function. At the very least, a software company would need to publish detailed documentation for each of its APIs, from technical specifications and structures to implementation guides and walkthroughs.

This created a problem for one of our clients, a small startup of 10 people: How could they compete for visibility in search, let alone attract positive attention, when the entire conversation around APIs appeared to be dominated by industry giants? The competitors already had massive online footprints, larger content budgets, established domain authority, and significantly more comprehensive resources. How could we ever outrank them?

Conventional wisdom might have seen us attempt to fight quantity with quality by creating the best possible online resource on the topic of APIs. If we could publish content that goes far deeper and offers more value than the competition, we might gradually earn trust and authority through original, detailed research and thought leadership.

With enough budget and a long-term commitment, you could definitely build a strategy around such an approach. Except, of course, we would have needed both quality and quantity to have any chance of overtaking their competitors.

Trying to compete for visibility in every relevant subtopic and keyword on the subject of APIs would mean fighting on way too many fronts at once. How could we find an original angle on a topic that’s already well served online? How could we talk about APIs in a way that would differentiate their software from everyone else’s?

Short answer: We couldn’t. So, we flipped the problem. What if, instead of being last to join the race for the most relevant keywords today, we could be first out of the blocks in the race for whichever keyword might become relevant tomorrow?

I sent out a survey to the relevant audience, asking a bunch of typical users what search terms they would use in certain scenarios. The results revealed a plethora of short- and long-term keywords, but when we looked for any common themes, two words stood out. One was “API,” naturally. The other was “design.”

“API design” hadn’t cropped up in our initial keyword research as a potential opportunity. But as the search volume for “API design” was practically zero, that’s hardly surprising. Yet we now had clear evidence that, as the industry matured, so too would the search terms people used.

And because very few currently search for “API design,” none of the competitors appeared to be targeting the keyword or publishing content on the topic at all.

This was our window of opportunity. Never mind original content: We had an original keyword, an entire topic niche, to ourselves.

However, we also knew the value of that keyword would evaporate overnight if one or more competitors got there before us.

Forget spending six months developing an award-winning whitepaper series. We didn’t need perfection – with all the time, expense, and effort that entails – because we were staring at the SEO equivalent of an open goal.

In just a few days, we threw together a simple landing page focused on API design. It wasn’t exceptional. At only about 1,500 words, it wasn’t comprehensive. As content goes, it was pretty mediocre. But that’s all it took.

About 12 months later, just as predicted, the search volume materialized. Our single modest page continued to outrank every major competitor, even when they started chasing that new search volume with their own landing pages and content hubs.

Within two years, the keyword “API design” was worth approximately £200 per click. But our client didn’t need to pay for clicks. In effect, we won the space before anyone else even realized there was a space worth winning.

Perfection Is The Enemy Of Good

Striving to achieve the best possible iteration of your content, endlessly refining and polishing and second-guessing every detail, can get in the way of just getting it out there. Sometimes, good enough really is good enough.

I’m not arguing that we should stop striving for excellence in our content. As I hope our little study demonstrated, there are situations where well-researched, original content can give you an advantage. And, of course, success doesn’t end with rankings, citations, and clicks. Once they land on your content, you still want visitors to be wowed, persuaded, and motivated into action.

But like so many things in life, success depends on timing at least as much as it does on quality or originality. In a way, that’s what originality is all about; not necessarily being best but being first.

The API design landing page didn’t succeed because it was mediocre. It succeeded because we got there first. Quality mattered, but not in the way most content strategies define it.

This matters even more in AI search. LLMs can curate ideas and summarize information, but they can’t have original thoughts, provide firsthand experiences, or offer up fresh perspectives (as of now). While there are no guarantees, as our limited research shows, in AI at least, being the original source has influence.

Start asking what your content can say that hasn’t already been said, and then say it before someone else does.

Featured Image: ImageFlow/Shutterstock