From integration chaos to digital clarity: Nutrien Ag Solutions’ post-acquisition reset

Thank you for joining us on the “Enterprise AI hub.”

In this episode of the Infosys Knowledge Institute Podcast, Dylan Cosper speaks with Sriram Kalyan, head of applications and data at Nutrien Ag Solutions, Australia, about turning a high-risk post-acquisition IT landscape into a scalable digital foundation. Sriram shares how the merger of two major Australian agricultural companies created duplicated systems, fragile integrations, and operational risk, compounded by the sudden loss of key platform experts and partners. He explains how leadership alignment, disciplined platform consolidation, and a clear focus on business outcomes transformed integration from an invisible liability into a strategic enabler, positioning Nutrien Ag Solutions for future growth, cloud transformation, and enterprise scale.


What it takes to make agentic AI work in retail

Thank you for joining us on the “Enterprise AI hub.”

In this episode of the Infosys Knowledge Institute Podcast, Dylan Cosper speaks with Prasad Banala, director of software engineering at a large US-based retail organization, about operationalizing agentic AI across the software development lifecycle. Prasad explains how his team applies AI to validate requirements, generate and analyze test cases, and accelerate issue resolution, while maintaining strict governance, human-in-the-loop review, and measurable quality outcomes.


How uncrewed narco subs could transform the Colombian drug trade

On a bright morning last April, a surveillance plane operated by the Colombian military spotted a 40-foot-long shark-like silhouette idling in the ocean just off Tayrona National Park. It was, unmistakably, a “narco sub,” a stealthy fiberglass vessel that sails with its hull almost entirely underwater, used by drug cartels to move cocaine north. The plane’s crew radioed it in, and eventually nearby coast guard boats got the order, routine but urgent: Intercept.

In Cartagena, about 150 miles from the action, Captain Jaime González Zamudio, commander of the regional coast guard group, sat down at his desk to watch what happened next. On his computer monitor, icons representing his patrol boats raced toward the sub’s coordinates as updates crackled over his radio from the crews at sea. This was all standard; Colombia is the world’s largest producer of cocaine, and its navy has been seizing narco subs for decades. And so the captain was pretty sure what the outcome would be. His crew would catch up to the sub, just a bit of it showing above the water’s surface. They’d bring it to heel, board it, and force open the hatch to find two, three, maybe four exhausted men suffocating in a mix of diesel fumes and humidity, and a cargo compartment holding several tons of cocaine.

The boats caught up to the sub. A crew boarded, forced open the hatch, and confirmed that the vessel was secure. But from that point on, things were different.

First, some unexpected details came over the radio: There was no cocaine on board. Neither was there a crew, nor a helm, nor even enough room for a person to lie down. Instead, inside the hull the crew found a fuel tank, an autopilot system and control electronics, and a remotely monitored security camera. González Zamudio’s crew started sending pictures back to Cartagena: Bolted to the hull was another camera, as well as two plastic rectangles, each about the size of a cookie sheet—antennas for connecting to Starlink satellite internet.

The authorities towed the boat back to Cartagena, where military techs took a closer look. Weeks later, they came to an unsettling conclusion: This was Colombia’s first confirmed uncrewed narco sub. It could be operated by remote control, but it was also capable of some degree of autonomous travel. The techs concluded that the sub was likely a prototype built by the Clan del Golfo, a powerful criminal group that operates along the Caribbean coast.

For decades, handmade narco subs have been some of the cocaine trade’s most elusive and productive workhorses, ferrying multi-ton loads of illicit drugs from Colombian estuaries toward markets in North America and, increasingly, the rest of the world. Now off-the-shelf technology—Starlink terminals, plug-and-play nautical autopilots, high-resolution video cameras—may be advancing that cat-and-mouse game into a new phase.

Uncrewed subs could move more cocaine over longer distances, and they wouldn’t put human smugglers at risk of capture. Law enforcement around the world is just beginning to grapple with what the Tayrona sub means for the future—whether it was merely an isolated experiment or the opening move in a new era of autonomous drug smuggling at sea.


Drug traffickers love the ocean. “You can move drug traffic through legal and illegal routes,” says Juan Pablo Serrano, a captain in the Colombian navy and head of the operational coordination center for Orión, a multiagency, multinational counternarcotics effort. The giant container ships at the heart of global commerce offer a favorite approach, Serrano says. Bribe a chain of dockworkers and inspectors, hide a load in one of thousands of cargo boxes, and put it on a totally legal commercial vessel headed to Europe or North America. That route is slow and expensive—involving months of transit and bribes spread across a wide network—but relatively low risk. “A ship can carry 5,000 containers. Good luck finding the right one,” he says.

Far less legal, but much faster and cheaper, are small, powerful motorboats. Quick to build and cheap to crew, these “go-fasts” top out at just under 50 feet long and can move smaller loads in hours rather than days. But they’re also easy for coastal radars and patrols to spot.

Submersibles—or, more accurately, “semisubmersibles”—fit somewhere in the middle. They take more money and engineering to build than an open speedboat, but they buy stealth—even if a bit of the vessel rides at the surface, the bulk stays hidden underwater. That adds another option to a portfolio that smugglers constantly rebalance across three variables: risk, time, and cost. When US and Colombian authorities tightened control over air routes and commercial shipping in the early 1990s, subs became more attractive. The first ones were crude wooden hulls with a fiberglass shell and extra fuel tanks, cobbled together in mangrove estuaries, hidden from prying eyes. Today’s fiberglass semisubmersible designs ride mostly below the surface, relying on diesel engines that can push multi-ton loads for days at a time while presenting little more than a ripple and a hot exhaust pipe to radar and infrared sensors.


Most ferry between South American coasts and handoff points in Central America and Mexico, where allied criminal organizations break up the cargo and slowly funnel it toward the US. But some now go much farther. In 2019, Spanish authorities intercepted a semisubmersible after a 27-day transatlantic voyage from Brazil. In 2024, police in the Solomon Islands found the first narco sub in the Asia-Pacific region, a semisubmersible probably originating from Colombia on its way to Australia or New Zealand.

If the variables are risk, time, and cost, then the economics of a narco sub are simple. Even if they spend more time on the water than a powerboat, they’re less likely to get caught—and a relative bargain to produce. A narco sub might cost between $1 million and $2 million to build, but a kilo of cocaine costs just about $500 to make. “By the time that kilo reaches Europe, it can sell for between $44,000 and $55,000,” Serrano says. A typical semisubmersible carries up to three metric tons—cargo worth well over $160 million at European wholesale prices.
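The back-of-the-envelope arithmetic above can be sketched in a few lines of Python. The figures are the article's estimates, not exact market data:

```python
# Rough economics of a single semisubmersible run, using the
# article's estimated figures (all values are approximate, in USD).
BUILD_COST = 2_000_000        # upper estimate to build the vessel
CARGO_KG = 3_000              # ~3 metric tons of cocaine
COST_PER_KG = 500             # production cost per kilo

def wholesale_value(cargo_kg: int, price_per_kg: int) -> int:
    """Total wholesale value of the load at the destination market."""
    return cargo_kg * price_per_kg

low = wholesale_value(CARGO_KG, 44_000)   # low European wholesale price
high = wholesale_value(CARGO_KG, 55_000)  # high European wholesale price

# Even after building the sub and producing the cargo, the margin is enormous.
profit_low = low - BUILD_COST - CARGO_KG * COST_PER_KG
print(low, high, profit_low)  # 132000000 165000000 128500000
```

By these numbers, even at the low end of the price range a single successful run returns more than sixty times the cost of the vessel.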


Off-the-shelf nautical autopilots, WiFi antennas, Starlink satellite internet connections, and remote cameras are all drug smugglers need to turn semisubmersibles into drone ships.

As a result, narco subs are getting more common. Seizures by authorities tripled in the last 20 years, according to Colombia’s International Center for Research and Analysis Against Maritime Drug Trafficking (CMCON), and Serrano admits that the Orión alliance has enough ships and aircraft to catch only a fraction of what sails.

Until now, though, narco subs have had one major flaw: They depended on people, usually poor fishermen or low-level recruits sealed into stifling compartments for days at a time, steering by GPS and sight, hoping not to be spotted. That made the subs expensive to operate and a liability if captured. Like good capitalists, the Tayrona boat’s builders seem to have been trying to eliminate labor costs with automation. No crew means more room for drugs or fuel and no sailors to pay—or to get arrested or flip if a mission goes wrong.

“If you don’t have a person or people on board, that makes the transoceanic routes much more feasible,” says Henry Shuldiner, a researcher at InSight Crime who has analyzed hundreds of narco-sub cases. It’s one thing, he notes, to persuade someone to spend a day or two going from Colombia to Panama for a big payout; it’s another to ask four people to spend three weeks sealed inside a cramped tube, sleeping, eating, and relieving themselves in the same space. “That’s a hard sell,” Shuldiner says.

An uncrewed sub doesn’t have to race to a rendezvous before a human crew’s endurance gives out after a few days inside. It can move more slowly and stealthily. It can wait out patrols or bad weather, loiter near a meeting point, or take longer and less well-monitored routes. And if something goes wrong—if a military plane appears or navigation fails—its owners can simply scuttle the vessel from afar.

Meanwhile, the basic technology to make all that work is getting more and more affordable, and the potential profit margins are rising. “The rapidly approaching universality of autonomous technology could be a nightmare for the U.S. Coast Guard,” wrote two Coast Guard officers in the US Naval Institute’s journal Proceedings in 2021. And as if to prove how good an idea drone narco subs are, the US Marine Corps and the weapons builder Leidos are testing a low-profile uncrewed vessel called the Sea Specter, which they describe as being “inspired” by narco-sub design.

The possibility that drug smugglers are experimenting with autonomous subs isn’t just theoretical. Law enforcement agencies on other smuggling routes have found signs the Tayrona sub isn’t an isolated case. In 2022, Spanish police seized three small submersible drones near Cádiz, on Spain’s southern coast. Two years later, Italian authorities confiscated a remote-­controlled minisubmarine they believed was intended for drug runs. “The probability of expansion is high,” says Diego Cánovas, a port and maritime security expert in Spain. Tayrona, the biggest and most technologically advanced uncrewed narco sub found so far, is more likely a preview than an anomaly.


Today, the Tayrona semisubmersible sits on a strip of grass at the ARC Bolívar naval base in Cartagena. It’s exposed to the elements; rain has streaked its paint. To one side lies an older, bulkier narco sub seized a decade ago, a blue cylinder with a clumsy profile. The Tayrona’s hull looks lower, leaner, and more refined.

Up close, it is also unmistakably handmade. The hull is a dull gray-blue, the fiberglass rough in places, with scrapes and dents from the tow that brought it into port. It has no identifying marks on the exterior—nothing that would tie it to a country, a company, or a port. On the upper surface sit the two Starlink antennas, painted over in the same gray-blue to keep them from standing out against the sea.

I climb up a ladder and drop through the small hatch near the stern. Inside, the air is damp and close, the walls beaded with condensation. Small puddles of fuel have collected in the bilge. The vessel has no seating, no helm or steering wheel, and not enough space to stand up straight or lie down. It’s clear it was never meant to carry people. A technical report by CMCON found that the sub would have enough fuel for a journey of some 800 nautical miles, and the central cargo bay would hold between 1 and 1.5 tons of cocaine.

At the aft end, the machinery compartment is a tangle of hardware: diesel engine, batteries, pumps, and a chaotic bundle of cables feeding an electronics rack. All the core components are still there. Inside that rack, investigators identified a NAC-3 autopilot processor, a commercial unit designed to steer midsize boats by tying into standard hydraulic pumps, heading sensors, and rudder-­feedback systems. The unit costs about $2,200 on Amazon.

“These are plug-and-play technologies,” says Wilmar Martínez, a mechatronics professor at the University of America in Bogotá, when I show him pictures of the inside of the sub. “Midcareer mechatronics students could install them.”


For all its advantages, an autonomous drug-smuggling submarine wouldn’t be invincible. Even without a crew on board, there are still people in the chain. Every satellite internet terminal—Starlink or not—comes with a billing address, a payment method, and a log of where and when it pings the constellation. Colombian officers have begun to talk about negotiating formal agreements with providers, asking them to alert authorities when a transceiver’s movements match known smuggling patterns. Brazil’s government has already cut a deal with Starlink to curb criminal use of its service in the Amazon.

The basic playbook for finding a drone sub will look much like the one for crewed semisubmersibles. Aircraft and ships will use radar to pick out small anomalies and infrared cameras to look for the heat of a diesel engine or the turbulence of a wake. That said, it might not work. “If they wind up being smaller, they’re going to be darn near impossible to detect,” says Michael Knickerbocker, a former US Navy officer who advises defense tech firms.


Even worse, navies already act on only a fraction of their intelligence leads because they don’t have enough ships and aircraft. The answer, Knickerbocker argues, is “robot on robot.” Navies and coast guards will need swarms of their own small, relatively cheap uncrewed systems—surface vessels, underwater gliders, and long-endurance aerial vehicles that can loiter, sense, and relay data back to human operators. Those experiments have already begun. The US 4th Fleet, which covers Latin America and the Caribbean, is experimenting with uncrewed platforms in counternarcotics patrols. Across the Atlantic, the European Union’s European Maritime Safety Agency operates drones for maritime surveillance.

Today, though, the main screening tools against oceangoing vessels of all kinds are coastal radar networks. Spain operates SIVE to watch over choke points like the Strait of Gibraltar, and in the Pacific, Australia’s over-the-horizon radar network, JORN, can spot objects hundreds of miles away, far beyond the range of conventional radar.

Even so, it’s not enough to just spot an uncrewed narco sub. Law enforcement also has to stop it—and that will be tricky.

To find drone subs, international law enforcement will likely have to rely on networks of surveillance systems and, someday, swarms of their own drones.

With a crewed vessel, Colombian doctrine says coast guard units should try to hail the boat first with lights, sirens, radio calls, and warning shots. If that fails, interceptor crews sometimes have to jump aboard and force the hatch. Officers worry that future autonomous craft could be wired to sink or even explode if someone gets too close. “If they get destroyed, we may lose the evidence,” says Víctor González Badrán, a navy captain and director of CMCON. “That means no seizure and no legal proceedings against that organization.” 

That’s where electronic warfare enters the picture—radio-frequency jamming, cyber tools, perhaps more exotic options. In the simplest version, jamming means flooding the receiver with noise so that commands from the operator never reach the vessel. Spoofing goes a step further, feeding fake signals so that the sub thinks it’s somewhere else or obediently follows a fake set of waypoints. Cyber tools might aim higher up the chain, trying to penetrate the software that runs the vessel or the networks it uses to talk to satellite constellations. At the cutting edge of these countermeasures are electromagnetic pulses designed to fry electronics outright, turning a million-dollar narco sub into a dead hull drifting at sea.

In reality, the tools that might catch a future Tayrona sub are unevenly distributed, politically sensitive, and often experimental. Powerful cyber or electromagnetic tricks are closely guarded secrets; using them in a drug case risks exposing capabilities that militaries would rather reserve for wars. Systems like Australia’s JORN radar are tightly held national security assets, their exact performance specs classified, and sharing raw data with countries on the front lines of the cocaine trade would inevitably mean revealing hints as to how they got it. “Just because a capability exists doesn’t mean you employ it,” Knickerbocker says. 

Analysts don’t think uncrewed narco subs will reshape the global drug trade, despite the technological leap. Trafficking organizations will still hedge their bets across those three variables, hiding cocaine in shipping containers, dissolving it into liquids and paints, racing it north in fast boats. “I don’t think this is revolutionary,” Shuldiner says. “But it’s a great example of how resilient cocaine traffickers are, and how they’re continuously one step ahead of authorities.”

There’s still that chance, though, that everything international law enforcement agencies know about drug smuggling is about to change. González Zamudio says he keeps getting requests from foreign navies, coast guards, and security agencies to come see the Tayrona sub. He greets their delegations, takes them out to the strip of grass on the base, and walks them around it. It has become a kind of pilgrimage. Everyone who makes it worries that the next time a narco sub appears near a distant coastline, they’ll board it as usual, force the hatch—and find it full of cocaine and gadgets, but without a single human occupant. And no one knows what happens after that.

Eduardo Echeverri López is a journalist based in Colombia.

The building legal case for global climate justice

The United States and the European Union grew into economic superpowers by committing climate atrocities. They have burned a wildly disproportionate share of the world’s oil and gas, planting carbon time bombs that will detonate first in the poorest, hottest parts of the globe. 

Meanwhile, places like the Solomon Islands and Chad—low-lying or just plain sweltering—have emitted relatively little carbon dioxide, but by dint of their latitude and history, they rank among the countries most vulnerable to the fiercest consequences of global warming. That means increasingly devastating cyclones, heat waves, famines, and floods.

Morally, there’s an ironclad case that the countries or companies responsible for this mess should provide compensation for the homes that will be destroyed, the shorelines that will disappear beneath rising seas, and the lives that will be cut short. By one estimate, the major economies owe a climate debt to the rest of the world approaching $200 trillion in reparations.

Legally, though, the case has been far harder to make. Even putting aside the jurisdictional problems, early climate science couldn’t trace the provenance of airborne molecules of carbon dioxide across oceans and years. Deep-pocketed corporations with top-tier legal teams easily exploited those difficulties. 

Now those tides might be turning. More climate-related lawsuits are getting filed, particularly in the Global South. Governments, nonprofits, and citizens in the most climate-exposed nations continue to test new legal arguments in new courts, and some of those courts are showing a new willingness to put nations and their industries on the hook as a matter of human rights. In addition, the science of figuring out exactly who is to blame for specific weather disasters, and to what degree, is getting better and better. 

It’s true that no court has yet held any climate emitter liable for climate-related damages. For starters, nations are generally immune from lawsuits originating in other countries. That’s why most cases have focused on major carbon producers. But those producers have leaned on a pretty powerful defense.

While oil and gas companies extract, refine, and sell the world’s fossil fuels, most of the emissions come out of “the vehicles, power plants, and factories that burn the fuel,” as Michael Gerrard and Jessica Wentz, of Columbia Law School’s Sabin Center, note in a recent piece in Nature. In other words, companies just dig the stuff up. It’s not their fault someone else sets it on fire.

So victims of extreme weather events continue to try new legal avenues and approaches, backed by ever-more-convincing science. Plaintiffs in the Philippines recently sued the oil giant Shell over its role in driving Super Typhoon Odette, a 2021 storm that killed more than 400 people and displaced nearly 800,000. The case relies partially on an attribution study that found climate change made extreme rainfall like that seen in Odette twice as likely. 


Overall, evidence of corporate culpability—linking a specific company’s fossil fuel to a specific disaster—is getting easier to find. For example, a study published in Nature in September was able to determine how much particular companies contributed to a series of 21st-century heat waves.

A number of recent legal decisions signal improving odds for these kinds of suits. Notably, a handful of determinations in climate cases before the European Court of Human Rights affirmed that states have legal obligations to protect people from the effects of climate change. And though it dismissed the case of a Peruvian farmer who sued a German power company over fears that a melting alpine glacier could destroy his property, a German court determined that major carbon polluters could in principle be found liable for climate damages tied to their emissions. 

At least one lawsuit has already emerged that could test that principle: Dozens of Pakistani farmers whose land was deluged during the massive flooding events of 2022 have sued a pair of major German power and cement companies.

Even if the lawsuit fails, that would be a problem with the system, not the science. Major carbon-polluting countries and companies have a disproportionate responsibility for climate-change-powered disasters. 

Wealthy nations continued to encourage business practices that pollute the atmosphere, even as the threat of climate change grew increasingly grave. And oil and gas companies remain the kingpin suppliers to a fossil-fuel-addicted world. They have operated with the full knowledge of the massive social, environmental, and human cost imposed by their business while lobbying fiercely against any rules that would force them to pay for those harms or clean up their act. 

They did it. They knew. In a civil society where rule of law matters, they should pay the price. 

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

The Download: autonomous narco submarines, and virtue signaling chatbots

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

How uncrewed narco subs could transform the Colombian drug trade

For decades, handmade narco subs have been some of the cocaine trade’s most elusive and productive workhorses, ferrying multi-ton loads of illicit drugs from Colombian estuaries toward markets in North America and, increasingly, the rest of the world. Now off-the-shelf technology—Starlink terminals, plug-and-play nautical autopilots, high-resolution video cameras—may be advancing that cat-and-mouse game into a new phase.

Uncrewed subs could move more cocaine over longer distances, and they wouldn’t put human smugglers at risk of capture. And law enforcement around the world is just beginning to grapple with what this means for the future. Read the full story.

—Eduardo Echeverri López

This story is from the next print issue of MIT Technology Review magazine, which is all about crime. If you haven’t already, subscribe now to receive future issues once they land.

Google DeepMind wants to know if chatbots are just virtue signaling

The news: Google DeepMind is calling for the moral behavior of large language models—such as what they do when called on to act as companions, therapists, medical advisors, and so on—to be scrutinized with the same kind of rigor as their ability to code or do math.

Why it matters: As LLMs improve, people are asking them to play more and more sensitive roles in their lives. Agents are starting to take actions on people’s behalf. LLMs may be able to influence human decision-making. And yet nobody knows how trustworthy this technology really is at such tasks. Read the full story.

—Will Douglas Heaven

The building legal case for global climate justice

The United States and the European Union grew into economic superpowers by committing climate atrocities. They have burned a wildly disproportionate share of the world’s oil and gas, planting carbon time bombs that will detonate first in the poorest, hottest parts of the globe.

Morally, there’s an ironclad case that the countries or companies responsible for this mess should provide compensation. Legally, though, the case has been far harder to make. But now those tides might be turning. Read the full story.

—James Temple

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The US is building an online portal to access content banned elsewhere 
The freedom.gov site is Washington’s broadbrush solution to global censorship. (Reuters)
+ The Trump administration is on a mission to train a cadre of elite coders. (FT $)

2 Mark Zuckerberg overruled wellbeing experts to keep beauty filters on Instagram
Because removing them may have impinged on “free expression,” apparently. (FT $)
+ The CEO claims that increasing engagement is not Instagram’s goal. (CNBC)
+ Instead, the company’s true calling is to give its users “something useful”. (WSJ $)
+ A new investigation found Meta is failing to protect children from predators. (WP $)

3 Silicon Valley is working on a shadow power grid for US data centers
AI firms are planning to build their own private power plants across the US. (WP $)
+ They’re pushing the narrative that generative AI will save the Earth. (Wired $)
+ We need better metrics to measure data center sustainability with. (IEEE Spectrum)
+ The data center boom in the desert. (MIT Technology Review)

4 Russian forces are struggling with Starlink and Telegram crackdowns
New restrictions have left troops without a means to communicate. (Bloomberg $)

5 Bill Gates won’t speak at India’s AI summit after all
Given the growing controversy surrounding his ties to Jeffrey Epstein. (BBC)
+ The event has been accused of being disorganized and poorly managed. (Reuters)
+ AI leaders didn’t appreciate this awkward photoshoot. (Bloomberg $)

6 AI software sales are slowing down
Last year’s boom appears to be waning, vendors have warned. (WSJ $)
+ What even is the AI bubble? (MIT Technology Review)

7 eBay has acquired its clothes resale rival Depop 👚
It’s a naked play to corner younger Gen Z shoppers. (NYT $)

8 There’s a lot more going on inside cells than we originally thought
It’s seriously crowded inside there. (Quanta Magazine)

9 What it means to create a chart-topping app
Does anyone care any more? (The Verge)

10 Do we really need eight hours of sleep?
Research suggests some people really are fine operating on as little as four hours of snooze time. (New Yorker $)

Quote of the day

“Too often, those victims have been left to fight alone … That is not justice. It is failure.”

—Keir Starmer, the UK’s prime minister, outlines plans to force technology firms to remove deepfake nudes and revenge porn within 48 hours or risk being blocked in the UK, the Guardian reports.

One more thing

End of life decisions are difficult and distressing. Could AI help?

End-of-life decisions can be extremely upsetting for surrogates—the people who have to make those calls on behalf of another person. Friends or family members may disagree over what’s best for their loved one, which can lead to distressing situations.

David Wendler, a bioethicist at the US National Institutes of Health, and his colleagues have been working on an idea for something that could make things easier: an artificial intelligence-based tool that can help surrogates predict what the patients themselves would want in any given situation.

Wendler hopes to start building their tool as soon as they secure funding for it, potentially in the coming months. But rolling it out won’t be simple. Critics wonder how such a tool can ethically be trained on a person’s data, and whether life-or-death decisions should ever be entrusted to AI. Read the full story.

—Jessica Hamzelou

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Oakland Library keeps a remarkable public log of all the weird and wonderful artefacts their librarians find tucked away in the pages of their books.
+ Orchids are beautiful, but temperamental. Here’s how to keep them alive.
+ I love that New York’s Transit Museum is holding a Pizza Rat Debunked event.
+ These British indie bands aren’t really lauded at home—but in China, they’re treated like royalty.

Microsoft has a new plan to prove what’s real and what’s AI online

AI-enabled deception now permeates our online lives. There are the high-profile cases you may easily spot, like when White House officials recently shared a manipulated image of a protester in Minnesota and then mocked those asking about it. Other times, it slips quietly into social media feeds and racks up views, like the videos that Russian influence campaigns are currently spreading to discourage Ukrainians from enlisting. 

It is into this mess that Microsoft has put forward a blueprint, shared with MIT Technology Review, for how to prove what’s real online. 

An AI safety research team at the company recently evaluated how methods for documenting digital manipulation are faring against today’s most worrying AI developments, like interactive deepfakes and widely accessible hyperrealistic models. It then recommended technical standards that can be adopted by AI companies and social media platforms.

To understand the gold standard that Microsoft is pushing, imagine you have a Rembrandt painting and you are trying to document its authenticity. You might describe its provenance with a detailed manifest of where the painting came from and all the times it changed hands. You might apply a watermark that would be invisible to humans but readable by a machine. And you could digitally scan the painting and generate a mathematical signature, like a fingerprint, based on the brush strokes. If you showed the piece at a museum, a skeptical visitor could then examine these proofs to verify that it’s an original.
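A minimal sketch of the “mathematical signature” idea, using a plain cryptographic hash. This is an illustrative stand-in only: real provenance systems (such as C2PA-style signed manifests) combine signed metadata, watermarks, and more robust fingerprints, because a raw hash breaks on any re-encoding of the file.

```python
import hashlib

def fingerprint(content: bytes) -> str:
    # A cryptographic digest: any change to the bytes changes the fingerprint.
    return hashlib.sha256(content).hexdigest()

original = b"...image bytes..."   # placeholder content for illustration
signature = fingerprint(original)

# Verification: recompute the digest and compare with the recorded signature.
assert fingerprint(original) == signature

# A single altered byte no longer matches, which exposes tampering...
assert fingerprint(original + b"!") != signature
# ...but so would a harmless re-encode, which is why real systems pair
# hashes with provenance manifests and watermarks rather than relying
# on any one method alone.
```

That fragility under benign changes is exactly the kind of failure scenario Microsoft’s team modeled when comparing combinations of methods.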

All of these methods are already being used to varying degrees in the effort to vet content online. Microsoft evaluated 60 different combinations of them, modeling how each setup would hold up under different failure scenarios—from metadata being stripped to content being slightly altered or deliberately manipulated. The team then mapped which combinations produce sound results that platforms can confidently show to people online, and which ones are so unreliable that they may cause more confusion than clarification. 
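
The failure scenarios the team modeled can be made concrete with a toy example. The sketch below is purely hypothetical, not Microsoft’s actual method or any real API: it combines the three signal types described above and withholds a verdict when the evidence is ambiguous.

```python
# Hypothetical sketch of layered content verification, in the spirit of the
# three methods described above (provenance manifest, watermark, fingerprint).
# Function and label names are invented for illustration.

def verify(manifest_valid, watermark_found, fingerprint_match):
    """Combine three independent signals (True, False, or None when the
    signal is unavailable) into a display label, or None when the
    evidence is too weak to show anything at all."""
    signals = [manifest_valid, watermark_found, fingerprint_match]
    confirmed = sum(1 for s in signals if s is True)
    contradicted = sum(1 for s in signals if s is False)
    if confirmed >= 2 and contradicted == 0:
        return "verified original"
    if contradicted >= 2:
        return "likely manipulated"
    return None  # ambiguous: better to show nothing than a wrong verdict

print(verify(True, True, None))    # metadata intact, watermark present
print(verify(False, False, True))  # metadata stripped and watermark gone
print(verify(True, None, None))    # single signal: withhold a verdict
```

The point of modeling combinations this way is visible in the last case: a single surviving signal is not enough to justify a label that platforms would show to users.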

The company’s chief scientific officer, Eric Horvitz, says the work was prompted by legislation—like California’s AI Transparency Act, which will take effect in August—and the speed at which AI has developed to combine video and voice with striking fidelity.

“You might call this self-regulation,” Horvitz told MIT Technology Review. But it’s clear he sees pursuing the work as boosting Microsoft’s image: “We’re also trying to be a selected, desired provider to people who want to know what’s going on in the world.”

Nevertheless, Horvitz declined to commit to Microsoft using its own recommendation across its platforms. The company sits at the center of a giant AI content ecosystem: It runs Copilot, which can generate images and text; it operates Azure, the cloud service through which customers can access OpenAI and other major AI models; it owns LinkedIn, one of the world’s largest professional platforms; and it holds a significant stake in OpenAI. But when asked about in-house implementation, Horvitz said in a statement, “Product groups and leaders across the company were involved in this study to inform product road maps and infrastructure, and our engineering teams are taking action on the report’s findings.”

It’s important to note that there are inherent limits to these tools; just as they would not tell you what your Rembrandt means, they are not built to determine if content is accurate or not. They only reveal if it has been manipulated. It’s a point that Horvitz says he has to make to lawmakers and others who are skeptical of Big Tech as an arbiter of fact.

“It’s not about making any decisions about what’s true and not true,” he said. “It’s about coming up with labels that just tell folks where stuff came from.”

Hany Farid, a professor at UC Berkeley who specializes in digital forensics but wasn’t involved in the Microsoft research, says that if the industry adopted the company’s blueprint, it would be meaningfully more difficult to deceive the public with manipulated content. Sophisticated individuals or governments can work to bypass such tools, he says, but the new standard could eliminate a significant portion of misleading material.

“I don’t think it solves the problem, but I think it takes a nice big chunk out of it,” he says.

Still, there are reasons to see Microsoft’s approach as an example of somewhat naïve techno-optimism. There is growing evidence that people are swayed by AI-generated content even when they know that it is false. And in a recent study of pro-Russian AI-generated videos about the war in Ukraine, comments pointing out that the videos were made with AI received far less engagement than comments treating them as genuine. 

“Are there people who, no matter what you tell them, are going to believe what they believe?” Farid asks. “Yes.” But, he adds, “there are a vast majority of Americans and citizens around the world who I do think want to know the truth.”

That desire has not exactly led to urgent action from tech companies. Google started adding a watermark to content generated by its AI tools in 2023, which Farid says has been helpful in his investigations. Some platforms use C2PA, a provenance standard Microsoft helped launch in 2021. But the full suite of changes that Microsoft suggests, powerful as it is, might remain nothing more than suggestions if it threatens the business models of AI companies or social media platforms.

“If the Mark Zuckerbergs and the Elon Musks of the world think that putting ‘AI generated’ labels on something will reduce engagement, then of course they’re incentivized not to do it,” Farid says. Platforms like Meta and Google have already said they’d include labels for AI-generated content, but an audit conducted by Indicator last year found that only 30% of its test posts on Instagram, LinkedIn, Pinterest, TikTok, and YouTube were correctly labeled as AI-generated.

More forceful moves toward content verification might come from the many pieces of AI regulation pending around the world. The European Union’s AI Act, as well as proposed rules in India and elsewhere, would all compel AI companies to require some form of disclosure that a piece of content was generated with AI. 

One priority from Microsoft is, unsurprisingly, to play a role in shaping these rules. The company waged a lobbying effort during the drafting of California’s AI Transparency Act, which Horvitz said made the legislation’s requirements on how tech companies must disclose AI-generated content “a bit more realistic.”

But another is a very real concern about what could happen if the rollout of such content-verification technology is done poorly. Lawmakers are demanding tools that can verify what’s real, but the tools are fragile. If labeling systems are rushed out, inconsistently applied, or frequently wrong, people could come to distrust them altogether, and the entire effort would backfire. That’s why the researchers argue that it may be better in some cases to show nothing at all than a verdict that could be wrong.

Inadequate tools could also create new avenues for what the researchers call sociotechnical attacks. Imagine that someone takes a real image of a fraught political event and uses an AI tool to change only an inconsequential share of pixels in the image. When it spreads online, it could be misleadingly classified by platforms as AI-manipulated. But combining provenance and watermark tools would mean platforms could clarify that the content was only partially AI generated, and point out where the changes were made.

California’s AI Transparency Act will be the first major test of these tools in the US, but enforcement could be challenged by President Trump’s executive order from late last year seeking to curtail state AI regulations that are “burdensome” to the industry. The administration has also generally taken a posture against efforts to curb disinformation, and last year, via DOGE, it canceled grants related to misinformation. And, of course, official government channels in the Trump administration have shared content manipulated with AI (MIT Technology Review reported that the Department of Homeland Security, for example, uses video generators from Google and Adobe to make content it shares with the public).

I asked Horvitz whether fake content from this source worries him as much as that coming from the rest of social media. He initially declined to comment, but then he said, “Governments have not been outside the sectors that have been behind various kinds of manipulative disinformation, and this is worldwide.”

AI Turns Weather Data into Sales

Weather impacts sales. Every retailer knows it. But for most, a forecast that it might rain, snow, or sleet on the third of March somewhere in the Midwest rarely informs a decision.

Vendors such as Weather Trends have offered accurate, long-range forecasts for more than 20 years. But the opportunity is not predicting the weather; it’s knowing what to do with the data.

AI might change that.

Screenshot of Weather Trends data on a spreadsheet-looking interface

How does a retailer apply Weather Trends data to everyday decisions?

Ecommerce Challenges

Artificial intelligence is becoming the panacea for common ecommerce challenges, including weather-related ones, such as:

  • Demand forecasting,
  • Pricing and markdown optimization,
  • Personalization,
  • Weather-informed fulfillment and delivery promise,
  • Triggered marketing and advertising.

Demand forecasting

In 2017, when Boise, Idaho, experienced “snowmageddon,” the farm and ranch retailer I worked for knew it was coming. The company subscribed to long-term weather prediction data that warned of record snowfall.

The business increased its wholesale orders for snow-related products, but cautiously. Company leadership doubted the data.

They were rightly concerned about the cost of a mistake. Underestimates can lead to stockouts and missed revenue (which happened in this case).

Yet overestimates increase carrying costs, markdown risk, or spoilage in perishable categories.

It was difficult to weigh the potential losses and benefits. Looking back, AI may have made that decision easier, not by predicting the snowfall, but by clarifying the risk.
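
To see how decision support could have clarified that risk, consider a hypothetical expected-cost comparison in the style of the classic newsvendor problem. All figures below are invented for illustration.

```python
# Hypothetical newsvendor-style calculation: compare order quantities by
# expected cost, given a probability that the record snowfall materializes.
# Every number here is made up for illustration.

def expected_cost(order_qty, p_storm, storm_demand, normal_demand,
                  unit_margin, unit_overstock_cost):
    """Expected cost of an order quantity across two weather scenarios:
    margin lost on stockouts plus carrying/markdown cost on leftovers."""
    cost = 0.0
    for p, demand in ((p_storm, storm_demand), (1 - p_storm, normal_demand)):
        shortfall = max(demand - order_qty, 0)  # units we couldn't sell
        leftover = max(order_qty - demand, 0)   # units we must carry or mark down
        cost += p * (shortfall * unit_margin + leftover * unit_overstock_cost)
    return cost

# Forecast: 70% chance of record snow; margin $20/unit, overstock cost $8/unit.
for qty in (500, 800, 1000):
    print(qty, round(expected_cost(qty, 0.7, 1000, 400, 20, 8), 2))
```

Under these made-up numbers the aggressive order wins: the expected margin lost to stockouts in a storm dwarfs the carrying cost of leftovers, which is exactly the kind of trade-off leadership found hard to weigh by intuition.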

Pricing optimization

Pricing and markdown decisions are demand forecasts expressed in dollars. Retailers estimate how quickly products will sell and adjust prices to preserve margins.

Weather complicates those decisions. An online merchant in sunny Florida might mark down winter goods just as one in Bismarck, North Dakota, is facing the next snowstorm.

AI-informed pricing solutions may help merchants resolve this mismatch in demand perception.

Rather than showing every customer the same prices, AI can incorporate local variables, such as regional weather patterns, forecast probabilities, and conversion behavior, to find the just-right price for each region and each weather forecast.

Instead of applying a single markdown logic, AI pricing engines can adjust promotions based on expected demand in a shopper’s locale.
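
As a sketch of that idea, here is a deliberately simple, hypothetical markdown rule keyed to a regional snow forecast. A real pricing engine would learn these relationships from data rather than hard-code them; the function name and constants are invented.

```python
# Hypothetical regional markdown logic: scale a base markdown by a
# weather-driven demand signal per region. Numbers are invented.

def regional_markdown(base_markdown, snow_probability):
    """Shrink the markdown where snow is likely (winter demand holds up)
    and deepen it where the season is effectively over."""
    demand_multiplier = 0.5 + snow_probability  # 0.5 (no snow) .. 1.5 (certain)
    markdown = base_markdown * (2.0 - demand_multiplier)
    return max(0.0, min(markdown, 0.9))  # clamp to a sane range

print(regional_markdown(0.30, 0.9))  # Bismarck: snow likely -> smaller markdown
print(regional_markdown(0.30, 0.0))  # Florida: no snow -> deeper markdown
```

The same base markdown of 30% becomes roughly 18% for the snowy region and 45% for the sunny one, resolving the mismatch described above.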

Personalization

Personalization tools infer shopper intent from behavior and context. Weather introduces another powerful signal.

Shoppers browsing during a cold snap, heat wave, or storm likely have unique needs. Demand for seasonal goods, comfort-related products, or event-driven purchases often shifts in response to immediate weather conditions.

AI-driven personalization engines may incorporate weather data (real-time or forecast) to adjust recommendations, site search results, category emphasis, and promotional messaging.

Thus outerwear, hydration products, or indoor activity items may receive greater visibility depending on conditions.

Unlike pricing, merchandising decisions typically carry low risk. They influence what shoppers see rather than what merchants commit to.

Fulfillment expectations

Weather affects logistics as much as demand. Snow, storms, and temperature extremes can disrupt carrier networks, delay shipments, and reshape delivery expectations. Yet many ecommerce platforms generate delivery estimates from static assumptions.

That is a problem. Most shoppers expect fast delivery, and some react harshly to delays, for example by initiating chargebacks.

AI-driven fulfillment models can incorporate weather variables, carrier performance patterns, and regional risk factors when calculating estimated arrival windows.
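
A minimal sketch of that idea, with invented thresholds, might pad the promised arrival window like this:

```python
# Hypothetical delivery-promise adjustment: widen the quoted arrival
# window when weather and carrier risk rise. Thresholds are invented.

def delivery_window(base_days, storm_risk, carrier_on_time_rate):
    """Return (earliest, latest) arrival in days, padding the upper
    bound for weather disruption and weak carrier performance."""
    padding = 0
    if storm_risk > 0.5:
        padding += 2                     # likely carrier-network disruption
    if carrier_on_time_rate < 0.9:
        padding += 1                     # carrier already running late
    return base_days, base_days + padding

print(delivery_window(3, 0.7, 0.85))  # snowstorm + shaky carrier
print(delivery_window(3, 0.1, 0.97))  # clear skies, reliable carrier
```

Quoting “3 to 6 days” during a storm instead of a static “3 days” is less impressive at checkout but far less likely to produce an angry shopper.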

Triggered marketing

Weather also creates short-lived demand, such as umbrellas on a rainy day.

An AI agent connected to Meta Ads could automatically trigger campaigns based on weather-influenced demand. The AI would write copy, generate images or video, set budgets, and even learn from its successes and failures.
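
At its core, the agent’s trigger logic might reduce to rules like the following hypothetical sketch. The ad-platform integration itself is omitted; no real Meta Ads API call is shown.

```python
# Hypothetical weather trigger: decide whether to launch a campaign when
# the forecast crosses a demand threshold. Products and thresholds are
# invented; a real agent would learn these rather than hard-code them.

def should_trigger(product, forecast):
    rules = {
        "umbrellas": lambda f: f["rain_probability"] >= 0.6,
        "snow shovels": lambda f: f["snow_inches"] >= 3,
    }
    rule = rules.get(product)
    return bool(rule and rule(forecast))

forecast = {"rain_probability": 0.8, "snow_inches": 0}
print(should_trigger("umbrellas", forecast))     # rainy day: launch
print(should_trigger("snow shovels", forecast))  # no snow: hold off
```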

Competitive Advantage

The combination of AI and weather data could give merchants a competitive advantage, but separating hype from reality will require testing.

If weather impacts sales, AI might predict those changes and optimize for them.

Google Offers AI Certificate Free For Eligible U.S. Small Businesses via @sejournal, @MattGSouthern

Google has launched the Google AI Professional Certificate, a self-paced program covering data analysis, content creation, research, and vibe coding.

Every participant receives three months of free access to Google AI Pro. Eligible U.S. small businesses can access the entire program at no cost through a separate application (more on eligibility below).

The certificate is available now on Coursera, Google Skills, and Udemy. In the U.S. and Canada, the subscription costs $49 per month.

What The Certificate Covers

The program consists of seven modules, each of which can be completed in about an hour. No prior AI experience is required.

Participants complete more than 20 hands-on activities. These include creating presentations and marketing materials, conducting deep research, building infographics, analyzing data, and building custom apps without writing code.

After completing all seven modules, participants earn a Google certificate they can add to LinkedIn and share with employers.

Free Access For Eligible U.S. Small Businesses

Google is offering the certificate at no cost to eligible U.S. small and medium-sized businesses with 500 or fewer employees. The offer also includes three months of free Google Workspace Business Standard (for new Workspace customers, up to 300 seats).

To qualify, businesses must be registered in the U.S. and submit their Employer Identification Number (EIN) through a dedicated application on Coursera. Coursera said the verification process takes 5-7 business days.

Businesses can also apply at grow.google/small-business. Google said it is working with the U.S. Chamber of Commerce and America’s Small Business Development Centers to distribute the program.

How This Helps

The program builds on Google AI Essentials, which has become the most popular course on Coursera. The AI Professional Certificate goes further, focusing on applied use cases rather than introductory concepts.

The certificate focuses on tools like Gemini, NotebookLM, and Google AI Studio, so the skills are tied to Google’s ecosystem. Google launched a separate Generative AI Leader certification for Google Cloud in May 2025, though that program focused on non-technical business leaders and required a $99 exam fee. The new AI Professional Certificate has no exam fee.

Looking Ahead

The Google AI Professional Certificate is available now on Coursera, Google Skills, and Udemy. Eligible U.S. small businesses can apply for no-cost access at grow.google/small-business.

For professionals already familiar with Google’s AI tools through earlier training programs, this certificate adds structured, employer-recognized credentials to practical skills you may already be developing on your own.

Why AI Misreads The Middle Of Your Best Pages via @sejournal, @DuaneForrester

The middle is where your content dies. Not because your writing suddenly gets bad halfway down the page, and not because your reader gets bored, but because large language models have a repeatable weakness with long contexts, and modern AI systems increasingly squeeze long content before the model even reads it.

That combo creates what I think of as dog-bone thinking. Strong at the beginning, strong at the end, and the middle gets wobbly. The model drifts, loses the thread, or grabs the wrong supporting detail. You can publish a long, well-researched piece and still watch the system lift the intro, lift the conclusion, then hallucinate the connective tissue in between.

This is not theory; it shows up in research, and it also shows up in production systems.


Why The Dog-Bone Happens

There are two stacked failure modes, and they hit the same place.

First, “lost in the middle” is real. Stanford and collaborators measured how language models behave when key information moves around inside long inputs. Performance was often highest when the relevant material was at the beginning or end, and it dropped when the relevant material sat in the middle. That’s the dog-bone pattern, quantified.

Second, long contexts are getting bigger, but systems are also getting more aggressive about compression. Even if a model can take a massive input, the product pipeline frequently prunes, summarizes, or compresses to control cost and keep agent workflows stable. That makes the middle even more fragile, because it is the easiest segment to collapse into mushy summary.

A fresh example: ATACompressor is a 2026 arXiv paper focused on adaptive, task-aware compression for long-context processing. It explicitly frames “lost in the middle” as a problem in long contexts and positions compression as a strategy that must preserve task-relevant content while shrinking everything else.

So you were right if you ever told someone to “shorten the middle.” Now, I’d offer this refinement:

You are not shortening the middle for the LLM so much as engineering the middle to survive both attention bias and compression.

Two Filters, One Danger Zone

Think of your content going through two filters before it becomes an answer.

  • Filter 1: Model Attention Behavior: Even if the system passes your text in full, the model’s ability to use it is position-sensitive. Start and end tend to perform better, middle tends to perform worse.
  • Filter 2: System-Level Context Management: Before the model sees anything, many systems condense the input. That can be explicit summarization, learned compression, or “context folding” patterns used by agents to keep working memory small. One example in this space is AgentFold, which focuses on proactive context folding for long-horizon web agents.

If you accept those two filters as normal, the middle becomes a double-risk zone. It gets ignored more often, and it gets compressed more often.

That is how the two filters connect back to the dog-bone idea: a “shorten the middle” approach becomes a direct mitigation for both. You are reducing what the system will compress away, and you are making what remains easier for the model to retrieve and use.

What To Do About It Without Turning Your Writing Into A Spec Sheet

This is not a call to kill longform; longform still matters for humans, and for machines that use your content as a knowledge base. The fix is structural, not “write less.”

You want the middle to carry higher information density with clearer anchors.

Here’s the practical guidance, kept tight on purpose.

1. Put “Answer Blocks” In The Middle, Not Connective Prose

Most long articles have a soft, wandering middle where the author builds nuance, adds color, and tries to be thorough. Humans can follow that. Models are more likely to lose the thread there. Instead, make the middle a sequence of short blocks where each block can stand alone.

An answer block has:
A clear claim. A constraint. A supporting detail. A direct implication.

If a block cannot survive being quoted by itself, it will not survive compression. This is how you make the middle “hard to summarize badly.”

2. Re-Key The Topic Halfway Through

Drift often happens because the model stops seeing consistent anchors.

At the midpoint, add a short “re-key” that restates the thesis in plain words, restates the key entities, and restates the decision criteria. Two to four sentences are often enough here. Think of this as continuity control for the model.

It also helps compression systems. When you restate what matters, you are telling the compressor what not to throw away.

3. Keep Proof Local To The Claim

Models and compressors both behave better when the supporting detail sits close to the statement it supports.

If your claim is in paragraph 14, and the proof is in paragraph 37, a compressor will often reduce the middle into a summary that drops the link between them. Then the model fills that gap with a best guess.

Local proof looks like:
Claim, then the number, date, definition, or citation right there. If you need a longer explanation, do it after you’ve anchored the claim.

This is also how you become easier to cite. It is hard to cite a claim that requires stitching context from multiple sections.

4. Use Consistent Naming For The Core Objects

This is a quiet one, but it matters a lot. If you rename the same thing five times for style, humans nod, but models can drift.

Pick the term for the core thing and keep it consistent throughout. You can add synonyms for humans, but keep the primary label stable. When systems extract or compress, stable labels become handles. Unstable labels become fog.

5. Treat “Structured Outputs” As A Clue For How Machines Prefer To Consume Information

A big trend in LLM tooling is structured outputs and constrained decoding. The point is not that your article should be JSON. The point is that the ecosystem is moving toward machine-parseable extraction. That trend tells you something important: machines want facts in predictable shapes.

So, inside the middle of your article, include at least a few predictable shapes:
Definitions. Step sequences. Criteria lists. Comparisons with fixed attributes. Named entities tied to specific claims.

Do that, and your content becomes easier to extract, easier to compress safely, and easier to reuse correctly.

How This Shows Up In Real SEO Work

This is the crossover point. If you are an SEO or content lead, you are not optimizing for “a model.” You are optimizing for systems that retrieve, compress, and synthesize.

Your visible symptoms will look like:

  • Your article gets paraphrased correctly at the top, but the middle concept is misrepresented. That’s lost-in-the-middle plus compression.
  • Your brand gets mentioned, but your supporting evidence does not get carried into the answer. That’s local proof failing. The model cannot justify citing you, so it uses you as background color.
  • Your nuanced middle sections become generic. That’s compression, turning your nuance into a bland summary, then the model treating that summary as the “true” middle.

The “shorten the middle” move is how you reduce these failure rates, not by cutting value, but by tightening the information geometry.

A Simple Way To Edit For Middle Survival

Here’s a clean, five-step workflow you can apply to any long piece, and it’s a sequence you can run in an hour or less.

  1. Identify the midpoint and read only the middle third. If the middle third can’t be summarized in two sentences without losing meaning, it’s too soft.
  2. Add one re-key paragraph at the start of the middle third. Restate: the main claim, the boundaries, and the “so what.” Keep it short.
  3. Convert the middle third into four to eight answer blocks. Each block must be quotable. Each block must include its own constraint and at least one supporting detail.
  4. Move proof next to claim. If proof is far away, pull a compact proof element up. A number, a definition, a source reference. You can keep the longer explanation later.
  5. Stabilize the labels. Pick the name for your key entities and stick to them across the middle.
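
Parts of this checklist can even be roughed out in code. The following hypothetical audit script implements crude versions of steps 4 and 5, flagging middle-third paragraphs that carry neither a number nor the primary label; its heuristics, thresholds, and sample text are all invented.

```python
# Hypothetical "middle survival" audit: flag paragraphs in the middle
# third that lack an anchor (a number or the stable primary label).
import re

def audit_middle(text, primary_label):
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    third = max(1, len(paragraphs) // 3)
    middle = paragraphs[third: 2 * third]  # the dog-bone danger zone
    report = []
    for i, para in enumerate(middle, 1):
        has_anchor = bool(re.search(r"\d", para)) or primary_label in para
        if not has_anchor:
            report.append(f"middle block {i}: no number or primary label; "
                          "likely to be compressed away")
    return report

doc = "\n\n".join([
    "Intro: why widget pricing matters.",
    "Background on the widget market.",
    "Widget margins fell 12% in Q3.",
    "Things shifted for various reasons over time.",
    "Implications for buyers.",
    "Conclusion restating the widget thesis.",
])
for issue in audit_middle(doc, "widget"):
    print(issue)
```

The vague fourth paragraph gets flagged; the one anchored by a concrete figure passes. A real editorial pass would still need human judgment, but a check like this makes the soft spots visible quickly.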

If you want the nerdy justification for why this works, it is because you are designing for both failure modes documented above: the “lost in the middle” position sensitivity measured in long-context studies, and the reality that production systems compress and fold context to keep agents and workflows stable.

Wrapping Up

Bigger context windows do not save you. They can make your problem worse, because long content invites more compression, and compression invites more loss in the middle.

So yes, keep writing longform when it is warranted, but stop treating the middle like a place to wander. Treat it like the load-bearing span of a bridge. Put the strongest beams there, not the nicest decorations.

That’s how you build content that survives both human reading and machine reuse, without turning your writing into sterile documentation.


This post was originally published on Duane Forrester Decodes.



35-Year SEO Veteran: Great SEO Is Good GEO — But Not Everyone’s Been Doing Great SEO via @sejournal, @theshelleywalsh

As SEOs, we are used to being adaptable to changing algorithms, so LLM optimization should be a simple extension of that process.

To discuss the industry debates surrounding the differences between SEO and GEO and clarify whether they are the same or different, I spoke with SEO veteran Grant Simmons.

Grant has over 30 years of experience helping brands grow and has spent decades focused on meaning, intent, and topical authority long before LLMs entered the conversation.

I spoke with Grant about signal alignment, how Google’s latest continuation patents reveal the mechanics of LLM citations, and what SEOs are getting wrong about topical focus.

“We talk about writing for the machines, but we’re really writing for human need because it’s all driven by the prompt or the query.” – Grant Simmons

You can watch the full interview with Grant on IMHO below, or continue reading the article summary.

Great SEO Is Good GEO

At Google Search Live in December 2025, John Mueller said, “Good SEO is good GEO.”

I asked Grant what he thought were the differences between optimizing for search engines and for machines, and if he thought there were any overlaps.

Grant’s approach echoes what John Mueller said, but “Not everyone has been doing great SEO,” he explained. “Great SEO was always about building topical authority.”

He continued to say, “Essentially, machines (whether it’s Google or whether it’s an LLM) have to understand the underlying meaning of the content so they can present the best answer.

They have to understand the query or the prompt, then they have to send the best answer. So in that way, it’s very similar.”

Where Grant sees divergence is in how the systems evaluate content. Google has historically ranked pages, and even with passage ranking, it still considers the page and the site as a whole. LLMs operate differently.

“LLMs are looking more at that passage side, you know, something that’s easily extractable, something that has value semantically related to the query or the prompt. And so there’s that fundamental difference.”

Grant also stressed that great SEO has always been holistic, touching social media, PR, content, and brand messaging. Having brand awareness, brand visibility, and brand consistency across all channels is a significant factor in LLM representation. And this is exactly the kind of work that the best SEOs do.

“We’re marketers. We should make sure, not just from a standpoint of what we do in SEO and GEO for our clients, which is connecting a need and intent to the product or service that satisfies that intent, we’re also doing the same in our own marketing. We have to understand what our clients are looking for.

“[GEO] is the same [as SEO] if you’re doing it well. It’s not the same if you weren’t. And of course, there’s nuance.”

My thoughts are that SEOs who have been in the industry the longest are experiencing less disruption because they have seen it all before. They learned to be adaptable in the early years, when there was so much flux as we progressed from multiple search engines to just one. Anyone newer to the industry doesn’t have those same background points of reference.

Why Consensus Matters To Be Surfaced By LLMs

I went on to ask Grant about Google’s latest continuation patents, which describe two distinct systems that work together.

The first is what Grant describes as a response confidence engine. This system evaluates whether a passage can be corroborated, whether the information has consensus across the web.

“If they return a passage and they can corroborate that it is true, and when we say true, it’s true in the sense that more than one person is saying it, that doesn’t mean it’s true, but it means the consensus is there,” Grant explained. “The consensus generally wins out.”

The second system is what Grant calls a linkifying engine. Once a passage has been confirmed through consensus, this engine determines whether a specific sentence or sub-element within that passage, what Grant calls a “chunklet,” can be matched and linked to a source.

“Consensus decides whether it’s surfaced in the first place. The linkify engine actually decides whether it’s linkable, whether a citation is actually going to happen,” Grant said.

Getting mentioned by an LLM is one thing. Getting an actual link back to your content requires that the specific passage is both verifiable through consensus and uniquely attributable to your source.

Golden Knowledge Content Wins

So, what kind of content earns this kind of AI visibility? Grant described it as “golden knowledge,” content that is unique in some meaningful way.

“Generally, data-driven, your own data, your own opinion that’s proof-backed, evidence-backed. Taking a different view of things,” Grant said. “But in the same way of taking a different view, there still has to be some kind of consensus. If other people are agreeing with you, that is really important. Your content needs the uniqueness and the data-driven aspect, but it still has to align with the overall consensus on the web.”

Grant was also clear that while we often talk about writing for machines, the orientation should remain human-centered: “We talk about writing for the machines, but we’re really writing for human need because it’s all driven by the prompt or the query.”

This balance between uniqueness and consensus is perhaps the most actionable takeaway. Content that simply restates what everyone else is saying won’t stand out. But content that takes a position without corroboration elsewhere won’t pass the confidence threshold to be surfaced. The sweet spot is original, data-driven insight that others can and do validate.

The Biggest Mistakes SEOs Make With Topical Focus

When I asked Grant about the most common mistakes he sees with topical diversification on pages, his answer was clear: trying to be everything to everyone.

“When you think about intent, suddenly you understand that pages have a right to exist,” Grant said. “I call it path to satisfaction. Understanding who the audience is and what they need to find, you have to provide a path to that satisfaction.”

Grant pointed out that most SEOs inherit existing sites rather than building from scratch. The temptation is to focus on the surface-level optimizations, such as title tags, meta descriptions, and headers, without reviewing whether a page is actually focused on a specific intent or whether it has what he calls “drift.”

“What they won’t do is fundamentally review the page and understand whether that page is focused on a specific intent or whether it has this drift,” Grant explained. “Cleaning out those outliers, topics that you’re covering when you don’t really mean to, is essentially diffusing what the page means. Those are the things that I think SEOs miss out on.”

This ties directly back to LLM citability. If a page lacks clear topical focus, it becomes harder for AI systems to extract a self-contained passage that answers a specific query. Tightening that focus isn’t just good SEO; it’s the foundation of being visible in AI-generated responses.

Grant’s Strategy Recommendation For 2026

I finished by asking Grant what he’s recommending to his clients right now.

“Let’s double down on what’s working,” Grant said. “LLM traffic is so small today that optimizing for LLMs is important for the future but not for today’s metrics. Let’s improve our SEO. Let’s get to that great SEO level. And as we’re doing that, we are incorporating the elements that will help you show up for GEO, that will help show up on these other surfaces.”

His focus is on great content, topical authority, uniqueness, data-driven approaches, citations, and digital PR. In Grant’s words: “Getting content so good that LLMs can’t ignore you, Google can’t ignore you, and publications can’t ignore you.”

It’s the Steve Martin philosophy applied to SEO: “Be so good they can’t ignore you,” and, coincidence or not, the rule I have applied for the last 15 years in SEO.


Thank you to Grant Simmons for offering his insights and being my guest on IMHO.

