The Download: AI to measure pain, and how to deal with conspiracy theorists

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

AI is changing how we quantify pain

Researchers around the world are racing to turn pain—medicine’s most subjective vital sign—into something a camera or sensor can score as reliably as blood pressure.

The push has already produced PainChek—a smartphone app that scans people’s faces for tiny muscle movements and uses artificial intelligence to output a pain score—which has been cleared by regulators on three continents and has logged more than 10 million pain assessments. Other startups are beginning to make similar inroads.

The way we assess pain may finally be shifting, but when algorithms measure our suffering, does that change the way we treat it? Read the full story.

—Deena Mousa

This story is from the latest print issue of MIT Technology Review magazine, which is full of fascinating stories about our bodies. If you haven’t already, subscribe now to receive future issues once they land.

How to help friends and family dig out of a conspiracy theory black hole

—Niall Firth 

Someone I know became a conspiracy theorist seemingly overnight.

It was during the pandemic. They suddenly started posting daily on Facebook about the dangers of covid vaccines and masks, warning of an attempt to control us.

As a science and technology journalist, I felt that my duty was to respond. I tried, but all I got was derision. Even now I still wonder: Are there things I could have done differently to talk them back down and help them see sense? 

I gave Sander van der Linden, professor of social psychology in society at the University of Cambridge, a call to ask: What would he advise if family members or friends show signs of having fallen down the rabbit hole? Read the full story.

This story is part of MIT Technology Review’s series “The New Conspiracy Age,” on how the present boom in conspiracy theories is reshaping science and technology. Check out the rest of the series here. It’s also part of our How To series, giving you practical advice to help you get things done. 

If you’re interested in hearing more about how to survive in the age of conspiracies, join our features editor Amanda Silverman and executive editor Niall Firth for a subscriber-exclusive Roundtable conversation with conspiracy expert Mike Rothschild. It’s at 1pm ET on Thursday November 20—register now to join us!

Google is still aiming for its “moonshot” 2030 energy goals

—Casey Crownhart

Last week, we hosted EmTech MIT, MIT Technology Review’s annual flagship conference in Cambridge, Massachusetts. As you might imagine, some of this climate reporter’s favorite moments came in the climate sessions. I was listening especially closely to my colleague James Temple’s discussion with Lucia Tian, head of advanced energy technologies at Google.

They spoke about the tech giant’s growing energy demand and what sorts of technologies the company is looking at to help meet it. In case you weren’t able to join us, let’s dig into that session and consider how the company is thinking about energy in the face of AI’s rapid rise. Read the full story.

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 ChatGPT is now “warmer and more conversational”
But it’s also slightly more willing to discuss sexual and violent content. (The Register)
+ ChatGPT has a very specific writing style. (WP $)
+ The looming crackdown on AI companionship. (MIT Technology Review)

2 The US could deny visas to visitors with obesity, cancer or diabetes
As part of its ongoing efforts to stem the flow of people trying to enter the country. (WP $)

3 Microsoft is planning to create its own AI chip
And it’s going to use OpenAI’s internal chip-building plans to do it. (Bloomberg $)
+ The company is working on a colossal data center in Atlanta. (WSJ $)

4 Early AI agent adopters are convinced they’ll see a return on their investment soon 
Mind you, they would say that. (WSJ $)
+ An AI adoption riddle. (MIT Technology Review)

5 Waymo’s robotaxis are hitting American highways
Until now, they’ve typically gone out of their way to avoid them. (The Verge)
+ Its vehicles will now reach speeds of up to 65 miles per hour. (FT $)
+ Waymo is proving long-time detractor Elon Musk wrong. (Insider $)

6 A new Russian unit is hunting down Ukraine’s drone operators
It’s tasked with killing the pilots behind Ukraine’s successful attacks. (FT $)
+ US startup Anduril wants to build drones in the UAE. (Bloomberg $)
+ Meet the radio-obsessed civilian shaping Ukraine’s drone defense. (MIT Technology Review)

7 Anthropic’s Claude successfully controlled a robot dog
It’s important to know what AI models may do when given access to physical systems. (Wired $)

8 Grok briefly claimed Donald Trump won the 2020 US election
As reliable as ever, I see. (The Guardian)

9 The Northern Lights are playing havoc with satellites
Solar storms may look spectacular, but they make it harder to keep tabs on space. (NYT $)
+ Seriously though, they look amazing. (The Atlantic $)
+ NASA’s new AI model can predict when a solar storm may strike. (MIT Technology Review)

10 Apple users can now use digital versions of their passports
But it’s strictly for internal flights within the US only. (TechCrunch)

Quote of the day

“I hope this mistake will turn into an experience.”

—Vladimir Vitukhin, chief executive of the company behind Russia’s first anthropomorphic robot AIDOL, offers a philosophical response to the machine falling flat on its face during a reveal event, the New York Times reports.

One more thing

Welcome to the oldest part of the metaverse

Headlines treat the metaverse as a hazy dream yet to be built. But if it’s defined as a network of virtual worlds we can inhabit, its oldest corner has already been running for 25 years.

It’s a medieval fantasy kingdom created for the online role-playing game Ultima Online. It was the first to simulate an entire world: a vast, dynamic realm where players could interact with almost anything, from fruit on trees to books on shelves.

Ultima Online has already endured a quarter-century of market competition, economic turmoil, and political strife. So what can this game and its players tell us about creating the virtual worlds of the future? Read the full story.

—John-Clark Levin

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Unlikely duo Sting and Shaggy are starring together in a New York musical.
+ Barry Manilow was almost in Airplane!? That would be an entirely different kind of flying, altogether ✈
+ What makes someone sexy? Well, that depends.
+ Keep an eye on those pink dolphins, they’re notorious thieves.

Google DeepMind is using Gemini to train agents inside Goat Simulator 3

Google DeepMind has built a new video-game-playing agent called SIMA 2 that can navigate and solve problems in a wide range of 3D virtual worlds. The company claims it’s a big step toward more general-purpose agents and better real-world robots.   

Google DeepMind first demoed SIMA (which stands for “scalable instructable multiworld agent”) last year. But SIMA 2 has been built on top of Gemini, the firm’s flagship large language model, which gives the agent a huge boost in capability.

The researchers claim that SIMA 2 can carry out a range of more complex tasks inside virtual worlds, figure out how to solve certain challenges by itself, and chat with its users. It can also improve itself by tackling harder tasks multiple times and learning through trial and error.

“Games have been a driving force behind agent research for quite a while,” Joe Marino, a research scientist at Google DeepMind, said in a press conference this week. He noted that even a simple action in a game, such as lighting a lantern, can involve multiple steps: “It’s a really complex set of tasks you need to solve to progress.”

The ultimate aim is to develop next-generation agents that are able to follow instructions and carry out open-ended tasks inside more complex environments than a web browser. In the long run, Google DeepMind wants to use such agents to drive real-world robots. Marino claimed that the skills SIMA 2 has learned, such as navigating an environment, using tools, and collaborating with humans to solve problems, are essential building blocks for future robot companions.

Unlike previous work on game-playing agents such as AlphaGo, which beat Go champion Lee Sedol in 2016, or AlphaStar, which outperformed 99.8% of ranked human players at the video game StarCraft 2 in 2019, the idea behind SIMA is to train an agent to play an open-ended game without preset goals. Instead, the agent learns to carry out instructions given to it by people.

Humans control SIMA 2 via text chat, by talking to it out loud, or by drawing on the game’s screen. The agent takes in a video game’s pixels frame by frame and figures out what actions it needs to take to carry out its tasks.

Like its predecessor, SIMA 2 was trained on footage of humans playing eight commercial video games, including No Man’s Sky and Goat Simulator 3, as well as three virtual worlds created by the company. The agent learned to match keyboard and mouse inputs to actions.

Hooked up to Gemini, the researchers claim, SIMA 2 is far better at following instructions (asking questions and providing updates as it goes) and figuring out for itself how to perform certain more complex tasks.  

Google DeepMind tested the agent inside environments it had never seen before. In one set of experiments, researchers asked Genie 3, the latest version of the firm’s world model, to produce environments from scratch and dropped SIMA 2 into them. They found that the agent was able to navigate and carry out instructions there.

The researchers also used Gemini to generate new tasks for SIMA 2. If the agent failed, at first Gemini generated tips that SIMA 2 took on board when it tried again. Repeating a task multiple times in this way often allowed SIMA 2 to improve by trial and error until it succeeded, Marino said.

Git gud

SIMA 2 is still an experiment. The agent struggles with complex tasks that require multiple steps and more time to complete. It also remembers only its most recent interactions (to make SIMA 2 more responsive, the team cut its long-term memory). It’s also still nowhere near as good as people at using a mouse and keyboard to interact with a virtual world.

Julian Togelius, an AI researcher at New York University who works on creativity and video games, thinks it’s an interesting result. Previous attempts at training a single system to play multiple games haven’t gone too well, he says. That’s because training models to control multiple games just by watching the screen isn’t easy: “Playing in real time from visual input only is ‘hard mode,’” he says.

In particular, Togelius calls out Gato, a previous system from Google DeepMind, which—despite being hyped at the time—could not transfer skills across a significant number of virtual environments.

Still, he is open-minded about whether or not SIMA 2 could lead to better robots. “The real world is both harder and easier than video games,” he says. It’s harder because you can’t just press A to open a door. At the same time, a robot in the real world will know exactly what its body can and can’t do at any time. That’s not the case in video games, where the rules inside each virtual world can differ.

Others are more skeptical. Matthew Guzdial, an AI researcher at the University of Alberta, isn’t too surprised that SIMA 2 can play many different video games. He notes that most games have very similar keyboard and mouse controls: Learn one and you learn them all. “If you put a game with weird input in front of it, I don’t think it’d be able to perform well,” he says.

Guzdial also questions how much of what SIMA 2 has learned would really carry over to robots. “It’s much harder to understand visuals from cameras in the real world compared to games, which are designed with easily parsable visuals for human players,” he says.

Still, Marino and his colleagues hope to continue their work with Genie 3 to allow the agent to improve inside a kind of endless virtual training dojo, where Genie generates worlds for SIMA to learn in via trial and error guided by Gemini’s feedback. “We’ve kind of just scratched the surface of what’s possible,” he said at the press conference.  

OpenAI’s new LLM exposes the secrets of how AI really works

ChatGPT maker OpenAI has built an experimental large language model that is far easier to understand than typical models.

That’s a big deal, because today’s LLMs are black boxes: Nobody fully understands how they do what they do. Building a model that is more transparent sheds light on how LLMs work in general, helping researchers figure out why models hallucinate, why they go off the rails, and just how far we should trust them with critical tasks.

“As these AI systems get more powerful, they’re going to get integrated more and more into very important domains,” Leo Gao, a research scientist at OpenAI, told MIT Technology Review in an exclusive preview of the new work. “It’s very important to make sure they’re safe.”

This is still early research. The new model, called a weight-sparse transformer, is far smaller and far less capable than top-tier mass-market models like the firm’s GPT-5, Anthropic’s Claude, and Google DeepMind’s Gemini. At most it’s as capable as GPT-1, a model that OpenAI developed back in 2018, says Gao (though he and his colleagues haven’t done a direct comparison).    

But the aim isn’t to compete with the best in class (at least, not yet). Instead, by looking at how this experimental model works, OpenAI hopes to learn about the hidden mechanisms inside those bigger and better versions of the technology.

It’s interesting research, says Elisenda Grigsby, a mathematician at Boston College who studies how LLMs work and who was not involved in the project: “I’m sure the methods it introduces will have a significant impact.” 

Lee Sharkey, a research scientist at AI startup Goodfire, agrees. “This work aims at the right target and seems well executed,” he says.

Why models are so hard to understand

OpenAI’s work is part of a hot new field of research known as mechanistic interpretability, which is trying to map the internal mechanisms that models use when they carry out different tasks.

That’s harder than it sounds. LLMs are built from neural networks, which consist of nodes, called neurons, arranged in layers. In most networks, each neuron is connected to every other neuron in its adjacent layers. Such a network is known as a dense network.

Dense networks are relatively efficient to train and run, but they spread what they learn across a vast knot of connections. The result is that simple concepts or functions can be split up between neurons in different parts of a model. At the same time, specific neurons can also end up representing multiple different features, a phenomenon known as superposition (a term borrowed from quantum physics). The upshot is that you can’t relate specific parts of a model to specific concepts.

“Neural networks are big and complicated and tangled up and very difficult to understand,” says Dan Mossing, who leads the mechanistic interpretability team at OpenAI. “We’ve sort of said: ‘Okay, what if we tried to make that not the case?’”

Instead of building a model using a dense network, OpenAI started with a type of neural network known as a weight-sparse transformer, in which each neuron is connected to only a few other neurons. This forced the model to represent features in localized clusters rather than spread them out.

Their model is far slower than any LLM on the market. But it is easier to relate its neurons or groups of neurons to specific concepts and functions. “There’s a really drastic difference in how interpretable the model is,” says Gao.
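To make the idea concrete, here is a minimal numpy sketch (my own illustration, not OpenAI’s code) of the connectivity constraint: each output neuron is wired to only a handful of inputs, so its activation can be traced back to a small, fixed set of predecessors instead of the whole previous layer.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, fan_in = 16, 16, 3  # each neuron keeps just 3 incoming connections

# Fixed sparsity mask: row i marks the few inputs neuron i may read from.
mask = np.zeros((n_out, n_in))
for i in range(n_out):
    mask[i, rng.choice(n_in, size=fan_in, replace=False)] = 1.0

# Dense random initialization, then zero out everything the mask forbids.
weights = rng.normal(size=(n_out, n_in)) * mask

x = rng.normal(size=n_in)
y = weights @ x  # each y[i] depends on exactly fan_in inputs

print((weights != 0).sum(axis=1))  # every row has 3 nonzero weights
```

In a dense layer, every output mixes all 16 inputs; here, tracing any output back to its three contributing inputs is trivial, which is the interpretability payoff the researchers describe.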

Gao and his colleagues have tested the new model with very simple tasks. For example, they asked it to complete a block of text that opens with quotation marks by adding matching marks at the end.  

It’s a trivial request for an LLM. The point is that figuring out how a model does even a straightforward task like that involves unpicking a complicated tangle of neurons and connections, says Gao. But with the new model, they were able to follow the exact steps the model took.

“We actually found a circuit that’s exactly the algorithm you would think to implement by hand, but it’s fully learned by the model,” he says. “I think this is really cool and exciting.”

Where will the research go next? Grigsby is not convinced the technique would scale up to larger models that have to handle a variety of more difficult tasks.    

Gao and Mossing acknowledge that this is a big limitation of the model they have built so far and agree that the approach will never lead to models that match the performance of cutting-edge products like GPT-5. And yet OpenAI thinks it might be able to improve the technique enough to build a transparent model on a par with GPT-3, the firm’s breakthrough 2020 LLM.

“Maybe within a few years, we could have a fully interpretable GPT-3, so that you could go inside every single part of it and you could understand how it does every single thing,” says Gao. “If we had such a system, we would learn so much.”

Canada Still Works for U.S. Sellers

Cross-border ecommerce between the United States and Canada has rarely been more uncertain than in 2025. Nonetheless, for U.S. merchants, Canada remains an easy first step into international sales.

Canadian shoppers buy heavily from U.S. sites. But this year’s holiday shopping season is testing even experienced sellers. Tariffs, a postal strike, and shifting taxes have turned what should be a smooth northern route into a logistical puzzle.

Tariff Dust-Up

Many pundits have described recent trade relations as a war. Helaine Rich, the vice president of strategic sales and administration at ePost Global, an international shipping provider, was more tactful.

“The current administration really took a look at what countries are asking [American businesses] to pay when we’re shipping into their countries and what they’re charging as a premium on U.S. goods,” said Rich.

The result was a set of U.S. tariffs that upset North-South relations. The back-and-forth trade negotiations included on-again, off-again reciprocal tariffs and surtaxes from the Canadian government, in some cases up to 25%.

At peaks in the U.S.-Canada dispute, ecommerce “shipment volumes dropped quickly as tariffs rose and some Canadian consumers began avoiding U.S. brands,” Rich said.

Yet duties existed long before the recent dust-up.

Canadian Duties

“Nothing is more frustrating than thinking you paid for a product and then getting surprised at the door that you have to pay this, that, and the other duty,” said Rich.

Unfortunately, this is a common occurrence for Canadian shoppers buying from American ecommerce stores. Here are a few examples of what Canada typically adds.

  • Duties. Based on the type of goods, their origin (including whether they qualify under the United States–Mexico–Canada free-trade agreement or other treaties), and classification (harmonized system code).
  • GST. The Canadian goods and services tax applies to most imported goods, calculated on the Canadian-dollar value of goods plus duties.
  • PST or HST. Depending on the destination province, the importer or consumer may also owe provincial sales tax (PST) or harmonized sales tax (HST), combining federal and provincial components.
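The additive structure above (duty on the goods, GST on goods plus duty, plus a possible provincial component) can be sketched in a few lines of Python. The rates here are placeholders only; real duty rates depend on the HS code, origin, and treaty eligibility, and provincial treatment varies by destination.

```python
# Illustrative landed-cost estimate for an import into Canada.
# All rates are hypothetical examples, not actual tariff schedules.

def landed_cost_cad(goods_value_cad: float,
                    duty_rate: float = 0.08,   # placeholder duty rate
                    gst_rate: float = 0.05,    # federal GST
                    pst_rate: float = 0.0) -> dict:
    """Estimate duties and taxes on a single shipment, in CAD."""
    duty = round(goods_value_cad * duty_rate, 2)
    # GST is calculated on the Canadian-dollar value of goods plus duties.
    gst = round((goods_value_cad + duty) * gst_rate, 2)
    # PST/HST handling differs by province; a flat rate stands in here.
    pst = round(goods_value_cad * pst_rate, 2)
    total = round(goods_value_cad + duty + gst + pst, 2)
    return {"duty": duty, "gst": gst, "pst": pst, "total": total}

breakdown = landed_cost_cad(100.00)
print(f"landed cost: {breakdown['total']:.2f} CAD")  # landed cost: 113.40 CAD
```

Note how duty feeds into the GST base: a surprise duty at the door raises the tax bill too, which is exactly what frustrates shoppers.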

Postal Strike

The final challenge, beyond a volatile trade environment and duties due, is getting a package delivered in Canada.

Rich noted that the Canadian Union of Postal Workers is again holding strikes during the holiday season, delaying most holiday shipments, as it did last year.

Canada Post is the nation’s primary last-mile delivery service, so sending shipments via the U.S. Postal Service or any other carrier that partners with Canada Post is a risk.

For instance, Walmart Marketplace does not allow Canada Post or any affiliated services. Sellers who try to circumvent Walmart’s restriction face suspension.

Opportunity Nonetheless

Despite tariffs, taxes, and strikes, Canada remains a significant growth market for U.S. ecommerce retailers.

Depending on the projection, total Canadian ecommerce sales will reach between US$40 billion and $43 billion in 2025. About 20% of those sales will go to American stores, making the Canadian market worth around $8 billion for the year. (In contrast, 2024 U.S. retail ecommerce sales were roughly $1.2 trillion.)

Canada offers U.S. merchants geographic proximity, familiar consumer behavior, and less competition. Plus, nearly all consumers speak English.

If a U.S.-based online store wants to expand internationally, Canada is a great place to start.

Selling to Canada

Success requires researching three factors: cost competitiveness, duty-paid pricing, and carrier contingencies.

Cost

Before marketing to Canadian shoppers, merchants should model the total landed cost — product, shipping, duties, and GST/HST — to ensure each sale is profitable, according to Rich at ePost Global.

This step includes identifying and understanding tariffs or product restrictions. Canada forbids the import of some products sold domestically in the United States.

Bottom line: Can a U.S. business competitively sell its products into Canada?

Duty-paid pricing

Duty-paid pricing is akin to free shipping.

Shipping fees create friction, prompting shoppers to abandon orders if too expensive. Ditto for import duties.

Free shipping offers remove that friction.

The equation is similar to “delivered duty paid.” If paying Canadian duties for customers increases sales volume enough to offset the cost and generate more profit, do it.
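The break-even arithmetic is simple enough to sketch; the order counts, margin, and duty figures below are entirely hypothetical.

```python
# Does absorbing Canadian duties pay for itself? Hypothetical numbers.

def profit(orders: int, margin_per_order: float, duty_absorbed: float = 0.0) -> float:
    # Every dollar of absorbed duty comes straight out of per-order margin.
    return orders * (margin_per_order - duty_absorbed)

# Shopper pays duties at the door: 100 orders at $30 margin each.
baseline = profit(100, 30.00)
# Merchant absorbs an $8 duty and conversion improves to 140 orders.
duty_paid = profit(140, 30.00, duty_absorbed=8.00)

print(baseline, duty_paid)  # 3000.0 3080.0 -> absorbing the duty wins here
```

The decision flips if the volume lift is smaller: at 110 orders the duty-paid scenario yields only $2,420, so a merchant should test the lift before committing.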

Carriers

Finally, no carrier is a good choice when it is experiencing a strike. Shipments will almost certainly encounter delays.

Have a contingency plan that uses carriers not impacted by the current Canadian strikes.

Google Sharpens Suspension Accuracy and Speeds Up Appeals for Advertisers via @sejournal, @brookeosmundson

Google account suspensions have long been one of the most stressful issues advertisers face. A single notification can pause revenue, disrupt campaigns, and leave teams scrambling to understand what went wrong, often at no fault of their own.

Over the past several months, Google has heard that feedback and is now rolling out measurable improvements aimed at reducing the burden on legitimate advertisers.

These updates should bring meaningful relief. Misapplied suspensions are down, appeals are moving faster, and Google is promising more transparency into why enforcement actions happen at all.

What’s Changed in Google’s Process

Google announced several updates aimed at preventing unnecessary enforcement actions and speeding up resolutions when mistakes happen.

Google Ads Liaison Ginny Marvin shared additional context in a LinkedIn video. She explained that advertisers often faced long, unclear appeal processes. Many of those advertisers were compliant, but still got caught in broad enforcement filters designed to protect users. The new improvements are meant to address that gap and create a smoother experience for legitimate businesses.

Screenshot taken by author, November 2025

According to Google’s data:

  • Incorrect account suspensions are down more than 80%
  • Appeals are being resolved 70% faster
  • 99% of appeals are reviewed within 24 hours

These numbers reflect improvements in Google’s automated systems, better internal checks, and more precise policy evaluation. The goal is to reduce the number of trusted advertisers who get suspended by mistake and to shorten the time it takes to recover when an account needs review.

Google also mentioned ongoing work to make enforcement decisions easier to understand. While full visibility into every signal is unlikely, these updates indicate an effort to give advertisers clearer direction when issues occur.

How This Helps Advertisers

These changes bring meaningful stability to daily operations. When incorrect suspensions drop by such a large margin, advertisers experience fewer unexpected pauses in performance.

That consistency matters for both in-house teams and agencies managing multiple accounts.

The faster appeal timeline also reduces the fallout from any suspension that does occur. Getting nearly all appeals reviewed within a day helps advertisers avoid extended downtime and protects campaign momentum.

Clarity matters as well. Advertisers have long asked for more detail when suspensions happen.

Even small improvements in transparency can save hours of troubleshooting and prevent repeated appeals that contribute to delays.

These updates should also improve confidence in Google’s enforcement systems. When advertisers trust the process, they can focus on optimization instead of worrying that a routine change will trigger a policy issue.

How This Shapes Future Enforcement

Google’s changes reflect a broader effort to balance user protection with a better advertiser experience. Automated enforcement will always play a significant role in preventing harmful behavior, but legitimate businesses need a system that treats them fairly and resolves issues quickly.

The latest results show encouraging progress. There is still room for improvement, especially in policy clarity and long-term consistency, but the direction is positive.

Google has stated that this work will continue and that advertiser feedback remains central to future updates. For marketers, this signals a more stable and predictable enforcement environment, which supports healthier performance and stronger planning across campaigns.

Google Reminds Websites To Use One Review Target via @sejournal, @MattGSouthern

Google updated its review snippet documentation to clarify that each review or rating in structured data should point to one clear target, reducing ambiguity.

  • Google updated its review snippet docs to clarify how review targets should be specified.
  • You should avoid attaching the same review or rating to multiple different entities.
  • A quick audit of templates and plugins can catch confusing nesting.
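To illustrate the single-target pattern, here is a minimal JSON-LD sketch; the product name, rating value, and reviewer are placeholders. The review nests under exactly one Product instead of being attached to several entities:

```json
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Widget",
  "review": {
    "@type": "Review",
    "reviewRating": { "@type": "Rating", "ratingValue": "4" },
    "author": { "@type": "Person", "name": "Jane Doe" }
  }
}
```

If the same review block were repeated under, say, an Organization elsewhere on the page, Google could not tell which entity the rating belongs to.
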

Lazy Link Building Strategies That Work via @sejournal, @martinibuster

I like coming up with novel approaches to link building. One way to brainstorm an approach is to reverse a common method. I’ve created several approaches to link building: most are passive, and two are a little more active but have very little to do with email outreach. I wrote about these tips back around 2013, but I’ve polished them up and updated them for today.

Passive Link Building

Someone asked that I put together some tips for those who are too lazy to do link building. So here goes!

Guilt Trip Copyright Infringers

Check who’s stealing your content. Be firm with pure scrapers. But if it’s an otherwise legit site, you might want to hold off asking them to take down your content. Check whether they’re linking to a competitor or similar sites, for example from a links page.

You can ask them nicely to take down the content, and after they email you back to confirm the link is down, email them back to thank them. But then say something like, “I see you are linking to Site-X.com. If my content was good enough to show on your site, then I would be grateful and much obliged if you considered it good enough to list from your links page.”

I once heard a keynote speaker at an SEO conference encouraging people to come down hard on anyone who steals their content. I strongly disagree with that approach. Some people who steal your content are under the impression that if it’s on the internet, it’s free to use on their own site. Some think it’s free to use as long as they link back to your site.

If they are linking to your site, tell them that you prefer they don’t infringe on your copyright but that you would be happy to write them a different article they can use as long as they link back to your site. You can be nice to people and still get a link.

Reverse Guest Posting

Instead of publishing articles on someone else’s site, solicit people to publish on yours. Many people tweet, promote, and link from their own sites to the sites where they’re interviewed. A side benefit: interviewing people with a certain amount of celebrity helps bring more people to your site, especially if people are searching for that person.

Relationship Building

Authors of books are great for this kind of outreach. People are interested in what authors and experts say. Sometimes you can find the most popular authors and influencers at industry conferences. I’ve met some really famous and influential people at conferences and got their email address and scored interviews by just going up and talking to these people.

This is called relationship building. SEOs and digital marketers are so overly focused on sending out emails and doing everything online that they forget that people actually get together in person at industry events, meetups, and other kinds of social events.

Giveaways

This is an oldie, and I get that many SEOs have talked about it. But it’s something I used successfully from way back around 2005: I did an annual giveaway for my readers and website members.

The way I did it was to contact some manufacturers of products that are popular with my readers and ask for a discount if I buy in bulk and tell them I’ll be promoting their products to my subscribers, readers, and members. I’ve been responsible for making several companies popular by bringing attention to their products, elevating them from a regional business to a nationwide business.

Leverage Niche Audience For Links

The way to do this is to identify an underserved subtopic of your niche, then create a useful section that addresses a need for that niche. The idea is to create a compelling reason to link to the site.

Here is an example of how to do this for a travel destination site.

Research gluten-free, dairy-free, nut-free, and raw-food dining destinations. Then make a point to visit, interview, and build a resource for those.

Conduct interviews with lodging and restaurant owners that offer gluten-free options. You’ll be surprised by how many restaurants and lodgings decide on their own to link to your site, or you can just hint at it.

Summary

Reach out to sites about the niche topic, not just businesses but also organizations and associations related to that niche that have links and resources pages. Just tell them about the site, quickly explain what it offers, and ask for a link. This method is flexible and can be adapted to a wide range of niche topics. And if they have an email newsletter or publish articles, suggest contributing to those, but don’t ask for a link, just ask for a mention.

Don’t underestimate the power of building positive awareness of your site. Focus on creating positive feelings for your site (goodwill) and generating positive word of mouth, otherwise known as external signals of quality. The rankings will generally follow.

Featured Image by Shutterstock/pathdoc

The Quid Pro No Method Of Link Building via @sejournal, @martinibuster

Expressly paying for links has been out for a while. Quid Pro No is in. These are some things you can do when a website asks for money in exchange for a link. During the course of building links, whether for free links, a published article, or a brand mention, it’s not unusual to be solicited for money. It’s tempting to take the bait and get a project done. But I’m going to suggest some considerations to weigh before deciding, as well as a way to turn the situation around using an approach I call Quid Pro No.

Link building, digital PR, and brand-mention building can all lead to solicitations for a paid link. There are many good reasons not to engage in paid links, and in my experience it’s possible to get a link without doing it their way when someone asks you for money in return for one.

Red Light Means Stop

The first consideration is that someone with their hand out for money is a red light: it’s highly likely they have done this before and are linking to low-quality websites in really bad neighborhoods, which puts the publisher’s site, and any sites associated with it, into the outlier part of the web graph where sites are identified as spam and tend not to get indexed. In that case, consider it a favor that they outed their site for the crap neighborhood it resides in, and walk away. Quid pro… no.

Getting solicited for money can be a frequent occurrence. Site publishers, some of them apparently legit, publish guest post submission guidelines for the express purpose of attracting paying submissions. It’s an industry, and it’s overly normalized in certain circles. Beware.

Spook The Fish

A less frequent occurrence is the newbie who’s trying to extract something extra. If the site checks out, there may be room for some kind of concession. If they’re asking for money, Quid Pro No in this case means using FUD to steer them away from that kind of activity, THEN turning them around to doing the project on your terms.

When angling on a river, a fish that’s on the hook might make a run downstream, away from you, which makes it tough to land because you’re fighting both the fish and the current. Sometimes a tap on the rod will spook the fish into changing position. Sometimes a sharp pull can make it turn around. For this character, I have found it efficacious to spook them with all the bad things that can happen and turn them around to where I want them to be.

Very briefly, and in the most polite terms, explain that you’d love to do business, but that there are other considerations. Here’s what you can trot out:

  • FTC Guidelines
    FTC endorsement guidelines require paid placements to be clearly disclosed; running an unlabeled advertisement can expose the publisher to liability.
  • Google Guidelines
    Google’s spam policies prohibit paid links that pass ranking credit unless they are qualified with rel="sponsored" or rel="nofollow".

Land The Link

“What’s in it for me?” is a useful framing for convincing someone that it’s in their interest to do things your way. They want something, so it’s sometimes worthwhile to make them feel as if they’re getting something out of the deal.

The approach I take to closing a project, whether it’s a free link or an article, is to circle back to the article pitch by focusing on why my site is high quality and on ways we can cross-promote. It’s essentially relationship building. The message is that your site is authoritative and well promoted, and that there are ways both sites can benefit without a straight link buy.

But at this point I want to emphasize again that any site asking for money in exchange for a link is not necessarily in a good neighborhood. You might not actually want a link from them if they’re linking out to low-quality sites.

Or Go For A Labeled Sponsored Post

However, another way to turn this around is to just go ahead and pay them, as long as the post is labeled as sponsored and contains nofollow links and/or brand mentions. Sponsored posts still get indexed by search engines and AI platforms, which can use those mentions as validation of how great your site is and recommend it.
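To make the “labeled and compliant” part concrete: Google’s link-spam documentation asks that paid links carry rel="sponsored" (rel="nofollow" is also accepted), and FTC guidance requires a visible disclosure. A properly labeled sponsored link might look something like this (the URL and label wording are illustrative, not a template from either guideline):

```html
<!-- Visible disclosure so readers know the placement was paid for -->
<p class="disclosure">Sponsored content</p>

<!-- rel="sponsored" tells search engines not to pass ranking credit -->
<a href="https://example.com/" rel="sponsored">Example Brand</a>
```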

What’s beautiful about a labeled sponsored post is that it gives you full control over the messaging, which can be more valuable than a tossed-off link in a random paragraph. And because everything is disclosed and compliant, you reduce the long-term risk while still capturing visibility in AI Mode, ChatGPT, and Perplexity through citation signals.

Quid Pro No

Quid Pro No is about responding negatively to a solicitation and turning it around to get something you want, without actually saying the word no.

Featured Image by Shutterstock/Studio Romantic

Google Defends Parasite SEO Crackdown As EU Opens Investigation via @sejournal, @MattGSouthern

Google has defended its enforcement of site reputation abuse policies after the European Commission announced an investigation into whether the company unfairly demotes news publishers in search results.

The company published a blog post stating the investigation “is misguided and risks harming millions of European users” and that it “risks rewarding bad actors and degrading the quality of search results.”

Google’s Chief Scientist for Search, Pandu Nayak, wrote the response.

Background

The European Commission announced an investigation under the Digital Markets Act examining whether Google’s anti-spam policies unfairly penalize legitimate publisher revenue models.

Publishers complained that Google demotes news sites running sponsored content and third-party promotional material. EU antitrust chief Teresa Ribera said:

“We are concerned that Google’s policies do not allow news publishers to be treated in a fair, reasonable and non-discriminatory manner in its search results.”

Google updated its site reputation abuse policy last year to combat parasite SEO. The practice involves spammers paying publishers to host content on established domains to manipulate search rankings.

The policy targets content like payday loan reviews on educational sites, casino content on medical sites, or third-party coupon pages on news publishers. Google provided specific examples in its announcement including weight-loss pill spam and payday loan promotions.

Manual enforcement began shortly after. Google issued penalties to major publishers including Forbes, The Wall Street Journal, Time and CNN in November 2024.

Google later updated the policy to clarify that first-party oversight doesn’t exempt content primarily designed to exploit ranking signals.

Google’s Defense

Google’s response emphasized three points.

First, Google stated that a German court dismissed a similar claim, ruling the anti-spam policy was “valid, reasonable, and applied consistently.”

Second, Google says its policy protects users from scams and low-quality content. Allowing pay-to-play ranking manipulation would “enable bad actors to displace sites that don’t use those spammy tactics.”

Third, Google says smaller creators support the crackdown. The company claims its policy “helps level the playing field” so legitimate sites competing on content quality aren’t outranked by sites using deceptive tactics.

Nayak argues the Digital Markets Act is already making Search “less helpful for European businesses and users,” and says the new probe risks rewarding bad actors.

The company has relied exclusively on manual enforcement so far. Google confirmed in May 2024 that it hadn’t launched algorithmic actions for site reputation abuse, only manual reviews by human evaluators.

Google added site reputation abuse to its Search Quality Rater Guidelines in January 2025, defining it as content published on host sites “mainly because of that host site’s already-established ranking signals.”

Why This Matters

The investigation creates a conflict between spam enforcement and publisher business models.

Google maintains parasite SEO degrades search results regardless of who profits. Publishers argue sponsored content with editorial oversight provides legitimate value and revenue during challenging times for media.

The distinction matters. If Google’s policy captures legitimate publisher-advertiser partnerships, it restricts how news organizations monetize content. If the policy only targets manipulative tactics, it protects search quality.

The EU’s position suggests regulators view Google’s enforcement as potentially discriminatory. The Digital Markets Act prohibits gatekeepers from unfairly penalizing others, with fines up to 10% of global revenue for violations.

Google addressed concerns about the policy in December 2024, confirming that affiliate content properly marked isn’t affected and that publishers must submit reconsideration requests through Search Console to remove penalties.

The updated policy documentation clarified that simply having third-party content isn’t a violation unless explicitly published to exploit a site’s rankings.

The policy has sparked debate in the SEO community about whether Google should penalize sites based on business arrangements rather than content quality.

Looking Ahead

The European Commission has opened the investigation under the Digital Markets Act and will now gather evidence and define the specific DMA provisions under examination.

Google will receive formal statements of objections outlining alleged violations. The company can respond with arguments defending its policies.

DMA investigations move faster than traditional antitrust cases. Publishers may submit formal complaints providing evidence of traffic losses and revenue impacts.

The outcome could force changes to how Google enforces spam policies in Europe or validate its current approach to protecting search quality.


Featured Image: daily_creativity/Shutterstock

llms.txt: The Web’s Next Great Idea, Or Its Next Spam Magnet via @sejournal, @DuaneForrester

At a recent conference, I was asked if llms.txt mattered. I’m personally not a fan, and we’ll get into why below. I listened to a friend who told me I needed to learn more about it as she believed I didn’t fully understand the proposal, and I have to admit that she was right. After doing a deep dive on it, I now understand it much better. Unfortunately, that only served to crystallize my initial misgivings. And while this may sound like a single person disliking an idea, I’m actually trying to view this from the perspective of the search engine or the AI platform. Why would they, or why wouldn’t they, adopt this protocol? And that POV led me to some, I think, interesting insights.

We all know that search is not the only discovery layer anymore. Large-language-model (LLM)-driven tools are rewriting how web content is found, consumed, and represented. The proposed protocol, called llms.txt, attempts to help websites guide those tools. But the idea carries the same trust challenges that killed earlier “help the machine understand me” signals. This article explores what llms.txt is meant to do (as I understand it), why platforms would be reluctant, how it can be abused, and what must change before it becomes meaningful.

Image Credit: Duane Forrester

What llms.txt Hoped To Fix

Modern websites are built for human browsers: heavy JavaScript, complex navigation, interstitials, ads, dynamic templates. But most LLMs, especially at inference time, operate in constrained environments: limited context windows, single-pass document reads, and simpler retrieval than traditional search indexers. The original proposal from Answer.AI suggests adding an llms.txt markdown file at the root of a site, which lists the most important pages, optionally with flattened content so AI systems don’t have to scramble through noise.

Supporters describe the file as “a hand-crafted sitemap for AI tools” rather than a crawl-block file. In short, the theory: Give your site’s most valuable content in a cleaner, more accessible format so tools don’t skip it or misinterpret it.
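To make that concrete, here is a minimal sketch of the file the Answer.AI proposal describes, for a hypothetical travel site (all URLs and descriptions here are illustrative, not from any real deployment): an H1 with the site name, a blockquote summary, and H2 sections listing markdown links, with an optional “Optional” section for lower-priority pages.

```markdown
# Example Travel Guide

> Curated guides to allergy-friendly dining and lodging destinations.

## Guides

- [Gluten-Free Dining in Lisbon](https://example.com/lisbon-gf.md): Interviews with restaurant owners
- [Allergy-Friendly Lodging](https://example.com/lodging.md): Vetted hotels and B&Bs

## Optional

- [About This Site](https://example.com/about.md): Editorial policy and contact details
```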

The Trust Problem That Never Dies

If you step back, you discover this is a familiar pattern. Early in the web’s history, something like the meta keywords tag let a site declare what it was about; it was widely abused and ultimately ignored. Similarly, authorship markup (rel=author, etc) tried to help machines understand authority, and again, manipulation followed. Structured data (schema.org) succeeded only after years of governance and shared adoption across search engines. llms.txt sits squarely inside this lineage: a self-declared signal that promises clarity but trusts the publisher to tell the truth. Without verification, every little root-file standard becomes a vector for manipulation.

The Abuse Playbook (What Spam Teams See Immediately)

What concerns platform policy teams is plain: If a website publishes a file called llms.txt and claims whatever it likes, how does the platform know that what’s listed matches the live content users see, or can be trusted in any way? Several exploit paths open up:

  1. Cloaking through the manifest. A site lists pages in the file that are hidden from regular visitors or behind paywalls, then the AI tool ingests content nobody else sees.
  2. Keyword stuffing or link dumping. The file becomes a directory stuffed with affiliate links, low-value pages, or keyword-heavy anchors aimed at gaming retrieval.
  3. Poisoning or biasing content. If agents trust manifest entries more than the crawl of messy HTML, a malicious actor can place manipulative instructions or biased lists that affect downstream results.
  4. Third-party link chains. The file could point to off-domain URLs, redirect farms, or content islands, making your site a conduit or amplifier for low-quality content.
  5. Trust laundering. The presence of a manifest might lead an LLM to assign higher weight to listed URLs, so a thin or spammy page gets a boost purely by appearance of structure.
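To see how mechanical some of these checks are from a spam team’s perspective, here is a small Python sketch, with function names and thresholds that are my own invention rather than any platform’s actual pipeline, that flags two of the patterns above in a manifest: off-domain link chains and link dumping.

```python
import re
from urllib.parse import urlparse

# Markdown link pattern used by the llms.txt proposal: [title](url)
LINK_RE = re.compile(r"\[([^\]]*)\]\((https?://[^)\s]+)\)")

def extract_links(manifest_text):
    """Return (title, url) pairs for every markdown link in the manifest."""
    return LINK_RE.findall(manifest_text)

def audit_manifest(manifest_text, site_host, max_links=50):
    """Flag off-domain URLs and possible link dumping (threshold illustrative)."""
    links = extract_links(manifest_text)
    off_domain = [url for _, url in links if urlparse(url).hostname != site_host]
    flags = []
    if off_domain:
        flags.append(("off_domain_links", off_domain))
    if len(links) > max_links:
        flags.append(("possible_link_dump", len(links)))
    return flags

manifest = """# Example Site
- [Guide](https://example.com/guide.md): main guide
- [Affiliate](https://spammy.example.net/offer): totally organic link
"""
print(audit_manifest(manifest, "example.com"))
# → [('off_domain_links', ['https://spammy.example.net/offer'])]
```

A real pipeline would obviously do far more (fetch the pages, compare them against what browsers see), but even this toy version shows why the abuse surface is easy for platforms to reason about, and easy for publishers to trip.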

The broader commentary flags this risk. For instance, some industry observers argue that llms.txt “creates opportunities for abuse, such as cloaking.” And community feedback apparently confirms minimal actual uptake: “No LLM reads them.” That absence of usage ironically means fewer real-world case studies of abuse, but it also means fewer safety mechanisms have been tested.

Why Platforms Hesitate

From a platform’s viewpoint, the calculus is pragmatic: New signals add cost, risk, and enforcement burden. Here’s how the logic works.

First, signal quality. If llms.txt entries are noisy, spammy, or inconsistent with the live site, then trusting them can reduce rather than raise content quality. Platforms must ask: Will this file improve our model’s answer accuracy or create risk of misinformation or manipulation?

Second, verification cost. To trust a manifest, you need to cross-check it against the live HTML, canonical tags, structured data, site logs, etc. That takes resources. Without verification, a manifest is just another list that might lie.

Third, abuse handling. If a bad actor publishes an llms.txt manifest that lists misleading URLs which an LLM ingests, who handles the fallout? The site owner? The AI platform? The model provider? That liability issue is real.

Fourth, user-harm risk. An LLM citing content from a manifest might produce inaccurate or biased answers. This just adds to the current problem we already face with inaccurate answers and people following incorrect, wrong, or dangerous answers.

Google has already stated that it will not rely on llms.txt for its “AI Overviews” feature and continues to follow “normal SEO.” And John Mueller wrote: “FWIW no AI system currently uses llms.txt.” So the tools that could use the manifest are largely staying on the sidelines. This reflects the idea that a root-file standard without established trust is a liability.

Why Adoption Without Governance Fails

Every successful web standard has shared DNA: a governing body, a clear vocabulary, and an enforcement pathway. The standards that survive all answer one question early … “Who owns the rules?”

Schema.org worked because that answer was clear. It began as a coalition between Bing, Google, Yahoo, and Yandex. The collaboration defined a bounded vocabulary, agreed syntax, and a feedback loop with publishers. When abuse emerged (fake reviews, fake product data), those engines coordinated enforcement and refined documentation. The signal endured because it wasn’t owned by a single company or left to self-police.

Robots.txt, in contrast, survived by being minimal. It didn’t try to describe content quality or semantics. It only told crawlers what not to touch. That simplicity reduced its surface area for abuse. It required almost no trust between webmasters and platforms. The worst that could happen was over-blocking your own content; there was no incentive to lie inside the file.
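For comparison, the entire robots.txt contract fits in a couple of directives. A file like this (paths illustrative) only tells crawlers what to skip, so there is nothing in it worth lying about:

```
User-agent: *
Disallow: /private/
Disallow: /tmp/
```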

llms.txt lives in the opposite world. It invites publishers to self-declare what matters most and, in its full-text variant, what the “truth” of that content is. There’s no consortium overseeing the format, no standardized schema to validate against, and no enforcement group to vet misuse. Anyone can publish one. Nobody has to respect it. And no major LLM provider today is known to consume it in production. Maybe some do, privately, but there have been no public announcements of adoption.

What Would Need To Change For Trust To Build

To shift from optional neat-idea to actual trusted signal, several conditions must be met, and each of these entails a cost in either dollars or human time, so again, dollars.

  • First, manifest verification. A signature or DNS-based verification could tie an llms.txt file to site ownership, reducing spoof risk. (cost to website)
  • Second, cross-checking. Platforms should validate that URLs listed correspond to live, public pages, and identify mismatch or cloaking via automated checks. (cost to engine/platform)
  • Third, transparency and logging. Public registries of manifests and logs of updates would make dramatic changes visible and allow community auditing. (cost to someone)
  • Fourth, measurement of benefit. Platforms need empirical evidence that ingesting llms.txt leads to meaningful improvements in answer correctness, citation accuracy, or brand representation. Until then, this is speculative. (cost to engine/platform)
  • Finally, abuse deterrence. Mechanisms must be built to detect and penalize spammy or manipulative manifest usage. Without that, spam teams simply assume negative benefit. (cost to engine/platform)
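The cross-checking step, at least, is straightforward to prototype. The sketch below is entirely hypothetical (no platform has published its actual verification logic): it takes a fetch function, so a real crawler could plug in live HTTP requests, and reports manifest URLs that don’t resolve to a live public page.

```python
import re

# Extract just the URLs from markdown links: [title](url)
LINK_URL_RE = re.compile(r"\]\((https?://[^)\s]+)\)")

def verify_manifest(manifest_text, fetch_status):
    """Cross-check each manifest URL against the live site.

    fetch_status(url) should return an HTTP status code (in production,
    via a real crawler); anything other than 200 is flagged for a human
    look, since it may indicate cloaking, paywalls, or dead pages.
    """
    results = {}
    for url in LINK_URL_RE.findall(manifest_text):
        status = fetch_status(url)
        results[url] = "ok" if status == 200 else f"flag:{status}"
    return results

# Stub fetcher standing in for real HTTP requests.
LIVE = {"https://example.com/guide.md": 200}

manifest = "- [Guide](https://example.com/guide.md)\n- [Ghost](https://example.com/hidden.md)\n"
print(verify_manifest(manifest, lambda u: LIVE.get(u, 404)))
# → {'https://example.com/guide.md': 'ok', 'https://example.com/hidden.md': 'flag:404'}
```

The hard part isn’t the code; it’s running checks like this at web scale, continuously, against manifests that can change at any time. That operational cost is exactly what the bullets above price in.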

Until those elements are in place, platforms will treat llms.txt as optional at best or irrelevant at worst. So maybe you get a small benefit? Or maybe not…

The Real Value Today

For site owners, llms.txt still may have some value, but not as a guaranteed path to traffic or “AI ranking.” It can function as a content alignment tool, guiding internal teams to identify priority URLs you want AI systems to see. For documentation-heavy sites, internal agent systems, or partner tools that you control, it may make sense to publish a manifest and experiment.

However, if your goal is to influence large public LLM-powered results (such as those from Google, OpenAI, or Perplexity), you should tread cautiously. There is no public evidence those systems honor llms.txt yet. In other words: Treat llms.txt as a “mirror” of your content strategy, not a “magnet” pulling traffic. Of course, this means building the file(s) and maintaining them, so factor in the added work vs. whatever return you believe you will receive.

Closing Thoughts

The web keeps trying to teach machines about itself. Each generation invents a new format, a new way to declare “here’s what matters.” And each time the same question decides its fate: “Can this signal be trusted?” With llms.txt, the idea is sound, but the trust mechanisms aren’t yet baked in. Until verification, governance, and empirical proof arrive, llms.txt will reside in the grey zone between promise and problem.

More Resources:


This post was originally published on Duane Forrester Decodes.


Featured Image: Roman Samborskyi/Shutterstock