This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
A “QuitGPT” campaign is urging people to cancel their ChatGPT subscriptions
In September, Alfred Stephen, a freelance software developer in Singapore, purchased a ChatGPT Plus subscription, which costs $20 a month and offers more access to advanced models, to speed up his work. But he grew frustrated with the chatbot’s coding abilities and its gushing, meandering replies. Then he came across a post on Reddit about a campaign called QuitGPT.
QuitGPT is one of the latest salvos in a growing movement by activists and disaffected users to cancel their subscriptions. In just the past few weeks, users have flooded Reddit with stories about quitting the chatbot. And while it’s unclear how many users have joined the boycott, there’s no denying QuitGPT is getting attention.Read the full story.
—Michelle Kim
EVs could be cheaper to own than gas cars in Africa by 2040
Electric vehicles could be economically competitive in Africa sooner than expected. Just 1% of new cars sold across the continent in 2025 were electric, but a new analysis finds that with solar off-grid charging, EVs could be cheaper to own than gas vehicles by 2040.
There are major barriers to higher EV uptake in many countries in Africa, including a sometimes unreliable grid, limited charging infrastructure, and a lack of access to affordable financing. But as batteries and the vehicles they power continue to get cheaper, the economic case for EVs is building. Read the full story.
—Casey Crownhart
MIT Technology Review Narrated: How next-generation nuclear reactors break out of the 20th-century blueprint
The popularity of commercial nuclear reactors has surged in recent years as worries about climate change and energy independence drowned out concerns about meltdowns and radioactive waste.
The problem is, building nuclear power plants is expensive and slow.
A new generation of nuclear power technology could reinvent what a reactor looks like—and how it works. Advocates hope that new tech can refresh the industry and help replace fossil fuels without emitting greenhouse gases.
This is our latest story to be turned into a MIT Technology Review Narrated podcast, which we’re publishing each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released.
The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 Social media giants have agreed to be rated on teen safety Meta, TikTok and Snap will undergo independent assessments over how effectively they protect the mental health of teen users. (WP $) + Discord, YouTube, Pinterest, Roblox and Twitch have also agreed to be graded. (LA Times $)
2 The FDA has refused to review Moderna’s mRNA flu vaccine It’s the latest in a long line of anti-vaccination moves the agency is making. (Ars Technica) + Experts worry it’ll have a knock-on effect on investment in future vaccines. (The Guardian) + Moderna says it was blindsided by the decision. (CNN)
3 EV battery factories are pivoting to manufacturing energy cells Energy storage systems are in, electric vehicles are out. (FT $)
4 Why OpenAI killed off ChatGPT’s 4o model The qualities that make it attractive for some users make it incredibly risky for others. (WSJ $) + Bereft users have set up their own Reddit community to mourn. (Futurism) + Why GPT-4o’s sudden shutdown left people grieving. (MIT Technology Review)
5 Drug cartels have started laundering money through crypto And law enforcement is struggling to stop them. (Bloomberg $)
6 Morocco wants to build an AI for Africa The country’s Minister of Digital Transition has a plan. (Rest of World) + What Africa needs to do to become a major AI player. (MIT Technology Review)
7 Christian influencers are bowing out of the news cycle They’re choosing to ignore world events to protect their own inner peace. (The Atlantic $)
8 An RFK Jr-approved diet is pretty joyless Don’t expect any dessert, for one. (Insider $) + The US government’s health site uses Grok to dispense nutrition advice. (Wired $)
9 Don’t toss out your used vape Hackers can give it a second life as a musical synthesizer. (Wired $)
10 An ice skating duo danced to AI music at the Winter Olympics Centuries of bangers to choose from, and this is what they opted for. (TechCrunch) + AI is coming for music, too. (MIT Technology Review)
Quote of the day
“These companies are terrified that no one’s going to notice them.”
—Tom Goodwin, co-founder of business consulting firm All We Have Is Now, tells the Guardian why AI startups are going to increasingly desperate measures to grab would-be customers’ attention.
One more thing
How AI is changing gymnastics judging
The 2023 World Championships last October marked the first time an AI judging system was used on every apparatus in a gymnastics competition. There are obvious upsides to using this kind of technology: AI could help take the guesswork out of the judging technicalities. It could even help to eliminate biases, making the sport both more fair and more transparent.
At the same time, others fear AI judging will take away something that makes gymnastics special. Gymnastics is a subjective sport, like diving or dressage, and technology could eliminate the judges’ role in crafting a narrative.
For better or worse, AI has officially infiltrated the world of gymnastics. The question now is whether it really makes it fairer. Read the full story.
—Jessica Taylor Price
We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)
+ Today marks the birthday of the late, great Leslie Nielsen—one of the best to ever do it. + Congratulations are in order for Hannah Cox, who has just completed 100 marathons in 100 days across India in her dad’s memory. + Feeling down? A trip to Finland could be just what you need. + We love Padre Guilherme, the Catholic priest dropping incredible Gregorian chant beats.
Risky business of AI assistants OpenClaw, a viral tool created by independent engineer Peter Steinberger, allows users to create personalized AI assistants. Security experts are alarmed by its vulnerabilities, with even the Chinese government issuing warnings about the risks.
The prompt injection threat Tools like OpenClaw have many vulnerabilities, but the one experts are most worried about its prompt injection. Unlike conventional hacking, prompt injection tricks an LLM by embedding malicious text in emails or websites the AI reads.
No silver bullet for security Researchers are exploring multiple defense strategies: training LLMs to ignore injections, using detector LLMs to screen inputs, and creating policies that restrict harmful outputs. The fundamental challenge remains balancing utility with security in AI assistants.
AI agents are a risky business. Even when stuck inside the chatbox window, LLMs will make mistakes and behave badly. Once they have tools that they can use to interact with the outside world, such as web browsers and email addresses, the consequences of those mistakes become far more serious.
That might explain why the first breakthrough LLM personal assistant came not from one of the major AI labs, which have to worry about reputation and liability, but from an independent software engineer, Peter Steinberger. In November of 2025, Steinberger uploaded his tool, now called OpenClaw, to GitHub, and in late January the project went viral.
OpenClaw harnesses existing LLMs to let users create their own bespoke assistants. For some users, this means handing over reams of personal data, from years of emails to the contents of their hard drive. That has security experts thoroughly freaked out. The risks posed by OpenClaw are so extensive that it would probably take someone the better part of a week to readallofthesecurityblogposts on it that have cropped up in the past few weeks. The Chinese government took the step of issuing a public warning about OpenClaw’s security vulnerabilities.
In response to these concerns, Steinberger posted on X that nontechnical people should not use the software. (He did not respond to a request for comment for this article.) But there’s a clear appetite for what OpenClaw is offering, and it’s not limited to people who can run their own software security audits. Any AI companies that hope to get in on the personal assistant business will need to figure out how to build a system that will keep users’ data safe and secure. To do so, they’ll need to borrow approaches from the cutting edge of agent security research.
Risk management
OpenClaw is, in essence, a mecha suit for LLMs. Users can choose any LLM they like to act as the pilot; that LLM then gains access to improved memory capabilities and the ability to set itself tasks that it repeats on a regular cadence. Unlike the agentic offerings from the major AI companies, OpenClaw agents are meant to be on 24-7, and users can communicate with them using WhatsApp or other messaging apps. That means they can act like a superpowered personal assistant who wakes you each morning with a personalized to-do list, plans vacations while you work, and spins up new apps in its spare time.
But all that power has consequences. If you want your AI personal assistant to manage your inbox, then you need to give it access to your email—and all the sensitive information contained there. If you want it to make purchases on your behalf, you need to give it your credit card info. And if you want it to do tasks on your computer, such as writing code, it needs some access to your local files.
There are a few ways this can go wrong. The first is that the AI assistant might make a mistake, as when a user’s Google Antigravity coding agent reportedly wiped his entire hard drive. The second is that someone might gain access to the agent using conventional hacking tools and use it to either extract sensitive data or run malicious code. In the weeks since OpenClaw went viral, security researchers have demonstrated numeroussuchvulnerabilities that put security-naïve users at risk.
Both of these dangers can be managed: Some users are choosing to run their OpenClaw agents on separate computers or in the cloud, which protects data on their hard drives from being erased, and other vulnerabilities could be fixed using tried-and-true security approaches.
But the experts I spoke to for this article were focused on a much more insidious security risk known as prompt injection. Prompt injection is effectively LLM hijacking: Simply by posting malicious text or images on a website that an LLM might peruse, or sending them to an inbox that an LLM reads, attackers can bend it to their will.
And if that LLM has access to any of its user’s private information, the consequences could be dire. “Using something like OpenClaw is like giving your wallet to a stranger in the street,” says Nicolas Papernot, a professor of electrical and computer engineering at the University of Toronto. Whether or not the major AI companies can feel comfortable offering personal assistants may come down to the quality of the defenses that they can muster against such attacks.
It’s important to note here that prompt injection has not yet caused any catastrophes, or at least none that have been publicly reported. But now that there are likely hundreds of thousands of OpenClaw agents buzzing around the internet, prompt injection might start to look like a much more appealing strategy for cybercriminals. “Tools like this are incentivizing malicious actors to attack a much broader population,” Papernot says.
Building guardrails
The term “prompt injection” was coined by the popular LLM blogger Simon Willison in 2022, a couple of months before ChatGPT was released. Even back then, it was possible to discern that LLMs would introduce a completely new type of security vulnerability once they came into widespread use. LLMs can’t tell apart the instructions that they receive from users and the data that they use to carry out those instructions, such as emails and web search results—to an LLM, they’re all just text. So if an attacker embeds a few sentences in an email and the LLM mistakes them for an instruction from its user, the attacker can get the LLM to do anything it wants.
Prompt injection is a tough problem, and it doesn’t seem to be going away anytime soon. “We don’t really have a silver-bullet defense right now,” says Dawn Song, a professor of computer science at UC Berkeley. But there’s a robust academic community working on the problem, and they’ve come up with strategies that could eventually make AI personal assistants safe.
Technically speaking, it is possible to use OpenClaw today without risking prompt injection: Just don’t connect it to the internet. But restricting OpenClaw from reading your emails, managing your calendar, and doing online research defeats much of the purpose of using an AI assistant. The trick of protecting against prompt injection is to prevent the LLM from responding to hijacking attempts while still giving it room to do its job.
One strategy is to train the LLM to ignore prompt injections. A major part of the LLM development process, called post-training, involves taking a model that knows how to produce realistic text and turning it into a useful assistant by “rewarding” it for answering questions appropriately and “punishing” it when it fails to do so. These rewards and punishments are metaphorical, but the LLM learns from them as an animal would. Using this process, it’s possible to train an LLM not to respond to specific examples of prompt injection.
But there’s a balance: Train an LLM to reject injected commands too enthusiastically, and it might also start to reject legitimate requests from the user. And because there’s a fundamental element of randomness in LLM behavior, even an LLM that has been very effectively trained to resist prompt injection will likely still slip up every once in a while.
Another approach involves halting the prompt injection attack before it ever reaches the LLM. Typically, this involves using a specialized detector LLM to determine whether or not the data being sent to the original LLM contains any prompt injections. In a recent study, however, even the best-performing detector completely failed to pick up on certain categories of prompt injection attack.
The third strategy is more complicated. Rather than controlling the inputs to an LLM by detecting whether or not they contain a prompt injection, the goal is to formulate a policy that guides the LLM’s outputs—i.e., its behaviors—and prevents it from doing anything harmful. Some defenses in this vein are quite simple: If an LLM is allowed to email only a few pre-approved addresses, for example, then it definitely won’t send its user’s credit card information to an attacker. But such a policy would prevent the LLM from completing many useful tasks, such as researching and reaching out to potential professional contacts on behalf of its user.
“The challenge is how to accurately define those policies,” says Neil Gong, a professor of electrical and computer engineering at Duke University. “It’s a trade-off between utility and security.”
On a larger scale, the entire agentic world is wrestling with that trade-off: At what point will agents be secure enough to be useful? Experts disagree. Song, whose startup, Virtue AI, makes an agent security platform, says she thinks it’s possible to safely deploy an AI personal assistant now. But Gong says, “We’re not there yet.”
Even if AI agents can’t yet be entirely protected against prompt injection, there are certainly ways to mitigate the risks. And it’s possible that some of those techniques could be implemented in OpenClaw. Last week, at the inaugural ClawCon event in San Francisco, Steinberger announced that he’d brought a security person on board to work on the tool.
As of now, OpenClaw remains vulnerable, though that hasn’t dissuaded its multitude of enthusiastic users. George Pickett, a volunteer maintainer of the OpenGlaw GitHub repository and a fan of the tool, says he’s taken some security measures to keep himself safe while using it: He runs it in the cloud, so that he doesn’t have to worry about accidentally deleting his hard drive, and he’s put mechanisms in place to ensure that no one else can connect to his assistant.
But he hasn’t taken any specific actions to prevent prompt injection. He’s aware of the risk but says he hasn’t yet seen any reports of it happening with OpenClaw. “Maybe my perspective is a stupid way to look at it, but it’s unlikely that I’ll be the first one to be hacked,” he says.
This week’s rundown of new products and services for ecommerce merchants includes rollouts for reverse logistics, fraud prevention, fulfillment, AI assistants, AI store builders, chargebacks, checkouts, agentic commerce, and automated marketing.
Got an ecommerce product release? Email updates@practicalecommerce.com.
New Tools for Merchants
ReturnPro partners with Clarity to detect fraud on returns.ReturnPro, a provider of returns management and reverse logistics, has partnered with Clarity, an item intelligence platform, to introduce AI-powered fraud-detection technology that identifies counterfeit, altered, and fraudulent returns and flags missing accessories at the point of return. Clarity’s AI technology combines X-ray intelligence with computer vision to see inside the actual product, comparing each returned item against its original manufacturer profile and detecting counterfeits, component swaps, and product manipulation.
ReturnPro
Bolt partners with Socure for ecommerce identity.Bolt, a financial technology platform for one-click checkout, has partnered with Socure to verify real people in real time at the moment of purchase. By integrating Socure’s RiskOS platform, Bolt delivers an ecommerce identity layer powered by predictive risk signals and compliance decisioning. Socure’s Identity Graph enables low-friction authentication for trusted consumers, adaptive protections, and cross-merchant trust signals.
Knowband launches generative AI plugins for PrestaShop.Knowband, an ecommerce developer, has launched two AI-based plugins for merchants. The PrestaShop AI Chatbot module answers product and order questions in real time. It supports multiple languages and currencies and uses vector search to understand query meanings. The PrestaShop LLMs Txt Generator module helps store owners automatically produce llms.txt files for their catalog, increasing the likelihood that genAI platforms discover and reference the products.
ShipTime acquires Warehowz to expand North American capabilities.ShipTime, a Canada-based logistics technology platform, has acquired an ownership stake in Warehowz, an on-demand warehousing and fulfillment marketplace with a network of 2,500 warehouses across North America. According to ShipTime, integrating Warehowz into ShipTime’s ecosystem enables merchants to gain greater control, visibility, and adaptability across the supply chain.
ShipTime
WordPress.com releases a Claude connector.WordPress.com has launched an official connector for Claude, the AI assistant developed by Anthropic. Once set up, Claude can answer questions using your WordPress.com site data, not estimates or generic guidance. According to WordPress, the Claude plugin can identify what readers respond to, surface content that needs refreshing, and spot opportunities for improvement.
Cside launches AI Agent Detection toolkit.Cside, a provider of website security and compliance, has launched its AI Agent Detection toolkit to identify agentic traffic and behavior from both traditional and AI-powered headless browsers. AI Agent Detection governs which AI agents can interact with the website, what they are allowed to do, and when human validation is required. Cside says the new toolkit enables merchants to leverage agentic commerce behavior for cross-selling, dynamic pricing, and additional verification requirements.
Chargebase launches to help merchants cut chargebacks.Chargebase has launched its chargeback-prevention platform for ecommerce and SaaS businesses. The platform automates the alert-resolution process, matching alerts to orders and handling backend communication with transaction dispute platforms such as Verifi and Ethoca. By receiving real-time alerts when a customer initiates a dispute with their bank, merchants can issue a quick refund and avoid a costly chargeback. Merchants pay when the platform helps avoid or resolve a dispute.
Chargebase
Loop expands Europe-based returns capabilities with Sendcloud integration.Loop, a post-purchase platform for Shopify sellers, has launched Ship by Loop 2.0, an upgraded version of its integrated return shipping service that now includes Sendcloud, a Europe-based shipping platform. With the Sendcloud integration, Loop merchants gain access to an expanded carrier network across Europe without leaving Loop’s returns portal. The enhancement also introduces QR code returns and InPost locker drop-offs.
SDLC Corp announces connector for syncing Shopify and Odoo ERP data.SDLC Corp, part of open-source developer Odoo, has launched an SDLC Connector for teams running Shopify and Odoo ERP. According to SDLC Corp, the connector synchronizes products, customers, orders, inventory, payments, and collections in real-time. The integration features include real-time Shopify-to-Odoo data sync, automated imports with validation, bidirectional inventory updates, webhook and scheduled auto-sync modes, multi-store support, custom field mapping within the Odoo dashboard, token-based authentication, and more.
Genstore launches AI tool to build and operate stores.Genstore, a store builder, has launched its ecommerce platform that uses autonomous AI to build and operate ecommerce sites. According to Genstore, its platform deploys coordinated AI agents that collaborate to execute real business tasks autonomously. The design agent creates layout, branding, and motion. The product agent generates listings, descriptions, and imagery. The launch agent prepares search engine, compliance, and store readiness. And the analytics agent uncovers conversion-driving insights.
Genstore
Prolisto launches Lite for creating eBay listings.Prolisto, a software development company specializing in ecommerce automation, has announced the launch of Prolisto Lite, a free AI-powered web app that simplifies and accelerates the process of creating eBay listings. According to Prolisto, Lite analyzes uploaded product images and generates an eBay title, a detailed search-engine-friendly description, and the appropriate item specifics.
EcomHint launches conversion rate optimization tool for Shopify and WooCommerce.EcomHint has launched its AI-powered conversion rate optimization tool for Shopify and WooCommerce merchants. The tool helps merchants identify conversion issues throughout the shopping journey and provides step-by-step guidance on how to fix them. EcomHint combines AI-based visual analysis, technical checks, and Lighthouse performance metrics to review key parts of the store, including home and product pages, cart and checkout friction points, and page speed. EcomHint bases its recommendations on an analysis of 700 online stores.
Veho introduces FlexSave delivery option.Veho, a parcel delivery platform for ecommerce, has launched FlexSave to help online brands offer cost-effective delivery. According to Veho, FlexSave enables shippers to reduce costs by replacing day-certain delivery dates with slightly broader delivery windows. Veho customers continue to receive proactive delivery updates, live support, and photo delivery confirmation.
Reputation Signals Now Matter More Than Reviews Alone
Positive reviews are no longer the primary fast path to the top of local search results.
As Google Local Pack and Maps continue to evolve, reputation signals are playing a much larger role in how businesses earn visibility. At the same time, AI tools are emerging as a new entry point for local discovery, changing how brands are cited, mentioned, and recommended.
Join Alexia Platenburg, Senior Product Marketing Manager at GatherUp, for a data-driven look at the local SEO signals shaping visibility today. In this session, she will break down how modern reputation signals influence rankings and what scalable, defensible reputation programs look like for local SEO agencies and multi-location brands.
You will walk away with a clear framework for using reputation as a true visibility and ranking lever, not just a step toward conversion. The session connects reviews, owner responses, and broader reputation signals to measurable outcomes across Google Local Pack, Maps, and AI-powered discovery.
What You’ll Learn
How review volume, velocity, ratings, and owner responses influence Local Pack and Maps rankings
How to protect your brand from fake reviews before they impact trust at scale
Why Attend?
This webinar offers a practical, evidence-based view of how reputation management is shaping local visibility in 2026. You will gain clear guidance on what matters now, what to prioritize, and how to build trust signals that support long-term local growth.
Register now to learn how reputation is driving local visibility, trust, and growth in 2026.
🛑 Can’t attend live? Register anyway, and we’ll send you the on-demand recording after the webinar.
Google’s VP of Ads and Commerce, Vidhya Srinivasan, published her third annual letter to the industry, outlining how the company plans to connect advertising, commerce, and AI across Search, YouTube, and Gemini in 2026.
The letter covers agentic commerce, AI-powered ad formats, creator partnerships, and creative tools. Several of the announcements build on features Google previewed at NRF 2026 in January and detailed during its Q4 2025 earnings call earlier this month.
What’s New
UCP Adoption
The letter confirms that the Universal Commerce Protocol now powers purchases from Etsy and Wayfair for U.S. shoppers inside AI Mode in Search and Gemini. Google said it has received interest from “hundreds of top tech companies, payments partners and retailers” since launching UCP.
When Google announced UCP at NRF, the company said the protocol was co-developed with Shopify and that more than 20 companies had endorsed it.
Google also said UCP’s potential “extends far beyond retail,” describing it as the foundation for agentic experiences across all commercial categories.
AI Mode Ad Formats
Srinivasan wrote that Google is testing a new ad format in AI Mode that highlights retailers offering products relevant to a query and marks them as sponsored. The letter describes the format as helping “shoppers easily find convenient buying options” while giving retailers visibility during the consideration stage.
The letter also mentioned Direct Offers, the ad pilot Google introduced at NRF that lets businesses share tailored deals with shoppers in AI Mode. Google plans to expand Direct Offers beyond price-based promotions to include loyalty benefits and product bundles.
Creator-Brand Matching
Srinivasan described YouTube creators as “today’s most trusted tastemakers,” citing a Google/Kantar study of 2,160 weekly video viewers. YouTube CEO Neal Mohan outlined related creator and commerce priorities in his own annual letter last month.
The letter highlights new AI-powered tools that match brands with creator communities based on content and audience analysis. Google said it started with its “open call” feature for sourcing creator partnerships and plans to go further in 2026.
Creative Asset Stats
Google said it saw a 3x increase in Gemini-generated assets in 2025, and that Q4 alone accounted for nearly 70 million assets across AI Max and Performance Max campaigns, according to Google internal data.
Srinivasan wrote that Veo 3, Google’s video generation tool, is now in Google Ads Asset Studio alongside the previously launched Nano Banana.
AI Max Performance Claims
Srinivasan wrote that AI Max is “unlocking billions of net-new searches” that advertisers had not previously reached.
Google introduced AI Max as an expansion tool for Search campaigns and discussed its performance during the Q4 earnings call.
What this letter adds is a bigger picture of where Google’s leadership sees these pieces fitting together. Srinivasan says this is the year agentic commerce moves from concept to operating reality, with UCP as the connective layer across shopping, payments, and AI agents.
For advertisers, the notable updates are the expansion of Direct Offers beyond price discounts and the testing of AI Mode ad formats in travel. For ecommerce stores, the Etsy and Wayfair confirmation shows that UCP checkout is processing real transactions with recognizable retailers. But the open questions I raised in January’s coverage about Merchant Center controls, opt-in mechanics, and reporting remain unanswered.
Looking Ahead
Srinivasan’s letter didn’t include specific launch dates for the features coming later this year. Google Marketing Live, the company’s annual ads event, takes place in the spring and would be the likely venue for more detailed announcements.
Google’s John Mueller shared a case where a leftover HTTP homepage was causing unexpected site-name and favicon problems in search results.
The issue, which Mueller described on Bluesky, is easy to miss because Chrome can automatically upgrade HTTP requests to HTTPS, making the HTTP version easy to overlook.
What Happened
Mueller described the case as “a weird one.” The site used HTTPS, but a server-default HTTP homepage was still accessible at the HTTP version of the domain.
“A hidden homepage causing site-name & favicon problems in Search. This was a weird one. The site used HTTPS, however there was a server-default HTTP homepage remaining.”
The tricky part is that Chrome can upgrade HTTP navigations to HTTPS, which makes the HTTP version easy to miss in normal browsing. Googlebot doesn’t follow Chrome’s upgrade behavior.
“Chrome automatically upgrades HTTP to HTTPS so you don’t see the HTTP page. However, Googlebot sees and uses it to influence the sitename & favicon selection.”
Google’s site name system pulls the name and favicon from the homepage to determine what to display in search results. The system reads structured data from the website, title tags, heading elements, og:site_name, and other signals on the homepage. If Googlebot is reading a server-default HTTP page instead of the actual HTTPS homepage, it’s working with the wrong signals.
How To Check For This
Mueller suggested two ways to see what Googlebot sees.
First, he joked that you could use AI. Then he corrected himself.
“No wait, curl on the command line. Or a tool like the structured data test in Search Console.”
Running curl http://yourdomain.com from the command line would show the raw HTTP response without Chrome’s auto-upgrade. If the response returns a server-default page instead of your actual homepage, that’s the problem.
If you want to see what Google retrieved and rendered, use the URL Inspection tool in Search Console and run a Live Test. Google’s site name documentation also notes that site names aren’t supported in the Rich Results Test.
This case introduces a new complication. The problem wasn’t in the structured data or the HTTPS homepage itself. It was a ghost page in the HTTP version, which you’d have no reason to check because your browser never showed it.
Google’s site name documentation explicitly mentions duplicate homepages, including HTTP and HTTPS versions, and recommends using the same structured data for both. Mueller’s case shows what can go wrong when an HTTP version contains content different from the HTTPS homepage you intended to serve.
The takeaway for troubleshooting site-name or favicon problems in search results is to check the HTTP version of your homepage directly. Don’t rely on what Chrome shows you.
Looking Ahead
Google’s site name documentation specifies that WebSite structured data must be on “the homepage of the site,” defined as the domain-level root URI. For sites running HTTPS, that means the HTTPS homepage is the intended source.
If your site name or favicon looks wrong in search results and your HTTPS homepage has the correct structured data, check whether an HTTP version of the homepage still exists. Use curl or the URL Inspection tool’s Live Test to view it directly. If a server-default page is sitting there, removing it or redirecting HTTP to HTTPS at the server level should resolve the issue.
Most articles don’t make good videos. The ones that do share qualities that translate naturally to 60-second formats. Identifying them before you commit production resources saves more time than any editing shortcut.
Data and first-party guidance point to repeatable patterns in how short-form video holds attention. These patterns shape script structure in ways that written content doesn’t prepare you for.
A 1,500-word article that performs well as text may contain only 150 words worth converting to video, and those 150 words may not be the ones you’d instinctively choose.
In this guide, I’ll walk through the selection and scripting process, drawing on company guidance, third-party analysis, and creator workflows. The focus here is on two decisions that matter more than production quality. Which content to convert, and how to structure scripts that hold attention on each platform.
Some creator workflows follow an 80/20 rule: Spend most of your time choosing what article to convert, then polish the output. You may assume production quality drives results. In practice, selection matters more than polish.
How-To Content
How-to and tutorial content adapts well when converted to video. The reason is because of how it’s structured. How-to content breaks naturally into steps, and each step becomes either a standalone clip or a beat within a longer video. The segmentation is already built into the written piece.
Listicles
Listicles have this quality as well. Each list item gives you a cut point, so a “7 ways to improve X” article can become seven separate videos or one video with seven sections.
FAQs
FAQ content works well as each question-answer pair delivers complete value on its own, matching how people consume short-form video. They arrive mid-scroll, expecting an immediate payoff.
Case Studies
Case studies with clear problem-solution-result structures fit naturally into 60 seconds. Problem in the first 10, solution in the middle 40, result in the final 10. The narrative arc compresses without losing its logic.
Avoiding Content That Doesn’t Convert
Content that converts poorly has its own unique qualities.
Limited-Time Announcements
Announcements with a short shelf life rarely justify the effort because by the time you script, record, edit, and publish, the information may be stale.
Rapidly-Changing Data
Statistics-heavy pieces where data changes frequently create maintenance problems. A video claiming “X platform has 500 million users” becomes misleading within months, but it keeps circulating after the number expires.
Complex Arguments
Complex arguments that require multiple supporting points rarely fit into 60 seconds. If an article’s value comes from building a case across 2,000 words, extracting 150 words guts the logic that made it persuasive.
Audit Using Engagement Metrics
Before committing production resources, audit existing content using engagement metrics. Articles with 5%+ engagement rates or 1,000+ monthly visits make prime candidates because they’ve already validated the topic with your audience. Converting them becomes distribution rather than experimentation.
In one Diggity Marketing case study, a real estate technology company saw 148% higher referral traffic after repurposing blog content this way. They identified where their audience searched, built format-specific assets, and drove users back to core content. The blog became a hub with videos pulling attention from social platforms back to owned properties.
Script Timing That Matches Platform Retention
Each platform has different structural requirements, which means scripts optimized for one may underperform on another.
OpusClip’s retention analysis suggests YouTube Shorts see strong retention at 15-30 seconds, with tutorials often running 25-40 seconds. YouTube Shorts can run up to three minutes, but many retention-focused workflows start with shorter cuts. Many Reels strategies even skew shorter, and retention can drop as videos run longer.
TikTok for Business recommends 21-34 seconds for In-Feed ads. On TikTok, strong completion and replay behavior tend to correlate with wider distribution. If you’re aiming for TikTok’s Creator Rewards Program, videos need to be at least one minute long.
These differences mean a single script may need three versions for cross-platform distribution. Your core content stays the same, but timing adjusts to match each platform’s retention curve.
For videos at least one minute long (required for TikTok’s Creator Rewards Program), TikTok’s Creative Codes recommend a three-part structure: hook, body, and close.
The math shapes everything else. Industry standard speaking pace for video runs 140-160 words per minute, which means a 60-second script caps at roughly 150 words. TikTok’s research shows 90% of ad recall happens within the first six seconds, so your hook needs to land in that window. At a typical speaking pace, six seconds gives you about 15-20 words to establish why viewers should care.
That leaves roughly 45 seconds for body content and 5-10 seconds for your close.
If you’re spending 15-20 words on the hook and 15-25 on the close, the body gets 100-120 words. That’s enough for two or three points with room to breathe between them. More than three points in that space creates rush that tanks retention.
Think about it like this: If you can only say 150 words, you have to choose the 150 most important words from your 1,500-word article. That selection process is where conversion skill lives.
Hook Formulas Backed By Retention Data
The first three seconds determine whether viewers stay or scroll. A video with a weak opening has a problem, regardless of how strong the rest of the content is.
Here are some examples of strong hook formulas.
Surprising Stat
A surprising statistic paired with immediate relevance stops the scroll. Numbers signal credibility, and the surprise creates curiosity.
In practice, it reads like this:
“60% of people admit to procrastinating regularly, even when they know it causes stress.”
There’s an attention-grabbing stat, followed by why it matters to the viewer. This works because the number is specific and the relevance is universal.
Look for the most striking data points in your articles and move them to the front. Your article may have buried it in paragraph seven, while your video leads with it.
Questions
Question hooks create tension. Once you pose a question, the mind wants an answer, and viewers have to keep watching to close that loop.
The question needs to be specific enough to promise an answer in 60 seconds but broad enough to matter to your audience.
“What’s the one thing successful people have in common?” works. “What are the 47 traits of successful people?” doesn’t work because viewers know they can’t get that answer in under a minute.
Direct Stake
Direct stake hooks can capture the attention of professional audiences.
“If your site uses Product markup, this affects your shopping visibility,” tells professionals whether this video applies to them. This respects their time because they don’t have to guess whether the content is relevant.
Vague promises like “this changes everything” underperform because they don’t commit to delivering anything specific.
Converting Articles To Scripts
Converting articles to short video scripts is all about extracting what’s most important. Start by reading your article and asking what the most surprising, useful, or consequential single fact is. That becomes your hook.
Often, the most compelling part of an article sits in paragraph three or four. Written content gives you time to build context in your opening, whereas video doesn’t.
You can simplify the extraction process by following the hook, hint, value, credibility, takeaway, action (HHVCTA) framework.
HHVCTA Framework
The HHVCTA framework maps article content to video structure. The hook at 0-2 seconds stops the scroll, and the hint at 2-5 seconds previews what viewers will learn.
Value delivery from 5-45 seconds delivers on the hook’s promise, with credibility woven throughout or concentrated in a key moment. The takeaway at 45-55 seconds lands the message, and the action in the final seconds directs viewers to next steps.
This prevents frontloading all value with nothing left for the final 15 seconds. It also prevents the opposite mistake of saving everything for the end, which viewers never reach because they swiped.
A common underperforming pattern is the hook-delivery gap. The hook asks a question, the body pivots to related content without answering it, and viewers who stayed for an answer feel cheated.
After writing your body, reread the hook and check whether you actually answered the question. If your hook says “here’s why X happens” but your body covers effects without explaining causes, the script is a fail.
Maintaining Attention Through The Middle
The middle section is where most videos lose viewers. Incorporating pattern interrupts every 3-5 seconds can help maintain viewer engagement. Effective techniques include text overlays, B-roll, camera angle changes, and graphics.
While sound is essential to the TikTok experience, captions are critical for viewers in “quiet mode” (commuting, in bed, at work) and for discoverability. Showing and saying information together boosts retention and gives platforms additional signals about your content.
SEO For Video Content
TikTok says it considers “video information” like captions, sounds, and hashtags, so captions, on-screen text, and spoken audio can help your video get understood and surfaced in recommendations and search features.
Video titles should include primary keywords while piquing curiosity, and descriptions expand on the titles with more keyword context.
Caption accuracy matters for search. Auto-generated captions contain errors that platforms can pick up as content signals, so a video about “SEO” with captions reading “CEO” may surface for wrong queries. Review and fix auto-captions before publishing.
Hashtags signal categories to algorithms. Use broad tags like #marketing to reach large audiences, and specific ones like #emailmarketing for direct relevance. Evergreen content benefits from evergreen hashtags that maintain visibility months after posting.
Batch Production For Scale
Content teams that produce video at scale typically batch their workflows. Creators like Thomas Frank and Ali Abdaal have documented their batch filming processes, and Gary Vaynerchuk’s “64 pieces of content in a day” model is built on recording pillar content and distributing clips afterward.
Creating one video from scratch each time burns hours on repetitive decisions. You can cut per-video time by 60-80% through batching.
Batching refers to scripting multiple videos in one session, filming them all together, editing in batches with consistent formats, and then scheduling across platforms.
A typical batch for four to eight videos breaks down like this. Scripting all at once using templates takes two to three hours. Filming all videos in one session with consistent setup takes three to four hours. Editing across several days takes six to eight hours total. Scheduling takes about an hour.
Per-video time drops to roughly 40 minutes. Without batching, individual videos typically take 150-180 minutes each. The savings come from eliminating setup and context-switching between sessions.
Measuring Results
Short-form video works as top-of-funnel for most content teams. A video with 100,000 views and zero conversions may matter less than one with 10,000 views and 500 email signups.
When results fall short, retention curves pinpoint the problem. Sub-60% retention at three seconds points to hook issues. Steady early retention with sharp mid-video drops suggests pacing problems. Late drops typically mean content ran long or delivered value without giving viewers a reason to stay.
Looking Ahead
The gap between written content and video content is smaller than most teams assume. The research already exists. The expertise already exists. The structure is the only new variable, and that structure fits on an index card.
Teams that struggle with video production usually aren’t struggling with production itself. They’re struggling with selection and compression. They try to convert articles that don’t fit the format, or they refuse to cut material that worked in text but dies on screen.
The 150-word constraint is a decision-making tool that cuts editing in half before you start. Pick one article from your archive that performed well and had a clear takeaway. Convert it to a script. Then record it, read the retention curve, and adjust as needed.
You can keep reading articles like this one, but doing the work and iterating on it will teach you more than I ever could.
Hello, my fellow digital marketers! This study was born out of a question that gave me a combination of irritation and renewed curiosity: “If I turned off all paid media, would my business actually suffer?”
This is a question that is as old as time (in digital marketing time that is), and just like swallows returning to Capistrano, I am posed this question every Q1, when a brand reviews my annual paid media budget recommendation.
What I thought was going to be a four-week test, actually turned out to be a three-month test with a one-month analysis.
The Scenario
The analysis was done for a fast-casual dining restaurant chain that operates 50+ restaurants across 10+ U.S. States that I was asked to do some auditing on. But, honestly, this repeats for most brands and verticals (much less so in B2B, I will note).
As alluded to earlier, the brand had a noticeable disconnect between the restaurant deciphering the impact of its website revenue and the media dollars spent, vs. wondering if it was just cannibalizing its name recognition and organic efforts. It challenged the belief that media was not contributing, and wanted to turn it off in a trial period. But more so, it just didn’t think the paid media was contributing, and it wanted to save some cash. So, we obliged.
Due to the disconnect in restaurant dining revenue being passed back to digital media, we elected to focus on its direct on-site online order of food to validate the data.
Before and after the test period, it was using search, Performance Max, paid social, digital OOH, and Display. All channels covered both prospecting and retargeting efforts.
It ran limited email efforts to its customers registered for rewards, but it does not have a mobile app, so the customer list is quite small, while its annual digital media investment is around $1.1 million.
For the analysis, we planned to pause media for five weeks in the middle of its low season (which is about four months long), and then compare the overall impact on the site before it was paused and after we brought it back.
Thrilling? Well, let’s just say some folks do get all hot and bothered around a mid-to-low impact media holdout aggregate site activity analysis.
The Important Parameters
So, there are some important things to note around this test:
Traditional media was never stopped, but always ran at a low level, mostly billboards and radio.
For unexplainable reasons, they never hooked up Search Console.
They run a consistent SEO effort.
The analysis was done on the same restaurants for all three time periods (they had a couple shutdown and one open in this time period, so that data was removed from the assessment).
Their primary key performance indicator is in-restaurant visits, but they struggle to connect the source back to media initiative (we use three different foot traffic tracking vendors to measure it, but they don’t have the ability to pass back the in-restaurant sales data to the visit).
Foot traffic leads are actually worth 15-25% more than online order sales, but we do not have true pass-through revenue for them.
Against the recommendation to do this test on an isolated market was not taken, and they did a full blackout.
Breakout of Online Orders vs. Store Visits 4/14/25 to 5/18/25 (Image from author, January 2026)
Hypothesis
In my typical (and often inappropriate snarckastic manner, or so I am told), I referred them to my 2021 article, “How Paid Search Incrementality Impacts SEO (Does 1+1=3?),” and told them that this should be their baseline for anticipated impact. For those who don’t want to click on the link and read the article, my stance was that removing paid media and running organic only would have a grand net loss for the brand in terms of traffic and sales.
To give you a sense of performance, prior to the test, paid media accounted for ~28% of incremental site traffic and ~23% of online orders. Which in turn supports the following beliefs:
With paid search engine-driven traffic exiting, we expect organic to rise, but not enough to offset what paid drove.
With paid social out, we expect a net loss of overall social traffic, in addition to any halo impact driven by social awareness (i.e., direct to site, organic search).
With programmatic traffic out, we expect a decrease in aggregate search traffic and direct to site traffic.
Net-net, the loss of paid media will result in a net loss of site traffic, leading to a net loss of online sales, which will be greater than media cost that would’ve been used to generate those sales.
Data trends (Image from author, January 2026)
The Pre-Test Data
Having selected a five-week period as our control period, we reviewed the initial data upfront:
Spend
Impressions
Clicks/Site Visits
Online Orders
Revenue
Search
$30,000
395,000+
57,000+
6,000+
$250,000+
Performance Max
$20,000+
9 million+
27,000+
275+
$11,000+
Social
$23,000+
12 million+
38,000+
40+
$500
Programmatic Display
$450
19,000+
100+
1
$13
DOOH*
$5,000
62,000+
0
0
$0.00
Total
$80,000
21 million+
123,000+
6,000+
$262,000+
*Digital Out-of-Home advertising (DOOH)
Additionally, organic search had 131K+ site visits (42% of total), along with 12K+ online orders (46% of total) and $532K+ of revenue (47% of total).
While direct to site traffic had 78K+ site visits (25% of total), along with 8K+ online orders (29% of total) and $315K+ of revenue (28% of total).
Based on pre-test data, every site visit (from all traffic sources) was equal to $3.61 in online order revenue.
The Test Itself
Organic search site visits rose 14% (+18K), orders rose 31% (+4K), and revenue rose 30% (+$161K)
Direct to site visits dropped 4% (-3K), orders dropped 3% (-277) and revenue dropped 5% (-$15K)
The single largest channel loss of traffic was social (organic+paid), which dropped 98% (-39K) in visits, and dropped 55% in orders (note this was from 80 to 36) and 27% in revenue (a loss of $400)
All other site non-paid media traffic channels remained relatively flat
Overall site visits dropped 22% (-68K+), orders dropped 9% (-2,500) and revenue dropped 9% (-$105K)
Since total site visits decreased by 68K+ and not 123K+, this means the halo effect from paid media of site visits is ~55K
Visits between test periods (Image from author, January 2026)
This means that, despite organic search growing, as it was not being “cannibalized” by paid media, it could not offset the traffic or sales volume that paid search and performance max contributed.
Additionally, the lack of paid awareness media (i.e., display, social, etc.) leads to a contraction of total related searches to the brand name, as illustrated by the aggregate drop in total search traffic to the site, but also a drop in direct to site traffic as well.
“But Jon, they saved on ad spend, that should be helping them come out ahead?”
Wrong.
While they didn’t spend $80K on ads. Thus, the paid media cost per paid site visit dropped from $0.64 to $0. But they lost an aggregate 68K+ visits. On average a visit to the site in the pretest period (for all traffic sources) had a revenue value back to the brand of $3.61, during the test that rose to $4.20 (increased as more direct to site and organic search took a bigger piece of the traffic contribution pie).
This means, the actual Sales Value Impact=((Avg Revenue per Paid Media Site Visit)*Direct Paid Media Visit Lost/Gained)-/+Ad Spend Saved or Spent
Another way to write that formula is SVI=((ARPMSV)*DPMVLG)-/+Ad Spend)
Meaning on the conservative side it would be:
(($2.12)*-123,572)+$79,626.40)= -$182,346.32
But the reality is, organic search rose as it was no longer being cannibalized by paid. But not by enough to offset the loss of paid, so it requires separating the loss into direct impact and halo impact.
The direct impact is similar to the formula above, but you swap paid media visits out for total site visits change (68,652), which then brings the net loss down to $65,915.84.
Then there is a halo effect of paid media on organic. Which is why non-paid visits couldn’t offset the total loss in visits when paid was out. To calculate out the halo effect impact, we would do a formula of:
Halo Sales Value Impact= (((Avg Revenue per Paid Media Site Visit)*((Paid Media Traffic Lost or Gained-Total Traffic Loss or Gained in Test Period))
Or, written as HSVI=(((ARPMSV)*((PMTLG-(PPMT-TTLGTP)))
Meaning on the conservative side, it would be:
((($2.12)*((123,572-68,652)))=$116,430.40
Combine the 2 outcomes together, and you get your loss of $182,346.24 explained.
This means, that by not running paid media, full impact was a net missed revenue opportunity of $182,346.32, between direct and halo.
This goes to an extremely conservative method, and does not account for store visit revenue, or any shifts in revenue per visit over time.
In addition to the traffic that would’ve stood to gain additions to their email/audience lists.
Bringing paid media back
After the 5 weeks offline, we brought media back, in fact we increased investment by 48% (no change of channels, but all incremental went to social awareness and display). This generated 29% fewer clicks from pre-test, but increased impressions 107%.
In the post-test period, vs the test period, the return of media, at the increased investment, lead to a 21% decrease in organic search traffic. But with paid search and performance max generating enough gain to have a net positive in all of search of +6% in site visits. Overall search driven online orders and revenue both saw a 2% increase when paid was reintroduced.
The only true net loss in growth was direct to site (which was believed that it would rise when media was back in market), which decreased in traffic by 6% and orders by 10%. Overall saw a 38% lift in total site visits, but a drop of 1% in online orders (revenue was flat). The loss in online orders was exclusively direct to site traffic.
What Does This All Mean?
A variety of things.
No matter what way you cut it, the presence of paid media had a halo effect on all activity, most notably, the aggregation of paid and organic search.
Visits (Image from author, January 2026)
But the post-click impact on the site may not follow the same path.
Online orders (Image from author, January 2026)
Which means, a larger view must be taken to examine additional impact (i.e., foot traffic, loyalty club sign-ups, and LTV).
It also reinforces the concept of 1+1=3, the theory of incrementality. While no other changes were made beyond the exiting of paid digital media, and the brand remained in low season the whole way through, the actual impact of inbound traffic lost (not covered by organic) was considerable.
It also stands as a reminder: Just because a site visit doesn’t generate immediate sales/revenue, it does not mean it doesn’t serve a purpose (i.e., foot traffic).
The Takeaway
Any brand that has more paid media site traffic than non-paid site traffic, and thinks they can turn off paid and coast equally on just non-paid traffic alone, has the same mindset as any NY Jets fan (the inability to accept a very harsh reality).
But, despite my writing, I am an optimist, and I encourage brands to do a similar study, if for no other reason than to have the data on hand for when the CMO comes in saying they want to turn off paid because they don’t think they should pay for it.
But word to the wise: Don’t do what we did here, do a market holdout, so that if things go south, it isn’t system-wide.
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
A first look at Making AI Work, MIT Technology Review’s new AI newsletter
Are you interested in learning more about the ways in which AI is actually being used? We’ve launched a new weekly newsletter series exploring just that: digging into how generative AI is being used and deployed across sectors and what professionals need to know to apply it in their everyday work.
Each edition of Making AI Work begins with a case study, examining a specific use case of AI in a given industry. Then we’ll take a deeper look at the AI tool being used, with more context about how other companies or sectors are employing that same tool or system. Finally, we’ll end with action-oriented tips to help you apply the tool.
The first edition takes a look at how AI is changing health care, digging into the future of medical note-taking by learning about the Microsoft Copilot tool used by doctors at Vanderbilt University Medical Center. Sign up here to receive the seven editions straight to your inbox, and if you’d like to read more about AI’s impact on health care in the meantime, check out some of our past reporting:
+ How AI is changing how we quantify pain by helping health-care providers better assess their patients’ discomfort. Read the full story.
+ End-of-life decisions are difficult and distressing. Could AI help?
+ Artificial intelligence is infiltrating health care. But we shouldn’t let it make all the decisions unchecked. Read the full story.
Why the Moltbook frenzy was like Pokémon
Lots of influential people in tech recently described Moltbook, an online hangout populated by AI agents interacting with one another, as a glimpse into the future. It appeared to show AI systems doing useful things for the humans that created them—sure, it was flooded with crypto scams, and many of the posts were actually written by people, but something about it pointed to a future of helpful AI, right?
The whole experiment reminded our senior editor for AI, Will Douglas Heaven, of something far less interesting: Pokémon. Read the full story to find out why.
—James O’Donnell
This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.
The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 OpenAI has begun testing ads in ChatGPT But the ads won’t influence the responses it provides, apparently. (The Verge) + Users who pay at least $20 a month for the chatbot will be exempt. (Gizmodo) + So will users believed to be under 18. (Axios)
2 The White House has a plan to stop data centers from raising electricity prices It’s going to ask AI companies to voluntarily commit to keeping costs down. (Politico) + The US federal government is adopting AI left, right and center. (WP $) + We did the math on AI’s energy footprint. Here’s the story you haven’t heard. (MIT Technology Review)
3 Elon Musk wants to colonize the moon For now at least, his grand ambitions to live on Mars are taking a backseat. (CNN) + His full rationale for this U-turn isn’t exactly clear. (Ars Technica) + Musk also wants to become the first to launch a working data center in space. (FT $) + The case against humans in space. (MIT Technology Review)
4 Cheap AI tools are helping criminals to ramp up their scams They’re using LLMs to massively scale up their attacks. (Bloomberg $) + Cyberattacks by AI agents are coming. (MIT Technology Review)
5 Iceland could be heading towards becoming one giant glacier If human-driven warming disrupts a vital ocean current, that is. (WP $) + Inside a new quest to save the “doomsday glacier.” (MIT Technology Review)
6 Amazon is planning to launch an AI content marketplace It’s reported to have spoken to media publishers to gauge their interest. (The Information $)
7 Doctors can’t agree on how to diagnose Alzheimer’s They worry that some patients are being misdiagnosed. (WSJ $)
8 The first wave of AI enthusiasts are burning out A new study has found that AI tools are linked to employees working more, not less. (TechCrunch)
9 We’re finally moving towards better ways to measure body fat BMI is a flawed metric. Physicians are finally using better measures. (New Scientist $) + These are the best ways to measure your body fat. (MIT Technology Review)
10 It’s getting harder to become a social media megastar Maybe that’s a good thing? (Insider $) + The likes of Mr Beast are still raking in serious cash, though. (The Information $)
Quote of the day
“This case is as easy as ABC—addicting, brains, children.”
—Lawyer Mark Lanier lays out his case during the opening statements of a new tech addiction trial in which a woman has accused Meta of deliberately designing their platforms to be addictive, the New York Times reports.
One more thing
China wants to restore the sea with high-tech marine ranches
A short ferry ride from the port city of Yantai, on the northeast coast of China, sits Genghai No. 1, a 12,000-metric-ton ring of oil-rig-style steel platforms, advertised as a hotel and entertainment complex.
Genghai is in fact an unusual tourist destination, one that breeds 200,000 “high-quality marine fish” each year. The vast majority are released into the ocean as part of a process known as marine ranching.
The Chinese government sees this work as an urgent and necessary response to the bleak reality that fisheries are collapsing both in China and worldwide. But just how much of a difference can it make? Read the full story.
—Matthew Ponsford
We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)
+ Wow, Joel and Ethan Coen’s dark comedic classic Fargois 30 years old. + A new exhibition in New York is rightfully paying tribute to one of the greatest technological inventions: the Walkman ($) + This gigantic sleeping dachshund sculpture in South Korea is completely bonkers. + A beautiful heart-shaped pendant linked to King Henry VIII has been secured by the British Museum.
In September, Alfred Stephen, a freelance software developer in Singapore, purchased a ChatGPT Plus subscription, which costs $20 a month and offers more access to advanced models, to speed up his work. But he grew frustrated with the chatbot’s coding abilities and its gushing, meandering replies. Then he came across a post on Reddit about a campaign called QuitGPT.
The campaign urged ChatGPT users to cancel their subscriptions, flagging a substantial contribution by OpenAI president Greg Brockman to President Donald Trump’s super PAC MAGA Inc. It also pointed out that the US Immigration and Customs Enforcement, or ICE, uses a résumé screening tool powered by ChatGPT-4. The federal agency has become a political flashpoint since its agents fatally shot two people in Minneapolis in January.
For Stephen, who had already been tinkering with other chatbots, learning about Brockman’s donation was the final straw. “That’s really the straw that broke the camel’s back,” he says. When he canceled his ChatGPT subscription, a survey popped up asking what OpenAI could have done to keep his subscription. “Don’t support the fascist regime,” he wrote.
QuitGPT is one of the latest salvos in a growing movement by activists and disaffected users to cancel their subscriptions. In just the past few weeks, users have flooded Reddit with stories about quitting the chatbot. Many lamented the performance of GPT-5.2, the latest model. Others shared memes parodying the chatbot’s sycophancy. Some planned a “Mass Cancellation Party” in San Francisco, a sardonic nod to the GPT-4o funeral that an OpenAI employee had floated, poking fun at users who are mourning the model’s impending retirement. Still, others are protesting against what they see as a deepening entanglement between OpenAI and the Trump administration.
OpenAI did not respond to a request for comment.
As of December 2025,ChatGPT had nearly 900 million weekly active users, according to The Information. While it’s unclear how many users have joined the boycott, QuitGPT is getting attention. A recent Instagram post from the campaign has more than 36 million views and 1.3 million likes. And the organizers say that more than 17,000 people have signed up on the campaign’s website, which asks people whether they canceled their subscriptions, will commit to stop using ChatGPT, or will share the campaign on social media.
“There are lots of examples of failed campaigns like this, but we have seen a lot of effectiveness,” says Dana Fisher, a sociologist at American University. A wave of canceled subscriptions rarely sways a company’s behavior, unless it reaches a critical mass, she says. “The place where there’s a pressure point that might work is where the consumer behavior is if enough people actually use their … money to express their political opinions.”
MIT Technology Review reached out to threeemployees at OpenAI, none of whom said they were familiar with the campaign.
Dozens of left-leaning teens and twentysomethings scattered across the US came together to organize QuitGPT in late January. They range from pro-democracy activists and climate organizers to techies and self-proclaimed cyber libertarians, many of them seasoned grassroots campaigners. They were inspired by a viral video posted by Scott Galloway, a marketing professor at New York University and host of The Prof G Pod. He argued that the best way to stop ICE was to persuade people to cancel their ChatGPT subscriptions. Denting OpenAI’s subscriber base could ripple through the stock market and threaten an economic downturn that would nudge Trump, he said.
“We make a big enough stink for OpenAI that all of the companies in the whole AI industry have to think about whether they’re going to get away enabling Trump and ICE and authoritarianism,” says an organizer of QuitGPT who requested anonymity because he feared retaliation by OpenAI, citing the company’s recent subpoenas against advocates at nonprofits. OpenAI made for an obvious first target of the movement, he says, but “this is about so much more than just OpenAI.”
Simon Rosenblum-Larson, a labor organizer in Madison, Wisconsin, who organizes movements to regulate the development of data centers, joined the campaign after hearing about it through Signal chats among community activists. “The goal here is to pull away the support pillars of the Trump administration. They’re reliant on many of these tech billionaires for support and for resources,” he says.
QuitGPT’s website points to new campaign finance reports showing that Greg Brockman and his wife each donated $12.5 million to MAGA Inc., making up nearly a quarter of the roughly $102 million it raised over the second half of 2025. The information that ICE uses a résumé screening tool powered by ChatGPT-4 came from an AI inventory published by the Department of Homeland Security in January.
QuitGPT is in the mold of Galloway’s own recently launched campaign, Resist and Unsubscribe. The movement urges consumers to cancel their subscriptions to Big Tech platforms, including ChatGPT, for the month of February, as a protest to companies “driving the markets and enabling our president.”
“A lot of people are feeling real anxiety,” Galloway told MIT Technology Review. “You take enabling a president, proximity to the president, and an unease around AI,” he says, “and now people are starting to take action with their wallets.” Galloway says his campaign’s website can draw more than 200,000 unique visits in a day and that he receives dozens of DMs every hour showing screenshots of canceled subscriptions.
The consumer boycotts follow a growing wave of pressure from inside the companies themselves. In recent weeks, tech workers have been urging their employers to use their political clout to demand that ICE leave US cities, cancel company contracts with the agency, and speak out against the agency’s actions. CEOs have started responding. OpenAI’s Sam Altman wrote in an internal Slack message to employees that ICE is “going too far.” Apple CEO Tim Cook called for a “deescalation” in an internal memo posted on the company’s website for employees. It was a departure from how Big Tech CEOs have courted President Trump with dinners and donations since his inauguration.
Although spurred by a fatal immigration crackdown, these developments signal that a sprawling anti-AI movement is gaining momentum. The campaigns are tapping into simmering anxieties about AI, says Rosenblum-Larson, including the energy costs of data centers, the plague of deepfake porn, the teen mental-health crisis, the job apocalypse, and slop. “It’s a really strange set of coalitions built around the AI movement,” he says.
“Those are the right conditions for a movement to spring up,” says David Karpf, a professor of media and public affairs at George Washington University. Brockman’s donation to Trump’s super PAC caught many users off guard, he says. “In the longer arc, we are going to see users respond and react to Big Tech, deciding that they’re not okay with this.”