The Download: Introducing: the new conspiracy age

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Introducing: the new conspiracy age

Everything is a conspiracy theory now. Conspiracists are all over the White House, turning fringe ideas into dangerous policy. America’s institutions are crumbling under the weight of deep suspicion and the lasting effects of covid isolation. Online echo chambers are getting harder to escape, and generative AI is altering the fabric of truth. A mix of technology and politics has given an unprecedented boost to once-fringe ideas—but they are pretty much the same fantasies that have been spreading for hundreds of years.

We’re thrilled to present The New Conspiracy Age, a new MIT Technology Review series digging into how the present boom in conspiracy theories is reshaping science and technology—and how we can make it through.

To kick us off, check out Dorian Lynskey’s fascinating piece explaining why it’s never been easier to be a conspiracy theorist. And stay tuned—we’ll be showcasing a different story from the package each day in the next few editions of The Download!

Four thoughts from Bill Gates on climate tech

Bill Gates doesn’t shy away from, or feign modesty about, his stature in the climate world today. “Well, who’s the biggest funder of climate innovation companies?” he asked a handful of journalists at a media roundtable event last week. “If there’s someone else, I’ve never met them.”

The former Microsoft CEO has spent the last decade investing in climate technology through Breakthrough Energy, which he founded in 2015. Ahead of the UN climate meetings kicking off next week, Gates published a memo outlining what he thinks activists and negotiators should focus on and how he’s thinking about the state of climate tech right now. Here’s what he had to say.

—Casey Crownhart

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 US Homeland Security shared false videos of immigration operations
They claimed to show recent operations but used footage that was old or was recorded thousands of miles away. (WP $)
+ ICE is scanning pedestrians’ faces to verify their citizenship. (404 Media)

2 Character.AI is banning under-18s from talking to its virtual companions
It’s currently facing several lawsuits from families who claim its chatbots have harmed their children. (NYT $)
+ The company says it’s introducing age assurance functionality. (FT $)
+ Teenage boys are using chatbots to roleplay as girlfriends. (The Guardian)
+ The looming crackdown on AI companionship. (MIT Technology Review)

3 Trump directed the Pentagon to resume nuclear weapons testing
America hasn’t conducted such tests for more than 30 years. (BBC)
+ The US President made multiple incorrect assertions in his statement. (The Verge)
+ He himself doesn’t even seem to know why he wants to resume the tests. (The Atlantic $)

4 A Google DeepMind AI model accurately predicted Hurricane Melissa’s severity
It’s the first time the US National Hurricane Center has deployed it. (Nature $)
+ Here’s how to actually help the people affected by its extensive damage. (Vox)
+ Google DeepMind’s new AI model is the best yet at weather forecasting. (MIT Technology Review)

5 A major record label has signed a deal with AI music firm Udio
Universal Music Group had previously sued it for copyright infringement. (WSJ $)
+ AI is coming for music, too. (MIT Technology Review)

6 Are companies using AI as a fig leaf to lay workers off?
It’s sure starting to look that way. (NBC News)
+ Big Tech is going to keep spending billions on AI, regardless. (WP $)

7 Meta Ray-Ban users are filming themselves in massage parlors
They’re harassing workers, who appear unaware they’re being recorded. (404 Media)
+ China’s smart glasses makers are keen to capture the market. (FT $)

8 Just three countries dominate the world’s space launches
What will it take to get some other nations in the mix? (Rest of World)

9 Why you shouldn’t hire an AI agent
Their freelancing capabilities are… limited. (Wired $)
+ The people paid to train AI are outsourcing their work… to AI. (MIT Technology Review)

10 This app’s AI-generated podcasting dog videos are a big hit 🐶🎙
But DogPack wants to make sure viewers know it’s not trying to trick them. (Insider $)

Quote of the day

“Zuck spent five years and $70 billion dollars to build a business that loses $4.4 billion/year to create only $470 million in revenue. So bad you can’t give it away, I guess.”

—Greg Linden, a former data scientist at Microsoft, pokes fun at the earnings of Meta’s beleaguered Reality Labs division in a post on Bluesky.

One more thing

How scientists want to make you young again

A little over 15 years ago, scientists at Kyoto University in Japan made a remarkable discovery. When they added just four proteins to a skin cell and waited about two weeks, some of the cells underwent an unexpected and astounding transformation: they became young again. They turned into stem cells almost identical to the kind found in a days-old embryo, just beginning life’s journey.

At least in a petri dish, researchers using the procedure can take withered skin cells from a 101-year-old and rewind them so they act as if they’d never aged at all.

Now, after more than a decade of studying and tweaking so-called cellular reprogramming, a number of biotech companies and research labs say they have tantalizing hints that the process could be the gateway to an unprecedented new technology for age reversal. Read the full story

—Antonio Regalado

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ 2025’s Comedy Wildlife Award winners and finalists are classics of the genre.
+ This Instagram account shared the same video of Thomas the Tank Engine’s daring railway stunts every day, and I think that’s just beautiful.
+ How to get more of that elusive deep sleep.
+ Here’s an interesting take on why we still find dragons so fascinating 🐉

5 Shopify Mistakes That Kill Holiday Profits

Minor configuration mistakes during the busy Christmas selling season can seriously harm ecommerce profits.

Shopify remains the most popular ecommerce platform in the United States. Various sources report that the company has about 29% of the total U.S. hosted ecommerce platform market, or about 21% of the top 1 million ecommerce sites worldwide.

Shopify is popular, in part, because of its relative ease of use. The setup is speedy for online sellers willing to accept out-of-the-box themes and a default configuration.

Unfortunately, that configuration can also lead to mistakes. What follows are five Shopify setup errors that kill profits when holiday sales rev up.

Broken Feeds

Product feeds to Google and Meta can fail silently. A price mismatch, missing GTIN, or out-of-stock flag can lead to disapprovals that stop some ads from running even while the rest of the campaign keeps spending.

Before the holiday rush, store owners and managers should review product feeds aimed at dynamic advertising.

In Shopify, this means checking “Google & YouTube,” “Facebook & Instagram,” and any other connected sales app. Each should show products approved and synchronized.

A merchant might also notice this when checking product sales channels. Shopify adds a warning if there is a feed problem.

Screenshot of the Shopify admin showing the feed error

A store manager checking for missing channels (below) might have noticed something wrong with the “Google & YouTube” feed.

Missing Channels

Shopify merchants sometimes assume that publishing a product to their online store automatically makes it visible on Google Shopping, Meta, the Shop app, and Pinterest.

It does not.

Shopify treats each sales channel separately, and products do not appear in a channel unless explicitly added. This may be true even when an app is installed correctly.

Channel exclusion happens frequently for stores selling print-on-demand products such as t-shirts. A shopkeeper might design and publish a t-shirt with Printful and add it to a Shopify collection, unaware that it wasn’t added to every channel.

To check, go to “Products” in Shopify and filter by excluded sales channels. Bulk-add any missing channels.

Visibility is a foundation of conversion. If an item is not syndicated to every active channel, it might as well not exist.
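For merchants comfortable with a little scripting, the channel audit above can be sketched in a few lines. The product data here is hypothetical; in practice it might come from a Shopify product export or the Admin API:

```python
# Channels the store actively sells through (illustrative).
ACTIVE_CHANNELS = {"Online Store", "Google & YouTube", "Facebook & Instagram"}

def missing_channels(product, active=ACTIVE_CHANNELS):
    """Return the active sales channels a product is not published to."""
    return sorted(active - set(product["channels"]))

# Hypothetical export: each product lists the channels it is published to.
products = [
    {"title": "Holiday Tee", "channels": ["Online Store"]},
    {"title": "Gift Mug", "channels": ["Online Store", "Google & YouTube",
                                       "Facebook & Instagram"]},
]

for p in products:
    gaps = missing_channels(p)
    if gaps:
        print(f"{p['title']} is missing from: {', '.join(gaps)}")
```

Running this against a real export would surface exactly the print-on-demand gaps described above, such as a Printful t-shirt that never made it onto Google Shopping.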

Shipping Gaps

Shopify’s shipping options are flexible and configurable. They can be confusing, too, prompting some ecommerce operations to select a few simple options.

Unfortunately, keeping it simple may not work during the holiday crunch.

Last-minute holiday shoppers often want lightning-fast delivery and are willing to pay for it. The trouble is that many stores never offer it.

In Shopify, go to “Settings” > “Shipping and Delivery.” Open each profile and make sure every product belongs to it. Add at least one two-day or overnight option using carrier-calculated rates.

Consider posting a clear cutoff message — such as “Order by Dec. 18 for Christmas delivery” — on product detail pages.

Fast, flexible fulfillment often matters more than price in late December. Shoppers will not buy without a proper delivery option.

Discount Stacking

Shopify’s discount system is powerful and literal. Run multiple promotions at once, and the platform could apply them all unless told otherwise.

Screenshot of the Discount screen in Shopify admin

Ensure discounts work as expected.

To be clear, applying multiple discounts to a single order, product, or transaction simultaneously is a valid sales tactic. Plenty of retailers use this approach.

Hence discount stacking is built into Shopify.

A storewide 20% sale, combined with free shipping and a loyalty code, can materially boost holiday revenue. It can also instantly erase margins.

Before launching any campaign, open “Discounts” and check for overlapping date ranges or stackable coupons.

Consider limiting the store to one automatic or sitewide discount at a time, and test checkouts with several products in the cart. Merchants on Shopify Plus may also want to audit their Scripts or Functions.
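The margin math behind stacking is easy to sketch. In this hypothetical example, percentage discounts apply sequentially and the store absorbs the free-shipping cost:

```python
def stacked_price(list_price, discounts):
    """Apply percentage discounts one after another, as stacked promos do."""
    price = list_price
    for pct in discounts:
        price *= (1 - pct)
    return round(price, 2)

def margin(list_price, cost, discounts, shipping_absorbed=0.0):
    """Gross margin per unit after stacked discounts and absorbed shipping."""
    return round(stacked_price(list_price, discounts) - cost - shipping_absorbed, 2)

# A $50 item with $30 cost: the 20% sitewide sale alone leaves a $10 margin,
# but stack a 15% loyalty code plus $6 of absorbed shipping and it loses money.
print(margin(50.00, 30.00, [0.20]))           # 10.0
print(margin(50.00, 30.00, [0.20, 0.15], 6))  # -2.0
```

Running the same arithmetic on a store's bestsellers before launch is a quick way to catch combinations that go underwater.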

Tax Neglect

Strong Christmas sales can push a store past tax nexus thresholds — the point at which a merchant is legally required to collect and remit sales tax in a state.

Shopify doesn’t necessarily detect new tax requirements automatically. The store may owe taxes if its revenue or transaction volume crosses a given state’s threshold.

The problem usually shows up after December, when accountants discover a state in which the store met the threshold but never registered. That missed setup can mean back taxes, penalties, and months of cleanup.

Before peak season, review Shopify tax settings. Add every state where the business has physical presence, employees, warehouses, or meets an economic nexus threshold. That threshold is often $100,000 in annual sales or 200 transactions.
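As a back-of-the-envelope check (not tax or legal advice), the threshold logic can be sketched in a few lines. The state sales figures are hypothetical, and real thresholds vary by state; $100,000 / 200 transactions is used here only as the commonly cited default:

```python
def nexus_status(revenue, transactions, rev_threshold=100_000, txn_threshold=200):
    """Flag states that are over, or approaching, an economic nexus threshold."""
    if revenue >= rev_threshold or transactions >= txn_threshold:
        return "over threshold -- register and collect"
    if revenue >= 0.8 * rev_threshold or transactions >= 0.8 * txn_threshold:
        return "approaching threshold -- monitor"
    return "below threshold"

# Hypothetical year-to-date sales by state: (revenue, transaction count).
sales_by_state = {"TX": (112_500, 340), "OH": (83_000, 95), "VT": (9_400, 31)}

for state, (rev, txns) in sales_by_state.items():
    print(f"{state}: {nexus_status(rev, txns)}")
```

A check like this, run monthly during Q4, surfaces the "crossed a threshold in December" problem before the accountants do.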

Enable Shopify Tax or a third-party app such as Avalara to maintain accurate rates. International stores should check “Markets” to confirm the correct VAT and customs settings.

How Agentic Browsers Will Change Digital Marketing via @sejournal, @DuaneForrester

The footprint of large language models keeps expanding. You see it in productivity suites, CRM, ERP, and now in the browser itself. When the browser thinks and acts, the surface you optimize for changes. That has consequences for how people find, decide, and buy.

Microsoft shows how quickly this footprint can spread across daily work. Microsoft says nearly 70% of the Fortune 500 now use Microsoft 365 Copilot. The company also reports momentum through 2025 customer stories and events. These numbers do not represent unique daily users across every product; rather, they signal reach into large enterprises where Microsoft already has distribution.

Google is pushing Gemini across Search, Workspace, and Cloud. Google highlights Gemini inside Search’s AI Mode and AI Overviews, and claims billions of monthly AI assists across Workspace. Google also points to customers putting Gemini to work across industries and reports average time savings in Workspace studies. In education, Google says Gemini for Education now reaches more than 10 million U.S. college students.

Salesforce and SAP are bringing agents into core enterprise flows. Salesforce announced Agentforce and the Agentic Enterprise, with updates in 2025 that focus on visibility and control for scaled agent deployments. SAP positioned Joule as its AI copilot and added collaborative AI agents across business processes at TechEd 2024, with ongoing releases in 2025.

And with all of that as the backdrop, should we be surprised that the browser is the next layer?

Agentic Browsers (Image Credit: Duane Forrester)

What Is An Agentic Browser?

A traditional browser shows you pages and links. An agentic browser interprets the page, carries context, and can act on your behalf. It can read, synthesize, click, fill forms, and complete tasks. You ask for an outcome. It gets you there.

Perplexity’s Comet positions itself as an AI-first browser that works for you. Reuters covered its launch and the pitch to challenge Chrome’s dominance, and The Verge reports that Comet is now available to everyone for free, after a staged rollout.

Security has already surfaced as a real issue for agentic browsers. Both Brave’s research and Guardio’s work describe indirect prompt injection in Comet, and coverage in the trade press highlights the risks of agent-led flows being manipulated.

Now OpenAI has launched ChatGPT Atlas, a browser with ChatGPT at the core and an Agent Mode for task execution.

Why This Matters To Marketing

If the browser acts, people click less and complete more tasks in place. That compresses discovery and decision steps. It raises the bar for how your content gets selected, summarized, and executed against. Martech’s analysis points to a redefined search and discovery experience when browsers bring agentic and conversational layers to the fore.

You should expect four big shifts.

Search And Discovery

Agentic flows reduce list-based searching. The agent decides which sources to read, how to synthesize, and what to do with the result. Your goal shifts from ranking to getting selected by an agent that is optimizing for the user’s preferences and constraints. That may lower raw click volumes and raise the value of being the canonical source for a clear, task-oriented answer.

Content And Experience

Content needs to be agent-friendly. That means clear structure, strong headings, accurate metadata, concise summaries, and explicit steps. You are writing for two audiences. The human who skims. The agent that must parse, validate, and act. You also need task artifacts. Checklists. How-to flows. Short-form answers that are safe to act on. If your page is the long version, your agent-friendly artifact is the short version. Both matter.

CRM And First-Party Data

Agents may mediate more of the journey. You need earlier value exchanges to earn consent. You need clean APIs and structured data so agents can hand off context, initiate sessions, and trigger next best actions. You will also need to model events differently when some actions never hit your pages.

Attribution And Measurement

If an agent fills the cart or completes a form from the browser, you will not see traditional click paths. Define agent-mediated events. Track handoffs between browser agent and brand systems. Update your models so agent exposure and agent action can be credited. This is the same lesson marketers learned with assistants and chat surfaces. The browser now brings that dynamic to the mainstream.
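A minimal sketch of what tagging agent-mediated sessions might look like. The user-agent hints below are assumptions, since agent browsers do not yet identify themselves in a standard way, and real detection would likely combine several signals:

```python
# Hypothetical substrings that an agentic browser might expose in its
# user-agent string; these are illustrative assumptions, not documented values.
AGENT_HINTS = ("atlas", "comet", "agent")

def classify_session(user_agent):
    """Tag a session as agent-mediated or traditional for channel modeling."""
    ua = user_agent.lower()
    if any(hint in ua for hint in AGENT_HINTS):
        return "agent-mediated"
    return "traditional"

print(classify_session("Mozilla/5.0 ... Comet/1.2"))   # agent-mediated
print(classify_session("Mozilla/5.0 ... Chrome/130"))  # traditional
```

Even a crude classifier like this lets analytics treat agent traffic as its own segment, which is the prerequisite for crediting agent exposure and agent action separately.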

What To Do Now

Start With Content

Audit your top 10 discovery and consideration assets. Tighten structure. Add short summaries and task snippets that an agent can lift safely. Add schema markup where it makes sense. Make dates and facts explicit. Your goal is clarity that a machine can parse and that a person can trust. The Martech analysis cited above offers guidance on why this matters.

Build Better Machine Signals

Use schema.org where it helps understanding. Ensure feeds, sitemaps, Open Graph, and product data are complete and current. If you have APIs that expose inventory, pricing, appointments, or availability, document them clearly and make developer access straightforward.
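As one concrete machine signal, schema.org Product markup can be emitted as JSON-LD so agents can confirm price and availability from structured data. A minimal sketch with illustrative product values:

```python
import json

def product_jsonld(name, sku, price, currency, in_stock):
    """Build a schema.org Product object with a single Offer."""
    return {
        "@context": "https://schema.org",
        "@type": "Product",
        "name": name,
        "sku": sku,
        "offers": {
            "@type": "Offer",
            "price": f"{price:.2f}",
            "priceCurrency": currency,
            "availability": "https://schema.org/InStock"
                            if in_stock else "https://schema.org/OutOfStock",
        },
    }

# Hypothetical product; embed the output in the page head as JSON-LD.
markup = product_jsonld("Holiday Tee", "TEE-001", 24.0, "USD", True)
print('<script type="application/ld+json">')
print(json.dumps(markup, indent=2))
print("</script>")
```

Keeping fields like `price` and `availability` generated from the same source of truth as the storefront is what makes the markup trustworthy to an agent.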

Map Agent-First Journeys

Draft a simple flow for how your category works when the browser is the assistant. Query. Synthesis. Selection. Action. Handoff. Conversion. Then decide where you can add value. This is not only about SEO. It is about being callable by an agent to help someone finish a task with less friction.

Rethink Metrics

Define what counts as an agent impression and an agent conversion for your brand. Tag flows where the agent initiates the session. Set targets for assisted conversions that originate in agent environments. Treat this as a separate channel for planning.

Run Small Tests

Try optimizing one or two pages for agent selection and summarizability. Instrument the flows. If there are early integrations or pilots available with agent browsers, get on the list and learn fast. For competitive context, it is useful to watch how quickly Atlas and Comet gain traction relative to incumbent browsers. Sources on current market share are below.

Why Timing Matters

We have seen how fast browsers can grow when they meet a new need. Google launched Chrome in 2008. Within a year, it was already climbing the charts. Ars Technica covered Chrome’s 1.0 release on December 11, 2008. StatCounter Press said Chrome exceeded 20% worldwide in June 2011, up from 2.8% in June 2009. By May 2012, StatCounter reported Chrome overtook Internet Explorer for the first full month. Annual StatCounter data for 2012 shows Chrome at 31.42%, Internet Explorer at 26.47%, and Firefox at 18.88%.

Firefox had its own rapid start earlier in the 2000s. Mozilla announced 50 million Firefox downloads in April 2005 and 100 million by October 2005, less than a year after 1.0. Contemporary reporting placed Firefox at roughly 9 to 10% market share by late 2005 and 18% by mid-2008.

Microsoft Edge entered later. Edge originally shipped in 2015, then relaunched on Chromium in January 2020. Edge has fluctuated. Recent coverage says Edge lost share over the summer of 2025 on desktop, citing StatCounter.

For an executive snapshot of the current landscape, StatCounter’s September 2025 worldwide totals show Chrome at about 71.8%, Safari at about 13.9%, Edge at about 4.7%, Firefox at about 2.2%, Samsung Internet at about 1.9%, and Opera at about 1.7%.

What This History Tells Us

Each major browser shift came with a clear promise. Netscape made the web accessible. Internet Explorer bundled it with the operating system. Firefox made it safer and more private. Chrome made it faster and more reliable. Every breakthrough paired capability with trust. That pattern will repeat here.

Agentic browsers can only scale if they prove both utility and safety. They must handle tasks faster and more accurately than people, without introducing new risks. Security research around Comet shows what happens when that balance tips the wrong way. If users see agentic browsing as unpredictable or unsafe, adoption slows. If it saves them time and feels dependable, adoption accelerates. History shows that trust, not novelty, drives the curves that turn experiments into standards.

For marketers, that means your work will increasingly live inside systems where trust and clarity are prerequisites. Agents will need unambiguous facts, consistent markup, and licensing that spells out how your content can be reused. Brands that make that easy will be indexed, quoted, and recommended. Brands that make it hard will vanish from the new surface before they even know it exists.

How To Position Your Brand For Agentic Browsing

Keep your approach simple and disciplined. Make your best content easy to select, summarize, and act on. Structure it tightly, keep data fresh, and ensure everything you publish can stand on its own when pulled out of context. Give agents clean, accurate snippets they can carry forward without risk of misrepresentation.

Expose the data and signals that let agents work with you. APIs, feeds, and machine-readable product information reduce guesswork. If agents can confirm availability, pricing, or location from a trusted feed, your brand becomes a reliable component in the user’s automated flow. Pair that with clear permissions on how your data can be displayed or executed, so platforms have a reason to include you without fear of legal exposure.

Treat agent-mediated activity as its own marketing channel. Name it. Measure it. Fund it. You are early, so your metrics will change as you learn, but the act of measuring will force better questions about what visibility and conversion mean when browsers complete tasks for users. The first teams to formalize this channel will understand its economics long before competitors notice the traffic shift.

Finally, stay close to the platform evolution. Watch every release of OpenAI’s Atlas and Perplexity’s Comet. Track Google’s response as it blends Gemini deeper into Chrome and Search. The pace will feel familiar (like the late 2000s browser race), but the consequences will be larger. When the browser becomes an agent, it doesn’t just display the web; it intermediates it. Every business that relies on discovery, trust, or conversion will feel that change.

The Takeaway

Agentic browsers will not replace marketing, but they will reshape how attention, trust, and action flow online. The winners will be brands that think like system integrators (clear data, structured content, and dependable facts) because those are the materials agents build with. This is the early moment before the inflection point, the time to experiment while risk is low and visibility is still yours to claim.

History shows that when browsers evolve, the web follows. This time, the web won’t just render pages. It will think, decide, and act. Your job is to make sure that when it does, it acts in your favor.

Looking ahead, even a modest 10 to 15% adoption rate for agentic browsers within three years would represent one of the fastest paradigm shifts since Chrome’s launch. For marketers, that scale means the agent layer will become a measurable channel, and every optimization choice made now – how your data is structured, how your content is summarized, how trust is signaled – will compound its impact later.


This post was originally published on Duane Forrester Decodes.


Featured Image: Roman Samborskyi/Shutterstock

Ask An SEO: How To Manage Stakeholders When An Algorithm Update Hits via @sejournal, @HelenPollitt1

In this edition of Ask An SEO, we address a familiar challenge for marketers:

How do you keep stakeholders from abandoning SEO when algorithm updates cause traffic drops?

This is an all-too-common issue that SEOs will encounter. They have strong plans in place, the buy-in from their leadership, and are making great strides in their organic performance.

When disaster strikes – or, more specifically, a Google algorithm update – all of that goodwill and all of those great results are lost overnight.

What’s worse is, rather than doubling down and trying to recoup lost visibility through data-led SEO work, leadership starts questioning if there is a faster way.

Check The Cause Of The Decline In Traffic

First of all, I would say the most critical step to take when you see a drastic traffic drop is to check that it is definitely the result of an algorithm update.

It’s very easy to ascribe blame to an update when the drop could be caused by any of a myriad of things. The timing might be suspicious, but before anything, you need to rule out other causes.

Is It Definitely The Result Of The Algorithm Update?

This means checking if there have been any development rollouts, SEO fixes set live, or changes in the SERPs themselves recently. Make sure that the traffic loss is genuine, and not a missing Google Analytics 4 tag. Check that you aren’t seeing the same seasonal dip that you saw this time last year.

Essentially, you need to run down every other possible cause before concluding that it is definitely the result of the algorithm update.

This is important. If it’s not the algorithm update, the loss could be reversible.
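The seasonality check above can be sketched as a simple year-over-year comparison. The session counts and the 15% tolerance are illustrative assumptions; real figures would come from an analytics export:

```python
def yoy_change(current, last_year):
    """Fractional change versus the same period last year."""
    return (current - last_year) / last_year

# Hypothetical weekly organic sessions, this year vs. the same week last year.
sessions = {"this_week": 8_200, "same_week_last_year": 9_100}
drop = yoy_change(sessions["this_week"], sessions["same_week_last_year"])

if abs(drop) < 0.15:
    print(f"{drop:+.1%} vs. last year -- within a normal seasonal range")
else:
    print(f"{drop:+.1%} vs. last year -- investigate further")
```

If the decline mirrors last year's seasonal dip, the algorithm update may not be the culprit at all.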

Identify Exactly What Has Been Impacted

You are unlikely to have seen rankings and traffic decimated across your entire site. Instead, there are probably certain pages or topics where you have seen a decline.

Begin your investigation with an in-depth look into which areas of your site have been impacted.

Look at the webpages that were favored in place of yours. Have they got substantially different content? Are they more topically aligned to the searcher’s intent than yours? Or has the entire SERP changed to favor a different type of SERP feature, or content type?

Why Are These Specific Pages Affected?

What is the commonality between the pages on your site that have seen the rankings and traffic drops? Look for similarities in the templates used, or the technical features of the pages. Investigate if they are all suffering from slow-loading or poor-quality content. If you can spot the common thread between the affected pages, it will help you to identify what needs to be done to recover their rankings.

Is The Impact As Disastrous As It First Appears?

Also, ask yourself if the affected pages are actually important to your business. The impulse might be to remedy what’s gone wrong with them to recover their rankings, but is that the best use of your time?

Sometimes, we jump to trying to fix the impact of an algorithm update when, actually, the work would be better spent further improving the pages that are still performing well, because they are the ones that actually make money.

If the pages that have lost rankings and traffic were not high-converting ones in the first place, stop and assess. Are the issues they have symptomatic of a wider problem that might affect your revenue-driving pages? If not, maybe don’t worry too much about their visibility loss.

This is good context to have when speaking to your stakeholders about the algorithm impact. Yes, you may have seen traffic go down, but that doesn’t necessarily mean you will see a revenue loss alongside it.

Educate Stakeholders On The Fluctuations In SEO

SEO success is rarely linear. We’ve all seen the fluctuations on the Google Search Console graphs. Do your stakeholders know that, too?

Take time to educate them on how algorithm updates, seasonality, and changing user behavior can affect SEO traffic. Remind them that traffic is not the end goal of SEO; conversions are. Explain to them how algorithm updates are not the end of the world, and just mean there is room for further improvement.

The Best Time To Talk About Algorithm Updates

Of course, this is a lot easier to do before the algorithm update decimates your traffic.

Before you get to the point where panic sets in, make sure you have a good process in place to identify the impact of an algorithm update and explain it to your stakeholders. This means that you will take a methodical approach to diagnosing the issues, and not a reactive one.

Let your stakeholders know a reasonable timeframe for that analysis, and that they can’t expect answers on day one of the update announcement. Remind them that algorithm updates are not stable as they first begin to roll out. They can cause temporary fluctuations that may resolve. You need time and space to consider the cause of, and remedies for, any traffic loss suspected to stem from an algorithm update.

If you have seen this type of impact before, it would be prudent to show your stakeholders where recovery has happened and how. Help them to see that now is the time for further SEO investment, not less.

Reframe The Conversation Back To Long-Term Strategy

There is a very understandable tendency for SEOs to panic in the wake of an algorithm update and try to make quick changes to revert the traffic loss. This isn’t a good idea.

Instead, you need to look at your overarching SEO strategy and locate changes that might have a positive impact over time. For example, if you know that you have a problem with low-quality and duplicate content on your site that you had intended to fix through your SEO strategy, don’t abandon that plan now. Chances are, working to improve the quality of your content on the site will help with regaining that lost traffic.

Resist The Urge To Make Impulsive Changes And Instead Be Methodical About Your Recovery Plans

Don’t throw away your existing plans. You may need to modify them to address specific areas of the site that have been impacted negatively by the update. Carry out intensive investigations into exactly what has happened and to which keywords/topics/pages on your site. Using this information, you can refine your existing strategy.

Any work that is carried out without much thought to the long-term impacts will be unlikely to stand the test of time. You may see a temporary boost, which will placate your stakeholders for a period, but that traffic growth may only be short-lived. For example, buying links to point to the areas of the site most negatively affected by the algorithm update might give you the boost in authority needed to see rankings recover. Over time, though, they are unlikely to carry the same weight, and at worst, may see you further penalized in future algorithm updates or through manual actions.

In Summary

The best time to talk to your stakeholders about the steps to resolve a negative impact from an algorithm update is before it happens. Don’t wait until disaster strikes before communicating your investigation and recovery plans. Instead, let them know ahead of time what to expect and why it isn’t worth a panicked and reactive response.

If you do find your site on the receiving end of a ferocious algorithm update, then take a deep breath. Let your analytical head prevail. Spend time assessing the breadth and depth of the damage, and formulate a plan that yields dividends for the long-term and not just to placate a worried leadership team.

SEO is about the long game. Don’t let your stakeholders lose their nerve just because an algorithm update has happened.


Featured Image: Paulo Bobita/Search Engine Journal

Anthropic Research Shows How LLMs Perceive Text via @sejournal, @martinibuster

Researchers from Anthropic investigated Claude 3.5 Haiku’s ability to decide when to break a line of text within a fixed width, a task that requires the model to track its position as it writes. The study yielded the surprising result that language models form internal patterns resembling the spatial awareness that humans use to track location in physical space.

Andreas Volpini tweeted about this paper and made an analogy to chunking content for AI consumption. In a broader sense, his comment works as a metaphor for how both writers and models navigate structure, finding coherence at the boundaries where one segment ends and another begins.

This research paper, however, is not about reading content but about generating text and identifying where to insert a line break in order to fit the text into an arbitrary fixed width. The purpose of doing that was to better understand what’s going on inside an LLM as it keeps track of text position, word choice, and line break boundaries while writing.

The researchers created an experimental task of generating text with a line break at a specific width. The purpose was to understand how Claude 3.5 Haiku decides on words to fit within a specified width and when to insert a line break, which required the model to track the current position within the line of text it is generating.

The experiment demonstrates how language models learn structure from patterns in text without explicit programming or supervision.

The Linebreaking Challenge

The linebreaking task requires the model to decide whether the next word will fit on the current line or if it must start a new one. To succeed, the model must learn the line width constraint (the rule that limits how many characters can fit on a line, like in physical space on a sheet of paper). To do this the LLM must track the number of characters written, compute how many remain, and decide whether the next word fits. The task demands reasoning, memory, and planning. The researchers used attribution graphs to visualize how the model coordinates these calculations, showing distinct internal features for the character count, the next word, and the moment a line break is required.
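For readers who think in code, the decision the model faces at every step can be sketched as an explicit greedy algorithm. This is a sketch of the task itself, not of Claude’s internal mechanism; the model learns the behavior implicitly, without any such program, and the function name and strategy here are mine:

```python
def layout(words, width):
    """Greedy line-breaking: track the current line's character count
    and start a new line whenever the next word would not fit."""
    lines, current = [], ""
    for word in words:
        # +1 accounts for the space before the word on a non-empty line
        needed = len(word) if not current else len(current) + 1 + len(word)
        if needed <= width:
            current = word if not current else current + " " + word
        else:
            lines.append(current)
            current = word
    if current:
        lines.append(current)
    return lines

# Every emitted line respects the 15-character constraint.
assert all(len(line) <= 15
           for line in layout("the quick brown fox jumps over the lazy dog".split(), 15))
```

The interesting finding is that the model performs an equivalent computation with no explicit counter variable at all, which is what the researchers set out to visualize.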

Continuous Counting

The researchers observed that Claude 3.5 Haiku represents line character counts not by counting step by step, but as a smooth geometric structure that behaves like a continuously curved surface, allowing the model to track its position fluidly rather than symbol by symbol.
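The Fourier construction the paper describes can be loosely illustrated with a toy encoding: map each character count to a point on a smooth curve built from low-frequency sine and cosine features, so that nearby counts land near each other in embedding space. The function names and parameters below are illustrative, not from the paper:

```python
import math

def count_embedding(n, max_len=100, n_freqs=4):
    """Map a character count to a point on a smooth curve made of
    low-frequency sine/cosine (Fourier) features."""
    t = n / max_len  # normalize the position to [0, 1]
    feats = []
    for k in range(1, n_freqs + 1):
        feats += [math.sin(2 * math.pi * k * t), math.cos(2 * math.pi * k * t)]
    return feats

def similarity(a, b):
    """Cosine similarity between two embeddings."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Adjacent counts sit close together on the curve; distant counts do not.
assert similarity(count_embedding(50), count_embedding(51)) > \
       similarity(count_embedding(50), count_embedding(90))
```

That smoothness is the point: position becomes geometry rather than an explicit tally.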

Another interesting discovery is that the LLM had developed a boundary head, an attention head responsible for detecting the line boundary. An attention mechanism weighs the importance of whatever is being considered (tokens), and an attention head is a specialized component of that mechanism. The boundary head specializes in the narrow task of detecting the end-of-line boundary.

The research paper states:

“One essential feature of the representation of line character counts is that the “boundary head” twists the representation, enabling each count to pair with a count slightly larger, indicating that the boundary is close. That is, there is a linear map QK which slides the character count curve along itself. Such an action is not admitted by generic high-curvature embeddings of the circle or the interval like the ones in the physical model we constructed. But it is present in both the manifold we observe in Haiku and, as we now show, in the Fourier construction.”

How Boundary Sensing Works

The researchers found that Claude 3.5 Haiku knows when a line of text is almost reaching the end by comparing two internal signals:

  1. How many characters it has already generated, and
  2. How long the line is supposed to be.

The aforementioned boundary attention heads decide which parts of the text to focus on. Some of these heads specialize in spotting when the line is about to reach its limit. They do this by slightly rotating or lining up the two internal signals (the character count and the maximum line width) so that when they nearly match, the model’s attention shifts toward inserting a line break.

The researchers explain:

“To detect an approaching line boundary, the model must compare two quantities: the current character count and the line width. We find attention heads whose QK matrix rotates one counting manifold to align it with the other at a specific offset, creating a large inner product when the difference of the counts falls within a target range. Multiple heads with different offsets work together to precisely estimate the characters remaining.”
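A toy version of that comparison, assuming a simple circular count embedding (the real model’s manifolds and QK matrices are far richer), looks like this: rotate the count embedding by a fixed offset, then take its inner product with the width embedding, which peaks when exactly `offset` characters remain:

```python
import math

def circ(n, period=128):
    """Embed an integer on a circle: a toy stand-in for the model's
    smooth counting manifold."""
    theta = 2 * math.pi * n / period
    return (math.cos(theta), math.sin(theta))

def boundary_score(count, width, offset):
    """Toy 'boundary head': shift (rotate) the count embedding by
    `offset` positions, then take its inner product with the width
    embedding. The score peaks when width - count == offset, i.e.
    when exactly `offset` characters remain on the line."""
    a, b = circ(count + offset), circ(width)
    return a[0] * b[0] + a[1] * b[1]

# With an 80-character line and an offset of 5, the head fires hardest
# when 75 characters have been written (5 remain).
scores = {c: boundary_score(c, 80, 5) for c in (40, 70, 75, 79)}
assert max(scores, key=scores.get) == 75
```

In the paper, multiple heads with different offsets combine to give a precise estimate of the characters remaining; this sketch shows only a single offset.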

Final Stage

At this stage of the experiment, the model has already determined how close it is to the line’s boundary and how long the next word will be. The last step is to use that information.

Here’s how it’s explained:

“The final step of the linebreak task is to combine the estimate of the line boundary with the prediction of the next word to determine whether the next word will fit on the line, or if the line should be broken.”

The researchers found that certain internal features in the model activate when the next word would cause the line to exceed its limit, effectively serving as boundary detectors. When that happens, the model raises the chance of predicting a newline symbol and lowers the chance of predicting another word. Other features do the opposite: they activate when the word still fits, lowering the chance of inserting a line break.

Together, these two forces, one pushing for a line break and one holding it back, balance out to make the decision.
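That push-and-pull can be modeled, very loosely, as two signed contributions to a newline logit that a sigmoid turns into a probability. The function and its numbers are illustrative only, not taken from the paper:

```python
import math

def newline_probability(chars_remaining, next_word_len, strength=1.5):
    """Toy version of the final decision: overflow evidence pushes the
    newline logit up, fitting evidence pushes it down, and a sigmoid
    converts the net evidence into a probability of emitting a newline."""
    overflow = next_word_len - chars_remaining  # > 0 means the word won't fit
    logit = strength * overflow
    return 1 / (1 + math.exp(-logit))

p_overflow = newline_probability(chars_remaining=2, next_word_len=7)
p_fit = newline_probability(chars_remaining=10, next_word_len=4)
# A clear overflow yields a near-certain newline; a clear fit suppresses it.
assert p_overflow > 0.9 and p_fit < 0.1
```

In the actual model these opposing signals are carried by distinct learned features rather than a single arithmetic comparison, but the balancing act is the same.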

Can Models Have Visual Illusions?

The next part of the research is kind of incredible: the researchers tested whether the model could be susceptible to visual illusions that would trip it up. They started from the way humans can be fooled by visual illusions, in which a false perspective makes lines of the same length appear to be different lengths, one shorter than the other.

Screenshot of a visual illusion: two lines of equal length with arrowheads on each end, pointing inward on one line and outward on the other, creating the illusion that one line is longer than the other.

The researchers inserted artificial tokens, such as “@@,” to see whether they disrupted the model’s sense of position. These tests caused misalignments in the internal patterns the model uses to track position, and its sense of the line boundary shifted, showing that its perception of structure depends on context and learned patterns. Even though LLMs don’t see, disrupting the relevant attention heads distorts their internal organization, much as visual illusions cause humans to misjudge what they see.

They explained:

“We find that it does modulate the predicted next token, disrupting the newline prediction! As predicted, the relevant heads get distracted: whereas with the original prompt, the heads attend from newline to newline, in the altered prompt, the heads also attend to the @@.”

They wondered whether there was something special about the @@ characters or whether any other random characters would disrupt the model’s ability to complete the task. So they ran a test with 180 different sequences and found that most of them did not disrupt the model’s ability to predict the line break point. Only a small group of code-related characters were able to distract the relevant attention heads and disrupt the counting process.

LLMs Have Visual-Like Perception For Text

The study shows how text-based features evolve into smooth geometric systems inside a language model. It also shows that models don’t only process symbols; they create perception-like maps from them. This part, about perception, is to me what’s really interesting about the research. The researchers keep circling back to analogies with human perception, and those analogies keep fitting what they observe inside the LLM.

They write:

“Although we sometimes describe the early layers of language models as responsible for “detokenizing” the input, it is perhaps more evocative to think of this as perception. The beginning of the model is really responsible for seeing the input, and much of the early circuitry is in service of sensing or perceiving the text similar to how early layers in vision models implement low level perception.”

Then a little later they write:

“The geometric and algorithmic patterns we observe have suggestive parallels to perception in biological neural systems. …These features exhibit dilation—representing increasingly large character counts activating over increasingly large ranges—mirroring the dilation of number representations in biological brains. Moreover, the organization of the features on a low dimensional manifold is an instance of a common motif in biological cognition. While the analogies are not perfect, we suspect that there is still fruitful conceptual overlap from increased collaboration between neuroscience and interpretability.”

Implications For SEO?

Arthur C. Clarke wrote that any sufficiently advanced technology is indistinguishable from magic. I think that once you understand a technology, it becomes more relatable and less like magic. Not all knowledge has a utilitarian use, but understanding how an LLM perceives content is useful to the extent that it’s no longer magical. Will this research make you a better SEO? Probably not directly, but it deepens our understanding of how language models organize and interpret content structure.

Read about the research here:

When Models Manipulate Manifolds: The Geometry of a Counting Task

Featured Image by Shutterstock/Krot_Studio

Measuring Visibility When Rankings Disappear [Webinar] via @sejournal, @hethr_campbell

Learn How to Track What Really Matters in AI Search

Tools like ChatGPT, Perplexity, and Google’s AI Mode no longer deliver ranked results; they deliver answers. So what happens when traditional SEO metrics no longer apply?

Join AJ Ghergich, Global VP of AI and Consulting Services at Botify, and Frank Vitovitch, VP of Solutions Consulting at Botify, for a live webinar that reveals how to measure visibility in the new search era.

Why Attend

This session will help you move beyond outdated ranking metrics and build smarter frameworks for measuring performance in AI search. You’ll walk away with a clear, data-driven approach to visibility that keeps your team ahead of change.

Register now to learn how to track success in AI search with confidence and clarity.

🛑 Can’t make it live? Register anyway and we’ll send you the on-demand recording.

Google Q3 Report: AI Mode, AI Overviews Lift Total Search Usage via @sejournal, @MattGSouthern

Google used its Q3 earnings call to argue that AI features are expanding search usage rather than cannibalizing it.

CEO Sundar Pichai described an “expansionary moment for Search,” adding that Google’s AI experiences “highlight the web” and send “billions of clicks to sites every day.”

Pichai said overall queries and commercial queries both grew year over year, and that the growth rate increased in Q3 versus Q2, largely driven by AI Overviews and AI Mode.

What Did Google Report In Its Q3 Earnings?

AI Mode & AI Overviews

Pichai reported “strong and consistent” week-over-week growth for AI Mode in the U.S., with queries doubling in the quarter.

He said Google rolled AI Mode out globally across 40 languages, reached over 75 million daily active users, and shipped more than 100 improvements in Q3.

He also said AI Mode is already driving “incremental total query growth for Search.”

Pichai reiterated that AI Overviews “drive meaningful query growth,” noting the effect was “even stronger” in Q3 and more pronounced among younger users.

Revenue: By The Numbers

Alphabet posted $102.3 billion in revenue, its first $100B quarter. “Google Search & other” revenue reached $56.6 billion, up from $49.4 billion a year earlier.

YouTube ads revenue reached $10.26 billion in Q3. Pichai said YouTube “has remained number one in streaming watch time in the U.S. for more than two years, according to Nielsen.”

Pichai added that in the U.S. “Shorts now earn more revenue per watch hour than traditional in-stream.”

The quarter also included a $3.5 billion European Commission fine, which Alphabet calls out when discussing margins. Excluding that charge, operating margin was 33.9%.

Why It Matters

Google is telling Wall Street that AI surfaces expand search rather than replace it. If that holds, the company has reason to put AI Mode and AI Overviews in front of more queries.

The near-term implication for marketers is a distribution shift inside Google, not a pullback from search.

What’s missing is as important as what was said. Google didn’t share outbound click share from AI experiences or new reporting to track them. Expect adoption to grow while measurement lags. Teams will be relying on their own analytics to judge impact.

The revenue backdrop supports continued investment. “Search & other” rose year over year and Google highlighted growth in commercial queries. Paid budgets are likely to remain with Google as AI-led sessions take up a larger share of usage.

Looking Ahead

Google plans to keep pushing AI-led search surfaces. Pichai said the company is “looking forward to the release of Gemini 3 later this year,” which would give AI Mode and AI Overviews a stronger model foundation if the timing holds.

Google described Chrome as “a browser powered by AI” with deeper integrations to Gemini and AI Mode and “more agentic capabilities coming soon.”

The company also raised 2025 capex guidance to $91–$93 billion to meet AI demand, which supports continued investment in search infrastructure and features.


Featured Image: Photo Agency/Shutterstock

DeepSeek may have found a new way to improve AI’s ability to remember


An AI model released by the Chinese AI company DeepSeek uses new techniques that could significantly improve AI’s ability to “remember.”

Released last week, the optical character recognition (OCR) model works by extracting text from an image and turning it into machine-readable words. This is the same technology that powers scanner apps, translation of text in photos, and many accessibility tools. 

OCR is already a mature field with numerous high-performing systems, and according to the paper and some early reviews, DeepSeek’s new model performs on par with top models on key benchmarks.

But researchers say the model’s main innovation lies in how it processes information—specifically, how it stores and retrieves memories. Improving how AI models “remember” information could reduce the computing power they need to run, thus mitigating AI’s large (and growing) carbon footprint. 

Currently, most large language models break text down into thousands of tiny units called tokens. This turns the text into representations that models can understand. However, these tokens quickly become expensive to store and compute with as conversations with end users grow longer. When a user chats with an AI for lengthy periods, this challenge can cause the AI to forget things it’s been told and get information muddled, a problem some call “context rot.”

The new methods developed by DeepSeek (and published in its latest paper) could help to overcome this issue. Instead of storing words as tokens, its system packs written information into image form, almost as if it’s taking a picture of pages from a book. This allows the model to retain nearly the same information while using far fewer tokens, the researchers found. 

Essentially, the OCR model is a test bed for these new methods that permit more information to be packed into AI models more efficiently. 

Besides using visual tokens instead of just text tokens, the model is built on a type of tiered compression that is not unlike how human memories fade: Older or less critical content is stored in a slightly more blurry form in order to save space. Despite that, the paper’s authors argue, this compressed content can still remain accessible in the background while maintaining a high level of system efficiency.

Text tokens have long been the default building block in AI systems. Using visual tokens instead is unconventional, and as a result, DeepSeek’s model is quickly capturing researchers’ attention. Andrej Karpathy, the former Tesla AI chief and a founding member of OpenAI, praised the paper on X, saying that images may ultimately be better than text as inputs for LLMs. Text tokens might be “wasteful and just terrible at the input,” he wrote. 

Manling Li, an assistant professor of computer science at Northwestern University, says the paper offers a new framework for addressing the existing challenges in AI memory. “While the idea of using image-based tokens for context storage isn’t entirely new, this is the first study I’ve seen that takes it this far and shows it might actually work,” Li says.

The method could open up new possibilities in AI research and applications, especially in creating more useful AI agents, says Zihan Wang, a PhD candidate at Northwestern University. He believes that since conversations with AI are continuous, this approach could help models remember more and assist users more effectively.

The technique can also be used to produce more training data for AI models. Model developers are currently grappling with a severe shortage of quality text to train systems on. But the DeepSeek paper says that the company’s OCR system can generate over 200,000 pages of training data a day on a single GPU.

The model and paper, however, are only an early exploration of using image tokens rather than text tokens for AI memorization. Li says she hopes to see visual tokens applied not just to memory storage but also to reasoning. Future work, she says, should explore how to make AI’s memory fade in a more dynamic way, akin to how we can recall a life-changing moment from years ago but forget what we ate for lunch last week. Currently, even with DeepSeek’s methods, AI tends to forget and remember in a very linear way—recalling whatever was most recent, but not necessarily what was most important, she says. 

Despite its attempts to keep a low profile, DeepSeek, based in Hangzhou, China, has built a reputation for pushing the frontier in AI research. The company shocked the industry at the start of this year with the release of DeepSeek-R1, an open-source reasoning model that rivaled leading Western systems in performance despite using far fewer computing resources. 

The AI Hype Index: Data centers’ neighbors are pivoting to power blackouts

Separating AI reality from hyped-up fiction isn’t always easy. That’s why we’ve created the AI Hype Index—a simple, at-a-glance summary of everything you need to know about the state of the industry.

Just about all businesses these days seem to be pivoting to AI, even when they don’t seem to know exactly why they’re investing in it—or even what it really does. “Optimization,” “scaling,” and “maximizing efficiency” are convenient buzzwords bandied about to describe what AI can achieve in theory, but for most of AI companies’ eager customers, the hundreds of billions of dollars they’re pumping into the industry aren’t adding up. And maybe they never will.

This month’s news doesn’t exactly cast the technology in a glowing light either. A bunch of NGOs and aid agencies are using AI models to generate images of fake suffering people to guilt their Instagram followers. AI translators are pumping out low-quality Wikipedia pages in the languages most vulnerable to going extinct. And thanks to the construction of new AI data centers, lots of neighborhoods living in their shadows are getting forced into their own sort of pivots—fighting back against the power blackouts and water shortages the data centers cause. How’s that for optimization?

The Download: Boosting AI’s memory, and data centers’ unhappy neighbors

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

DeepSeek may have found a new way to improve AI’s ability to remember

The news: An AI model released by Chinese AI company DeepSeek uses new techniques that could significantly improve AI’s ability to “remember.”

How it works: The optical character recognition model works by extracting text from an image and turning it into machine-readable words. This is the same technology that powers scanner apps, translation of text in photos, and many accessibility tools.

Why it matters: Researchers say the model’s main innovation lies in how it processes information—specifically, how it stores and retrieves data. Improving how AI models “remember” could reduce how much computing power they need to run, thus mitigating AI’s large (and growing) carbon footprint. Read the full story.

—Caiwei Chen

The AI Hype Index: Data centers’ neighbors are pivoting to power blackouts

Separating AI reality from hyped-up fiction isn’t always easy. That’s why we’ve created the AI Hype Index—a simple, at-a-glance summary of everything you need to know about the state of the industry. Take a look at this month’s edition of the index here.

Roundtables: seeking climate solutions in turbulent times

Yesterday we held a subscriber-only conversation exploring how companies are pursuing climate solutions amid political shifts in the US.

Our climate reporters James Temple and Casey Crownhart sat down with our science editor Mary Beth Griggs to dig into the most promising climate technologies right now. Watch the session back here!

MIT Technology Review Narrated: Supershoes are reshaping distance running

“Supershoes”—which combine a lightweight, energy-returning foam with a carbon-fiber plate for stiffness—have been behind every broken world record in distances from 5,000 meters to the marathon since 2020.

To some, this is a sign of progress—for both the field as a whole and for athletes’ bodies. Still, some argue that they’ve changed the sport too quickly.

This is our latest story to be turned into a MIT Technology Review Narrated podcast, which we’re publishing each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Hurricane Melissa may be the Atlantic Ocean’s strongest on record
There’s little doubt in scientists’ minds that human-caused climate change is to blame. (New Scientist $)
+ While Jamaica is largely without power, no deaths have been confirmed. (BBC)
+ The hurricane is currently sweeping across Cuba. (NYT $)
+ Here’s what we know about hurricanes and climate change. (MIT Technology Review)

2 Texas is suing Tylenol over the Trump administration’s autism claims
Even though the claims are scientifically unfounded. (NY Mag $)
+ The lawsuit claims the firm violated Texas law by claiming the drug was safe. (WP $)

3 Two US Senators want to ban AI companions for minors
They want AI companies to implement age-verification processes, too. (NBC News)
+ The looming crackdown on AI companionship. (MIT Technology Review)

4 Trump’s “golden dome” plan is seriously flawed
It’s unlikely to offer anything like the protection he claims it will. (WP $)
+ Why Trump’s “golden dome” missile defense idea is another ripped straight from the movies. (MIT Technology Review)

5 The Trump administration is backing new nuclear plants
To—surprise surprise—power the AI boom. (NYT $)
+ The grid is straining to support the excessive demands for power. (Reuters)
+ Can nuclear power really fuel the rise of AI? (MIT Technology Review)

6 Uber’s next fleet of autonomous cars will contain Nvidia’s new chips
Which could eventually make it cheaper to hail a robotaxi. (Bloomberg $)
+ Nvidia is also working with a company called Lucid to bring autonomous cars to consumers. (Ars Technica)

7 Weight loss drugs are becoming more commonplace across the world
Semaglutide patents are due to expire in Brazil, China, and India next year. (Economist $)
+ We’re learning more about what weight-loss drugs do to the body. (MIT Technology Review)

8 More billionaires hail from America than any other nation
The majority of them made their fortunes in technology. (WSJ $)
+ China is closing in on America’s global science lead. (Bloomberg $)

9 Australian police are developing an AI tool to decode Gen Z slang
It’s part of a bid to combat the rising networks of young men targeting vulnerable girls online. (The Guardian)

10 This robot housekeeper is controlled remotely by a human 🤖
Nothing weird about that at all… (WSJ $)
+ The humans behind the robots. (MIT Technology Review)

11 Cameo is suing OpenAI
It’s unhappy about Sora’s new Cameo feature. (Reuters)

Quote of the day

“I don’t believe we’re in an AI bubble.”

—Jensen Huang, Nvidia’s CEO, conveniently dismisses the growing concerns around the AI hype train, Bloomberg reports.

One more thing

How to befriend a crow

Crows have become minor TikTok celebrities thanks to CrowTok, a small but extremely active niche on the social video app that has exploded in popularity over the past two years. CrowTok isn’t just about birds, though. It also often explores the relationships that corvids—a family of birds including crows, magpies, and ravens—develop with human beings.

They’re not the only intelligent birds around, but in general, corvids are smart in a way that resonates deeply with humans. But how easy is it to befriend them? And what can it teach us about attention, and patience, in a world that often seems to have little of either? Read the full story.

—Abby Ohlheiser

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Congratulations to Flavor Flav, who’s been appointed Team USA’s official hype man for the 2026 Winter Olympics!
+ Why are Spirographs so hypnotic? Answers on a postcard.
+ I love this story—and beautiful photos—celebrating 50 years of the World Gay Rodeo.
+ Axolotls really are remarkable little creatures.