Google Offers AI Certificate Free For Eligible U.S. Small Businesses via @sejournal, @MattGSouthern

Google has launched the Google AI Professional Certificate, a self-paced program covering data analysis, content creation, research, and vibe coding.

Every participant receives three months of free access to Google AI Pro. Eligible U.S. small businesses can access the entire program at no cost through a separate application (more on eligibility below).

The certificate is available now on Coursera, Google Skills, and Udemy. In the U.S. and Canada, the subscription costs $49 per month.

What The Certificate Covers

The program consists of seven modules, each of which can be completed in about an hour. No prior AI experience is required.

Participants complete more than 20 hands-on activities. These include creating presentations and marketing materials, conducting deep research, building infographics, analyzing data, and building custom apps without writing code.

After completing all seven modules, participants earn a Google certificate they can add to LinkedIn and share with employers.

Free Access For Eligible U.S. Small Businesses

Google is offering the certificate at no cost to eligible U.S. small and medium-sized businesses with 500 or fewer employees. The offer also includes three months of free Google Workspace Business Standard (for new Workspace customers, up to 300 seats).

To qualify, businesses must be registered in the U.S. and submit their Employer Identification Number (EIN) through a dedicated application on Coursera. Coursera said the verification process takes 5-7 business days.

Businesses can also apply at grow.google/small-business. Google said it is working with the U.S. Chamber of Commerce and America’s Small Business Development Centers to distribute the program.

How This Helps

The program builds on Google AI Essentials, which has become the most popular course on Coursera. The AI Professional Certificate goes further, focusing on applied use cases rather than introductory concepts.

The certificate focuses on tools like Gemini, NotebookLM, and Google AI Studio, so the skills are tied to Google’s ecosystem. Google launched a separate Generative AI Leader certification for Google Cloud in May 2025, though that program focused on non-technical business leaders and required a $99 exam fee. The new AI Professional Certificate has no exam fee.

Looking Ahead

The Google AI Professional Certificate is available now on Coursera, Google Skills, and Udemy. Eligible U.S. small businesses can apply for no-cost access at grow.google/small-business.

For professionals already familiar with Google’s AI tools through earlier training programs, this certificate adds structured, employer-recognized credentials to practical skills you may already be developing on your own.

Why AI Misreads The Middle Of Your Best Pages via @sejournal, @DuaneForrester

The middle is where your content dies. Not because your writing suddenly gets bad halfway down the page, and not because your reader gets bored, but because large language models have a repeatable weakness with long contexts, and modern AI systems increasingly squeeze long content before the model even reads it.

That combo creates what I think of as dog-bone thinking. Strong at the beginning, strong at the end, and the middle gets wobbly. The model drifts, loses the thread, or grabs the wrong supporting detail. You can publish a long, well-researched piece and still watch the system lift the intro, lift the conclusion, then hallucinate the connective tissue in between.

This is not theory. It shows up in research, and it shows up in production systems.

Image Credit: Duane Forrester

Why The Dog-Bone Happens

There are two stacked failure modes, and they hit the same place.

First, “lost in the middle” is real. Stanford and collaborators measured how language models behave when key information moves around inside long inputs. Performance was often highest when the relevant material was at the beginning or end, and it dropped when the relevant material sat in the middle. That’s the dog-bone pattern, quantified.

Second, long contexts are getting bigger, but systems are also getting more aggressive about compression. Even if a model can take a massive input, the product pipeline frequently prunes, summarizes, or compresses to control cost and keep agent workflows stable. That makes the middle even more fragile, because it is the easiest segment to collapse into mushy summary.

A fresh example: ATACompressor is a 2026 arXiv paper focused on adaptive, task-aware compression for long-context processing. It explicitly frames “lost in the middle” as a problem in long contexts and positions compression as a strategy that must preserve task-relevant content while shrinking everything else.

So you were right if you ever told someone to “shorten the middle.” Now, I’d offer this refinement:

You are not shortening the middle for the LLM so much as engineering the middle to survive both attention bias and compression.

Two Filters, One Danger Zone

Think of your content going through two filters before it becomes an answer.

  • Filter 1: Model Attention Behavior: Even if the system passes your text in full, the model’s ability to use it is position-sensitive. Start and end tend to perform better, middle tends to perform worse.
  • Filter 2: System-Level Context Management: Before the model sees anything, many systems condense the input. That can be explicit summarization, learned compression, or “context folding” patterns used by agents to keep working memory small. One example in this space is AgentFold, which focuses on proactive context folding for long-horizon web agents.

If you accept those two filters as normal, the middle becomes a double-risk zone. It gets ignored more often, and it gets compressed more often.

That is the logic behind the dog-bone idea. A “shorten the middle” approach becomes a direct mitigation for both filters: you are reducing what the system will compress away, and you are making what remains easier for the model to retrieve and use.

What To Do About It Without Turning Your Writing Into A Spec Sheet

This is not a call to kill longform. Longform still matters for humans, and for machines that use your content as a knowledge base. The fix is structural, not “write less.”

You want the middle to carry higher information density with clearer anchors.

Here’s the practical guidance, kept tight on purpose.

1. Put “Answer Blocks” In The Middle, Not Connective Prose

Most long articles have a soft, wandering middle where the author builds nuance, adds color, and tries to be thorough. Humans can follow that. Models are more likely to lose the thread there. Instead, make the middle a sequence of short blocks where each block can stand alone.

An answer block has:
A clear claim. A constraint. A supporting detail. A direct implication.

If a block cannot survive being quoted by itself, it will not survive compression. This is how you make the middle “hard to summarize badly.”

2. Re-Key The Topic Halfway Through

Drift often happens because the model stops seeing consistent anchors.

At the midpoint, add a short “re-key” that restates the thesis in plain words, restates the key entities, and restates the decision criteria. Two to four sentences are often enough here. Think of this as continuity control for the model.

It also helps compression systems. When you restate what matters, you are telling the compressor what not to throw away.

3. Keep Proof Local To The Claim

Models and compressors both behave better when the supporting detail sits close to the statement it supports.

If your claim is in paragraph 14, and the proof is in paragraph 37, a compressor will often reduce the middle into a summary that drops the link between them. Then the model fills that gap with a best guess.

Local proof looks like:
Claim, then the number, date, definition, or citation right there. If you need a longer explanation, do it after you’ve anchored the claim.

This is also how you become easier to cite. It is hard to cite a claim that requires stitching context from multiple sections.

4. Use Consistent Naming For The Core Objects

This is a quiet one, but it matters a lot. If you rename the same thing five times for style, humans nod, but models can drift.

Pick the term for the core thing and keep it consistent throughout. You can add synonyms for humans, but keep the primary label stable. When systems extract or compress, stable labels become handles. Unstable labels become fog.

5. Treat “Structured Outputs” As A Clue For How Machines Prefer To Consume Information

A big trend in LLM tooling is structured outputs and constrained decoding. The point is not that your article should be JSON. The point is that the ecosystem is moving toward machine-parseable extraction. That trend tells you something important: machines want facts in predictable shapes.

So, inside the middle of your article, include at least a few predictable shapes:
Definitions. Step sequences. Criteria lists. Comparisons with fixed attributes. Named entities tied to specific claims.

Do that, and your content becomes easier to extract, easier to compress safely, and easier to reuse correctly.
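As a hypothetical illustration of what a “predictable shape” can look like, here is a single answer block expressed as a small structured record. The field names are my own, chosen to mirror the four parts of an answer block described earlier; they are not any standard schema.

```python
# A hypothetical "answer block" expressed in a machine-parseable shape.
# Field names are illustrative, not a standard extraction schema.
answer_block = {
    "claim": "Models use the middle of long inputs less reliably than the start or end.",
    "constraint": "Measured when the relevant passage is moved within a long context.",
    "support": "Position sensitivity documented in long-context ('lost in the middle') studies.",
    "implication": "Place self-contained, quotable blocks in the middle of long articles.",
}

# Each block stands alone: it can be quoted, extracted, or compressed
# without pulling context from elsewhere in the article.
assert all(answer_block[key] for key in ("claim", "constraint", "support", "implication"))
print("block is self-contained")
```

The point is not that you should publish dictionaries; it is that each claim, its constraint, and its proof travel together as one extractable unit.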

How This Shows Up In Real SEO Work

This is the crossover point. If you are an SEO or content lead, you are not optimizing for “a model.” You are optimizing for systems that retrieve, compress, and synthesize.

Your visible symptoms will look like:

  • Your article gets paraphrased correctly at the top, but the middle concept is misrepresented. That’s lost-in-the-middle plus compression.
  • Your brand gets mentioned, but your supporting evidence does not get carried into the answer. That’s local proof failing. The model cannot justify citing you, so it uses you as background color.
  • Your nuanced middle sections become generic. That’s compression, turning your nuance into a bland summary, then the model treating that summary as the “true” middle.

Your “shorten the middle” move is how you reduce these failure rates. Not by cutting value, but by tightening the information geometry.

A Simple Way To Edit For Middle Survival

Here’s a clean, five-step workflow you can apply to any long piece, and it’s a sequence you can run in an hour or less.

  1. Identify the midpoint and read only the middle third. If the middle third can’t be summarized in two sentences without losing meaning, it’s too soft.
  2. Add one re-key paragraph at the start of the middle third. Restate: the main claim, the boundaries, and the “so what.” Keep it short.
  3. Convert the middle third into four to eight answer blocks. Each block must be quotable. Each block must include its own constraint and at least one supporting detail.
  4. Move proof next to claim. If proof is far away, pull a compact proof element up. A number, a definition, a source reference. You can keep the longer explanation later.
  5. Stabilize the labels. Pick the name for your key entities and stick to them across the middle.

If you want the nerdy justification for why this works, it is because you are designing for both failure modes documented above: the “lost in the middle” position sensitivity measured in long-context studies, and the reality that production systems compress and fold context to keep agents and workflows stable.

Wrapping Up

Bigger context windows do not save you. They can make your problem worse, because long content invites more compression, and compression invites more loss in the middle.

So yes, keep writing longform when it is warranted, but stop treating the middle like a place to wander. Treat it like the load-bearing span of a bridge. Put the strongest beams there, not the nicest decorations.

That’s how you build content that survives both human reading and machine reuse, without turning your writing into sterile documentation.


This post was originally published on Duane Forrester Decodes.


Featured Image: Collagery/Shutterstock

35-Year SEO Veteran: Great SEO Is Good GEO — But Not Everyone’s Been Doing Great SEO via @sejournal, @theshelleywalsh

As SEOs, we are used to being adaptable to changing algorithms, so LLM optimization should be a simple extension of that process.

To discuss the industry debates surrounding the differences between SEO and GEO and clarify whether they are the same or different, I spoke with SEO veteran Grant Simmons.

Grant has over 30 years of experience helping brands grow and has spent decades focused on meaning, intent, and topical authority long before LLMs entered the conversation.

I spoke with Grant about signal alignment, how Google’s latest continuation patents reveal the mechanics of LLM citations, and what SEOs are getting wrong about topical focus.

“We talk about writing for the machines, but we’re really writing for human need because it’s all driven by the prompt or the query.” – Grant Simmons

You can watch the full interview with Grant on IMHO below, or continue reading the article summary.

Great SEO Is Good GEO

At Google Search Live in December 2025, John Mueller said, “Good SEO is good GEO.”

I asked Grant what he thought were the differences between optimizing for search engines and for machines, and if he thought there were any overlaps.

Grant’s approach echoes what John Mueller said, but “Not everyone has been doing great SEO,” he explained. “Great SEO was always about building topical authority.”

He continued to say, “Essentially, machines (whether it’s Google or whether it’s an LLM) have to understand the underlying meaning of the content so they can present the best answer.

“They have to understand the query or the prompt, then they have to send the best answer. So in that way, it’s very similar.”

Where Grant sees divergence is in how the systems evaluate content. Google has historically ranked pages, and even with passage ranking, it still considers the page and the site as a whole. LLMs operate differently.

“LLMs are looking more at that passage side, you know, something that’s easily extractable, something that has value semantically related to the query or the prompt. And so there’s that fundamental difference.”

Grant also stressed that great SEO has always been holistic, touching social media, PR, content, and brand messaging. Having brand awareness, brand visibility, and brand consistency across all channels is a significant factor in LLM representation. And this is exactly the kind of work that the best SEOs do.

“We’re marketers. We should make sure, not just from a standpoint of what we do in SEO and GEO for our clients, which is connecting a need and intent to the product or service that satisfies that intent, we’re also doing the same in our own marketing. We have to understand what our clients are looking for.

“[GEO] is the same [as SEO] if you’re doing it well. It’s not the same if you weren’t. And of course, there’s nuance.”

My thoughts are that SEOs who have been in the industry the longest are experiencing less disruption because they have seen it all before. They learned to be adaptable in the early years, when there was so much flux as we progressed from multiple search engines to just one. Anyone new to the industry doesn’t have the same background points of reference.

Why Consensus Matters To Be Surfaced By LLMs

I went on to ask Grant about Google’s latest continuation patents, which describe two distinct systems that work together.

The first is what Grant describes as a response confidence engine. This system evaluates whether a passage can be corroborated, whether the information has consensus across the web.

“If they return a passage and they can corroborate that it is true, and when we say true, it’s true in the sense of more than one person is saying it, that doesn’t mean it’s true, but it means the consensus is there,” Grant explained. “The consensus generally wins out.”

The second system is what Grant calls a linkifying engine. Once a passage has been confirmed through consensus, this engine determines whether a specific sentence or sub-element within that passage, what Grant calls a “chunklet,” can be matched and linked to a source.

“Consensus decides whether it’s surfaced in the first place. The linkify engine actually decides whether it’s linkable, whether a citation is actually going to happen,” Grant said.

Getting mentioned by an LLM is one thing. Getting an actual link back to your content requires that the specific passage is both verifiable through consensus and uniquely attributable to your source.

Golden Knowledge Content Wins

So, what kind of content earns this kind of AI visibility? Grant described it as “golden knowledge,” content that is unique in some meaningful way.

“Generally, data-driven, your own data, your own opinion that’s proof-backed, evidence-backed. Taking a different view of things,” Grant said. “But in the same way of taking a different view, there still has to be some kind of consensus. If other people are agreeing with you, that is really important. Your content needs the uniqueness and the data-driven aspect, but it still has to align with the overall consensus on the web.”

Grant was also clear that while we often talk about writing for machines, the orientation should remain human-centered: “We talk about writing for the machines, but we’re really writing for human need because it’s all driven by the prompt or the query.”

This balance between uniqueness and consensus is perhaps the most actionable takeaway. Content that simply restates what everyone else is saying won’t stand out. But content that takes a position without corroboration elsewhere won’t pass the confidence threshold to be surfaced. The sweet spot is original, data-driven insight that others can and do validate.

The Biggest Mistakes SEOs Make With Topical Focus

When I asked Grant about the most common mistakes he sees with topical diversification on pages, his answer was clear: trying to be everything to everyone.

“When you think about intent, suddenly you understand that pages have a right to exist,” Grant said. “I call it path to satisfaction. Understanding who the audience is and what they need to find, you have to provide a path to that satisfaction.”

Grant pointed out that most SEOs inherit existing sites rather than building from scratch. The temptation is to focus on the surface-level optimizations, such as title tags, meta descriptions, and headers, without reviewing whether a page is actually focused on a specific intent or whether it has what he calls “drift.”

“What they won’t do is fundamentally review the page and understand whether that page is focused on a specific intent or whether it has this drift,” Grant explained. “Cleaning out those outliers, topics that you’re covering when you don’t really mean to, is essentially diffusing what the page means. Those are the things that I think SEOs miss out on.”

This ties directly back to LLM citability. If a page lacks clear topical focus, it becomes harder for AI systems to extract a self-contained passage that answers a specific query. Tightening that focus isn’t just good SEO; it’s the foundation of being visible in AI-generated responses.

Grant’s Strategy Recommendation For 2026

I finished by asking Grant what he’s recommending to his clients right now.

“Let’s double down on what’s working,” Grant said. “LLM traffic is so small today that optimizing for LLMs is important for the future but not for today’s metrics. Let’s improve our SEO. Let’s get to that great SEO level. And as we’re doing that, we are incorporating the elements that will help you show up for GEO, that will help show up on these other surfaces.”

His focus is on great content, topical authority, uniqueness, data-driven approaches, citations, and digital PR. In Grant’s words: “Getting content so good that LLMs can’t ignore you, Google can’t ignore you, and publications can’t ignore you.”

It’s the Steve Martin philosophy applied to SEO: “Be so good they can’t ignore you,” and, coincidence or not, the rule I have applied for the last 15 years in SEO.

Watch the full interview with Grant Simmons here:

Thank you to Grant Simmons for offering his insights and being my guest on IMHO.


Featured Image: Shelley Walsh/Search Engine Journal

Why Do Budgets Overspend Even With A Target ROAS or CPA? – Ask A PPC via @sejournal, @navahf

This month’s Ask a PPC explores a common advertiser question: Why budgets sometimes overspend even when a target ROAS or target CPA is in place.

Understanding this behavior requires separating two concepts that are often conflated: budgets and goals. While they work together, they serve very different functions within auction‑based ad platforms. In this post, we’ll walk through how budgets and goals operate, why target ROAS can sometimes increase spend, and which levers advertisers can use to keep budgets under control.

Disclaimer: I am a Microsoft employee. The examples below reference Microsoft Advertising, but the underlying principles apply to any platform that uses automated or goal‑based bidding.

The Difference Between Budgets And Goals

When you set a daily budget, the ad platform averages your spend across approximately 30.4 days (365 days ÷ 12 months). While there are daily fluctuations, the platform’s objective is to meet that average over the course of the period rather than strictly adhere to the number each day.

As a result, a daily budget of $50 can spend up to $100 on a given day. Here are the core reasons for “over” spending:

  • Underspending on too many days during the 30.4-day period.
  • Average CPCs that don’t align with the daily budget.
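The pacing arithmetic above can be sketched in a few lines. The ~30.4-day averaging window and the up-to-2x single-day flex are the figures described in this article; the function names are illustrative, not any ad platform’s API.

```python
# Illustrative budget-pacing arithmetic, using the figures described above.
# Function names are mine, not an ad platform API.

DAYS_PER_MONTH = 30.4  # approximate average days per month (365 / 12)

def monthly_spend_target(daily_budget: float) -> float:
    """Platforms pace toward roughly daily_budget * 30.4 over the month."""
    return daily_budget * DAYS_PER_MONTH

def max_single_day_spend(daily_budget: float) -> float:
    """Any single day can spend up to roughly twice the daily budget."""
    return 2 * daily_budget

daily = 50.0
print(round(monthly_spend_target(daily), 2))  # 1520.0
print(max_single_day_spend(daily))            # 100.0
```

This is why a $50/day campaign spending $100 on one day is not an error: as long as the monthly total stays near $1,520, the platform is behaving as designed.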

Goals function differently. A target ROAS or target CPA is not a spending limit. Instead, it is an optimization instruction.

A target ROAS asks the platform to achieve a specified return based on the conversion values being passed in. A target CPA instructs the platform to drive conversions at or below a certain cost, regardless of differences in conversion value.

Because goals are optimization signals rather than caps, the platform may spend more budget if it believes that doing so will help reach the target.

Why Target ROAS Can Increase Spend

Target ROAS is often perceived as a conservative bidding approach, but in practice, it can drive higher spend under certain conditions.

One common scenario involves high CPCs relative to budget size. If the average CPC exceeds roughly 10% of the daily budget, the platform may need to stretch spending in order to secure enough eligible clicks to meet the ROAS goal.

Overspending can also occur when there has been underspending earlier in the month. Since budgets are averaged, the platform may increase spend later in the period to compensate for missed opportunities. This behavior can look abrupt from an advertiser perspective, but it aligns with how budget pacing operates.

Image from author, February 2026

Accurate conversion values are critical in these situations. When incorrect or inflated values are passed to the platform, the system may believe it is driving strong returns when it is not. That misunderstanding can lead to increased spend in pursuit of perceived performance.

Another important consideration is how conversion actions are classified. Primary conversions influence bidding and reporting, while secondary conversions are observed but excluded from optimization logic. When too many conversion actions are set as primary, particularly if they overlap, the platform may double-count success and bias spend toward certain keywords, audiences, or signals.

Microsoft Conversion View (Image from author, January 2026)
Google Conversion View (Image from author, February 2026)

How Advertisers Can Protect Against Overspending

Advertisers do have meaningful controls available to manage spend behavior.

The first is aligning budgets with auction realities. A practical guideline is ensuring that a daily budget can support at least 10 clicks at the average CPC. Even a 10% conversion rate, which is unusually strong for non‑branded search, would turn those 10 clicks into roughly one conversion per day. Without sufficient click volume, the platform may either restrict spend to high‑cost opportunities or over‑allocate budget to lower‑quality traffic to meet pacing expectations.
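The 10-click guideline can be expressed as a quick check. This is a minimal sketch of the arithmetic described above; `budget_supports_auction` is an illustrative name, not a platform API. Note that requiring 10 clicks at the average CPC is the same as keeping the average CPC at or below 10% of the daily budget.

```python
# Minimal sketch of the budget-vs-CPC guideline described above.
# The function name is illustrative, not any ad platform's API.

def budget_supports_auction(daily_budget: float, avg_cpc: float,
                            min_clicks: int = 10) -> bool:
    """True if the daily budget covers at least `min_clicks` at the average CPC.

    Equivalent to requiring avg_cpc <= daily_budget / min_clicks,
    i.e., a CPC at or below ~10% of budget for the default of 10 clicks.
    """
    return daily_budget >= avg_cpc * min_clicks

# A $50 daily budget supports a $4 average CPC (12.5 clicks)...
print(budget_supports_auction(50.0, 4.0))  # True
# ...but not a $7 average CPC (only ~7 clicks).
print(budget_supports_auction(50.0, 7.0))  # False
```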

The second lever is being realistic about conversion trust. Many advertisers have inconsistent attribution models or partial tracking implementations, which reduces confidence in reported conversion data. When conversion data is not reliable, aggressive ROAS or CPA targets can be counterproductive.

In those cases, advertisers may choose to set more conservative goals or opt for a bid strategy that better matches the quality of available data. For example, if conversion values are inconsistent, target CPA may be more appropriate. Conversely, if certain conversions are significantly more valuable than others, a purely CPA‑based approach may lead to inefficient spend allocation.

A final lever that is often underutilized is ad scheduling. Restricting campaigns to specific hours of the day can reduce volatility and improve budget efficiency. When budget pressure exists, running ads during a focused three‑to‑six‑hour window rather than all day can provide stronger control without turning automation off entirely.

Closing Thoughts

When budgets overspend in goal‑based bidding strategies, it is rarely the result of a platform error. More often, it reflects a mismatch between budgets, goals, and the quality of data being supplied.

Careful attention to conversion accuracy, realistic budget sizing, and thoughtful use of controls such as ad scheduling can significantly reduce unexpected spend behavior. Automated bidding is most effective when inputs are intentional and aligned with actual business value.


Featured Image: Paulo Bobita/Search Engine Journal

The robots who predict the future

To be human is, fundamentally, to be a forecaster. Occasionally a pretty good one. Trying to see the future, whether through the lens of past experience or the logic of cause and effect, has helped us hunt, avoid being hunted, plant crops, forge social bonds, and in general survive in a world that does not prioritize our survival. Indeed, as the tools of divination have changed over the centuries, from tea leaves to data sets, our conviction that the future can be known (and therefore controlled) has only grown stronger. 

Today, we are awash in a sea of predictions so vast and unrelenting that most of us barely even register them. As I write this sentence, algorithms on some remote server are busy trying to guess my next word based on those I have already typed. If you’re reading this online, a separate set of algorithms has likely already served you an ad deemed to be one you are most likely to click. (To the die-hards reading this story on paper, congratulations! You have escaped the algorithms … for now.)

So how did all this happen? People’s desire for reliable forecasting is understandable. Still, nobody signed up for an omnipresent, algorithmic oracle mediating every aspect of their life. A trio of new books tries to make sense of our future-focused world—how we got here, and what this change means. Each has its own prescriptions for navigating this new reality, but they all agree on one thing: Predictions are ultimately about power and control.

The Means of Prediction: How AI Really Works (and Who Benefits)
Maximilian Kasy
UNIVERSITY OF CHICAGO PRESS, 2025

In The Means of Prediction: How AI Really Works (and Who Benefits), the Oxford economist Maximilian Kasy explains how most predictions in our lives are based on the statistical analysis of patterns in large, labeled data sets—what’s known in AI circles as supervised learning. Once “trained” on such data sets, algorithms for supervised learning can be presented with all kinds of new information and then deliver their best guess as to some specific future outcome. Will you violate your parole, pay off your mortgage, get promoted if hired, perform well on your college exams, be in your home when it gets bombed? More and more, our lives are shaped (and, yes, occasionally shortened) by a machine’s answer to these questions.

If the thought of a ubiquitous, mostly invisible predictive layer secretly grafted onto your life by a bunch of profit-hungry corporations makes you uneasy … well, same here. This arrangement is leading to a crueler, blander, more instrumentalized world, one where life’s possibilities are foreclosed, age-old prejudices are entrenched, and everyone’s brain seems to be actively turning into goo. It’s an outcome, according to Kasy, that was entirely predictable. 

AI adherents might frame those consequences as “unintended,” or mere problems of optimization and alignment. Kasy, on the other hand, argues that they represent the system working as intended. “If an algorithm selecting what you see on social media promotes outrage, thereby maximizing engagement and ad clicks,” he writes, “that’s because promoting outrage is good for profits from ad sales.” The same holds true for an algorithm that nixes job candidates “who are likely to have family-care responsibilities outside the workplace,” and the ones that “screen out people who are likely to develop chronic health problems or disabilities.” What’s good for a company’s bottom line may not be good for your job-hunting prospects or life expectancy.

Where Kasy differs from other critics is that he doesn’t think working to create less biased, more equitable algorithms will fix any of this. Trying to rebalance the scales can’t change the fact that predictive algorithms rely on past data that’s often racist, sexist, and flawed in countless other ways. And, he says, the incentives for profit will always trump attempts to eliminate harm. The only way to counter this is with broad democratic control over what Kasy calls “the means of prediction”: data, computational infrastructure, technical expertise, and energy.  

A little more than half of The Means of Prediction is devoted to explaining how this might be accomplished—through mechanisms including “data trusts” (collective public bodies that make decisions about how to process and use data on behalf of their contributors) and corporate taxing schemes that try to account for the social harm AI inflicts. There’s a lot of economist talk along the way, about how “agents of change” might help achieve “value alignment” in order to “maximize social welfare.” Reasonable, I guess, though a skeptic might point out that Kasy’s rigorous, systematic approach to building new public-serving institutions comes at a time when public trust in institutions has never been lower. Also, there’s the brain goo problem. 

To his credit, Kasy is a realist here. He doesn’t presume that any of these proposals will be easy to implement. Or that it will happen overnight, or even in the near future. The troubling question at the end of his book is: Do we have that kind of time?

Reading Kasy’s blueprint for seizing control of the means of prediction raises another pressing question. How on earth did we reach a point where machine-mediated prediction is more or less inescapable? Capitalism might be Marx’s pithy response. Fine, as far as it goes, but that doesn’t explain why the same kinds of algorithms that currently model climate change are for some reason also deciding whether you get a new kidney or I get a car loan.

The Irrational Decision: How We Gave Computers the Power to Choose for Us
Benjamin Recht
PRINCETON UNIVERSITY PRESS, 2026

If you ask Benjamin Recht, author of The Irrational Decision: How We Gave Computers the Power to Choose for Us, he’d likely tell you our current predicament has a lot to do with the idea and ideology of decision theory—or what economists call rational choice theory. Recht, a polymathic professor in UC Berkeley’s Department of Electrical Engineering and Computer Science, prefers the term “mathematical rationality” to describe the narrow, statistical conception that stoked the desire to build computers, informed how they would eventually work, and influenced the kinds of problems they would be good at solving. 

This belief system goes all the way back to the Enlightenment, but in Recht’s telling, it truly took hold at the tail end of World War II. Nothing focuses the mind on risk and quick decision-making like war, and the mathematical models that proved especially useful in the fight against the Axis powers convinced a select group of scientists and statisticians that they might also be a logical basis for designing the first computers. Thus was born the idea of a computer as an ideal rational agent, a machine capable of making optimal decisions by quantifying uncertainty and maximizing utility.

Intuition, experience, and judgment gave way, says Recht, to optimization, game theory, and statistical prediction. “The core algorithms developed in this period drive the automated decisions of our modern world, whether it be in managing supply chains, scheduling flight times, or placing advertisements on your social media feeds,” he writes. In this optimization-driven reality, “every life decision is posed as if it were a round at an imaginary casino, and every argument can be reduced to costs and benefits, means and ends.”

Today, mathematical rationality (wearing its human skin) is best represented by the likes of the pollster Nate Silver, the Harvard psychologist Steven Pinker, and an assortment of Silicon Valley oligarchs, says Recht. These are people who fundamentally believe the world would be a better place if more of us adopted their analytic mindset and learned to weigh costs and benefits, estimate risks, and plan optimally. In other words, these are people who believe we should all make decisions like computers. 

How might we demonstrate that (unquantifiable) human intuition, morality, and judgment are better ways of addressing some of the world’s most important and vexing problems?

It’s a ridiculous idea for multiple reasons, he says. To name just one, it’s not as if humans couldn’t make evidence-based decisions before automation. “Advances in clean water, antibiotics, and public health brought life expectancy from under 40 in the 1850s to 70 by 1950,” Recht writes. “From the late 1800s to the early 1900s, we had world-changing scientific breakthroughs in physics, including new theories of thermodynamics, quantum mechanics, and relativity.” We also managed to build cars and airplanes without a formal system of rationality and somehow came up with societal innovations like modern democracy without optimal decision theory. 

So how might we convince the Pinkers and Silvers of the world that most decisions we face in life are not in fact grist for the unrelenting mill of mathematical rationality? Moreover, how might we demonstrate that (unquantifiable) human intuition, morality, and judgment might be better ways of addressing some of the world’s most important and vexing problems?

Prophecy: Prediction, Power, and the Fight for the Future, from Ancient Oracles to AI
Carissa Véliz
DOUBLEDAY, 2026

One might start by reminding the rationalists that any prediction, computational or otherwise, is really just a wish—but one with a powerful tendency to self-fulfill. This idea animates Carissa Véliz’s wonderfully wide-ranging polemic Prophecy: Prediction, Power, and the Fight for the Future, from Ancient Oracles to AI.

A philosopher at the University of Oxford, Véliz sees a prediction as “a magnet that bends reality toward itself.” She writes, “When the force of the magnet is strong enough, the prediction becomes the cause of its becoming true.” 

Take Gordon Moore. While he doesn’t come up in Prophecy, he does figure somewhat prominently in Recht’s history of mathematical rationality. A cofounder of the tech giant Intel, Moore is famous for his 1965 prediction that the density of transistors in integrated circuits would double every year (a rate he later revised to every two years). “Moore’s Law” turned out to be true, and remains true today, although it does seem to be running out of steam as transistors approach the physical limits of the silicon atom.

One story you can tell yourself about Moore’s Law is that Gordon was just a prescient guy. His now-classic 1965 opinion piece “Cramming More Components onto Integrated Circuits,” for Electronics magazine, simply extrapolated what computing trends might mean for the future of the semiconductor industry. 

Another story—the one I’m guessing Véliz might tell—is that Moore put an informed prediction out into the world, and an entire industry had a collective interest in making it come true. As Recht makes clear, there were and remain obvious financial incentives for companies to make faster and smaller computer chips. And while the industry has likely spent billions of dollars trying to keep Moore’s Law alive, it’s undoubtedly profited even more from it. Moore’s Law was a helluva strong magnet. 

Predictions don’t just have a habit of making themselves come true, says Véliz. They can also distract us from the challenges of the here and now. When an AI boomer promises that artificial general intelligence will be the last problem humanity needs to solve, it not only shapes how we think about AI’s role in our lives; it also shifts our attention away from the very real and very pressing problems of the present day—problems that in many cases AI is causing.

In this sense, the questions around predictions (Who’s making them? Who has the right to make them?) are also fundamentally about power. It’s no accident, Véliz says, that the societies that rely most heavily on prediction are also the ones that tend toward oppression and authoritarianism. Predictions are “veiled prescriptive assertions—they tell us how to act,” she writes. “They are what philosophers call speech acts. When we believe a prediction and act in accordance with it, it’s akin to obeying an order.”

As much as tech companies would like us to believe otherwise, technology is not destiny. Humans make it and choose how to use it … or not use it. Maybe the most appropriate (and human) thing we can do in the face of all the uninvited daily predictions in our lives is to simply defy them. 

Bryan Gardiner is a writer based in Oakland, California.

Welcome to the dark side of crypto’s permissionless dream

“We’re out of airspace now. We can do whatever we want,” Jean-Paul Thorbjornsen tells me from the pilot’s seat of his Aston Martin helicopter. As we fly over suburbs outside Melbourne, Australia, it’s becoming clear that doing whatever he wants is Thorbjornsen’s MO. 

Upper-middle-class homes give way to vineyards, and Thorbjornsen points out our landing spot outside a winery. People visiting for lunch walk outside. “They’re going to ask for a shot now,” he says, used to the attention drawn by his luxury helicopter, emblazoned with the tail letters “BTC” for bitcoin (the price tag of $5 million in Australian dollars—$3.5 million in US dollars today—was perhaps reasonable for someone who claims a previous crypto project made more than AU$400 million, although he also says those funds were tied up in the company). 

Thorbjornsen is a founder of THORChain, a blockchain through which users can swap one cryptocurrency for another and earn fees from making those swaps. THORChain is permissionless, so anyone can use it without getting prior approval from a centralized authority. As a decentralized network, the blockchain is built and run by operators located across the globe, most of whom use pseudonyms. 

During its early days, Thorbjornsen himself hid behind the pseudonym “leena” and used an AI-generated female image as his avatar. But around March 2024, he revealed that he, an Australian man in his mid-30s with a rural Catholic upbringing, was the mind behind the blockchain. More or less.

If there is a central question around THORChain, it is this: Exactly who is responsible for its operations? Blockchains as decentralized as THORChain are supposed to offer systems that operate outside the centralized leadership of corruptible governments and financial institutions. If a few people have outsize sway over this decentralized network—one of a handful that operate at such a large scale—it’s one more blemish on the legacy of bitcoin’s promise, which has already been tarnished by capitalistic political frenzy.   

Who’s responsible for THORChain matters because in January last year, its users lost cryptocurrency worth more than $200 million in US dollars after THORChain transactions and accounts were frozen by a single admin override, which users believed was not supposed to be possible given the decentralized structure. When the freeze was lifted, some users raced to pull their money out. The following month, a team of North Korean hackers known as the Lazarus Group used THORChain to move roughly $1.2 billion of ethereum stolen in the infamous hack of the Dubai-based crypto exchange Bybit.

Thorbjornsen explains away THORChain’s inability to stop the movement of stolen funds, or prevent a bank run, as a function of its decentralized and permissionless nature. The lack of executive powers means that anyone can use the network for any reason, and arguably there’s no one to hold accountable when even the worst goes down.

But when the worst did go down, nearly everyone in the THORChain community, and those paying attention to it in channels like X, pointed their fingers at Thorbjornsen. A lawsuit filed by the THORChain creditors who lost millions in January 2025 names him. A former FBI analyst and North Korea specialist, reflecting on the potential repercussions for helping move stolen funds, told me he wouldn’t want to be in Thorbjornsen’s shoes.

THORChain was designed to make decisions based on votes by node operators, where two-thirds majority rules.

That’s why I traveled to Australia—to see if I could get a handle on where he sees himself and his role in relation to the network he says he founded.

According to Thorbjornsen, he should not be held responsible for either event. THORChain was designed to make decisions based on votes by node operators—people with the computer power, and crypto stake, to run a cluster of servers that process the network’s transactions. In those votes, a two-thirds majority rules.

Then there’s the permissionless part. Anyone can use THORChain to make swaps, which is why it’s been a popular way for widely sanctioned entities such as the government of North Korea to move stolen money. This principle goes back to the cypherpunk roots of bitcoin, a currency that operates outside of nation-states’ rules. THORChain is designed to avoid geopolitical entanglements; that’s what its users like about it.

But there are distinct financial motivations for moving crypto, stolen or not: Node operators earn fees from the funds running through the network. In theory, this incentivizes them to act in the network’s best interests—and, arguably, Thorbjornsen’s interests too, as many assume his wealth is tied to the network’s profits. (Thorbjornsen says it is not, and that it comes instead from “many sources,” including “buying bitcoin back in 2013.”)

Now recent events have raised critical questions, not just about Thorbjornsen’s outsize role in THORChain’s operations, but also about the blockchain’s underlying nature.

If THORChain is decentralized, how was a single operator able to freeze its funds a month before the Bybit hack? Could someone have unilaterally decided to stop the stolen Bybit funds from coming through the network, and chosen not to? 

Thorbjornsen insists THORChain is helping realize bitcoin’s original purpose of enabling anyone to transact freely outside the reach of purportedly corrupt governments. Yet the network’s problems suggest that an alternative financial system might not be much better.

Decentralized? 

On February 21, 2025, Bybit CEO Ben Zhou got an alarming call from the company’s chief financial officer. About $1.5 billion in US dollars of the exchange’s ethereum token, ETH, had been stolen.

The FBI attributed the theft to the Lazarus Group. Typically, criminals will want to convert ETH to bitcoin, which is much easier to convert in turn to cash. Knowing this, the FBI issued a public service announcement on February 26 to “exchanges, bridges … and other virtual asset service providers,” encouraging them to block transactions from accounts related to the hack. 

Someone posted that announcement in THORChain’s private, invite-only developer channel on Discord, a chat app used widely by software engineers and gamers. While other crypto exchanges and bridges (which facilitate transactions across different blockchains) heeded the warning, THORChain’s node operators, developers, and invested insiders debated whether to close the trading gates, a decision requiring a majority vote.

“Civil war is a very strong term, but there was a strong rift in the community,” says Boone Wheeler, a US-based crypto enthusiast. In 2021, Wheeler purchased some rune, THORChain’s Norse-mythology-themed native token, and he has been paid to write articles about the network to help advertise it. The rift formed “between people who wanted to stay permissionless,” he says, “and others who wanted to blacklist the funds.”

Wheeler, who says he doesn’t run a node or code for THORChain, fell on the side of remaining permissionless. However, others spoke up for blocking the transfers. THORChain, they argued, wasn’t decentralized enough to keep those running the network safe from law enforcement—especially when those operators were identifiable by their IP addresses, some based in the US.

“We are not the morality police,” someone with the username @Swing_Pop wrote on February 27 in the developer Discord.

THORChain’s design includes up to 120 nodes whose operators manage transactions on the network through a voting process. Anyone with hosting hardware can become an operator by funding nodes with rune as collateral, which provides the network with liquidity. Nodes can respond to a transaction by validating it or doing nothing. While individual transactions can’t be blocked, trading can be halted by a two-thirds majority vote. 

Nodes are also penalized for not participating in voting, which the system labels as “bad behavior.” Every 2.5 days, THORChain automatically “churns” nodes out to ensure that no one node gains too much control. The nodes that chose not to validate transactions from the Bybit hack were automatically “churned” out of the system. Thorbjornsen says about 20 or 30 nodes were booted from the network in this way. (Node operators can run multiple nodes, and 120 are rarely running simultaneously; at the time of writing, 55 unique IDs operated 103 nodes.)

By February 27, some node operators were prepared to leave the network altogether. “It’s personally getting beyond my risk tolerance,” wrote @Runetard in the dev Discord. “Sorry to those of the community that feel otherwise. There are a bunch of us holding all the risk and some are getting ready to walk away.”

Even so, the financial incentive for the network operators who remained was significant. As one member of the dev Discord put it earlier that day, $3 million had been “extracted as commission” from the theft by those operating THORChain. “This is wrong!” they wrote.

Thorbjornsen weighed in on this back-and-forth, during which nodes paused and unpaused the network. He now says there was a right and wrong way for node operators to have behaved. “The correct way of doing things,” he says, was for node operators who opposed processing stolen funds to “go offline and … get [themselves] kicked out” rather than try to police who could use THORChain. He also says that while operators could discuss stopping transactions, “there was simply no design in the code that allowed [them] to do that.” However, a since-deleted post from his personal X account on March 3, 2025, stated: “I pushed for all my nodes to unhalt trading [keep trading]. Threatened to yank bond if they didn’t comply. Every single one.” (Thorbjornsen says his social media team ran this account in 2025.) 

In an Australian 7 News Spotlight documentary last June, Thorbjornsen estimated that THORChain earned between $5 million and $10 million from the heist.

When asked in that same documentary if he received any of those fees, he replied, “Not directly.” When we spoke, I asked him to elaborate. He said he’s “not a recipient” of any funds THORChain sets aside for developers or marketers, nor does he operate any nodes. He was merely speaking generally, he told me: “All crypto holders profit indirectly off economic activity on any chain.”

Illustration: a character in a hooded sweatshirt at a computer station. KAGAN MCLEOD

Most important to Thorbjornsen was that, despite “huge pressure to shut the protocol down and stop servicing these swaps,” THORChain chugged along. He also notes that the hackers’ tactics, moving fast and splitting funds across multiple addresses, made it difficult to identify “bad swaps.”

Blockchain experts like Nick Carlsen, a former FBI analyst at the blockchain intelligence company TRM Labs, don’t buy this assessment. Other services similar to THORChain were identifying and rejecting these transactions. Had THORChain done the same, Carlsen adds, the stolen funds could have been contained on the Ethereum network, which “would have basically denied North Korea the ability to kick off this laundering process.” 

And while THORChain touts its decentralization, in “practical applications” like the Lazarus Group’s theft, “most de-fi [decentralized finance] protocols are centralized,” says Daren Firestone, an attorney who represents crypto industry whistleblowers, citing a 2023 US Treasury Department risk assessment on illicit finance. 

With centralization comes culpability, and in these cases, Firestone adds, that comes down to “who profits from [the protocol], so who creates it? But most importantly, who controls it?” Is there someone who can “hit an emergency off switch? … Direct nodes?”

Many answer these questions with Thorbjornsen’s name. “Everyone likes to pass the blame,” he says, even though he wasn’t alone in building THORChain. “In the end, it all comes back to me anyway.”

THORChain origins

According to Thorbjornsen, his story goes like this.

The third of 10 homeschooled children in a “traditional” Catholic household in rural Australia, he spent his days learning math, reading, writing, and studying the Bible. As he got older, he was also responsible for instructing his younger siblings. Wednesday was his day to move the solar panels that powered their home. His parents “installed” a mango and citrus orchard, more to keep nine boys busy than to reap the produce, he says.

“We lived close to a local airfield,” Thorbjornsen says, “and I was always mesmerized by these planes.” He joined the Australian air force and studied engineering, but he says the military left him feeling like “a square peg in a round hole.” He adds that doing things his own way got him frequently “pulled aside” by superiors.

“That’s when I started looking elsewhere,” he says, and in 2013, he found bitcoin. It appealed because it existed “outside the system.”

During the 2017 crypto bull run, Thorbjornsen raised AU$12 million in an initial coin offering for CanYa, a decentralized marketplace he cofounded. CanYa ultimately “died” in 2018, and Thorbjornsen pivoted to a “decentralized liquidity” project that would become THORChain.

He worked with a couple of different developer teams, and then, in 2019, he clicked with an American developer, Chad Barraford, at a hackathon in Germany. Barraford (who declined to be interviewed for this story) was an early public face of THORChain. 

Around this time, Thorbjornsen says, “a couple of us helped manage the payroll and early investment funds.” In a 2020 interview, Kai Ansaari, identified as a THORChain “project lead,” wrote, “We’re all contributors … There’s no real ‘lead,’ ‘CEO,’ ‘founder,’ etc.”

In interviews conducted since he came out from behind the “leena” account in 2024, Thorbjornsen has positioned himself as a key lead. He now says his plan had always been to hand over the account, along with command powers and control of THORChain social media accounts, once the blockchain had matured enough to realize its promise of decentralization.

In 2021, he says, he started this process, first by ceasing to use his own rune to back node operators who didn’t have enough to supply their own funding (this can be a way to influence node votes without operating a node yourself). That year, the protocol suffered multiple hacks that resulted in millions of dollars in losses. Nine Realms, a US-incorporated coding company, was brought on to take over THORChain’s development. Thorbjornsen says he passed “leena” over to “other community members” and “left crypto” in 2021, selling “a bunch of bitcoin” and buying the helicopter. 

Despite this crypto departure, he came back onto the scene with gusto in 2024 when he revealed himself as the operator of the “leena” account. “For many years, I stayed private because I didn’t want the attention,” he says now.

By early 2024 Thorbjornsen considered the network to be sufficiently decentralized and began advertising it publicly. He started regularly posting videos on his TikTok and YouTube channels (“Two sick videos every week,” in the words of one caption) that showed him piloting his helicopter wearing shirts that read “Thor.”

By November 2024, Thorbjornsen, who describes himself as “a bit flamboyant,” was calling himself THORChain’s CEO (“chief energy officer”) and the “master of the memes” in a video from Binance Blockchain Week, an industry conference in Dubai. You need “strong memetic energy,” he says in the video, “to create the community, to create the cult.” Cults imply centralized leadership, and since outing himself as “leena,” Thorbjornsen has publicly appeared to helm the project, with one interviewer deeming him the “THORChain Satoshi” (an allusion to the pseudonymous creator of bitcoin). 

One consequence of going public as a face of the protocol: He’s received death threats. “I stirred it up. Do I regret it? Who knows?” he said when we met in Australia. “It’s caused a lot of chaos.” 

But, he added, “this is the bed that I’ve laid.” When we spoke again, months later, he backtracked, saying he “got sucked into” defending THORChain in 2024 and 2025 because he was involved from 2018 to 2021 and has “a perspective on how the protocol operates.”

Centralized? 

Ryan Treat, a retired US Army veteran, woke up one morning in January 2025 to some disturbing activity on X. “My heart sank,” he says. THORFi, the THORChain program he’d used to earn interest on the bitcoin he’d planned to save for his retirement, had frozen all accounts—but that didn’t make sense.

THORFi featured a lending and saving program said to give users “complete control” and self-custody of their crypto, meaning they could withdraw it at any time. 

Treat was no crypto amateur. He bought his first bitcoin at around “$5 apiece,” he says, and had always kept it off centralized exchanges that would maintain custody of his wallets. He liked THORChain because it claimed to be decentralized and permissionless. “I got into bitcoin because I wanted to have government-less money,” he says. 

We were told it was decentralized. Then you wake up one morning and read this guy had an admin mimir.

Many who’d used THORFi lending and saving programs felt similarly. Users I interviewed differentiated THORChain from centralized lending platforms like BlockFi and Celsius, both of which offered extraordinarily high yields before filing for bankruptcy in 2022. “I viewed THORChain as a decentralized system where it was safer,” says Halsey Richartz, a Florida-based THORFi creditor, with “vanilla, 1% passive yield.” Indeed, users I spoke with hadn’t felt the need to monitor their THORFi deposits. “Only your key can be used to withdraw your funds,” the product’s marketing materials insisted. “Savers can withdraw their position to native assets at any time.”

So on January 9, when the “leena” account announced that an admin key had been used to pause withdrawals, it took THORFi users by surprise—and seemed to contradict the marketing messaging around decentralization. “We were told that it was decentralized, and you wake up one morning and read an article that says ‘This guy, JP, had an admin mimir,’” says Treat, referring to Thorbjornsen, “and I’m like, ‘What the fuck is an admin mimir?’”

The admin mimir was one of “a bunch of hard-coded admin keys built into the base code of the system,” says Jonathan Reiter, CEO of the blockchain intelligence company ChainArgos. Those with access to the keys had the ability to make executive decisions on the blockchain—a function many THORChain users didn’t realize could supersede the more democratic decisions made by node votes. These keys had been in THORChain’s code for years and “let you control just about anything,” Reiter adds, including the decision to pause the network during the hacks in 2021 that resulted in a loss of more than $16 million in assets. 

Thorbjornsen says that one key was given to Nine Realms, while another was “shared around the original team.” He told me at least three people had them, adding, “I can neither confirm nor deny having access to that mimir key, because there’s no on-chain registry of the keys.”

Regardless of who had access, Thorbjornsen maintains that the admin mimir mechanism was “widely known within the community, and heavily used throughout THORChain’s history” and that any action taken using the keys “could be largely overruled by the nodes.” Indeed, nodes voted to open withdrawals back up shortly after the admin key was used to pause them.

By then, those burned by THORFi argue, the damage had already been done. The executive pause to withdrawals, for some, signaled that something was amiss with THORFi. This led to a bank run after the pause was lifted, until the nodes voted to freeze withdrawals permanently (which Thorbjornsen had suggested in a since-deleted post on X), separating users from crypto worth around $200 million in US dollars on January 23. THORFi users were then offered a token called TCY (THORChain Yield), which they could claim with the idea that, when its price rose to $1, they would be made whole. (The price, as of writing, sits at $0.16.)

Who used the key? Thorbjornsen maintains he didn’t do it, but he claims he knows who did and won’t say. He says he’d handed over the “leena” account and doesn’t “have access to any of the core components of the system,” nor has he for “at least three years.” He implies that whoever controlled “leena” at the time used the admin key to pause network withdrawals.

A video released by Nine Realms on February 20, 2025, names Thorbjornsen as the activator of the key, stating, “JP ended up pausing lenders and savers, preventing withdrawals so that we can work out … [a] payback plan on them.” Thorbjornsen told me the video was “not factual.”

Multiple blockchain analysts told me it would be extremely difficult to determine who used the admin mimir key. A month after it was used to pause the network, THORChain said the key had been “removed from the network.” At least you can’t find the words “admin mimir” in THORChain’s base code; I’ve looked. 

Culpability

After the debacle of the THORFi withdrawal freeze, Richartz says, he tried to file reports with the Miami-Dade Police Department, the Florida Department of Law Enforcement, the FBI, the Securities and Exchange Commission, the Commodity Futures Trading Commission, the Federal Trade Commission, and Interpol. When we spoke in November, he still hadn’t been able to file with the city of Miami. They told him to try small claims court.

“I was like, no, you don’t understand … a post office box in Switzerland is the company address,” he says. “It underscored to me how little law enforcement even knows about these crimes.” 

As for the Bybit hack, at least one government has retaliated against those who facilitate blockchain projects. Last April, German authorities shut down eXch, an exchange suspected of using THORChain to process funds Lazarus stole from Bybit, says Julia Gottesman, cofounder and head of investigations at the cybersecurity group zeroShadow. Australia, she adds, where Thorbjornsen was based, has been “slow to try to engage with the crypto community, or any regulations.”

Illustration: a character with his pockets turned out shrugs next to his helicopter while wearing meme sunglasses. KAGAN MCLEOD

In response to requests for comment, Australia’s Department of Home Affairs wrote that at the end of March 2026, the country’s regulatory powers will expand to include “exchanges between the same type of cryptocurrency and transfers between different types.” They did not comment on specific investigations.

Crypto and finance experts disagree about whether THORChain engaged in money laundering, defined by the UN as “the processing of criminal proceeds to disguise their illegal origin.” But some think it fits the definition.

Shlomit Wagman, a Harvard fellow and former head of Israel’s anti-money-laundering agency and its delegation to the Financial Action Task Force (FATF), thinks the Bybit activity was money laundering because THORChain helped the hackers “transfer the funds in an unsupervised manner, completely outside of the scope of regulated or supervised activity.” 

And by helping with conversions, Carlsen says, THORChain enabled bad actors to turn stolen crypto into usable currency. “People like [Thorbjornsen] have a personal degree of culpability in sustaining the North Korean government,” he says. Thorbjornsen counters that THORChain is “open-source infrastructure.”

Meanwhile, just days after the hack, Bybit issued a 10% bounty on any funds recovered. As of mid-January this year, between $100 million and $500 million worth of those funds in US dollars remain unaccounted for, according to Gottesman of zeroShadow, which was hired by Bybit to recover funds following the hack.

Thorbjornsen hacked

For Thorbjornsen, it’s just another day at the casino. That’s the comparison he made during his regrettable 7 News Spotlight interview about the Bybit heist, and he repeated it when we met. “You go to a casino, you play a few games, you expect to lose,” he told me. “When you do actually go to zero, don’t cry.”

Thorbjornsen, it should be noted, has lost at the casino himself.

In September, he says, he got a Telegram message from a friend, inviting him to a Zoom meeting. He accepted and participated in a call with people who had “American voices.”

Ultimately, Thorbjornsen describes himself as a guy who’s had a bad year, fending off “threat vectors” left and right.

After the meeting, Thorbjornsen learned that his friend’s Telegram had been hacked. Whoever was responsible had used the Zoom link to remotely install software on Thorbjornsen’s computer, which “got access to everything”—his email, his crypto wallets, a bitcoin-based retirement fund. It cost him at least $1.2 million. The blockchain sleuth known as ZachXBT traced the funds and attributed the hack to North Korea. 

ZachXBT called it “poetic.”

Ultimately, Thorbjornsen describes himself as a guy who’s had a bad year. He says he had to liquidate his crypto assets because he’s dealing with the fallout of a recent divorce. He also feels he is fending off “threat vectors” left and right. More than once, he asked if I was a private investigator masquerading as a journalist.

Still, his many contradictions don’t inspire confidence. He doesn’t have any more crypto assets, he says. However, the crypto wallet he shared with me so I could pay him back for lunch showed that it contained assets worth more than $143,000 in US dollars. He now says it wasn’t his wallet. He says he doesn’t control THORChain’s social media, but he’d also told me that he runs the @THORChain X account (later backtracking and saying the account is “delegated” to him for trickier questions).

He insists that he does not care about money. He says that in the robot future, the AI-powered hive mind will become our benevolent overlord, rendering money obsolete, so why give it much thought? Yet as we flew back from the vineyard, he pointed out his new house from the helicopter. It resembles a compound. He says he lives there with his new wife. 

Multiple people I spoke with about Thorbjornsen before I met him warned me he wasn’t trustworthy, and he’s undeniably made fishy statements. For instance, the presence of a North Korean flag in a row of decals on the tail of his helicopter suggested an affinity with the country for which THORChain has processed so much crypto. Thorbjornsen insists he had requested the flag of Australia’s Norfolk Island, calling the mix-up “a complete coincidence.” The flags were gone by the time of our flight, apparently removed during a recent repair.

“Money is a meme,” he says. “Money does not exist.” Meme or not, it’s had real repercussions for those who have interacted with THORChain, and those who wound up losing have been looking for someone to blame. 

When I spoke with Thorbjornsen again in January, he appeared increasingly concerned that he is that someone. He’s spending more time in Singapore, he told me. Singapore has historically declined to extradite people to the US to face money-laundering prosecutions.

Jessica Klein is a Philadelphia-based freelance journalist covering intimate partner violence, cryptocurrency, and other topics.

The Download: a blockchain enigma, and the algorithms governing our lives

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Welcome to the dark side of crypto’s permissionless dream

Jean-Paul Thorbjornsen, an Australian man in his mid-30s, with a rural Catholic upbringing, is a founder of THORChain, a blockchain through which users can swap one cryptocurrency for another and earn fees from making those swaps.

THORChain is permissionless, so anyone can use it without getting prior approval from a centralized authority. As a decentralized network, the blockchain is built and run by operators located across the globe. During its early days, Thorbjornsen himself hid behind the pseudonym “leena” and used an AI-generated female image as his avatar. But around March 2024, he revealed his true identity as the mind behind the blockchain. More or less.

If there is a central question around THORChain, it is this: Exactly who is responsible for its operations? It matters because in January last year, users lost more than $200 million worth of cryptocurrency after THORChain transactions and accounts were frozen by a single admin override, an intervention users believed was not supposed to be possible given the blockchain’s decentralized structure.

Thorbjornsen insists THORChain is helping realize bitcoin’s original purpose of enabling anyone to transact freely outside the reach of purportedly corrupt governments. Yet the network’s problems suggest that an alternative financial system might not be much better. Read the full story.

—Jessica Klein

The robots who predict the future

To be human is, fundamentally, to be a forecaster. Occasionally a pretty good one. Trying to see the future, whether through the lens of past experience or the logic of cause and effect, has helped us hunt, avoid being hunted, plant crops, forge social bonds, and in general survive in a world that does not prioritize our survival.

Today, we are awash in a sea of predictions so vast and unrelenting that most of us barely even register them. People’s desire for reliable forecasting is understandable. Still, nobody signed up for an omnipresent, algorithmic oracle mediating every aspect of their life. A trio of new books tries to make sense of our future-focused world—how we got here, and what this change means. Each has its own prescriptions for navigating this new reality, but they all agree on one thing: Predictions are ultimately about power and control. Read the full story.

—Bryan Gardiner

These stories are both from the next print issue of MIT Technology Review magazine, which is all about crime. If you haven’t already, subscribe now to receive future issues once they land. 

MIT Technology Review Narrated: Stratospheric internet could finally start taking off this year

Today, an estimated 2.2 billion people still have either limited or no access to the internet, largely because they live in remote places. But that number could drop this year, thanks to tests of stratospheric airships, uncrewed aircraft, and other high-altitude platforms for internet delivery.

This is our latest story to be turned into a MIT Technology Review Narrated podcast, which we’re publishing each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Mark Zuckerberg is due to give evidence in a major social media addiction trial
He’ll face questioning over whether Meta does enough to protect young users. (CNN)

2 Perplexity has abandoned ads inside its chatbot responses
Because advertising can erode trust in AI, it reasons. (FT $)
+ It’s a pretty big U-turn considering its previous stance. (The Verge)

3 The US is being battered by a range of wild weather
From critical wildfire risks in some states, to winter storms in others. (WP $)

4 Microsoft plans to spend $50 billion bringing AI to the Global South by 2030
India is one of the fastest growing markets for the technology. (Reuters)
+ One native startup has announced a new AI model for 22 Indian languages. (Bloomberg $)
+ Inside India’s scramble for AI independence. (MIT Technology Review)

5 AI-powered private schools are failing students
Models are being used to generate faulty lesson plans. (404 Media)

6 Land owners are selling out to data center builders
Land previously earmarked for housing is being sold off to the highest bidder. (WSJ $)

7 Tesla has agreed to stop using the term “autopilot” in California
The DMV had previously also questioned its use of “Full Self-Driving.” (SF Chronicle $)

8 A new weight-loss drug may work a little too well
Participants in a trial are dropping out at a much higher rate than normal. (NYT $)
+ Intermittent fasting may not help us to shed the pounds after all. (New Scientist $)
+ What we still don’t know about weight-loss drugs. (MIT Technology Review)

9 Is anyone still using Grindr?
Bots and AI have rendered it virtually unusable for some people. (Vox)

10 How to hack your dreams
Neuroscientists are figuring out new ways to influence what we dream about. (New Scientist $)
+ I taught myself to lucid dream. You can too. (MIT Technology Review)

Quote of the day

“I voted for this administration and didn’t really think about [AI] until it started to affect me.”

—Lisa Garrett, a grandmother living in the city of Independence, Missouri, reflects on the Trump administration’s decision to embrace AI, the Financial Times reports.

One more thing

Hydrogen trains could revolutionize how Americans get around

Like a mirage speeding across the dusty desert outside Pueblo, Colorado, the first hydrogen-fuel-cell passenger train in the United States is getting warmed up on its test track. It will soon be shipped to Southern California, where it is slated to carry riders on San Bernardino County’s Arrow commuter rail service before the end of the year.

The best way to decarbonize railroads is the subject of growing debate among regulators, industry, and activists. The debate is partly technological, revolving around whether hydrogen fuel cells, batteries, or overhead electric wires offer the best performance for different railroad situations. But it’s also political: a question of the extent to which decarbonization can, or should, usher in a broader transformation of rail transportation.

In the insular world of railroading, this hydrogen-powered train is a Rorschach test. To some, it represents the future of rail transportation. To others, it looks like a big, shiny distraction. Read the full story.

—Benjamin Schneider

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ How to quickly declutter your home by being brutally honest with yourself.
+ The filming locations for A Knight of the Seven Kingdoms are pretty breathtaking.
+ Why a unicyclist decided to start juggling flaming torches in the middle of a Colorado pedestrian crossing is anyone’s guess, but good luck to him.
+ How pepper took over the world (deservedly).

Google DeepMind wants to know if chatbots are just virtue signaling

Google DeepMind is calling for the moral behavior of large language models—such as what they do when called on to act as companions, therapists, medical advisors, and so on—to be scrutinized with the same kind of rigor as their ability to code or do math.

As LLMs improve, people are asking them to play more and more sensitive roles in their lives. Agents are starting to take actions on people’s behalf. LLMs may be able to influence human decision-making. And yet nobody knows how trustworthy this technology really is at such tasks.

With coding and math, you have clear-cut, correct answers that you can check, William Isaac, a research scientist at Google DeepMind, told me when I met him and Julia Haas, a fellow research scientist at the firm, for an exclusive preview of their work, which is published in Nature today. That’s not the case for moral questions, which typically have a range of acceptable answers: “Morality is an important capability but hard to evaluate,” says Isaac.

“In the moral domain, there’s no right and wrong,” adds Haas. “But it’s not by any means a free-for-all. There are better answers and there are worse answers.”

The researchers have identified several key challenges and suggested ways to address them. But it is more a wish list than a set of ready-made solutions. “They do a nice job of bringing together different perspectives,” says Vera Demberg, who studies LLMs at Saarland University in Germany.

Better than “The Ethicist”

A number of studies have shown that LLMs can show remarkable moral competence. One study published last year found that people in the US scored ethical advice from OpenAI’s GPT-4o as being more moral, trustworthy, thoughtful, and correct than advice given by the (human) writer of “The Ethicist,” a popular New York Times advice column.  

The problem is that it is hard to unpick whether such behaviors are a performance—mimicking a memorized response, say—or evidence that there is in fact some kind of moral reasoning taking place inside the model. In other words, is it virtue or virtue signaling?

This question matters because multiple studies also show just how untrustworthy LLMs can be. For a start, models can be too eager to please. They have been found to flip their answer to a moral question and say the exact opposite when a person disagrees or pushes back on their first response. Worse, the answers an LLM gives to a question can change in response to how it is presented or formatted. For example, researchers have found that models quizzed about political values can give different—sometimes opposite—answers depending on whether the questions offer multiple-choice answers or instruct the model to respond in its own words.

In an even more striking case, Demberg and her colleagues presented several LLMs, including versions of Meta’s Llama 3 and Mistral, with a series of moral dilemmas and asked them to pick which of two options was the better outcome. The researchers found that the models often reversed their choice when the labels for those two options were changed from “Case 1” and “Case 2” to “(A)” and “(B).”

They also showed that models changed their answers in response to other tiny formatting tweaks, including swapping the order of the options and ending the question with a colon instead of a question mark.

In short, the appearance of moral behavior in LLMs should not be taken at face value. Models must be probed to see how robust that moral behavior really is. “For people to trust the answers, you need to know how you got there,” says Haas.

More rigorous tests

What Haas, Isaac, and their colleagues at Google DeepMind propose is a new line of research to develop more rigorous techniques for evaluating moral competence in LLMs. This would include tests designed to push models to change their responses to moral questions. If a model flipped its moral position, it would show that it hadn’t engaged in robust moral reasoning. 

Another type of test would present models with variations of common moral problems to check whether they produce a rote response or one that’s more nuanced and relevant to the actual problem that was posed. For example, asking a model to talk through the moral implications of a complex scenario in which a man donates sperm to his son so that his son can have a child of his own might produce concerns about the social impact of allowing a man to be both biological father and biological grandfather to a child. But it should not produce concerns about incest, even though the scenario has superficial parallels with that taboo.

Haas also says that getting models to provide a trace of the steps they took to produce an answer would give some insight into whether that answer was a fluke or grounded in actual evidence. Techniques such as chain-of-thought monitoring, in which researchers listen in on a kind of internal monologue that some LLMs produce as they work, could help here too.

Another approach researchers could use to determine why a model gave a particular answer is mechanistic interpretability, which can provide small glimpses inside a model as it carries out a task. Neither chain-of-thought monitoring nor mechanistic interpretability provides perfect snapshots of a model’s workings. But the Google DeepMind team believes that combining such techniques with a wide range of rigorous tests will go a long way to figuring out exactly how far to trust LLMs with certain critical or sensitive tasks.  

Different values

And yet there’s a wider problem too. Models from major companies such as Google DeepMind are used across the world by people with different values and belief systems. The answer to a simple question like “Should I order pork chops?” should differ depending on whether or not the person asking is vegetarian or Jewish, for example.

There’s no solution to this challenge, Haas and Isaac admit. But they think that models may need to be designed either to produce a range of acceptable answers, aiming to please everyone, or to have a kind of switch that turns different moral codes on and off depending on the user.

“It’s a complex world out there,” says Haas. “We will probably need some combination of those things, because even if you’re taking just one population, there’s going to be a range of views represented.”

“It’s a fascinating paper,” says Danica Dillion at Ohio State University, who studies how large language models handle different belief systems and was not involved in the work. “Pluralism in AI is really important, and it’s one of the biggest limitations of LLMs and moral reasoning right now,” she says. “Even though they were trained on a ginormous amount of data, that data still leans heavily Western. When you probe LLMs, they do a lot better at representing Westerners’ morality than non-Westerners’.”

But it is not yet clear how we can build models that are guaranteed to have moral competence across global cultures, says Demberg. “There are these two independent questions. One is: How should it work? And, secondly, how can it technically be achieved? And I think that both of those questions are pretty open at the moment.”

For Isaac, that makes morality a new frontier for LLMs. “I think this is equally as fascinating as math and code in terms of what it means for AI progress,” he says. “You know, advancing moral competency could also mean that we’re going to see better AI systems overall that actually align with society.”

New Ecommerce Tools: February 18, 2026

Our rundown this week of new services for merchants includes livestreaming, product videos, Reddit Ads, predictive analytics, AI-powered ads, discounted shipping, B2B, account-to-account transactions, and AI voice agents.

Got an ecommerce product release? Email updates@practicalecommerce.com.

New Tools for Merchants

Intuit Mailchimp releases data-driven tools for ecommerce marketing. Intuit has announced a set of new products for Mailchimp. Site Tracking Pixel connects to review platforms such as Yotpo and Judge.me to pull consented-to ecommerce and sentiment data. Predictive analytics spot high-value and at-risk customers. AI-powered tools build on-brand content, use reusable templates, and integrate with ChatGPT. Additional features include expanded SMS and transactional messaging, an omnichannel marketing dashboard, and enhanced migration tools.

Home page of Intuit Mailchimp

Klarna launches on Google Pay in the U.K. Klarna, a digital bank and buy-now pay-later provider, is now available on Google Pay in the U.K. Google Pay users can choose Klarna’s interest-free payment options at checkout and then manage deliveries, returns, and repayments in the Klarna app.

Xnurta and Front Row partner on Amazon advertising and retail media. Xnurta, an AI-powered advertising platform, and Front Row, an ecommerce agency, have partnered to accelerate performance in Amazon advertising and retail media. Front Row will leverage Xnurta’s agentic AI ad management platform to empower brands with advanced automation, performance insights, and AI-assisted campaign management across retail media.

eBay Live launches in Canada. eBay Live, an interactive shopping experience, has launched in Canada. Shoppers can ask questions, see items up close, and shop instantly on mobile, desktop, or the eBay app. Shoppers can preview the eBay Live programming schedule and sign up for reminders when each stream starts. Launched in the U.S. in 2022, eBay Live has since expanded to Australia, France, Germany, Italy, and the U.K.

Shirofune introduces Reddit Ads integration. Shirofune, an advertising automation platform, has announced the addition of Reddit Ads to its supported channels. The integration allows advertisers and agencies to plan, manage, and optimize Reddit campaigns within the same unified Shirofune interface they use for search, social, and ecommerce media. Advertisers can monitor and adjust Reddit campaigns from a central dashboard, automate budgets and bids, and combine data with other platforms.

Home page of Shirofune

Amazon launches Pay by Bank in the U.K. Amazon.co.uk has announced Pay by Bank, an account-to-account payment method that allows customers to complete purchases by connecting to their banks. Security is maintained through customers’ own banking apps, using their established biometric authentication or PIN verification systems. The service includes expedited refund processing — returning funds within minutes after Amazon confirms receipt of returned items.

ROAS Suite launches video ad platform for ecommerce. ROAS Suite, an AI ad production platform, has launched a video ad generation tool. The platform uses a store’s URL to produce structured ad variants for Meta and YouTube. It analyzes the storefront to build brand-specific creative — extracting logos, fonts, color palettes, product catalog data, and market positioning. The platform then generates ad assets tailored to customer segments and funnel stages.

Splio launches AI-powered CRM. Splio, an omnichannel marketing and loyalty platform, has launched its AI-enabled CRM platform powered by Tinyclues, a predictive AI solution for personalized communications across channels, including email, text, and WhatsApp. Splio has also unveiled Ask My CRM, an AI agent designed as an intelligent marketing copilot, plugged into each brand’s customer data for CRM management. By making prediction the heart of its AI-powered CRM, the platform helps brands to drive personalization at scale, according to Splio.

Easyship launches discounted FedEx shipping for Canadian merchants. Easyship, a multi-carrier shipping software and cross-border API platform for ecommerce, has launched discounted FedEx shipping services for Canadian merchants. The offering includes access to FedEx Ground for domestic shipments and FedEx International Connect Plus for cross-border deliveries, along with a suite of premium FedEx services, available directly through the Easyship platform with no minimum volume or separate FedEx requirements.

Home page of Easyship

OroCommerce partners with Azilen Technologies on B2B commerce. OroCommerce, a B2B commerce platform, has partnered with Azilen Technologies, a software developer. According to OroCommerce, the collaboration strengthens its partner network with an engineering-led firm capable of delivering large-scale, high-complexity B2B commerce software across North America, Europe, and APAC. Azilen will leverage OroCommerce’s architecture and automation capabilities to help organizations unify B2B and B2C models, streamline high-touch sales processes, and enhance customer self-service.

Google unveils shopping ad format in AI Mode. Google has introduced a shopping ad format for AI Mode, its conversational search experience where users can compare products, brands, and stores. According to Google, the new ad format is an opportunity for retailers to enter the conversation and appear in key moments of discovery. The format will roll out soon for Shopping and Performance Max campaigns, per Google.

Newo raises $25 million to scale AI voice infrastructure for small businesses. Newo, a startup building human-like AI-powered voice agents, announced a $25 million funding round led by Ratmir Timashev, co-founder of Veeam Software and Oh.io. Newo plans to use the capital to accelerate product development, expand its partner ecosystem, and scale go-to-market efforts to meet growing demand from SMB-focused service providers.

Salesforce acquires Cimulate for AI-powered product discovery. Salesforce has agreed to acquire Cimulate, an AI-powered intent-aware context engine for retail. Cimulate’s platform combines real and simulated shopper journey data to understand intent, enabling relevant search results and personalized discovery experiences. Salesforce states the acquisition will strengthen its Agentforce Commerce by accelerating improvements to search and discovery.

Home page of Cimulate

ChatGPT Search Often Switches To English In Fan-Out Queries: Report via @sejournal, @MattGSouthern

When ChatGPT Search builds an answer, it can generate background web queries to find sources. A new report from AI search analytics firm Peec AI found that a large share of those background queries run in English, even when the original prompt was in another language.

Peec AI analyzed over 10 million prompts and 20 million fan-out queries from its platform data. Across all non-English prompts analyzed, the company reports that 43% of the fan-out steps were conducted in English.

What Are Fan-Out Queries

OpenAI’s ChatGPT Search documentation describes fan-out queries. When a user asks a question, ChatGPT Search “typically rewrites your query into one or more targeted queries” and sends them to search partners. After reviewing initial results, “ChatGPT search may send additional, more specific queries to other search providers.”

Peec AI refers to these rewritten sub-queries as “fan-outs.” The company’s report tracked which languages ChatGPT used when generating them.

OpenAI’s documentation does not describe how language is chosen for rewritten queries.

What Peec AI Found

Peec AI filtered its data to include only cases where the IP location matched the prompt language: Polish-language prompts from Polish IP addresses, German-language prompts from German IPs, and Spanish-language prompts from Spanish IPs. Mixed signals, such as German-language prompts from UK IP addresses, were excluded.

The filtered data showed that 78% of non-English prompt runs included at least one English-language fan-out query.

Turkish-language prompts included English fan-outs most often, at 94%. Spanish-language prompts were lowest, at 66%. No non-English language in Peec AI’s dataset fell below 60%.

Peec AI’s data showed a consistent pattern across languages. ChatGPT typically starts its fan-out queries in the prompt’s language, then adds English-language queries as it builds the response.
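The report’s two headline figures are simple aggregates over logged fan-out data: the share of individual fan-out queries that ran in English, and the share of prompt runs containing at least one English fan-out. A minimal sketch of how such metrics could be computed (the records and language labels below are hypothetical, not Peec AI’s actual data or code):

```python
# Hypothetical logged records: each holds the prompt's language and the
# detected language of every fan-out query in that run.
records = [
    {"prompt_lang": "pl", "fanout_langs": ["pl", "en", "en"]},
    {"prompt_lang": "de", "fanout_langs": ["de", "de", "en"]},
    {"prompt_lang": "es", "fanout_langs": ["es", "es"]},
    {"prompt_lang": "tr", "fanout_langs": ["en", "en"]},
]

# Keep only non-English prompts, mirroring the report's filter.
non_english = [r for r in records if r["prompt_lang"] != "en"]

# Share of individual fan-out queries conducted in English.
all_fanouts = [lang for r in non_english for lang in r["fanout_langs"]]
english_query_share = all_fanouts.count("en") / len(all_fanouts)

# Share of prompt runs that included at least one English fan-out.
runs_with_english = sum(1 for r in non_english if "en" in r["fanout_langs"])
english_run_share = runs_with_english / len(non_english)

print(f"English fan-out queries: {english_query_share:.0%}")
print(f"Runs with at least one English fan-out: {english_run_share:.0%}")
```

On this toy data the sketch reports a 50% English query share and a 75% per-run rate; Peec AI’s corresponding published figures were 43% and 78% across its far larger dataset.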

Examples From The Report

Peec AI’s blog post included several examples showing how the pattern can play out in practice.

When prompted in Polish from a Polish IP address about the best auction portals, ChatGPT either omitted or buried Allegro.pl in favor of eBay and other global platforms. Peec AI describes Allegro as Poland’s dominant ecommerce platform.

When prompted in German about German software companies, Peec AI reported the response listed no German companies. When prompted in Spanish about cosmetics brands, no Spanish brands appeared.

In the Spanish cosmetics example, Peec AI showed ChatGPT’s actual fan-out queries. The first ran in English. The second ran in Spanish but added the word “globales” (global), a qualifier the original prompt never used. The system appears to have interpreted a Spanish-language prompt from a Spanish IP address as a request for global brands.

These are individual examples from Peec AI’s testing, not necessarily representative of all ChatGPT Search behavior.

Why This Matters

SEO and content teams operating in non-English markets may face a disadvantage in ChatGPT’s source selection, one that doesn’t map cleanly to traditional ranking signals. In Peec AI’s examples, English-language fan-out queries surfaced English-language sources that favored global brands over local competitors.

We’ve been covering ChatGPT’s citation patterns for over a year now, from SE Ranking’s report on citation factors to the Tow Center’s attribution accuracy findings. Those earlier reports showed which signals predict whether a source gets cited. Peec AI’s data suggests the language of the background query may filter which sources are even considered, before citation signals come into play.

Methodology Notes

Peec AI is a vendor in the AI search analytics space. The company’s documentation describes its data collection method as running customer-defined prompts daily via browser automation, interacting with AI platforms through their web interfaces rather than APIs. The 10 million prompts in this report came from Peec AI’s platform, not from a panel of consumer ChatGPT sessions.

The report didn’t detail the composition of those prompts, what categories or industries they covered, or how representative they are of broader ChatGPT usage patterns.

Tomek Rudzki, the report’s author, is presented by Peec AI as a “GEO Expert” on its blog. He is a well-known technical SEO practitioner who has spoken at BrightonSEO and SMX Munich and contributed to publications such as Moz.

Looking Ahead

OpenAI’s public ChatGPT Search docs describe query rewriting and follow-up queries but don’t explain how language is chosen for those queries. Whether the English fan-out pattern Peec AI identified is an intentional design choice or an emergent behavior of the system remains unclear.

The report raises a question worth monitoring. Will building English-language content become part of AI search optimization strategies, or will AI search platforms adjust their source selection to better reflect local markets?
