Expat Money CEO on Moving Abroad

In “How to Leave the U.S.A.,” the venerable New Yorker magazine recently addressed a question many U.S. residents have apparently considered.

Yet Mikkel Thorup has lived outside his native Canada for 25 years. He’s visited 120 countries and resided in nine of them. His business, Expat Money, helps others do the same while protecting assets and lifestyle.

Why relocate overseas? What are the risks and the rewards? Mikkel answered those questions and more in our recent conversation.

Our entire audio is embedded below. The transcript is edited for clarity and length.

Eric Bandholz: What do you do?

Mikkel Thorup: I am the founder and CEO of Expat Money, a consulting firm that helps people relocate to a foreign country. We focus on international tax planning, immigration, foreign investment, and global structuring, as well as the lifestyle adjustments that come with living abroad.

I’ve been an expatriate for 25 years, visited 120 countries, lived in nine, and circled the globe many times. My family, business, and hobbies are all international. I love the work and am excited to talk about it.

I was born and raised in Canada, which does not impose citizenship-based worldwide taxation. Once you leave and cut residential ties with the Canada Revenue Agency, you’re free to live abroad without ongoing Canadian tax obligations.

For Americans, it’s very different. The IRS levies taxes based on citizenship, not residency. No matter where you live or how long you’re gone, the IRS wants a portion of every dollar you earn. Only two countries tax this way: the United States and Eritrea, a small African nation.

Americans can sometimes avoid tax if they earn below standard thresholds, but anyone with meaningful income — whether living in the U.S. or abroad — remains subject to the IRS.

Renouncing citizenship is an option if you want to end all U.S. reporting requirements, but it’s a deeply personal decision and not something I generally recommend. Some people choose it, and we assist clients with the process, but most of our work does not require giving up citizenship.

We help Americans move overseas all the time, and there are legal tools that can significantly reduce their tax burden. I’m not giving individual tax advice here, but there are viable strategies available. Still, at higher wealth levels, those tools eventually hit limits, so it’s important to understand what’s possible.

Everything we do follows the law. My goal is to help people gain more freedom, not less, and that means full compliance with the IRS, U.S. Treasury, and all reporting rules. I have no interest in ending up in an orange jumpsuit, and I don’t want that for clients either.

Bandholz: Do people come to you mainly for freedom or to reduce obligations?

Thorup: Most clients want a “Plan B,” an economic backup. They’re productive people, typically in two groups: about half are highly paid professionals such as doctors, lawyers, and accountants, and the other half are business owners or entrepreneurs, such as consultants and Amazon sellers.

For many, the goal is preparing an exit option in case things get bad enough that they want to leave. Others feel things are already bad and choose to relocate now, often to the Caribbean or Latin America for more freedom, lower taxes, safer communities, and better weather. When they make that move, opportunities open up quickly.

But leaving isn’t required. Plenty stay in the U.S., Canada, the U.K., or elsewhere while setting up offshore components — bank accounts, property, company structures, or residency options. Others go all-in and decide to work from a beach somewhere. My job is to create those legal, compliant structures so they have choices, whether they stay or go.

Around 90% of my clients are from the U.S. and Canada; the remaining 10% are mainly from Europe or Australia. Latin America and the Caribbean are the top destinations because that’s where people often find the most freedom — pro-business, low taxes, and governments that welcome foreign investment.

Bandholz: How can someone protect assets if a government freezes accounts?

Thorup: Bitcoin is one tool — specifically, self-custody Bitcoin, not coins held on exchanges such as Kraken or Binance. If you don’t control the keys, you don’t control the coins. I’ve used Bitcoin since 2016, and it’s useful, but it’s not the only solution.

Offshore bank accounts are another strong option. That means holding a bank account in a country where you’re not a resident. Debanking, where financial institutions terminate services, happens more often than people think, even in one’s own country.

Every adult, company, or trust should have bank accounts in three countries, each with a different currency and legal system. If a home-country bank freezes or closes your account, you have alternatives.

Properly structured offshore accounts make it much harder for lawsuits or government actions to reach your money. Asset forfeiture and account freezes happen, and they’ll continue to happen, so planning is essential.

Bandholz: Where do people typically open offshore bank accounts?

Thorup: Offshore banking usually means choosing a country with low or zero taxes, strong asset-protection laws, and political stability. There’s no point banking in a place where you can’t reliably move money in or out. The most common offshore jurisdictions are in the Caribbean, the British Channel Islands, and the Isle of Man. In Europe, Liechtenstein, Luxembourg, and Switzerland serve that role. Hong Kong, Macau, and Singapore are popular in Asia.

Central America also has several strong options, such as Panama, where I live. It has no tax on foreign-sourced income, a stable banking sector, a U.S. dollar economy, and access to both the Caribbean and the Pacific.

Bandholz: Where should people start if they want to explore international options?

Thorup: I recommend three key fronts.

First, get a second citizenship or permanent residency. If you have European ancestry, you might qualify for citizenship by descent. If not, consider citizenship by investment or naturalization through long-term residency. If citizenship isn’t an option, permanent residency is fast, affordable, and effective in Paraguay, Costa Rica, or Panama.

Second, secure a second home. Even a small property provides a place to live if needed. Ideally, it can generate rental income. In Latin America, condos start around $65,000 and beachfront homes around $100,000, paid in cash, with clear property titles. These are long-lasting, tangible assets that protect wealth outside stocks or business accounts.

Third, hold capital offshore, whether in a bank, precious metals, or other assets. This ensures access to your money if domestic accounts are frozen or restricted due to politics or other issues.

Bandholz: Where can people follow you, connect with you?

Thorup: ExpatMoney.com. Follow my YouTube channel and connect on X or LinkedIn.

Google Updates Search Live With Gemini Model Upgrade via @sejournal, @martinibuster

Google has updated Search Live with Gemini 2.5 Flash Native Audio, upgrading how voice functions inside Search while also extending the model’s use across translation and live voice agents. The update introduces more natural spoken responses in Search Live and reflects Google’s effort to treat voice as a core interface: users get everything regular search offers, can ask questions about the physical world around them, and can receive immediate voice translations between two people speaking different languages.

The updated voice capabilities, rolling out this week in the United States, will make Google’s voice responses sound more natural, and they can even be slowed down for instructional content.

According to Google:

“When you go Live with Search, you can have a back-and-forth voice conversation in AI Mode to get real-time help and quickly find relevant sites across the web. And now, thanks to our latest Gemini model for native audio, the responses on Search Live will be more fluid and expressive than ever before.”

Broader Gemini Native Audio Rollout

This Search upgrade is part of a broader update to Gemini 2.5 Flash Native Audio rolling out across Google’s ecosystem, including Gemini Live (in the Gemini app), Google AI Studio, and Vertex AI. The model processes spoken audio in real time and produces fluid spoken responses, reducing friction in live conversations. Google’s announcement didn’t say whether the model is speech-to-speech (as opposed to speech-to-text followed by text-to-speech), but the update follows Google’s October announcement of Speech-to-Retrieval (S2R), which Google described as “a neural network-based machine-learning model trained on large datasets of paired audio queries.”

These changes show Google treating native audio as a core capability across consumer-facing products, making it easier for users to ask about the physical world around them and receive information in a natural, conversational way that wasn’t previously possible.

Improvements For Voice-Based Systems

For developers and enterprises building voice-based systems, Google says the updated model improves reliability in several areas. Gemini 2.5 Flash Native Audio more consistently triggers external functions during conversations, follows complex instructions, and maintains context across multiple turns. These improvements make live voice agents more dependable in real-world workflows, where misinterpreted instructions or broken conversational flow reduce usability.

Smooth Conversational Translation

Beyond Search and voice agents, the update introduces native support for “live speech-to-speech translation.” Gemini translates spoken language in real time, either by continuously translating ambient speech into a target language or by handling conversations between speakers of different languages in both directions. The system preserves vocal characteristics such as speech rhythm and emphasis, supporting translation that sounds smoother and conversational.

Google highlights several capabilities supporting this translation feature, including broad language coverage, automatic language detection, multilingual input handling, and noise filtering for everyday environments. These features reduce setup friction and allow translation to occur passively during conversation rather than through manual controls. The result is a translation experience that behaves much like an actual person in the middle translating between two people.

Voice Search Realizing Google’s Aspirations

The update reflects Google’s continued iteration on voice search toward an ideal originally inspired by the science-fiction voice interactions between humans and computers in the Star Trek television series and films.

Read More:

Google Announces A New Era For Voice Search

You can now have more fluid and expressive conversations when you go Live with Search.

Improved Gemini audio models for powerful voice interactions

Gemini Live

5 ways to get real-time help by going Live with Search

Featured Image by Shutterstock/Jackbin

How People Use Copilot Depends On Device, Microsoft Says via @sejournal, @MattGSouthern

How people use Microsoft Copilot depends on whether they’re at a desk or on their phone.

That is the core theme in the company’s analysis of 37.5 million Copilot conversations sampled between January and September.

The research examines consumer Copilot usage patterns across device types and time of day. The authors say they used machine-based classifiers to categorize conversations by topic and intent without any human review of the messages.
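
As a rough illustration of what automated topic labeling can look like, here is a toy keyword-based classifier. This is not Microsoft’s method: the paper’s classifiers are machine-learned and not public, and the topics and keywords below are hypothetical examples chosen only to show the idea of labeling conversations without human review.

```python
# Toy keyword classifier illustrating automated topic labeling.
# The topic names and keyword sets are hypothetical stand-ins,
# not Microsoft's actual taxonomy or classifier.
TOPIC_KEYWORDS = {
    "Health and Fitness": {"workout", "calories", "sleep", "symptom"},
    "Technology": {"python", "excel", "windows", "api"},
    "Work and Career": {"resume", "meeting", "interview", "salary"},
}

def classify(message: str) -> str:
    """Assign the topic whose keyword set overlaps the message most."""
    words = set(message.lower().split())
    scores = {topic: len(words & kw) for topic, kw in TOPIC_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "Other"
```

A real system would use trained models rather than keyword overlap, but the pipeline shape is the same: every message gets a machine-assigned label, and aggregate statistics are computed over those labels.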

What The Report Says

On mobile, Health and Fitness is the most common topic throughout the day.

The authors summarize the split this way:

“On mobile, health is the dominant topic, which is consistent across every hour and every month we observed, with users seeking not just information but also advice.”

Desktop usage follows a different rhythm. Technology leads as the top topic overall, but the researchers report that work-related conversations rise during business hours.

They describe “three distinct modes of interaction: the workday, the constant personal companion, and the introspective night.”

During the workday, the paper says:

  • Between 8 a.m. and 5 p.m., “Work and Career” overtakes “Technology” as the top topic on desktop.
  • Education and science topics rise during business hours compared to nighttime.

Outside business hours, the paper describes a shift toward more personal and reflective topics. For example, it reports that “Religion and Philosophy” rises in rank during late-night hours through dawn.

Programming conversations are more common on weekdays, while gaming rises on weekends. They also note a spike in relationship conversations on Valentine’s Day.

Methodology Caveats

A few limitations are worth keeping in mind.

This is a preprint, so it hasn’t been peer reviewed. It also focuses on consumer Copilot usage and excludes enterprise-authenticated traffic, so it doesn’t describe how Copilot is used inside Microsoft 365 at work.

Finally, the topic and intent labels come from automated classifiers, which means the results reflect how Microsoft’s system groups conversations, not a human-coded review.

Why This Matters

This paper suggests that the use of AI chatbots varies with context. The researchers describe mobile behavior as consistently health-oriented, while desktop behavior is more tied to the workday.

The researchers connect the mobile health pattern to how people use their phones. They write:

“This suggests a device-specific usage pattern where the phone serves as a constant confidant for physical well-being, regardless of the user’s schedule.”

The big takeaway is that “Copilot usage” is not one uniform behavior. Device and time of day appear to shape what people ask for, and how they ask it.

Looking Ahead

Enterprise usage patterns may look different, especially inside Microsoft 365. Any follow-up research that includes workplace contexts, or that validates these patterns outside Microsoft’s own tooling and taxonomy, would help clarify how broadly these findings apply.

SEO Pulse: December Core Update, Preferred Sources & Social Data via @sejournal, @MattGSouthern

The December 2025 core update is the main story this week.

Google confirmed a new broad ranking update, clarified how often core changes happen, expanded Preferred Sources in Top Stories, and started testing social performance data in Search Console Insights.

Here’s what matters for your work.

Google Releases December 2025 Core Update

Google has released the December 2025 core update, its third core update of the year.

Key Facts

The rollout started on December 11, and Google says it may take up to three weeks to complete. This follows the March and June core updates and comes two days after Google refreshed its core updates documentation to explain smaller, ongoing changes.

Why SEOs Should Pay Attention

If you see big swings in rankings or traffic over the next few weeks, this update is probably the cause.

Core updates are broad changes to how Google evaluates content. Pages can move up or down even if you haven’t changed anything on the site, because Google is reassessing your content against everything else in the index.

The timing matters. Earlier in the week, Google reminded everyone that smaller core updates happen all the time. The December core update sits on top of that layer. You’re dealing with both a visible event and quieter, continuous adjustments running underneath.

Right now, the best move is to watch your data rather than panic. Mark the rollout dates in your reporting. Track when things start to move for your key sections. Compare this behavior with what you saw during the March and June updates. That helps you separate core-update effects from seasonality, technical issues, or campaign changes.
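
Those bookkeeping steps can be sketched in a few lines. The helper names below are illustrative, and the end date is an assumption: Google only says the rollout may take “up to three weeks,” so treat the window as approximate.

```python
from datetime import date, timedelta

# Announced rollout start; the end date assumes the full three weeks.
ROLLOUT_START = date(2025, 12, 11)
ROLLOUT_END = ROLLOUT_START + timedelta(weeks=3)

def label_day(d: date) -> str:
    """Tag a reporting date relative to the core-update rollout window."""
    if d < ROLLOUT_START:
        return "pre-update"
    if d <= ROLLOUT_END:
        return "rollout"
    return "post-update"

def compare_clicks(daily_clicks: dict) -> dict:
    """Average daily clicks per phase, so movement during the rollout
    can be compared against the same site's pre-update baseline."""
    buckets = {}
    for d, clicks in daily_clicks.items():
        buckets.setdefault(label_day(d), []).append(clicks)
    return {phase: sum(v) / len(v) for phase, v in buckets.items()}
```

Feeding this a daily-clicks export (from Search Console or your analytics tool) gives you a simple pre/during/post comparison you can line up against the March and June windows.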

Over the longer term, this is another nudge toward content that shows clear expertise, purpose, and useful detail. The documentation change earlier in the week suggests those improvements can be recognized over time, not only when Google names a new core update.

What SEO Professionals Are Saying

Reactions on X focused on timing, expectations, and the kind of content that might come out ahead.

Some SEO professionals leaned into the holiday angle, joking that Google’s “Christmas update” could either deliver a gift or push sites “off a cliff” right before peak season. Others used the announcement to talk about human-written work, saying they hope this is the update where stronger, human-generated content gets more visibility.

There were also practical reads. A few people tied the update to recent delays in Search Console data, saying the backlog now makes more sense. Others pointed out that this is the third broad update in a year where Google is also investing heavily in AI systems, and that core updates now sit inside a bigger stack of changes rather than defining everything on their own.

Read our full coverage: Google Releases December 2025 Core Update

Google Confirms Smaller Core Updates Happen Continuously

Earlier in the week, Google updated its core updates documentation to spell out that ranking changes can happen between the named core updates.

Key Facts

The documentation now says Google makes smaller core updates on an ongoing basis, alongside the larger core updates it announces a few times a year. Google explained that this change is meant to clarify that sites can see ranking gains after making improvements without waiting for the next big announcement.

Smaller core updates were mentioned in a 2019 blog post, but this is the first time the concept appears directly in the core updates documentation.

Why SEOs Should Pay Attention

This answers a question that has been hanging over SEO for years. Recovery isn’t limited to moments when Google announces a core update. The new wording confirms that Google can reward improvements at any time as smaller updates roll out in the background.

If you’ve been holding back on site fixes or content work until “the next core update,” this is a good time to drop that pattern. You can ship improvements now, knowing there’s more than one window where Google might reassess your content.

The timing is interesting given this year’s release pattern. Until this week, the only named core updates in 2025 were the March and June releases, with several months between them. For sites hit early in the year, those gaps made it hard to know when changes might start to pay off. The December update adds another obvious checkpoint, but the documentation makes it clear that it isn’t the only one.

For reporting and communication, this supports a change from “wait for the next update” to “improve steadily and monitor continuously.” You still don’t need to chase every drop, but you can be more confident that sustained work has more than one chance to show up in the data.

What SEO Professionals Are Saying

Former Google search team member Pedro Dias summed up one common read, saying he thinks Google has finally reached a place where it doesn’t need to announce every core update separately. Others have connected the change to Google’s move toward layered ranking systems, where visible events are only one part of an ongoing stream of tweaks.

For you, that supports a slower, steadier approach. Instead of waiting for one moment to “fix” everything, you can keep tuning content and UX, and treat named core updates as checkpoints rather than the only chance to move.

Read our full coverage: Google Confirms Smaller Core Updates Happen Continuously

Google Expands Preferred Sources In Top Stories

Google is expanding Preferred Sources globally for English-language users, giving people more control over which outlets show up in Top Stories and similar news surfaces.

Key Facts

Preferred Sources lets people pick specific outlets they want to see more often when they browse news in Google Search. The feature is now rolling out to English-language users worldwide, with other supported languages planned for early next year. Google says people have already selected close to 90,000 different sources, from local blogs to large international publishers, and that users who mark a site as preferred tend to click through to it about twice as often.

Why SEOs Should Pay Attention

Preferred Sources gives you a direct way to turn casual readers into regulars inside Google’s own interfaces. If your site publishes timely coverage, you can now build a segment of people who have chosen to see more of your work in Top Stories.

That makes “choose us as a preferred source” another call to action you can test alongside email sign-ups and follow buttons. Some publishers are already creating simple guides that show readers how to add them and what changes once they do. You can take a similar approach, especially if you already have a loyal audience on site or through newsletters.

It’s also a signal that Google wants users to have more say in which outlets they see. For you, that means brand perception, clarity of coverage, and consistency matter a bit more, because people are deciding which sources they want in their feed instead of relying on a default mix.

What SEO Professionals Are Saying

On LinkedIn, several SEO professionals and content strategists pointed out that Preferred Sources mostly reinforces behavior that already exists.

Garrett Sussman notes that people tend to stick with outlets they trust. This feature simply makes that choice more visible and gives publishers another growth lever inside Google’s ecosystem.

If you work on news or frequently updated content, you can start treating Preferred Sources selection as its own metric. Watch how often people choose you, which articles tend to drive that choice, and how those readers behave over time.

Read our full coverage: Google Expands Preferred Sources & Publisher AI Partnerships

Google Tests Social Channel Insights In Search Console

Search Console is testing a feature that shows how your social channels perform in Google Search results.

Key Facts

Google announced a new experimental feature in Search Console that adds social performance data to the Search Console Insights report. It covers social profiles that Google has automatically associated with your site. For each connected profile, you can see clicks, impressions, top queries, trending content, and audience location.

The experiment is limited to a small set of properties, and you can’t manually add profiles. The feature only appears if Search Console detects your channels and prompts you to link them.

Why SEOs Should Pay Attention

Up to now, you’ve probably watched search performance for your site and your social channels in separate tools. This experiment pulls both into one place, which can save time and make it easier to see how people move between your website and your social profiles.

The new data shows which queries lead people to your social profiles, which posts tend to surface in search, and which markets use Google to find you on social platforms. That’s useful if you run campaigns where organic search, social content, and creator work all overlap.

The main limitation is access. If you don’t see a prompt in Search Console Insights asking you to connect detected social channels, your site isn’t in the initial test group. Still, it’s worth logging as a feature to watch, especially if you already spend time explaining how social content shows up for branded and navigational queries.

What SEO Professionals Are Saying

Reactions on LinkedIn focused on two main points. People liked the idea of a single view of website and social performance, and they quickly started asking when similar data might be available for AI Overviews, AI Mode, and other search experiences.

Others raised questions about coverage. Some practitioners want to know whether this data will stay limited to Google-owned properties or expand to platforms like Instagram, LinkedIn, and X. There’s also curiosity about how Google detects and links social profiles in the first place, and whether structured data or Knowledge Graph entities play a role.

Read our full coverage: Google Tests Social Channel Insights In Search Console

Theme Of The Week: Core Updates At Two Speeds

The common thread this week is movement at two speeds.

At one speed, you have the December 2025 core update. It’s a visible event with a clear start date, a multi-week rollout, and a lot of attention. At the other speed, you have the quieter changes around it.

Google has now said directly that smaller core updates happen all the time. Preferred Sources gives users more control over which outlets they see. Social insights start to connect website and social performance in one view.

For you, this means there’s no single moment when everything gets decided. Core updates still matter and can cause sharp movements, but they sit inside an environment where improvements can pay off gradually and where readers are making more explicit choices about who they want to hear from.

The practical response is to treat this as an ongoing feedback loop. Keep improving content and UX. Watch how those changes behave during calm periods and during core updates. Encourage your most engaged readers to mark you as a preferred source where they can. Keep an eye on how search and social interact for your brand. That way, you’re ready for both speeds.




Featured Image: Pixel-Shot/Shutterstock

PPC Pulse: Google Data Manager API, YouTube Shorts, LinkedIn Reserved Ads via @sejournal, @brookeosmundson

The PPC platforms rolled out a few meaningful updates this week that shape how we measure, plan, and buy media.

Google introduced a new API that makes it easier to bring first-party data into Ads. YouTube shared improvements to the Shorts advertising experience. LinkedIn launched Reserved Ads to give advertisers more control over pricing and delivery.

Here is what stood out and why these updates matter for day-to-day execution.

Google Launches Data Manager API

Google announced the Data Manager API, a new way for advertisers to push their offline conversions and business data directly into Google Ads. The goal is to make measurement setups simpler and more reliable, especially as more teams rely on modeled conversions.

According to Google, the API helps advertisers turn first party data into performance signals that Smart Bidding can use. It also removes some of the friction that previously made offline tracking complicated.

Ginny Marvin, Google Ads Liaison, added helpful context on LinkedIn where she noted that this update is designed to support more flexible measurement setups across platforms and internal systems.

Screenshot taken by author, December 2025

If you manage accounts with sales teams, long consideration cycles, or mixed online and offline activity, this is a welcome step. Better data pipelines usually translate to better bidding performance.

It also signals that Google is prioritizing easier paths for advertisers who have struggled to adopt accurate conversion tracking.

Why this matters for advertisers

Platforms continue to raise the bar on first-party data. Advertisers who rely on spreadsheets, uploads, or manual CRM processes will fall behind.

The API helps teams move closer to real-time signals, which Smart Bidding depends on. It also reduces the gap between what actually happens in the business and what Google sees inside Ads.

This update gives advanced teams more flexibility, and it gives mid-sized teams a way to clean up measurement issues that have slowed performance.

YouTube Shorts Rolls Out New Ad Experience

YouTube shared several updates to help advertisers get more out of Shorts during the holiday season.

Google highlighted Kantar research showing that YouTube Creator Ads on Shorts increase purchase intent by 8.8% on average and drive higher consumer intent to spend compared to competitors.

The new updates focus on making Shorts ads feel closer to the organic experience while giving brands more ways to guide user action. The main updates include:

  • Google is introducing comments on eligible Shorts ads so brands can respond to viewers in a more natural environment.
  • Shorts creators can now link directly to a brand’s website in branded content, which gives viewers a clearer path to learn more.
  • Google is also expanding Shorts ads to mobile web, adding another surface for short-form video placements alongside TV, desktop, and the mobile app.

Why this matters for advertisers

Short-form video still moves quickly, and advertisers need placements that offer both reach and some level of interaction.

These updates make Shorts more workable for teams that want clearer signals and more opportunities to understand how users respond. The added surfaces and creator linking options give brands more flexibility as they plan holiday and year-end campaigns.

LinkedIn Introduces Reserved Ads and New Creative Tools

LinkedIn announced a set of updates aimed at helping B2B marketers build awareness with more consistency and scale.

The platform is positioning these changes around brand building, noting that only a small percentage of buyers are in-market at any given time. The updates focus on giving advertisers more predictable visibility and more efficient ways to produce and personalize creative.

The biggest addition is Reserved Ads. This placement guarantees the first ad slot in the LinkedIn feed, which gives brands steady reach in a high-attention position. LinkedIn describes it as a way to secure predictable impressions and a larger share of top-of-feed delivery. It supports multiple formats, including Video Ads, Thought Leader Ads, Single Image Ads, and Document Ads.

LinkedIn also introduced ad personalization tools that allow marketers to tailor copy to individual members using profile-based fields like first name, job title, industry, or company name.

The goal is to make impressions feel more relevant without requiring one-off creative. Note that Reserved Ads and ad personalization are currently limited to managed accounts, meaning advertisers who have a LinkedIn Account Representative.

LinkedIn is also expanding its creative support with AI Ad Variants, which generate multiple copy versions from a single input, and a flexible ad creation workflow rolling out in early 2026.

Advertisers will be able to upload multiple images, videos, and copy variations, and LinkedIn will mix and match them across campaigns while shifting spend toward what performs best.

Why this matters for advertisers

LinkedIn continues to push deeper into brand advertising, and these updates reflect that direction.

Reserved Ads give marketers more certainty when planning top-of-funnel campaigns, something B2B teams often struggle to secure. Personalization and creative automation address a different challenge: producing enough message variation to keep performance stable across longer sales cycles.

For teams who rely on LinkedIn for both awareness and consideration, these tools may help streamline production and improve consistency without adding complexity.

The real value will come from how well these features integrate into existing campaign structures and how accurately they surface top-performing creative.

Theme of the Week: Platforms Are Reducing Friction

Across Google, YouTube, and LinkedIn, the updates had a similar goal. Each platform is trying to remove barriers that slow down planning, measurement, or creative production.

Google is making it easier to bring in first-party data so advertisers can give better signals to their bidding strategies. YouTube is tightening tools around Shorts to help brands participate in short-form video with fewer gaps in user flow. LinkedIn is focusing on predictability and creative efficiency so B2B marketers can maintain visibility without adding more operational work.

Each change supports a familiar goal: making it easier for advertisers to plan, measure, and adjust without unnecessary complexity. Folding these updates into your workflows can help create steadier execution and more reliable signals as planning continues into 2026.


Featured Image: Pixel-Shot/Shutterstock

When To Say No To PMax: Strategic Use Cases For Standard Shopping Campaigns

Google is “strongly recommending” Performance Max to advertisers. With its promise of automated optimization across all Google inventory and AI-driven functions, it’s easy to see why Google pushes it so heavily. But here’s the reality: Performance Max isn’t always the best choice, and blindly migrating from Standard Shopping campaigns can actually hurt your performance.

B2B And Low-Conversion Industries Need Different Approaches

The Problem With PMax For Complex Sales

Performance Max thrives on conversion data. Its machine learning algorithms need volume, lots of it, to optimize effectively. But what happens when you’re in an industry where conversions are rare, high-value, or take months to materialize?

B2B companies selling industrial equipment, luxury retailers, or businesses with extended sales cycles face a critical challenge: Performance Max’s algorithms don’t have enough conversion data to learn from. When you’re generating five to 10 conversions per month instead of 500, PMax has almost no signals to optimize for. It’s stuck in constant “learning mode,” making bid decisions based on insufficient data, which might work here and there but over the long term leads to worse results.

Why Standard Shopping Wins Here

Standard Shopping campaigns allow you to:

  • Implement manual or target ROAS bidding based on your business intelligence, not Google’s incomplete picture.
  • Track and optimize for micro-conversions like quote requests, catalog downloads, or contact form submissions that actually drive B2B pipeline.

The Micro-Conversion Trap In Performance Max

While Performance Max technically supports micro-conversion tracking, it introduces significant risk. When you feed PMax soft, pre-purchase actions like add-to-cart events, contact form submissions, or page views, the algorithm optimizes aggressively for volume, often at the expense of quality, but quality is what matters in B2B and most low-conversion industries.

The result? Your budget shifts toward Display and YouTube placements, where these micro-conversions are abundant but largely meaningless. Display networks excel at generating cheap engagement metrics: a user scrolling through their favorite blog might accidentally trigger an “engaged view” or click, registering as a conversion event without any genuine purchase intent.

The Channel Quality Problem

This creates a vicious cycle:

  • Display and YouTube generate high volumes of soft conversions (page views, brief site visits, accidental clicks).
  • Performance Max interprets this as success and allocates more budget to these channels.
  • Your spend shifts away from high-intent Shopping and Search traffic.
  • You’re optimizing for what amounts to noise conversions that rarely lead to actual revenue.
Image from author, November 2025

This is a good example: an advertiser using many conversion types had decently performing campaigns for a long time, but all of a sudden, traffic shifted to Display because of heavy soft-conversion usage.

Standard Shopping sidesteps this entirely. By maintaining channel focus on product-search traffic, you ensure that your optimization efforts target genuine business outcomes rather than vanity metrics that inflate Performance Max’s reported success while destroying your actual return on investment (ROI).

Preventing Channel Dilution: When You Need Feed-Only Traffic

The Expansion Problem

One of Performance Max’s most frustrating characteristics is its aggressive expansion across Google’s entire inventory. You might launch a PMax campaign expecting Shopping results, only to find your budget spent on Display banner ads, YouTube pre-rolls, and Discovery placements that deliver clicks but no conversions.

This isn’t always what advertisers want. Sometimes you know that Shopping and Search traffic converts, while Display traffic doesn’t work for your product or brand.

Maintaining Traffic Quality

Standard Shopping keeps you focused on high-intent, product-search traffic. When someone searches “stainless steel refrigerator 36 inch,” they’re ready to buy. That’s fundamentally different from someone scrolling YouTube who sees your ad.

Use Standard Shopping when:

  • Your products require high purchase intent: complex, considered purchases that need active research.
  • Display traffic consistently underperforms: you’ve tested it, and it doesn’t work for your category.
  • You want to avoid brand safety issues: maintaining control over where your ads appear matters for your brand.
  • Creative asset requirements are a burden: you don’t have the resources to create quality images, videos, and headlines for all placement types.

A niche outdoor gear retailer, for example, might find that their technical climbing equipment only converts from Shopping traffic. Display and YouTube placements generate cheap clicks from casual browsers who aren’t serious buyers. Standard Shopping lets them stay focused on the traffic that actually drives revenue.

The Brand-Building Misconception

Some argue that Performance Max’s cross-channel reach provides valuable brand-building benefits that justify lower-performing Display and YouTube placements. While brand building certainly has benefits for established brands with sufficient budgets, this argument falls apart under scrutiny.

True brand building requires strategic planning: dedicated creative campaigns, carefully selected ad formats, intentional media placement, brand lift studies, and proper measurement frameworks to assess impact on awareness, consideration, and perception. Professional brand campaigns are controlled, measurable, and designed with specific brand objectives in mind.

Performance Max offers none of this. Running PMax and claiming “it also helps with brand building” is marketing rationalization, not strategy. You’re essentially paying for uncontrolled, unmeasured brand exposure as a byproduct of what should be a performance campaign. For retailers operating on thin margins who need every dollar to drive measurable ROI, this unplanned brand spend isn’t a bonus; it’s budget waste disguised as a benefit.

If brand building is genuinely important to your business, invest in dedicated brand campaigns where you control the message, placements, and measurement. Don’t let Performance Max’s algorithmic drift into Display masquerade as brand strategy.

Granular Control With Portfolio Bid Strategies And Bid Caps

The Control Gap In Performance Max

Performance Max operates in a black box. You set a Target ROAS or Target CPA, and Google does … something. You can’t set maximum cost-per-click (CPC) bids, you can’t implement bid caps across product groups, and you can’t fine-tune performance at a granular level.

For businesses operating on tight margins or managing diverse product catalogs with different profitability profiles, this lack of control is a deal-breaker.

Strategic Bid Management

Standard Shopping campaigns support portfolio bid strategies, giving you powerful options:

  1. Bid Caps for Margin Protection: Set maximum CPC limits to ensure you never overpay for a click. If your margins can’t support more than $2 per click on certain products, you can enforce that hard limit. PMax might blow past that threshold in pursuit of its learning goals.
  2. Product-Level Optimization: Create separate campaigns or ad groups for:
  • High-margin vs. low-margin products.
  • Seasonal vs. evergreen items.
  • Different brands or product categories with varying profitability.
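As a back-of-the-envelope check before setting a bid cap, the break-even CPC follows from average order value, margin, and conversion rate. A minimal sketch, using entirely hypothetical figures:

```python
def breakeven_cpc(aov: float, margin: float, conv_rate: float) -> float:
    """Highest CPC at which a click still breaks even:
    profit per order (aov * margin) times orders per click (conv_rate)."""
    return aov * margin * conv_rate

# Hypothetical product: $250 average order, 20% margin, 4% conversion rate.
print(round(breakeven_cpc(250, 0.20, 0.04), 2))  # prints 2.0
```

Setting the bid cap slightly below that figure protects margin on every click, a hard limit Performance Max does not offer.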

Real-World Application

Consider an electronics retailer with products ranging from 5% margin accessories to 40% margin premium headphones. With Standard Shopping:

  • High-margin products get their own campaign with aggressive bidding.
  • Low-margin items have strict bid caps to maintain profitability.
  • Clearance items run on manual CPC with rock-bottom bids.
  • Portfolio strategies ensure overall ROAS goals while respecting product-level economics.

Performance Max would treat everything as one bucket, potentially overspending on low-margin items while underbidding on your profit drivers. You could segment those products into separate PMax campaigns with dedicated ROAS targets, such as giving low-margin items a 1,000-2,000% target ROAS to force the algorithm to lower CPCs, but in certain cases you may want a hard bid cap to avoid any surprises.
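The segmentation logic is easy to sanity-check: a product group’s break-even ROAS is simply the inverse of its margin. A quick sketch using the hypothetical margins above:

```python
def breakeven_roas(margin: float) -> float:
    """Revenue per ad dollar at which spend exactly consumes gross profit."""
    return 1.0 / margin

for label, margin in [("accessories", 0.05), ("premium headphones", 0.40)]:
    print(f"{label}: {breakeven_roas(margin):.0%} break-even target ROAS")
# Accessories need 2000% ROAS just to break even; the headphones only 250%.
```

This is why a single account-wide ROAS target misprices both ends of the catalog.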

The Fallback Strategy: Why You Need A Safety Net

Don’t Put All Your Eggs In One Basket

Here’s a scenario that plays out constantly: An advertiser migrates completely to Performance Max, pauses their Standard Shopping campaigns, and watches performance crater. PMax enters an extended learning period, traffic drops, and suddenly they’re scrambling to recover lost revenue.

Another example is heavy reliance on custom labels and advanced segmentation. If something fails, your campaigns might go down. An always-on Standard Shopping campaign can quickly fill the gap.

Maintaining Your Fallback

Smart advertisers maintain Standard Shopping campaigns as a strategic fallback:

During PMax Testing: Keep your proven Standard Shopping campaigns running at reduced budget (maybe 20-30%) while you test Performance Max. If PMax underperforms, you still have baseline traffic and conversions coming in.

Seasonal Insurance: Peak seasons (Black Friday, holiday shopping, back-to-school) are not the time to experiment. Many advertisers switch back to Standard Shopping during their most critical revenue periods, knowing exactly what performance to expect, or keep it running as a backup in case anything happens to their PMax campaigns.

Quick Recovery Option: If PMax goes sideways, and it can, having a Standard Shopping campaign ready to scale up means you can recover quickly rather than starting from scratch.

Preserving Campaign History: Years of optimization data, conversion history, and Quality Score built up in Standard Shopping campaigns have value. Once you delete them, that institutional knowledge is gone forever.

Strategy Over Automation

Performance Max represents Google’s vision of fully automated advertising, but automation without strategy is just expensive guesswork.

Standard Shopping campaigns remain essential tools for advertisers who need:

  • Control over bidding and budget allocation.
  • Transparency into what’s actually driving results.
  • Flexibility to optimize for their specific business model.
  • Protection against algorithmic overspending.

The key isn’t choosing one over the other; it’s understanding when each approach serves your business goals.

Before migrating to Performance Max, ask yourself:

  • Do I have sufficient conversion volume for machine learning?
  • Am I willing to sacrifice visibility for automation?
  • Does my business model require specific controls PMax doesn’t offer?
  • Do I have a fallback plan if performance drops?

If you answered yes to any of these questions, Standard Shopping campaigns deserve a permanent place in your account structure.


Featured Image: Roman Samborskyi/Shutterstock

Solar geoengineering startups are getting serious

Solar geoengineering aims to manipulate the climate by bouncing sunlight back into space. In theory, it could ease global warming. But as interest in the idea grows, so do concerns about potential consequences.

A startup called Stardust Solutions recently raised a $60 million funding round, the largest known to date for a geoengineering startup. My colleague James Temple has a new story out about the company, and how its emergence is making some researchers nervous.

So far, the field has been limited to debates, proposed academic research, and—sure—a few fringe actors to keep an eye on. Now things are getting more serious. What does it mean for geoengineering, and for the climate?

Researchers have considered the possibility of addressing planetary warming this way for decades. We already know that volcanic eruptions, which spew sulfur dioxide into the atmosphere, can reduce temperatures. The thought is that we could mimic that natural process by spraying particles up there ourselves.

The prospect is a controversial one, to put it lightly. Many have concerns about unintended consequences and uneven benefits. Even public research led by top institutions has faced barriers—one famous Harvard research program was officially canceled last year after years of debate.

One of the difficulties of geoengineering is that in theory a single entity, like a startup company, could make decisions that have a widespread effect on the planet. And in the last few years, we’ve seen more interest in geoengineering from the private sector. 

Three years ago, James broke the story that Make Sunsets, a California-based company, was already releasing particles into the atmosphere in an effort to tweak the climate.

The company’s CEO Luke Iseman went to Baja California in Mexico, stuck some sulfur dioxide into a weather balloon, and sent it skyward. The amount of material was tiny, and it’s not clear that it even made it into the right part of the atmosphere to reflect any sunlight.

But fears that this group or others could go rogue and do their own geoengineering led to widespread backlash. Mexico announced plans to restrict geoengineering experiments in the country a few weeks after that news broke.

You can still buy cooling credits from Make Sunsets, and the company was just granted a patent for its system. But the startup is seen as something of a fringe actor.

Enter Stardust Solutions. The company has been working under the radar for a few years, but it has started talking about its work more publicly this year. In October, it announced a significant funding round, led by some top names in climate investing. “Stardust is serious, and now it’s raised serious money from serious people,” as James puts it in his new story.

That’s making some experts nervous. Even those who believe we should be researching geoengineering are concerned about what it means for private companies to do so.

“Adding business interests, profit motives, and rich investors into this situation just creates more cause for concern, complicating the ability of responsible scientists and engineers to carry out the work needed to advance our understanding,” write David Keith and Daniele Visioni, two leading figures in geoengineering research, in a recent opinion piece for MIT Technology Review.

Stardust insists that it won’t move forward with any geoengineering until and unless it’s commissioned to do so by governments and there are rules and bodies in place to govern use of the technology.

But there’s no telling how financial pressure might change that, down the road. And we’re already seeing some of the challenges faced by a private company in this space: the need to keep trade secrets.

Stardust is currently not sharing information about the particles it intends to release into the sky, though it says it plans to do so once it secures a patent, which could happen as soon as next year. The company argues that its proprietary particles will be safe, cheap to manufacture, and easier to track than the already abundant sulfur dioxide. But at this point, there’s no way for external experts to evaluate those claims.

As Keith and Visioni put it: “Research won’t be useful unless it’s trusted, and trust depends on transparency.”

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

The Download: solar geoengineering’s future, and OpenAI is being sued

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Solar geoengineering startups are getting serious

Solar geoengineering aims to manipulate the climate by bouncing sunlight back into space. In theory, it could ease global warming. But as interest in the idea grows, so do concerns about potential consequences.

A startup called Stardust Solutions recently raised a $60 million funding round, the largest known to date for a geoengineering startup. My colleague James Temple has a new story out about the company, and how its emergence is making some researchers nervous.

So far, the field has been limited to debates, proposed academic research, and—sure—a few fringe actors to keep an eye on. Now things are getting more serious. So what does it mean for geoengineering, and for the climate? Read the full story.

—Casey Crownhart

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

If you’re interested in reading more about solar geoengineering, check out:

+ Why the for-profit race into solar geoengineering is bad for science and public trust. Read the full story.

+ Why we need more research—including outdoor experiments—to make better-informed decisions about such climate interventions.

+ The hard lessons of Harvard’s failed geoengineering experiment, which was officially terminated last year. Read the full story.

+ How this London nonprofit became one of the biggest backers of geoengineering research.

+ The technology could alter the entire planet. These groups want every nation to have a say.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 OpenAI is being sued for wrongful death
By the estate of a woman killed by her son after he engaged in delusion-filled conversations with ChatGPT. (WSJ $)
+ The chatbot appeared to validate Stein-Erik Soelberg’s conspiratorial ideas. (WP $)
+ It’s the latest in a string of wrongful death legal actions filed against chatbot makers. (ABC News)

2 ICE is tracking pregnant immigrants through specially developed smartwatches
They’re unable to take the devices off, even during labor. (The Guardian)
+ Pregnant and postpartum women say they’ve been detained in solitary confinement. (Slate $)
+ Another effort to track ICE raids has been taken offline. (MIT Technology Review)

3 Meta’s new AI hires aren’t making friends with the rest of the company
Tensions are rife between the AGI team and other divisions. (NYT $)
+ Mark Zuckerberg is keen to make money off the company’s AI ambitions. (Bloomberg $)
+ Meanwhile, what’s life like for the remaining Scale AI team? (Insider $)

4 Google DeepMind is building its first materials science lab in the UK
It’ll focus on developing new materials to build superconductors and solar cells. (FT $) 

5 The new space race is to build orbital data centers
And Blue Origin is winning, apparently. (WSJ $)
+ Plenty of companies are jostling for their slice of the pie. (The Verge)
+ Should we be moving data centers to space? (MIT Technology Review)

6 Inside the quest to find out what causes Parkinson’s
A growing body of work suggests it may not be purely genetic after all. (Wired $)

7 Are you in TikTok’s cat niche? 
If so, you’re likely to be in these other niches too. (WP $)

8 Why do our brains get tired? 🧠💤
Researchers are trying to get to the bottom of it.  (Nature $)

9 Microsoft’s boss has built his own cricket app 🏏
Satya Nadella can’t get enough of the sound of leather on willow. (Bloomberg $)

10 How much vibe coding is too much vibe coding? 
One journalist’s journey into the heart of darkness. (Rest of World)
+ What is vibe coding, exactly? (MIT Technology Review)

Quote of the day

“I feel so much pain seeing his sad face…I hope for a New Year’s miracle.”

—A child in Russia sends a message to the Kremlin-aligned Safe Internet League explaining the impact of the country’s decision to block access to the wildly popular gaming platform Roblox on their brother, the Washington Post reports.

 One more thing

Why it’s so hard to stop tech-facilitated abuse

After Gioia had her first child with her then husband, he installed baby monitors throughout their home—to “watch what we were doing,” she says, while he went to work. She’d turn them off; he’d get angry. By the time their third child turned seven, Gioia and her husband had divorced, but he still found ways to monitor her behavior. 

One Christmas, he gave their youngest a smartwatch. Gioia showed it to a tech-savvy friend, who found that the watch had a tracking feature turned on. It could be turned off only by the watch’s owner—her ex.

Gioia is far from alone. In fact, tech-facilitated abuse now occurs in most cases of intimate partner violence—and we’re doing shockingly little to prevent it. Read the full story.

—Jessica Klein

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ The New Yorker has picked its best TV shows of 2025. Let the debate commence!
+ Check out the winners of this year’s Drone Photo Awards.
+ I’m sorry to report you aren’t half as intuitive as you think you are when it comes to deciphering your dog’s emotions.
+ Germany’s “home of Christmas” sure looks magical.

How to Scale a Recommerce Business

The idea of selling used or overstock goods is not new. Secondhand and thrift shopping is as old as commerce itself.

What has changed is resale volume and the operational challenges that have emerged. Shops that want to sell used, refurbished, and overstock items should establish repeatable systems for handling sourcing, intake, authentication, grading, and pricing.

Repeatable Recommerce

Sourcing

The initial challenge is consistently finding desirable goods.

The aim is predictable systems for procuring products that turn over quickly and profitably.

  • Returns as inventory. Don’t overlook returned items. They are a reliable source of secondhand stock.
  • Customer trade-ins. Buy-back programs also provide a predictable supply of inventory and encourage repeat purchases. Merchants can let shoppers trade in and trade up apparel, outdoor gear, electronics, and luxury accessories. Carefully define what your business accepts and how credit is issued.
  • Liquidation sourcing. Platforms such as B-Stock, Bulq, and Liquidation.com offer bulk pallets from major retailers. The condition varies widely, often with incomplete manifests. Nonetheless, pallet sourcing remains a low-cost way to learn recommerce, especially in apparel and home goods.
  • Partnerships. Finally, many secondhand ecommerce businesses develop sourcing partnerships with manufacturers or other retailers to purchase clearance, end-of-season, or returned goods.

Intake

In circular commerce, intake drives goods toward a sale.

An effective intake workflow moves every item through a repeatable process:

  • Identify,
  • Clean,
  • Measure or test,
  • Document condition,
  • Photograph,
  • Authenticate,
  • Assign a grade,
  • List.

Each step is an opportunity to reduce the time from sourcing to sale. The better the intake process, the better the cash flow.
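One way to make that concrete is to timestamp each step and report time-to-list. A minimal sketch, with hypothetical step names and SKUs:

```python
from dataclasses import dataclass, field
from datetime import date

# Ordered checklist mirroring the intake steps above (names are illustrative).
INTAKE_STEPS = ["identify", "clean", "measure_or_test", "document_condition",
                "photograph", "authenticate", "assign_grade", "list"]

@dataclass
class IntakeItem:
    sku: str
    received: date
    done: list = field(default_factory=list)  # (step, date) pairs

    def complete(self, step: str, on: date) -> None:
        # Enforce the checklist order so no step is skipped.
        expected = INTAKE_STEPS[len(self.done)]
        if step != expected:
            raise ValueError(f"expected step {expected!r}, got {step!r}")
        self.done.append((step, on))

    def days_to_list(self) -> int:
        """Days from receipt to listing; the shorter, the better the cash flow."""
        if not self.done or self.done[-1][0] != "list":
            raise ValueError("item is not listed yet")
        return (self.done[-1][1] - self.received).days

item = IntakeItem("BAG-001", received=date(2025, 3, 1))
for step in INTAKE_STEPS:
    item.complete(step, on=date(2025, 3, 4))
print(item.days_to_list())  # prints 3
```

Averaging `days_to_list` across inventory gives a single number to push down as the intake process improves.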

While each of these tasks is essential, the last three require extra attention.

Authenticate

Some categories of secondhand products require authentication or certification.

For example, a shop that lists a large Prada Galleria bag (which sells new in 2025 for $5,100) had better ensure it’s a genuine Prada. Counterfeits can kill a recommerce business.

Services such as Entrupy, Certilogo, and category-specific verification tools can help. In most cases, submitting photographs will be enough to authenticate an item.

Screenshot of a used Prada bag for sale

A buyer for a used Prada bag seeks quality and brand recognition.

Grading

Recommerce grading can take two forms.

First, the description for every item should address its condition. Grading could be as simple as “like new” or “fair.” For such subjective grades, try to have a repeatable standard. For example, apparel that needs stitching repairs can only be labeled “fair.”

Mistakes in grading, whether too generous or too strict, erode trust.

A second form of grading applies to collectible goods. Books, for example, often have grades such as “mint,” “fine,” and “near fine,” each with a specific definition.

When products have a standard and accepted grading system, use it.

Listing

Deciding where to list a secondhand, refurbished, or overstock item for sale requires market awareness and a bit of skill.

The listing should be priced competitively for a given market. A refurbished Xbox juxtaposed with a new one on a retailer’s website may sell at a higher price than on Facebook Marketplace or eBay.

The price difference among markets should not discourage a seller from listing on all or many of them. Instead, it implies the need to use different listing strategies, each emphasizing different features or values.

Product descriptions on Amazon Renewed might focus on the expert refurbishing or like-new performance, while apparel listings on ThredUp could stress environmental sustainability.

Screenshot of Amazon Renewed web page

An Amazon Renewed shopper likely differs from a ThredUp shopper focused on environmental sustainability.

Recommerce Success

Recommerce can supplement a retailer’s primary sales channel by extracting value from returns, trade-ups, and overstock inventory.

It can also become a standalone business model, where merchants buy and sell across multiple marketplaces.

Success in either model depends on processes and workflows. Shops that standardize intake, grading, authentication, and listing practices earn consumer trust, resulting in faster turnover and lower returns.

Google Releases December 2025 Core Update

Google has released the December 2025 core update, the company confirmed through its Search Status Dashboard.

The rollout began at 9:25 a.m. Pacific Time on December 11, 2025.

This marks Google’s third core update of 2025, following the March and June core updates earlier this year.

What’s New

Google lists the update as an “incident affecting ranking” on its status dashboard.

The company states the rollout “may take up to three weeks to complete.”

Core updates are broad changes to Google’s ranking systems designed to improve search results overall. Unlike specific updates targeting spam or particular ranking factors, core updates affect how Google’s systems assess content across the web.

2025 Core Update Timeline

The December update follows two previous core updates this year.

The March 2025 core update rolled out from March 13-27, taking 14 days to complete. Data from SEO tracking providers suggested volatility similar to the December 2024 core update.

The June 2025 core update ran from June 30 to July 17, lasting about 16 days. SEO data providers indicated it was one of the larger core updates in recent memory. Some sites previously hit by the September 2023 Helpful Content Update saw partial recoveries during this rollout.

Documentation Update On Continuous Changes

Two days before this core update, Google updated its core updates documentation with new language about ongoing algorithm changes.

The updated documentation now states:

“However, you don’t necessarily have to wait for a major core update to see the effect of your improvements. We’re continually making updates to our search algorithms, including smaller core updates. These updates are not announced because they aren’t widely noticeable, but they are another way that your content can see a rise in position (if you’ve made improvements).”

Google explained that the addition was meant to clarify that content improvements can lead to ranking changes without waiting for the next announced update.

Why This Matters

If you notice ranking fluctuations over the coming weeks, this update is likely a major factor.

Core updates can shift rankings for pages that weren’t doing anything wrong. Google has consistently stated that pages losing visibility after a core update don’t necessarily have problems to fix. The systems are reassessing content relative to what else is available.

The documentation update is a reminder that rankings can change between major updates as Google rolls out smaller core changes behind the scenes.

Looking Ahead

Google will update the Search Status Dashboard when the rollout is complete.

Monitor your rankings and traffic over the next three weeks. If you see changes, document when they occurred relative to the rollout timeline.

Based on 2025’s previous updates, completion typically takes two to three weeks. Google will confirm completion through the dashboard and its Search Central social accounts.