How people use Microsoft Copilot depends on whether they’re at a desk or on their phone.
That is the core theme in the company’s analysis of 37.5 million Copilot conversations sampled between January and September.
The research examines consumer Copilot usage patterns across device types and time of day. The authors say they used machine-based classifiers to categorize conversations by topic and intent without any human review of the messages.
What The Report Says
On mobile, Health and Fitness is the most common topic throughout the day.
The authors summarize the split this way:
“On mobile, health is the dominant topic, which is consistent across every hour and every month we observed, with users seeking not just information but also advice.”
Desktop usage follows a different rhythm. Technology leads as the top topic overall, but the researchers report that work-related conversations rise during business hours.
They describe “three distinct modes of interaction: the workday, the constant personal companion, and the introspective night.”
During the workday, the paper says:
Between 8 a.m. and 5 p.m., “Work and Career” overtakes “Technology” as the top topic on desktop.
Education and science topics rise during business hours compared to nighttime.
Outside business hours, the paper describes a shift toward more personal and reflective topics. For example, it reports that “Religion and Philosophy” rises in rank during late-night hours through dawn.
Programming conversations are more common on weekdays, while gaming rises on weekends. They also note a spike in relationship conversations on Valentine’s Day.
Methodology Caveats
A few limitations are worth keeping in mind.
This is a preprint, so it hasn’t been peer reviewed. It also focuses on consumer Copilot usage and excludes enterprise-authenticated traffic, so it doesn’t describe how Copilot is used inside Microsoft 365 at work.
Finally, the topic and intent labels come from automated classifiers, which means the results reflect how Microsoft’s system groups conversations, not a human-coded review.
Why This Matters
This paper suggests that the use of AI chatbots varies with context. The researchers describe mobile behavior as consistently health-oriented, while desktop behavior is more tied to the workday.
The researchers connect the mobile health pattern to how people use their phones. They write:
“This suggests a device-specific usage pattern where the phone serves as a constant confidant for physical well-being, regardless of the user’s schedule.”
The big takeaway is that “Copilot usage” is not one uniform behavior. Device and time of day appear to shape what people ask for, and how they ask it.
Looking Ahead
Enterprise usage patterns may look different, especially inside Microsoft 365. Any follow-up research that includes workplace contexts, or that validates these patterns outside Microsoft’s own tooling and taxonomy, would help clarify how broadly these findings apply.
The December 2025 core update is the main story this week.
Google confirmed a new broad ranking update, clarified how often core changes happen, expanded Preferred Sources in Top Stories, and started testing social performance data in Search Console Insights.
Here’s what matters for your work.
Google Releases December 2025 Core Update
Google has released the December 2025 core update, its third core update of the year.
Key Facts
The rollout started on December 11, and Google says it may take up to three weeks to complete. This follows the March and June core updates and comes two days after Google refreshed its core updates documentation to explain smaller, ongoing changes.
Why SEOs Should Pay Attention
If you see big swings in rankings or traffic over the next few weeks, this update is probably the cause.
Core updates are broad changes to how Google evaluates content. Pages can move up or down even if you haven’t changed anything on the site, because Google is reassessing your content against everything else in the index.
The timing matters. Earlier in the week, Google reminded everyone that smaller core updates happen all the time. The December core update sits on top of that layer. You’re dealing with both a visible event and quieter, continuous adjustments running underneath.
Right now, the best move is to watch your data rather than panic. Mark the rollout dates in your reporting. Track when things start to move for your key sections. Compare this behavior with what you saw during the March and June updates. That helps you separate core-update effects from seasonality, technical issues, or campaign changes.
Over the longer term, this is another nudge toward content that shows clear expertise, purpose, and useful detail. The documentation change earlier in the week suggests those improvements can be recognized over time, not only when Google names a new core update.
What SEO Professionals Are Saying
Reactions on X focused on timing, expectations, and the kind of content that might come out ahead.
Some SEO professionals leaned into the holiday angle, joking that Google’s “Christmas update” could either deliver a gift or push sites “off a cliff” right before peak season. Others used the announcement to talk about human-written work, saying they hope this is the update where stronger, human-generated content gets more visibility.
There were also practical reads. A few people tied the update to recent delays in Search Console data, saying the backlog now makes more sense. Others pointed out that this is the third broad update in a year where Google is also investing heavily in AI systems, and that core updates now sit inside a bigger stack of changes rather than defining everything on their own.
Google Confirms Smaller Core Updates Happen Continuously
Earlier in the week, Google updated its core updates documentation to spell out that ranking changes can happen between the named core updates.
Key Facts
The documentation now says Google makes smaller core updates on an ongoing basis, alongside the larger core updates it announces a few times a year. Google explained that this change is meant to clarify that sites can see ranking gains after making improvements without waiting for the next big announcement.
Smaller core updates were mentioned in a 2019 blog post, but this is the first time the concept appears directly in the core updates documentation.
Why SEOs Should Pay Attention
This answers a question that has been hanging over SEO for years. Recovery isn’t limited to moments when Google announces a core update. The new wording confirms that Google can reward improvements at any time as smaller updates roll out in the background.
If you’ve been holding back on site fixes or content work until “the next core update,” this is a good time to drop that pattern. You can ship improvements now, knowing there’s more than one window where Google might reassess your content.
The timing is interesting given this year’s release pattern. Until this week, the only named core updates in 2025 were the March and June releases, with several months between them. For sites hit early in the year, those gaps made it hard to know when changes might start to pay off. The December update adds another obvious checkpoint, but the documentation makes it clear that it isn’t the only one.
For reporting and communication, this supports a change from “wait for the next update” to “improve steadily and monitor continuously.” You still don’t need to chase every drop, but you can be more confident that sustained work has more than one chance to show up in the data.
What SEO Professionals Are Saying
Former Google search team member Pedro Dias summed up one common read, saying he thinks Google has finally reached a place where it doesn’t need to announce every core update separately. Others have connected the change to Google’s move toward layered ranking systems, where visible events are only one part of an ongoing stream of tweaks.
For you, that supports a slower, steadier approach. Instead of waiting for one moment to “fix” everything, you can keep tuning content and UX, and treat named core updates as checkpoints rather than the only chance to move.
Google Expands Preferred Sources In Top Stories

Google is expanding Preferred Sources globally for English-language users, giving people more control over which outlets show up in Top Stories and similar news surfaces.
Key Facts
Preferred Sources lets people pick specific outlets they want to see more often when they browse news in Google Search. The feature is now rolling out to English-language users worldwide, with other supported languages planned for early next year. Google says people have already selected close to 90,000 different sources, from local blogs to large international publishers, and that users who mark a site as preferred tend to click through to it about twice as often.
Why SEOs Should Pay Attention
Preferred Sources gives you a direct way to turn casual readers into regulars inside Google’s own interfaces. If your site publishes timely coverage, you can now build a segment of people who have chosen to see more of your work in Top Stories.
That makes “choose us as a preferred source” another call to action you can test alongside email sign-ups and follow buttons. Some publishers are already creating simple guides that show readers how to add them and what changes once they do. You can take a similar approach, especially if you already have a loyal audience on site or through newsletters.
It’s also a signal that Google wants users to have more say in which outlets they see. For you, that means brand perception, clarity of coverage, and consistency matter a bit more, because people are deciding which sources they want in their feed instead of relying on a default mix.
What SEO Professionals Are Saying
On LinkedIn, several SEO professionals and content strategists pointed out that Preferred Sources mostly reinforces behavior that already exists.
Garrett Sussman noted that people tend to stick with outlets they trust. The feature simply makes that choice more visible and gives publishers another growth lever inside Google’s ecosystem.
If you work on news or frequently updated content, you can start treating Preferred Sources selection as its own metric. Watch how often people choose you, which articles tend to drive that choice, and how those readers behave over time.
Google Tests Social Channel Insights In Search Console
Search Console is testing a feature that shows how your social channels perform in Google Search results.
Key Facts
Google announced a new experimental feature in Search Console that adds social performance data to the Search Console Insights report. It covers social profiles that Google has automatically associated with your site. For each connected profile, you can see clicks, impressions, top queries, trending content, and audience location.
The experiment is limited to a small set of properties, and you can’t manually add profiles. The feature only appears if Search Console detects your channels and prompts you to link them.
Why SEOs Should Pay Attention
Up to now, you’ve probably watched search performance for your site and your social channels in separate tools. This experiment pulls both into one place, which can save time and make it easier to see how people move between your website and your social profiles.
The new data shows which queries lead people to your social profiles, which posts tend to surface in search, and which markets use Google to find you on social platforms. That’s useful if you run campaigns where organic search, social content, and creator work all overlap.
The main limitation is access. If you don’t see a prompt in Search Console Insights asking you to connect detected social channels, your site isn’t in the initial test group. Still, it’s worth logging as a feature to watch, especially if you already spend time explaining how social content shows up for branded and navigational queries.
What SEO Professionals Are Saying
Reactions on LinkedIn focused on two main points. People liked the idea of a single view of website and social performance, and they quickly started asking when similar data might be available for AI Overviews, AI Mode, and other search experiences.
Others raised questions about coverage. Some practitioners want to know whether this data will stay limited to Google-owned properties or expand to platforms like Instagram, LinkedIn, and X. There’s also curiosity about how Google detects and links social profiles in the first place, and whether structured data or Knowledge Graph entities play a role.
The common thread this week is movement at two speeds.
At one speed, you have the December 2025 core update. It’s a visible event with a clear start date, a multi-week rollout, and a lot of attention. At the other speed, you have the quieter changes around it.
Google has now said directly that smaller core updates happen all the time. Preferred Sources gives users more control over which outlets they see. Social insights start to connect website and social performance in one view.
For you, this means there’s no single moment when everything gets decided. Core updates still matter and can cause sharp movements, but they sit inside an environment where improvements can pay off gradually and where readers are making more explicit choices about who they want to hear from.
The practical response is to treat this as an ongoing feedback loop. Keep improving content and UX. Watch how those changes behave during calm periods and during core updates. Encourage your most engaged readers to mark you as a preferred source where they can. Keep an eye on how search and social interact for your brand. That way, you’re ready for both speeds.
The PPC platforms rolled out a few meaningful updates this week that shape how we measure, plan, and buy media.
Google introduced a new API that makes it easier to bring first-party data into Ads. YouTube shared improvements to the Shorts advertising experience. LinkedIn launched Reserved Ads to give advertisers more control over pricing and delivery.
Here is what stood out and why these updates matter for day-to-day execution.
Google Launches Data Manager API
Google announced the Data Manager API, a new way for advertisers to push their offline conversions and business data directly into Google Ads. The goal is to make measurement setups simpler and more reliable, especially as more teams rely on modeled conversions.
According to Google, the API helps advertisers turn first-party data into performance signals that Smart Bidding can use. It also removes some of the friction that previously made offline tracking complicated.
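Google hasn’t published a single canonical snippet in this announcement, but first-party data uploads to Google’s ads products generally require identifiers to be normalized and SHA-256 hashed before they’re sent. A minimal sketch of that preparation step, assuming a CRM export of email addresses; the payload field names here are illustrative, not the API’s actual schema:

```python
import hashlib

def normalize_and_hash(email: str) -> str:
    """Google's first-party data uploads expect identifiers to be
    normalized (trimmed, lowercased) and SHA-256 hashed before upload."""
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

def build_offline_conversion(email, order_id, value_usd, conversion_time):
    # Illustrative payload shape only -- the real Data Manager API
    # defines its own field names; consult Google's documentation.
    return {
        "user_identifier": {"hashed_email": normalize_and_hash(email)},
        "order_id": order_id,
        "conversion_value": value_usd,
        "currency_code": "USD",
        "conversion_time": conversion_time,
    }

record = build_offline_conversion(
    " Jane.Doe@Example.com ", "ORD-1042", 1250.0, "2025-12-01T14:30:00Z"
)
print(record["user_identifier"]["hashed_email"][:12])
```

The normalization step matters: without it, `Jane.Doe@Example.com` and `jane.doe@example.com` would hash to different values and never match the same user.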
Ginny Marvin, Google Ads Liaison, added helpful context on LinkedIn where she noted that this update is designed to support more flexible measurement setups across platforms and internal systems.
Screenshot taken by author, December 2025
If you manage accounts with sales teams, long consideration cycles, or mixed online and offline activity, this is a welcome step. Better data pipelines usually translate to better bidding performance.
It also signals that Google is prioritizing easier paths for advertisers who have struggled to adopt accurate conversion tracking.
Why this matters for advertisers
Platforms continue to raise the bar on first-party data. Advertisers who rely on spreadsheets, manual uploads, or manual CRM processes will fall behind.
The API helps teams move closer to real-time signals, which Smart Bidding depends on. It also reduces the gap between what actually happens in the business and what Google sees inside Ads.
This update gives advanced teams more flexibility, and it gives mid-sized teams a way to clean up measurement issues that have slowed performance.
YouTube Shorts Rolls Out New Ad Experience
YouTube shared several updates to help advertisers get more out of Shorts during the holiday season.
Google highlighted Kantar research showing that YouTube Creator Ads on Shorts increase purchase intent by 8.8% on average and drive higher consumer intent to spend than competing platforms.
The new updates focus on making Shorts ads feel closer to the organic experience while giving brands more ways to guide user action. The main updates include:
Google is introducing comments on eligible Shorts ads so brands can respond to viewers in a more natural environment.
Shorts creators can now link directly to a brand’s website in branded content, which gives viewers a clearer path to learn more.
Google is also expanding Shorts ads to mobile web, which adds another surface for short-form video placements across TV, web, desktop, and mobile apps.
Why this matters for advertisers
Short-form video still moves quickly, and advertisers need placements that offer both reach and some level of interaction.
These updates make Shorts more workable for teams that want clearer signals and more opportunities to understand how users respond. The added surfaces and creator linking options give brands more flexibility as they plan holiday and year-end campaigns.
LinkedIn Introduces Reserved Ads and New Creative Tools
LinkedIn announced a set of updates aimed at helping B2B marketers build awareness with more consistency and scale.
The platform is positioning these changes around brand building, noting that only a small percentage of buyers are in-market at any given time. The updates focus on giving advertisers more predictable visibility and more efficient ways to produce and personalize creative.
The biggest addition is Reserved Ads. This placement guarantees the first ad slot in the LinkedIn feed, which gives brands steady reach in a high-attention position. LinkedIn describes it as a way to secure predictable impressions and a larger share of top-of-feed delivery. It supports multiple formats including Video Ads, Thought Leader Ads, Single Image Ads, and Document Ads.
LinkedIn also introduced ad personalization tools that allow marketers to tailor copy to individual members using profile-based fields like first name, job title, industry, or company name.
The goal is to make impressions feel more relevant without requiring one-off creative. For now, both Reserved Ads and the personalization tools are limited to managed accounts: they’re only available to advertisers who have a LinkedIn Account Representative.
LinkedIn is also expanding its creative support with AI Ad Variants, which generate multiple copy versions from a single input, and a flexible ad creation workflow rolling out in early 2026.
Advertisers will be able to upload multiple images, videos, and copy variations, and LinkedIn will mix and match them across campaigns while shifting spend toward what performs best.
Why this matters for advertisers
LinkedIn continues to push deeper into brand advertising, and these updates reflect that direction.
Reserved Ads give marketers more certainty when planning top-of-funnel campaigns, something B2B teams often struggle to secure. Personalization and creative automation address a different challenge: producing enough message variation to keep performance stable across longer sales cycles.
For teams who rely on LinkedIn for both awareness and consideration, these tools may help streamline production and improve consistency without adding complexity.
The real value will come from how well these features integrate into existing campaign structures and how accurately they surface top-performing creative.
Theme of the Week: Platforms Are Reducing Friction
Across Google, YouTube, and LinkedIn, the updates had a similar goal. Each platform is trying to remove barriers that slow down planning, measurement, or creative production.
Google is making it easier to bring in first-party data so advertisers can give better signals to their bidding strategies. YouTube is tightening tools around Shorts to help brands participate in short-form video with fewer gaps in user flow. LinkedIn is focusing on predictability and creative efficiency so B2B marketers can maintain visibility without adding more operational work.
Each change supports a familiar goal: making it easier for advertisers to plan, measure, and adjust without unnecessary complexity. Folding these updates into your workflows can help create steadier execution and more reliable signals as planning continues into 2026.
B2B And Low-Conversion Industries Need Different Approaches
The Problem With PMax For Complex Sales
Performance Max thrives on conversion data. Its machine learning algorithms need volume, lots of it, to optimize effectively. But what happens when you’re in an industry where conversions are rare, high-value, or take months to materialize?
B2B companies selling industrial equipment, luxury retailers, or businesses with extended sales cycles face a critical challenge: Performance Max’s algorithms don’t have enough conversion data to learn from. When you’re generating five to 10 conversions per month instead of 500, PMax has almost no signals to optimize for. It stays in a perpetual “learning mode,” making bid decisions based on insufficient data, which might work occasionally but leads to worse results over the long term.
Why Standard Shopping Wins Here
Standard Shopping campaigns allow you to:
Implement manual or target ROAS bidding based on your business intelligence, not Google’s incomplete picture.
Track and optimize for micro-conversions like quote requests, catalog downloads, or contact form submissions that actually drive B2B pipeline.
The Micro-Conversion Trap In Performance Max
While Performance Max technically supports micro-conversion tracking, it introduces significant risk. When you feed PMax lower-funnel actions like add-to-cart events, contact form submissions, or page views, the algorithm optimizes aggressively for volume, often at the expense of quality, but quality is what matters in B2B and most low-conversion industries.
The result? Your budget shifts toward Display and YouTube placements, where these micro-conversions are abundant but largely meaningless. Display networks excel at generating cheap engagement metrics: a user scrolling through their favorite blog might accidentally trigger an “engaged view” or click, registering as a conversion event without any genuine purchase intent.
The Channel Quality Problem
This creates a vicious cycle:
Display and YouTube generate high volumes of soft conversions (page views, brief site visits, accidental clicks).
Performance Max interprets this as success and allocates more budget to these channels.
Your spend shifts away from high-intent Shopping and Search traffic.
You’re optimizing for what amounts to noise conversions that rarely lead to actual revenue.
Image from author, November 2025
This is a good example of an advertiser who tracked many conversion types and ran stable campaigns for a long time, until traffic suddenly shifted to Display because of heavy soft-conversion usage.
Standard Shopping sidesteps this entirely. By maintaining channel focus on product-search traffic, you ensure that your optimization efforts target genuine business outcomes rather than vanity metrics that inflate Performance Max’s reported success while destroying your actual return on investment (ROI).
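If you do route micro-conversions into an automated strategy, one mitigation is to weight them by their real pipeline value instead of counting each action equally, so sheer volume can’t dominate the optimization signal. A minimal sketch of that weighting idea; the action names and weight values below are hypothetical, not a Google Ads feature:

```python
# Hypothetical value weights: assign each action a value proportional
# to how often it historically leads to revenue, so 500 cheap page
# views can't outweigh a handful of genuine pipeline events.
ACTION_VALUES = {
    "purchase": 100.0,      # primary conversion, full value
    "quote_request": 15.0,  # assumes ~15% of quotes historically close
    "add_to_cart": 2.0,
    "page_view": 0.0,       # report-only: worth nothing to bidding
}

def total_conversion_value(events):
    """Sum the weighted value of a list of (action, count) pairs."""
    return sum(ACTION_VALUES.get(action, 0.0) * count for action, count in events)

# 500 page views contribute nothing; 3 quote requests outweigh
# 20 add-to-carts, reflecting real pipeline value rather than volume.
print(total_conversion_value([("page_view", 500), ("add_to_cart", 20), ("quote_request", 3)]))  # 85.0
```

The design choice is the zero weight on noise events: anything that can be triggered accidentally should be tracked for reporting but excluded from the value the bidding algorithm sees.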
Preventing Channel Dilution: When You Need Feed-Only Traffic
The Expansion Problem
One of Performance Max’s most frustrating characteristics is its aggressive expansion across Google’s entire inventory. You might launch a PMax campaign expecting Shopping results, only to find your budget flowing into Display banner ads, YouTube pre-rolls, and Discovery placements that deliver clicks but no conversions.
This isn’t always what advertisers want. Sometimes you know that Shopping and Search traffic converts, while Display traffic doesn’t work for your product or brand.
Maintaining Traffic Quality
Standard Shopping keeps you focused on high-intent, product-search traffic. When someone searches “stainless steel refrigerator 36 inch,” they’re ready to buy. That’s fundamentally different from someone scrolling YouTube who sees your ad.
Use Standard Shopping when:
Your products require high purchase intent: complex, considered purchases that need active research.
Display traffic consistently underperforms: you’ve tested it, and it doesn’t work for your category.
You want to avoid brand safety issues: maintaining control over where your ads appear matters for your brand.
Creative asset requirements are a burden: you don’t have the resources to create quality images, videos, and headlines for all placement types.
A niche outdoor gear retailer, for example, might find that their technical climbing equipment only converts from Shopping traffic. Display and YouTube placements generate cheap clicks from casual browsers who aren’t serious buyers. Standard Shopping lets them stay focused on the traffic that actually drives revenue.
The Brand-Building Misconception
Some argue that Performance Max’s cross-channel reach provides valuable brand-building benefits that justify lower-performing Display and YouTube placements. While brand building certainly has benefits for established brands with sufficient budgets, this argument falls apart under scrutiny.
True brand building requires strategic planning: dedicated creative campaigns, carefully selected ad formats, intentional media placement, brand lift studies, and proper measurement frameworks to assess impact on awareness, consideration, and perception. Professional brand campaigns are controlled, measurable, and designed with specific brand objectives in mind.
Performance Max offers none of this. Running PMax and claiming “it also helps with brand building” is marketing rationalization, not strategy. You’re essentially paying for uncontrolled, unmeasured brand exposure as a byproduct of what should be a performance campaign. For retailers operating on thin margins who need every dollar to drive measurable ROI, this unplanned brand spend isn’t a bonus; it’s budget waste disguised as a benefit.
If brand building is genuinely important to your business, invest in dedicated brand campaigns where you control the message, placements, and measurement. Don’t let Performance Max’s algorithmic drift into Display masquerade as brand strategy.
Granular Control With Portfolio Bid Strategies And Bid Caps
The Control Gap In Performance Max
Performance Max operates in a black box. You set a Target ROAS or Target CPA, and Google does … something. You can’t set maximum cost-per-click (CPC) bids, you can’t implement bid caps across product groups, and you can’t fine-tune performance at a granular level.
For businesses operating on tight margins or managing diverse product catalogs with different profitability profiles, this lack of control is a deal-breaker.
Strategic Bid Management
Standard Shopping campaigns support portfolio bid strategies, giving you powerful options:
Bid Caps for Margin Protection: Set maximum CPC limits to ensure you never overpay for a click. If your margins can’t support more than $2 per click on certain products, you can enforce that hard limit. PMax might blow past that threshold in pursuit of its learning goals.
Product-Level Optimization: Create separate campaigns or ad groups for:
High-margin vs. low-margin products.
Seasonal vs. evergreen items.
Different brands or product categories with varying profitability.
Real-World Application
Consider an electronics retailer with products ranging from 5% margin accessories to 40% margin premium headphones. With Standard Shopping:
High-margin products get their own campaign with aggressive bidding.
Low-margin items have strict bid caps to maintain profitability.
Clearance items run on manual CPC with rock-bottom bids.
Portfolio strategies ensure overall ROAS goals while respecting product-level economics.
Performance Max would treat everything as one bucket, potentially overspending on low-margin items while underbidding on your profit drivers. You could segment those products with PMax and dedicated ROAS targets, such as giving low-margin items a 1,000-2,000% ROAS target to force the algorithm to lower CPCs, but in certain cases you might want a hard bid cap to avoid any surprises.
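The margin logic above reduces to simple arithmetic: a break-even CPC is roughly average order value × margin × conversion rate. A quick sketch of how you might derive the bid caps for the retailer example; all figures are hypothetical:

```python
def max_cpc(avg_order_value: float, margin: float, conversion_rate: float) -> float:
    """Break-even CPC: the most a click can cost before it erodes profit.
    profit per conversion (AOV * margin) * conversion rate = cost you can
    afford per click while breaking even."""
    return round(avg_order_value * margin * conversion_rate, 2)

# 40%-margin premium headphones: room for aggressive bidding.
print(max_cpc(avg_order_value=250.0, margin=0.40, conversion_rate=0.02))  # 2.0
# 5%-margin accessories: need a tight cap to stay profitable.
print(max_cpc(avg_order_value=40.0, margin=0.05, conversion_rate=0.02))   # 0.04
```

In practice you’d set the portfolio bid cap somewhat below the break-even figure, since break-even leaves zero profit; the calculation just gives you the ceiling PMax has no way to respect.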
The Fallback Strategy: Why You Need A Safety Net
Don’t Put All Your Eggs In One Basket
Here’s a scenario that plays out constantly: An advertiser migrates completely to Performance Max, pauses their Standard Shopping campaigns, and watches performance crater. PMax enters an extended learning period, traffic drops, and suddenly they’re scrambling to recover lost revenue.
Another example is when you rely heavily on custom labels and advanced segmentation. If something in that setup fails, your campaigns might go down. An always-on Standard Shopping campaign can quickly step in as a fallback.
Maintaining Your Fallback
Smart advertisers maintain Standard Shopping campaigns as a strategic fallback:
During PMax Testing: Keep your proven Standard Shopping campaigns running at reduced budget (maybe 20-30%) while you test Performance Max. If PMax underperforms, you still have baseline traffic and conversions coming in.
Seasonal Insurance: Peak seasons (Black Friday, holiday shopping, back-to-school) are not the time to experiment. Many advertisers switch back to Standard Shopping during their most critical revenue periods, knowing exactly what performance to expect, but also have Standard Shopping as a backup, just in case anything happens to PMax campaigns.
Quick Recovery Option: If PMax goes sideways, and it can, having a Standard Shopping campaign ready to scale up means you can recover quickly rather than starting from scratch.
Preserving Campaign History: Years of optimization data, conversion history, and Quality Score built up in Standard Shopping campaigns have value. Once you delete them, that institutional knowledge is gone forever.
Strategy Over Automation
Performance Max represents Google’s vision of fully automated advertising, but automation without strategy is just expensive guesswork.
Standard Shopping campaigns remain essential tools for advertisers who need:
Control over bidding and budget allocation.
Transparency into what’s actually driving results.
Flexibility to optimize for their specific business model.
Protection against algorithmic overspending.
The key isn’t choosing one over the other; it’s understanding when each approach serves your business goals.
Before migrating to Performance Max, ask yourself:
Do I have sufficient conversion volume for machine learning?
Am I willing to sacrifice visibility for automation?
Does my business model require specific controls PMax doesn’t offer?
Do I have a fallback plan if performance drops?
If you answered yes to any of these questions, Standard Shopping campaigns deserve a permanent place in your account structure.
Solar geoengineering aims to manipulate the climate by bouncing sunlight back into space. In theory, it could ease global warming. But as interest in the idea grows, so do concerns about potential consequences.
A startup called Stardust Solutions recently raised a $60 million funding round, the largest known to date for a geoengineering startup. My colleague James Temple has a new story out about the company, and how its emergence is making some researchers nervous.
So far, the field has been limited to debates, proposed academic research, and—sure—a few fringe actors to keep an eye on. Now things are getting more serious. What does it mean for geoengineering, and for the climate?
Researchers have considered the possibility of addressing planetary warming this way for decades. We already know that volcanic eruptions, which spew sulfur dioxide into the atmosphere, can reduce temperatures. The thought is that we could mimic that natural process by spraying particles up there ourselves.
The prospect is a controversial one, to put it lightly. Many have concerns about unintended consequences and uneven benefits. Even public research led by top institutions has faced barriers—one famous Harvard research program was officially canceled last year after years of debate.
One of the difficulties of geoengineering is that in theory a single entity, like a startup company, could make decisions that have a widespread effect on the planet. And in the last few years, we’ve seen more interest in geoengineering from the private sector.
Three years ago, James broke the story that Make Sunsets, a California-based company, was already releasing particles into the atmosphere in an effort to tweak the climate.
The company’s CEO Luke Iseman went to Baja California in Mexico, stuck some sulfur dioxide into a weather balloon, and sent it skyward. The amount of material was tiny, and it’s not clear that it even made it into the right part of the atmosphere to reflect any sunlight.
You can still buy cooling credits from Make Sunsets, and the company was just granted a patent for its system. But the startup is seen as something of a fringe actor.
Enter Stardust Solutions. The company has been working under the radar for a few years, but it has started talking about its work more publicly this year. In October, it announced a significant funding round, led by some top names in climate investing. “Stardust is serious, and now it’s raised serious money from serious people,” as James puts it in his new story.
That’s making some experts nervous. Even those who believe we should be researching geoengineering are concerned about what it means for private companies to do so.
“Adding business interests, profit motives, and rich investors into this situation just creates more cause for concern, complicating the ability of responsible scientists and engineers to carry out the work needed to advance our understanding,” write David Keith and Daniele Visioni, two leading figures in geoengineering research, in a recent opinion piece for MIT Technology Review.
Stardust insists that it won’t move forward with any geoengineering until and unless it’s commissioned to do so by governments and there are rules and bodies in place to govern use of the technology.
But there’s no telling how financial pressure might change that, down the road. And we’re already seeing some of the challenges faced by a private company in this space: the need to keep trade secrets.
Stardust is currently not sharing information about the particles it intends to release into the sky, though it says it plans to do so once it secures a patent, which could happen as soon as next year. The company argues that its proprietary particles will be safe, cheap to manufacture, and easier to track than the already abundant sulfur dioxide. But at this point, there’s no way for external experts to evaluate those claims.
As Keith and Visioni put it: “Research won’t be useful unless it’s trusted, and trust depends on transparency.”
This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
Solar geoengineering startups are getting serious
Solar geoengineering aims to manipulate the climate by bouncing sunlight back into space. In theory, it could ease global warming. But as interest in the idea grows, so do concerns about potential consequences.
A startup called Stardust Solutions recently raised a $60 million funding round, the largest known to date for a geoengineering startup. My colleague James Temple has a new story out about the company, and how its emergence is making some researchers nervous.
So far, the field has been limited to debates, proposed academic research, and—sure—a few fringe actors to keep an eye on. Now things are getting more serious. So what does it mean for geoengineering, and for the climate? Read the full story.
—Casey Crownhart
This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.
If you’re interested in reading more about solar geoengineering, check out:
+ Why the for-profit race into solar geoengineering is bad for science and public trust. Read the full story.
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 OpenAI is being sued for wrongful death
By the estate of a woman killed by her son after he engaged in delusion-filled conversations with ChatGPT. (WSJ $)
+ The chatbot appeared to validate Stein-Erik Soelberg’s conspiratorial ideas. (WP $)
+ It’s the latest in a string of wrongful death legal actions filed against chatbot makers. (ABC News)
2 ICE is tracking pregnant immigrants through specially developed smartwatches
They’re unable to take the devices off, even during labor. (The Guardian)
+ Pregnant and postpartum women say they’ve been detained in solitary confinement. (Slate $)
+ Another effort to track ICE raids has been taken offline. (MIT Technology Review)
3 Meta’s new AI hires aren’t making friends with the rest of the company
Tensions are rife between the AGI team and other divisions. (NYT $)
+ Mark Zuckerberg is keen to make money off the company’s AI ambitions. (Bloomberg $)
+ Meanwhile, what’s life like for the remaining Scale AI team? (Insider $)
4 Google DeepMind is building its first materials science lab in the UK
It’ll focus on developing new materials to build superconductors and solar cells. (FT $)
5 The new space race is to build orbital data centers
And Blue Origin is winning, apparently. (WSJ $)
+ Plenty of companies are jostling for their slice of the pie. (The Verge)
+ Should we be moving data centers to space? (MIT Technology Review)
6 Inside the quest to find out what causes Parkinson’s
A growing body of work suggests it may not be purely genetic after all. (Wired $)
7 Are you in TikTok’s cat niche?
If so, you’re likely to be in these other niches too. (WP $)
8 Why do our brains get tired?
Researchers are trying to get to the bottom of it. (Nature $)
9 Microsoft’s boss has built his own cricket app
Satya Nadella can’t get enough of the sound of leather on willow. (Bloomberg $)
10 How much vibe coding is too much vibe coding?
One journalist’s journey into the heart of darkness. (Rest of World)
+ What is vibe coding, exactly? (MIT Technology Review)
Quote of the day
“I feel so much pain seeing his sad face…I hope for a New Year’s miracle.”
—A child in Russia sends a message to the Kremlin-aligned Safe Internet League explaining the impact of the country’s decision to block access to the wildly popular gaming platform Roblox on their brother, the Washington Post reports.
One more thing
Why it’s so hard to stop tech-facilitated abuse
After Gioia had her first child with her then husband, he installed baby monitors throughout their home—to “watch what we were doing,” she says, while he went to work. She’d turn them off; he’d get angry. By the time their third child turned seven, Gioia and her husband had divorced, but he still found ways to monitor her behavior.
One Christmas, he gave their youngest a smartwatch. Gioia showed it to a tech-savvy friend, who found that the watch had a tracking feature turned on. It could be turned off only by the watch’s owner—her ex.
Gioia is far from alone. In fact, tech-facilitated abuse now occurs in most cases of intimate partner violence—and we’re doing shockingly little to prevent it. Read the full story.
—Jessica Klein
We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)
+ The New Yorker has picked its best TV shows of 2025. Let the debate commence!
+ Check out the winners of this year’s Drone Photo Awards.
+ I’m sorry to report you aren’t half as intuitive as you think you are when it comes to deciphering your dog’s emotions.
+ Germany’s “home of Christmas” sure looks magical.
The idea of selling used or overstock goods is not new. Secondhand and thrift shopping is as old as commerce itself.
What has changed is resale volume and the operational challenges that have emerged. Shops that want to sell used, refurbished, and overstock items should establish repeatable systems for handling sourcing, intake, authentication, grading, and pricing.
Repeatable Recommerce
Sourcing
The initial challenge is consistently finding desirable goods.
The aim is predictable systems for procuring products that turn over quickly and profitably.
Returns as inventory. Don’t overlook returned items. They are a reliable source of secondhand stock.
Customer trade-ins. Buy-back programs also provide a predictable supply of inventory and encourage repeat purchases. Merchants can let shoppers trade in and trade up apparel, outdoor gear, electronics, and luxury accessories. Carefully define what your business accepts and how credit is issued.
Liquidation sourcing. Platforms such as B-Stock, Bulq, and Liquidation.com offer bulk pallets from major retailers. The condition varies widely, often with incomplete manifests. Nonetheless, pallet sourcing remains a low-cost way to learn recommerce, especially in apparel and home goods.
Partnerships. Finally, many secondhand ecommerce businesses develop sourcing partnerships with manufacturers or other retailers to purchase clearance, end-of-season, or returned goods.
Intake
In circular commerce, intake drives goods toward a sale.
An effective intake workflow should move every item through the same repeatable sequence of steps:
Identify,
Clean,
Measure or test,
Document condition,
Photograph,
Authenticate,
Assign a grade,
List.
Each step is an opportunity to reduce the time from sourcing to sale. The better the intake process, the better the cash flow.
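The intake steps above behave like a small state machine: an item should not be listed until every earlier step is complete. A minimal Python sketch of that idea; the step names mirror the list above, while the class and field names are invented for illustration:

```python
from dataclasses import dataclass, field

# Ordered intake steps, matching the workflow described above.
INTAKE_STEPS = [
    "identify", "clean", "measure_or_test", "document_condition",
    "photograph", "authenticate", "grade", "list",
]

@dataclass
class Item:
    sku: str
    completed: list = field(default_factory=list)

    def advance(self, step: str) -> None:
        """Record a step, enforcing that steps happen in order."""
        expected = INTAKE_STEPS[len(self.completed)]
        if step != expected:
            raise ValueError(f"expected '{expected}', got '{step}'")
        self.completed.append(step)

    @property
    def ready_to_sell(self) -> bool:
        return self.completed == INTAKE_STEPS

item = Item(sku="BAG-0042")
for step in INTAKE_STEPS:
    item.advance(step)
print(item.ready_to_sell)  # True
```

Enforcing the order in software makes it easy to spot where items stall, which is where time from sourcing to sale is lost.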
While each of these tasks is essential, the last three require extra attention.
Authenticate
Some categories of secondhand products require authentication or certification.
For example, a shop that lists a large Prada Galleria bag (which sells new in 2025 for $5,100) had better ensure it’s a genuine Prada. Counterfeits can kill a recommerce business.
Services such as Entrupy, Certilogo, and category-specific verification tools can help. In most cases, submitting photographs will be enough to authenticate an item.
A buyer of a used Prada bag seeks quality and brand recognition.
Grading
Recommerce grading can take two forms.
First, the description for every item should address its condition. Grading could be as simple as “like new” or “fair.” For such subjective grades, try to have a repeatable standard. For example, apparel with stitching needs can only be labeled “fair.”
Mistakes in grading, whether too lenient or too harsh, erode trust.
A second form of grading applies to collectible goods. Books, for example, often have grades such as “mint,” “fine,” and “near fine,” each with a specific definition.
When products have a standard and accepted grading system, use it.
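For merchant-defined grades, a small rubric function keeps labels repeatable across staff and over time. A minimal sketch; the checks and thresholds here are assumptions a shop would tune to its own standard:

```python
# Illustrative, merchant-defined grading rubric for apparel.
# The only rule taken from the text above: stitching needs cap the grade at "fair".

def grade_apparel(has_stains: bool, needs_stitching: bool, wear_level: int) -> str:
    """Map objective checks to a subjective label so grading stays repeatable.

    wear_level: 0 (none) to 3 (heavy), judged against reference photos.
    """
    if needs_stitching:
        return "fair"       # per the standard above, stitching needs mean "fair"
    if has_stains or wear_level >= 2:
        return "good"
    return "like new"

print(grade_apparel(has_stains=False, needs_stitching=True, wear_level=0))   # fair
print(grade_apparel(has_stains=False, needs_stitching=False, wear_level=0))  # like new
```

Writing the rubric down, even this simply, turns "like new" from an opinion into a testable standard.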
Listing
Deciding where to list a secondhand, refurbished, or overstock item for sale requires market awareness and a bit of skill.
The listing should be priced competitively for a given market. A refurbished Xbox juxtaposed with a new one on a retailer’s website may sell at a higher price than on Facebook Marketplace or eBay.
The price difference among markets should not discourage a seller from listing on all or many of them. Instead, it implies the need to use different listing strategies, each emphasizing different features or values.
Product descriptions on Amazon Renewed might focus on the expert refurbishing or like-new performance, while apparel listings on ThredUp could stress environmental sustainability.
An Amazon Renewed shopper likely differs from a ThredUp shopper drawn to sustainability.
Recommerce Success
Recommerce can supplement a retailer’s primary sales channel by extracting value from returns, trade-ups, and overstock inventory.
It can also become a standalone business model, where merchants buy and sell across multiple marketplaces.
Success in either model depends on processes and workflows. Shops that standardize intake, grading, authentication, and listing practices earn consumer trust, resulting in faster turnover and lower returns.
Google has released the December 2025 core update, the company confirmed through its Search Status Dashboard.
The rollout began at 9:25 a.m. Pacific Time on December 11, 2025.
This marks Google’s third core update of 2025, following the March and June core updates earlier this year.
What’s New
Google lists the update as an “incident affecting ranking” on its status dashboard.
The company states the rollout “may take up to three weeks to complete.”
Core updates are broad changes to Google’s ranking systems designed to improve search results overall. Unlike specific updates targeting spam or particular ranking factors, core updates affect how Google’s systems assess content across the web.
2025 Core Update Timeline
The December update follows two previous core updates this year.
The March 2025 core update rolled out from March 13-27, taking 14 days to complete. Data from SEO tracking providers suggested volatility similar to the December 2024 core update.
The June 2025 core update ran from June 30 to July 17, lasting about 16 days. SEO data providers indicated it was one of the larger core updates in recent memory. Some sites previously hit by the September 2023 Helpful Content Update saw partial recoveries during this rollout.
Documentation Update On Continuous Changes
Two days before this core update, Google updated its core updates documentation with new language about ongoing algorithm changes.
“However, you don’t necessarily have to wait for a major core update to see the effect of your improvements. We’re continually making updates to our search algorithms, including smaller core updates. These updates are not announced because they aren’t widely noticeable, but they are another way that your content can see a rise in position (if you’ve made improvements).”
Google explained that the addition was meant to clarify that content improvements can lead to ranking changes without waiting for the next announced update.
Why This Matters
If you notice ranking fluctuations over the coming weeks, this update is likely a major factor.
Core updates can shift rankings for pages that weren’t doing anything wrong. Google has consistently stated that pages losing visibility after a core update don’t necessarily have problems to fix. The systems are reassessing content relative to what else is available.
The documentation update is a reminder that rankings can change between major updates as Google rolls out smaller core changes behind the scenes.
Looking Ahead
Google will update the Search Status Dashboard when the rollout is complete.
Monitor your rankings and traffic over the next three weeks. If you see changes, document when they occurred relative to the rollout timeline.
Based on 2025’s previous updates, completion typically takes two to three weeks. Google will confirm completion through the dashboard and its Search Central social accounts.
So many people spent 2025 arguing about whether SEO was dying. It was never dying. It was shifting into a new layer. Discovery continues to move from search boxes to AI systems. Answers now come from models that rewrite your work, summarize competitors, blend sources, and shape decisions before a browser window loads. In 2026, this shift becomes visible enough that executives and SEOs can no longer treat it like an edge case; the share of visits each traffic source delivers will shift. The search stack that supported the last 20 years is now only one of several layers that shape customer decisions. (I talk about all this in my new book, “The Machine Layer” (non-affiliate link).)
This matters because the companies that win in 2026 will be the ones treating AI systems as new distribution channels. The companies that lose will be the ones waiting for their analytics dashboards to catch up. You no longer optimize for a single front door. You now optimize for many. Each one is powered by models that decide what to show, who to show it to, and how to describe you.
Here are 14 things that will define competitive advantage in 2026. Each one is already visible in real data. Together, they point to a year where discovery becomes more ambient, more conversational, and more dependent on how well a machine can parse and trust you. And at the end of this list is one heck of a prediction for next year, one I bet you didn’t see coming. If I’m being honest, I’m sure a few of you did, but to this depth? Did you realize it was all this close?
Grab a coffee or tea, find your favorite spot to read, and let’s get started!
Image Credit: Duane Forrester
1. AI Answer Surfaces Become The New Front Door
ChatGPT, Claude, Gemini, Meta AI, Perplexity, Copilot, and Apple Intelligence now sit between customers and your website. More and more users ask questions inside these systems before they ever search. And the answers they get are inconsistent. BrightEdge’s analysis showed that AI engines disagree with each other 62% of the time. When engines disagree this much, brand visibility becomes unstable. Executives need reporting that reveals how often their brand appears inside these systems. SEOs need workflows that evaluate chunk retrieval, embedding strength, and citation presence across multiple answer engines.
2. Content Must Be Designed For Machine Retrieval
Microsoft’s 2025 Copilot study analyzed more than 200,000 work sessions. The most common AI-assisted tasks were gathering information, explaining information, and rewriting information. These are the core tasks modern content must support. AI models choose content that is structured, predictable, and easy to embed. If your content lacks clear sectioning, consistent patterns, or explicit definitions, it becomes harder for models to use. This impacts whether you appear in answers. In 2026, your formatting choices become ranking signals for machines.
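One practical consequence of retrieval-friendly formatting is that every section should be usable on its own. A hedged sketch of heading-based chunking, the kind of pre-processing many retrieval pipelines apply before embedding; the function and field names are illustrative, not any particular engine's API:

```python
import re

def chunk_by_heading(markdown: str) -> list[dict]:
    """Split a markdown document into heading-scoped chunks, one per section.

    Each chunk keeps its heading so a retrieval system can embed and cite it
    with context intact. Real pipelines would also cap chunk length in tokens.
    """
    chunks, heading, lines = [], "Intro", []
    for line in markdown.splitlines():
        m = re.match(r"^(#{1,6})\s+(.*)", line)
        if m:
            if lines:
                chunks.append({"heading": heading, "text": "\n".join(lines).strip()})
            heading, lines = m.group(2), []
        else:
            lines.append(line)
    if lines:
        chunks.append({"heading": heading, "text": "\n".join(lines).strip()})
    return chunks

doc = "# Pricing\nPlans start at $10.\n# Support\nEmail us anytime."
for c in chunk_by_heading(doc):
    print(c["heading"], "->", c["text"])
```

Run this against your own pages: if a chunk makes no sense without the paragraphs above it, a model retrieving that chunk in isolation will struggle too.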
3. On-Device LLMs Change How People Search
Apple Intelligence runs many tasks locally. It also rewrites queries in more natural conversational patterns. This pushes search activity away from browsers and deeper into the operating system. People will ask their device short, private questions that never hit the web. They will ask follow-up questions inside the OS. They will make decisions without ever visiting a page. This shifts both volume and structure. SEOs will need content designed for lightweight, edge device retrieval.
4. Wearables Start Steering The Discovery Funnel
Meta Ray-Bans already support visual queries. The user points at something and asks what it is. Voice and camera replace typing. This increases micro queries tied to real-world context. Expect to see more “identify this,” “what does this do,” and “how do I fix that” queries. Wearables compress the distance between stimulus and search. Executives should invest in image quality, product clarity, and structured metadata. SEOs should treat visual search signals as core inputs.
5. Short-Form Video Becomes A Training Input For AI
Video is now a core training signal for modern multimodal models. V-JEPA 2 from Meta AI is trained on an unknown number of hours of raw video and images, but this still shows that large-scale video learning is becoming foundational for motion understanding, physical prediction, and video question answering. Gemini 2.5 from Google DeepMind explicitly supports video understanding, allowing the model to interpret video clips, extract visual and audio context, and reason over sequences. OpenAI’s Sora research demonstrates that state-of-the-art generative video models learn from diverse video inputs to understand motion, physical interactions, transitions, and real-world dynamics. In 2026, your short-form video becomes part of your broader signal footprint. Not only the transcript. The visuals, pacing, motion, and structure become vectors the model can interpret. When your video output and written content diverge, the model will default to whichever medium communicates more clearly and consistently.
6. Organic Search Signals Shift Toward Trust And Provenance
Traditional algorithms relied on links, keywords, and click patterns. AI systems shift that weight toward provenance and verification. Perplexity describes its model as retrieval-augmented, pulling from authoritative sources like articles, websites, and journals and surfacing citations to show where information comes from. Independent audits support this direction. A 2023 evaluation of generative search engines found that systems like Perplexity favored content that is factual, well-structured, and supported by external evidence when assembling cited answers. This remains true today as well. SEO industry analysis also shows that pages with clear metadata, consistent topical organization, and visible author identity are more likely to be cited. Naturally, all of this changes what trust looks like. Machines prioritize consistency, clarity, and verifiable sourcing. Executives should focus on data governance and content stability. SEOs should focus on structured citations, author attribution, and semantic coherence across their content ecosystem.
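One widely used way to make author attribution and sourcing machine-readable is schema.org markup embedded in the page as JSON-LD. A sketch built in Python; every name, date, and URL below is a placeholder, not a prescription, and real markup should reflect your actual bylines and sources:

```python
import json

# Illustrative schema.org Article markup with explicit author and citations.
article_jsonld = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "How Retrieval Engines Pick Sources",
    "datePublished": "2025-12-01",
    "author": {
        "@type": "Person",
        "name": "Jane Example",
        "url": "https://example.com/authors/jane-example",
    },
    "publisher": {"@type": "Organization", "name": "Example Media"},
    "citation": ["https://example.com/cited-study"],
}

# Emitted inside a <script type="application/ld+json"> tag on the page.
print(json.dumps(article_jsonld, indent=2))
```

The point is not the specific fields but the principle the audits above describe: identity, provenance, and sourcing stated in a form a machine can verify rather than infer.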
7. Real-Time Cohort Creation Replaces Static Personas
LLMs build temporary cohorts by clustering people with similar intent patterns. These clusters can form in seconds and dissolve just as fast. They are not tied to demographics or personas. They are based on what someone is trying to do right now. This is the basis of the experiential cohort concept. Marketers have not caught up yet. In 2026, cohort-based targeting will shift toward intent embeddings and away from persona documents. SEOs should tune content for intent patterns, not identity attributes.
8. Agent-To-Agent Commerce Becomes Real
Agents will schedule appointments, book travel, reorder supplies, compare providers, and negotiate simple agreements. Your content becomes instructions for another machine. To support that, it must be unambiguous. It must be explicit about requirements, constraints, availability, pricing rules, and exceptions. If you want an agent to pick your business, you need a content model that feeds the agent’s decision tree. Executives should map the top 10 agent-mediated tasks in their industry. SEOs should design content that makes those tasks easy for a machine to interpret.
9. Hardware Acceleration Pushes AI Into Every Routine
NVIDIA, Apple, and Qualcomm are all building hardware optimized for on-device and low-latency AI inference. These chips reduce friction, which increases the number of everyday questions people ask without ever opening a browser. NVIDIA’s data center inference platforms show how much compute is moving toward real-time model execution. Qualcomm’s AI Hub highlights how modern phones can run complex models locally, shrinking the gap between thought and action. Apple’s M-series chips include Neural Engines that support local model execution inside Apple Intelligence. Lower friction means people will ask more small, immediate questions as they move through their day instead of grouping everything into one session. SEOs should plan for discovery happening across many short, assistant-driven interactions rather than a single focused search moment.
10. Query Volume Expands As Voice And Camera Take Over
Voice input grows the long tail. Camera input grows contextual queries. The Microsoft Work Trend Index shows rising AI usage across everyday task categories, including personal information gathering. People ask more questions because speaking is easier than typing. The shape of demand widens, which increases ambiguity. SEOs need stronger intent classification workflows and a better understanding of how retrieval models cluster similar questions.
11. Brand Authority Becomes Machine Measurable
Models determine authority by measuring consistency across your content. They look for stable terminology, clear entity relationships, and patterns in how third parties reference you. They look for alignment between what you publish and how the rest of the web describes your work. This is not the old human quality framework. It is a statistical confidence score. Executives should invest in knowledge graphs. SEOs should map their entity network and tune the language around each entity for stability.
12. Zero-Click Environments Become Your Primary Competitor
Answer engines pull from multiple sources and give the user a single synthesized answer. This reduces visits but increases influence. In 2026, the dominant competitors for organic attention are ChatGPT, Perplexity, Gemini, Copilot, Meta AI, and Apple Intelligence. You do not win by resisting zero-click. You win by being the source the engine prefers. Executives must adopt new performance metrics that reflect answer presence. SEOs should run monthly audits of brand visibility across all major platforms, tracking citations, mentions, paraphrases, and omissions.
13. Competitive Intelligence Shifts Into Prompt Space
Your competitors now live inside AI answers, whether they want to or not. Their content becomes part of the same retrieval memory that models use to answer your queries. In 2026, SEOs will evaluate competitor visibility by studying how platforms describe them. You will ask models to summarize competitors, benchmark capabilities, and compare offerings. The insights you get will shape strategy. This becomes a new research channel that executives can use for positioning and differentiation.
14. Your Website Becomes A Training Corpus
AI systems will digest your content many times before a human does. That means your site is now a data repository. It must be structured, stable, and consistent. Publishing sloppy structure or unaligned phrasing creates noise inside retrieval models. Executives should treat their content like a data pipeline. SEOs should think like information architects. The question shifts from how do we rank to how do we become the preferred reference source for a model.
The companies that succeed in 2026 will be the ones that understand this shift early. Visibility now lives in many places at once. Authority is measured by machines, not just people. Trust is earned through structure, clarity, and consistency. The winners will build for a world where discovery is ambient, and answers are synthesized. The losers will cling to dashboards built for a past that is not coming back.
Now, if you’ve read this far, thank you, and I have a surprise – an actual prediction for 2026! I think it’s a big, important one, so buckle up!
I’m calling this Latent Choice Signals, or these, I suppose, as it’s a grouping of signals that paint a picture for the platforms. From the consumer’s POV, this is the essential mental map they’re following: “I saw it, I felt something about it, and I decided not to continue.” This is the core. The user’s mind is making a choice, even if they never articulate it or click anything. That behavior generates meaning. And the system can interpret that meaning at scale. Let’s dig in…
The Prediction No One Sees Coming
By the end of 2026, AI systems will begin optimizing decisions around the patterns users never articulate. Not the queries they type. Not the questions they ask. But the choices they avoid.
This is the shift almost everyone misses, and you can see the edges of it forming across three different fields. When you pull them together, the picture becomes clearer.
First, operating system-level AI is already learning from behavior that is not explicitly expressed. Apple Intelligence is described as a personal intelligence layer that blends generative models with on-device personal context to prioritize messages, summarize notifications, and suggest actions across apps. Apple built this for convenience and privacy, but it created something more important. The system must learn over time which suggestions people accept and which they quietly ignore. It sees which notifications get swiped away, which app actions never get used, and which prompts are abandoned. It does not need to read your mind. It only needs to see which proposed actions never earn a tap. Those patterns are already part of how it ranks what to surface next.
Second, recommender systems already treat non-actions as meaningful signals. You see it every time you skip a YouTube video, swipe past a TikTok in under a second, or close Netflix when the row of suggestions feels wrong. These platforms do not publish their exact mechanics, but implicit feedback is a well-established concept in the research world. Classic work on collaborative filtering for implicit feedback datasets shows how systems use viewing, skipping, and browsing behavior to model preference, even when users never rate anything directly. Newer work continues to refine how clicks, views, and avoidance patterns feed recommendation models at scale. It is reasonable to expect LLM-driven assistants to borrow from the same logic. The pattern is too useful to ignore. When you close an assistant, rephrase a question to avoid a certain brand, or scroll past a suggestion without engaging, that is data about what you did not want.
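The implicit-feedback work cited above gives this idea a concrete shape. In the classic Hu, Koren, and Volinsky formulation, unobserved or skipped items are treated as negative preference held with low confidence, and confidence grows with interaction count. A minimal sketch of that weighting; the default alpha follows the paper's suggestion, not a universal constant:

```python
# Confidence weighting from implicit feedback, after Hu, Koren & Volinsky (2008):
#   c_ui = 1 + alpha * r_ui, where r_ui counts observed interactions.
# A skipped or avoided item keeps r_ui = 0, so the model treats it as
# "probably not preferred" at low confidence rather than as pure unknown.

def confidence(interactions: int, alpha: float = 40.0) -> float:
    """Turn a raw interaction count into a confidence weight."""
    return 1.0 + alpha * interactions

watched = confidence(5)   # an item the user returned to five times
skipped = confidence(0)   # an item the user scrolled past
print(watched, skipped)   # 201.0 1.0
```

This is exactly why avoidance is usable at scale: the system never needs an explicit "no," only the absence of interaction, which it down-weights but never discards.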
Third, alignment research already trains models to follow what humans prefer, not just what text predicts. OpenAI’s “Learning to summarize with human feedback” work shows how models can be tuned using human comparisons between outputs, with a reward model that learns which responses people think are better. This has been in play for years now. This kind of reinforcement learning from human feedback was built for tasks like summarization and style, but the underlying principle matters here. Models can be optimized around patterns of acceptance and rejection. Over time, conversational systems can extend this to live settings, where corrections, rewrites, and abandonments are treated as signals about what the user did not want, even when they never spell that out.
Put these three domains together, and a larger pattern emerges. As AI systems move into glasses, phones, laptops, cars, and operating systems, they will gain precise visibility into the choices people avoid. These avoidance patterns will become signals that inform how assistants rank options, choose providers, and recommend products.
This will not feel like surveillance. The model is not peeking into your private life. It is watching your interaction patterns with the system itself. It sees where you hesitate, which suggestions you skip, which tasks you hand off, which providers create follow-up questions, which prices cause users to pause, which explanations reduce confidence, and which interfaces break the chain of intent. These are all first-party behavioral signals the assistant is already allowed to use. And platforms see these signals at global scale.
In 2026, these Latent Choice Signals will become strong enough that they form a new optimization layer. A silent ranking system built around friction. If your brand generates hesitation, the assistant will reduce your visibility long before your analytics flag a problem. If your content creates confusion during synthesis, it will be bypassed during retrieval. If your policies trigger too many follow-up questions, the model will favor a competitor with clearer flows. The user will never know why. All they will see is the assistant presenting a different option.
This is the layer that will blindside executives. Dashboards will look normal. Rankings may appear stable. Traffic may hold steady. Yet conversions inside AI-mediated decisions will drift. Customers will stop choosing you, not because you lost traditional ranking signals, but because you introduced cognitive friction the model can detect and optimize against.
The winners will be the companies that treat avoidance as a measurable signal. They will analyze which parts of their product and content cause hesitation. They will refine policies to reduce ambiguity. They will simplify offerings. They will align explanations with how models process uncertainty. They will build experiences that reduce agent-level friction and improve confidence inside a retrieval sequence.
By late 2026, negative intent signals may become one of the strongest competitive filters in digital business. Not because users say anything, but because their silence now has structure the model can learn from. Anyone watching today's data can see this shift forming, yet almost no one is naming it. The early indicators are already here, hiding in the interactions users never get far enough to complete.
This is the prediction that will define the next phase of AI-driven discovery. And the companies that understand it early will be the ones the assistants prefer.
I’m watching the development of agentic SEO closely because I believe that, as capabilities improve over the next few years, agents will have a significant impact on the industry. I’m not suggesting this will be a seamless replacement of talent with a highly capable machine intelligence. There is going to be a lot of trial and error, but I do think we are going to see radical shifts in how the online space operates, not unlike how automation transformed manufacturing.
Marie Haynes has long been a well-known expert in the industry who shared her learnings on E-E-A-T and Google’s algorithm through her popular Search News You Can Use newsletter.
A few years ago, Marie made the decision to retire her SEO agency and went all in on learning AI systems, as she believes we’re at the beginning of a profound transformation.
Marie wrote a recent article, “Hype or not, should you be investing in AI agents?” about what SEOs need to understand about this rapidly developing space. So, I invited her to IMHO to dive more into this topic.
Marie believes AI will radically change our world for the better, and she believes every business will have AI agents.
You can watch the full interview with Marie on the IMHO recording at the end, or continue reading the article summary.
“The idea that we optimize for appearing as one of the 10 blue links on Google is already gone.”
Experimenting With Gemini Gems
Marie’s practical advice for anyone wanting to understand agents is to start with Gems:
“If you take one thing from this conversation, it’s to try to create some Gemini Gems,” Marie emphasized. “Eventually I’m fairly certain that these gems will morph into agentic workflows.”
To illustrate, she shared what she calls her “originality Gem,” which contains a 500+ word prompt that captures how she evaluates content, along with examples of truly original content in its knowledge base.
“We’re not far from the day where all of my processes that I do for SEO can be handled by agentic workflows that occasionally pull on me for some advice,” Marie said.
The Power Of Chaining Agents
The next progression and real potential come from chaining agents together to create agentic workflows.
The opportunity this creates is that we can use our knowledge and experience to teach AI, like a team of assistants, to do the work that can be automated.
We then orchestrate the process and, like a conductor, guide the agents as they perform the work, acting as the human-in-the-loop who reviews the output.
Once we have downloaded our knowledge into the agents, and the systems work, we can scale ourselves to handle exponentially more clients.
“Instead of me handling just a small handful of clients, all of a sudden I could have a hundred clients and do the same work because it’s all going through my workflow,” Marie said.
The challenge here is the skill in prompting the agents and constructing them to achieve the desired output.
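To make the idea concrete, here is a minimal sketch of what a chained agentic workflow with a human-in-the-loop might look like. This is purely illustrative: the agent names, steps, and approval logic are hypothetical stand-ins for real LLM calls and review processes, not any specific tool's API.

```python
# Hypothetical sketch of a chained "agentic workflow": each agent is a
# step that transforms the work product, and a human-in-the-loop gate
# reviews the final output before it goes to a client.

def research_agent(topic):
    """Gather raw notes on a topic (stand-in for an LLM call)."""
    return f"notes on {topic}"

def drafting_agent(notes):
    """Turn the notes into a draft (stand-in for a second LLM call)."""
    return f"draft based on {notes}"

def human_review(draft, approve):
    """The human-in-the-loop: approve the draft or send it back for rework."""
    status = "approved" if approve(draft) else "rework"
    return {"draft": draft, "status": status}

def workflow(topic, approve):
    """Chain the agents: research -> draft -> human review."""
    notes = research_agent(topic)
    draft = drafting_agent(notes)
    return human_review(draft, approve)

# The orchestrator (the SEO) only steps in at the review gate.
result = workflow("site migration checklist", approve=lambda d: "draft" in d)
print(result["status"])
```

The design point is the one Marie describes: the expert's knowledge lives in how each agent is prompted and how the chain is constructed, while the human stays in the loop only at the review step, which is what makes the process scalable across many clients.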
“The future of our industry is not about optimizing for an engine, but about acting as the interface between businesses and technology, and we will be the human experts who teach, guide, and implement AI agents.”
Why Gemini Over ChatGPT
I asked Marie why she focuses on Gemini over ChatGPT, and her response was based on future-proofing: “The main reason why I use Gemini is not to accomplish things today, but to grow my skills in what’s coming tomorrow.”
Marie went on to explain that “Google’s got a whole ecosystem that you can see it coming together like right now,” and she believes that Google will be the winner in the AI race.
“I think that Google is going to win the game. I think it’s always been their game to win. So I make it a point to use Gemini as much as I can.”
Transformations Will Follow The Money
Marie’s prediction for the next few years is for workflows to become embedded. “Sundar Pichai, CEO of Google, said this way back in March, that, in two to four years, every agentic workflow will be deeply embedded into our day-to-day work.”
However, she thinks the real transformations will come when businesses start making money from agentic workflows.
“It’s wild how many trillions of dollars are being spent on developing AI, yet there’s not a whole lot of financial output at this point,” Marie noted, referencing a McKinsey study showing 95% of businesses using AI aren’t making money from it yet [Editor’s note: McKinsey was 80%; MIT said 95%].
“It’s very similar to SEO. There was a day where there were just a small handful of people who figured out how to improve on Google. Once people started making good money from understanding SEO, there was a lot of attention. Tools were created and a whole industry popped up. I think that’s going to happen again. Will it be within the next 12 months? I don’t know. I feel like it might be a little bit longer.”
What SEOs Should Do Now
Overwhelm is a real issue to be aware of. With developments moving so quickly, there is a huge learning curve to essentially retrain, even for those working on this full-time.
Marie made a commitment when she went all in on AI research. “I made it my full-time job to stay on top of what’s happening, and even I get overwhelmed with all the stuff that’s happening with AI,” she explained.
“The next time you go to do a task, try to create an agent that would do this for you,” she suggested. Even if you don’t finish, you’ll learn skills for the next attempt.
Also, persevere instead of giving up at the first failure. “Try to figure out what they can do, instead of just telling everybody, ‘Oh, it can’t do this.’ Find ways you can use it.”
For development teams, she recommends vibe coding with tools like Google’s Antigravity or AI Studio. “You can deploy a whole website without even knowing any HTML,” Marie said.
She also advocates for deep research reports using either Gemini or ChatGPT to analyze how competitors are using AI, providing immediate value to clients while building skills.
The Future Of SEO
Marie referenced Sundar Pichai calling AI technology more profound than fire or electricity in its impact on society. Despite acknowledging her bias after investing significant time in understanding AI, she maintains there’s going to be societal disruption.
“Being able to understand what’s happening in the world and distill it down to what’s important to your clients will be a superpower,” she said. She does admit, though, that there is still a lot of learning and many grey areas to move through as we navigate the edge of this technology.
“If you’re feeling lost, you’re not alone because imagine right now we’re sort of at the forefront of all of these changes happening.”
For those who do persevere, there will be significant rewards. Eventually, business owners will be clamoring for people who can explain AI and implement it. The professionals who develop these skills now will be extremely valuable in the future.
“The people who know how to use AI, know how to create agents, and know how to make money from AI are going to be extremely valuable in the future.”
Watch the full video interview with Marie Haynes here:
Thank you to Marie Haynes for offering her insights and being my guest on IMHO.
Featured Image: Shelley Walsh/Search Engine Journal