OpenAI launched shopping research in ChatGPT, a feature that creates personalized buyer’s guides by researching products across the web. The tool is rolling out today on mobile and web for logged-in users on Free, Go, Plus, and Pro plans.
The company is offering nearly unlimited usage through the holidays.
What’s New
Shopping research works differently from standard ChatGPT responses. Users describe what they need, answer clarifying questions about budget and preferences, and receive a buyer’s guide after a few minutes.
The feature pulls information including price, availability, reviews, specs, and images from across the web. You can guide the research by marking products as “Not interested” or “More like this” as options appear.
OpenAI’s announcement states:
“Shopping research is built for that deeper kind of decision-making. It turns product discovery into a conversation: asking smart questions to understand what you care about, pulling accurate, up-to-date details from high-quality sources, and bringing options back to you to refine the results.”
The company says the tool performs best in categories like electronics, beauty, home and garden, kitchen and appliances, and sports and outdoor.
Technical Details
Shopping research is powered by a shopping-specialized GPT-5 mini variant post-trained on GPT-5-Thinking-mini.
OpenAI’s internal evaluation shows shopping research reached 52% product accuracy on multi-constraint queries, compared with 37% for ChatGPT Search.
Product accuracy measures how well responses meet user requirements for attributes like price, color, material, and specs. The company designed the system to update and refine results in real time based on user feedback.
Privacy & Data Sharing
OpenAI states that user chats are never shared with retailers. Results are organic and based on publicly available retail sites.
Merchants who want to appear in shopping research results can follow an allowlisting process through OpenAI.
Limitations
OpenAI acknowledges the feature isn’t perfect. The model may make mistakes about product details like price and availability. The company encourages users to visit merchant sites for the most accurate information.
Why This Matters
This feature pulls more of the product comparison journey into one place.
As shopping research handles more of the “which one should I buy?” work inside ChatGPT, some of that early-stage discovery could happen without a traditional search click.
For retailers and affiliate publishers, that raises the stakes for inclusion in these results. Visibility may depend on how well your products and pages are represented in OpenAI’s shopping system and allowlisting process.
Looking Ahead
Shopping research in ChatGPT is available to logged-in users starting today. OpenAI plans to add direct purchasing through ChatGPT for merchants participating in Instant Checkout, though no timeline was provided.
Every Q4, the same message shows up in our accounts:
“Use seasonality adjustments to get ready for Black Friday and Cyber Monday.”
On paper, it sounds reasonable. You expect conversion rates to rise, so you give Smart Bidding a heads up and tell it to bid more aggressively during the peak.
Optmyzr’s latest study puts a pretty big dent in that narrative.
Over three BFCM cycles from 2022 through 2024, Fred Vallaeys and the Optmyzr team analyzed performance for up to 6,000 advertisers per year, split into two cohorts: those who used seasonality bid adjustments and those who did not.
The question was simple: do these adjustments actually help during Black Friday and Cyber Monday, or are we just making Google bid higher for no meaningful gain?
Based on the data, seasonality adjustments often hurt efficiency and rarely deliver the breakthrough many advertisers expect.
Below is a breakdown of the study and what it means for PPC managers heading into peak season.
Key Findings from Optmyzr’s BFCM Seasonality Study
The study compared performance across three BFCM periods (2022–2024), defined as the Wednesday before Black Friday through the Wednesday after Cyber Monday. Each year’s results were then measured against a pre-BFCM baseline.
The accounts were grouped into:
Advertisers who did not use seasonality bid adjustments
Advertisers who did apply them
Across all three years, consistent patterns emerged.
#1: Smart Bidding already adjusts for BFCM without manual prompts
For advertisers who skipped seasonality adjustments, Smart Bidding still responded to the conversion rate spike:
2022: Conversion rate up 17.5%
2023: Conversion rate up 11.9%
2024: Conversion rate up 7.5%
In other words, the algorithm did exactly what it was designed to do. It detected higher intent and increased bids without needing an external nudge.
#2: Seasonality adjustments inflated CPCs far more than necessary
Seasonality adjustments tell Google’s system to raise bids based on your predicted conversion rate increase.
Optmyzr notes that:
When you apply a seasonality adjustment, you are effectively telling Google: ‘I expect conversion rate to increase by X%. Raise bids immediately by X%.’
And Smart Bidding acts as if you’re exactly right. It usually doesn’t soften that prediction or test into it.
The study showed that this is why CPCs climbed much faster for advertisers who used adjustments:
CPC inflation (no adjustment vs. with adjustment)
2022: +17% vs. +36.7%
2023: +16% vs. +32%
2024: +17% vs. +34%
Adjustments consistently doubled CPC inflation, even though Smart Bidding was already raising bids based on real-time conversion signals.
#3: ROAS dropped for advertisers using seasonality adjustments
When CPC increases outpace conversion rate increases, ROAS inevitably suffers.
ROAS change (no adjustment vs. with adjustment)
2022: -2% vs. -17%
2023: -1.5% vs. -10%
2024: +5.7% vs. -15.7%
The “no adjustment” group maintained stable ROAS, even improving in 2024. The “with adjustment” group saw steep declines every year.
Why Do Seasonality Adjustments Struggle During BFCM?
Optmyzr explains this dynamic as a precision issue.
When you apply a seasonality adjustment, you are making a specific prediction about the conversion lift. If you estimate the lift at +40% and the real lift ends up being +32–35%, that gap translates directly into overbidding.
Fred Vallaeys writes:
Smart Bidding takes this literally. It does not hedge your bet. It assumes you have perfect foresight.
That’s the core problem.
Black Friday and Cyber Monday are also in the category of highly predictable retail events. Google has years of historical BFCM data to model expected shifts. As a result, Optmyzr concludes:
Seasonality adjustments work best when Google cannot anticipate the spike.
BFCM is not one of those situations. It’s practically encoded into Google’s models.
The Trade-Off: More Revenue, Lower Efficiency
The study did show that advertisers using seasonality adjustments often drove higher revenue growth:
Revenue growth (no adjustment vs. with adjustment)
2022: +25% vs. +50.5%
2023: +30.3% vs. +52.8%
2024: +33.8% vs. +39.9%
In 2022 and 2023, the incremental revenue jump was significant. But again, those gains came with notable ROAS declines.
This supports a practical interpretation:
If your brand’s priority is aggressive market share capture, top-line revenue or inventory liquidation, seasonality adjustments can deliver more volume.
If your brand’s priority is profitable performance, adjustments tend to work against that goal during BFCM.
When Seasonality Adjustments Do Make Sense
In the study, Optmyzr made it very clear: seasonality adjustments themselves aren’t the problem. The misuse of them is.
They work well in scenarios where you genuinely have more insight into the spike than the platforms do, such as:
A short flash sale
A new one-time promotion with no historical precedent
A large, concentrated email push
Niche events with little global relevance
Situations where they may not make the most sense:
Black Friday and Cyber Monday (supported by their data study)
Christmas shopping windows
Valentine’s Day for gift categories
These events are already modeled extensively by Google’s bidding systems.
What Should PPC Managers Do With This Data?
If you’re looking to make changes to your PPC accounts this holiday season, here are a few ways to apply these findings in a practical way.
#1: Default to not using seasonality adjustments for BFCM
For the majority of advertisers, letting Smart Bidding handle the conversion rate spike naturally leads to steadier ROAS and fewer surprises.
The data supports this approach across three consecutive years.
#2: If leadership insists on volume, be explicit about the trade-off
You can lean on Optmyzr’s findings to set expectations, not just express an opinion.
For example:
“Optmyzr’s three-year analysis shows that seasonality adjustments can increase revenue but typically reduce ROAS by 10-17 percentage points.”
“We can use them if revenue volume is the priority, but we will need to prepare for much lower cost efficiency.”
These examples keep the conversation focused on the business, not just the tactical levers you pull.
#3: Spend your energy on guardrails, not the predictions
In the study, Optmyzr reminds advertisers that trusting the algorithm doesn’t mean blindly letting it run without any oversight.
Instead of guessing the exact uplift, your value during peak season comes from:
Smart budget pacing
Hourly monitoring (with automated alerts, of course!)
Bid caps when necessary
Audience and device segmentation checks
Creative and offer readiness
These are some of the key areas where human judgment beats prediction.
Final Thoughts On Optmyzr’s Study
Optmyzr’s study doesn’t argue that seasonality bid adjustments are bad. What it does argue is that context is everything.
For predictable, high-volume retail events like BFCM, Google’s bidding systems already have the signal they need. Adding your own forecast often leads to overshooting, inflated CPCs, and unnecessary efficiency loss.
For unique or brand-specific spikes, adjustments remain valuable.
This research gives PPC managers something we rarely get during BFCM: solid data to support a more measured, less reactive approach. If nothing else, it gives you the backup you need the next time someone asks:
“Should we turn on seasonality adjustments this Black Friday?”
Your answer can be confident, data-driven, and clear.
Google Search Advocate John Mueller has pushed back on the idea of building separate Markdown or JSON pages just for large language models (LLMs), saying he doesn’t see why LLMs would need pages that no one else sees.
The discussion started when Lily Ray asked on Bluesky about “creating separate markdown / JSON pages for LLMs and serving those URLs to bots,” and whether Google could share its perspective.
Not sure if you can answer, but starting to hear a lot about creating separate markdown / JSON pages for LLMs and serving those URLs to bots. Can you share Googleʼs perspective on this?
The question draws attention to a developing trend where publishers create “shadow” copies of important pages in formats that are easier for AI systems to understand.
There’s a more active discussion on this topic happening on X.
This has been the hot topic lately, I’ve been getting pitched by companies who do this https://t.co/rVnbPKUxZj
Mueller replied that he isn’t aware of anything on Google’s side that would call for this kind of setup.
He notes that LLMs have worked with regular web pages from the beginning:
I’m not aware of anything in that regard. In my POV, LLMs have trained on – read & parsed – normal web pages since the beginning, it seems a given that they have no problems dealing with HTML. Why would they want to see a page that no user sees? And, if they check for equivalence, why not use HTML?
When Ray followed up about whether a separate format might help “expedite getting key points across to LLMs quickly,” Mueller argued that if file formats made a meaningful difference, you would likely hear that directly from the companies running those systems.
If those creating and running these systems knew they could create better responses from sites with specific file formats, I expect they would be very vocal about that. AI companies aren’t really known for being shy.
He said some pages may still work better for AI systems than others, but he doesn’t think that comes down to HTML versus Markdown:
That said I can imagine some pages working better for users and some better for AI systems, but I doubt that’s due to the file format, and it’s definitely not generalizable to everything. (Excluding JS which still seems hard for many of these systems).
Taken together, Mueller’s comments suggest that, from Google’s point of view, you don’t need to create bot-only Markdown or JSON clones of existing pages just to be understood by LLMs.
How Structured Data Fits In
Other individuals in the thread drew a line between speculative “shadow” formats and cases where AI platforms have clearly defined feed requirements.
A reply from Matt Wright pointed to OpenAI’s eCommerce product feeds as an example where JSON schemas matter.
In that context, a defined spec governs how ChatGPT ingests and displays product data. Wright explains:
Interestingly, the OpenAI eCommerce product feeds are live: JSON schemas appear to have a key role in AI search already.
That example supports the idea that structured feeds and schemas are most important when a platform publishes a spec and asks you to use it.
Additionally, Wright points to a thread on LinkedIn where Chris Long observed that “editorial sites using product schemas, tend to get included in ChatGPT citations.”
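For context, “product schema” in that observation refers to schema.org Product structured data embedded in a page. A minimal, illustrative JSON-LD sketch might look like the following; the values and URLs are placeholders for illustration, not a spec published by OpenAI or Google:

<!-- Illustrative schema.org Product markup; all values below are placeholders -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Running Shoe",
  "description": "Sample product description used only for illustration.",
  "brand": { "@type": "Brand", "name": "ExampleBrand" },
  "offers": {
    "@type": "Offer",
    "price": "89.99",
    "priceCurrency": "USD",
    "availability": "https://schema.org/InStock",
    "url": "https://www.example.com/products/running-shoe"
  }
}
</script>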
Why This Matters
If you’re questioning whether to build “LLM-optimized” Markdown or JSON versions of your content, this exchange can help steer you back to the basics.
Mueller’s comments reinforce that LLMs have long been able to read and parse standard HTML.
For most sites, it’s more productive to keep improving speed, readability, and content structure on the pages you already have, and to implement schema where there’s clear platform guidance.
At the same time, the Bluesky thread shows that AI-specific formats are starting to emerge in narrow areas such as product feeds. Those are worth tracking, but they’re tied to explicit integrations, not a blanket rule that markdown is better for LLMs.
Looking Ahead
The conversation highlights how fast AI-driven search changes are turning into technical requests for SEO and dev teams, often before there is documentation to support them.
Until LLM providers publish more concrete guidelines, this thread points you back to work you can justify today: keep your HTML clean, reduce unnecessary JavaScript where it makes content hard to parse, and use structured data where platforms have clearly documented schemas.
The European Commission has proposed a “Digital Omnibus” package that would relax parts of the GDPR, the AI Act, and Europe’s cookie rules in the name of competitiveness and simplification.
If you work with EU traffic or rely on European data for analytics, advertising, or AI features, it’s worth tracking this proposal even though nothing has changed in law yet.
What The Digital Omnibus Would Change
The Digital Omnibus would revise several laws at once.
On AI, the proposal would push back stricter rules for high-risk systems from August 2026 to December 2027. It would also lighten documentation and reporting obligations for some systems and move more oversight to the EU AI Office.
Regarding data protection, the Commission aims to clarify when information is no longer considered ‘personal,’ making it easier to share and reuse anonymized and pseudonymized datasets, especially for AI training.
Privacy group noyb says this new wording isn’t just about clarifying the rules. They believe the proposal introduces a more subjective approach, hinging on what a controller claims it can or plans to do. Noyb warns this change could exclude parts of the adtech and data-broker industry from GDPR protections.
Cookies, Consent, And Browser Signals
The cookie section is likely to be the most visible change for your day-to-day work if the proposal moves forward.
The Commission wants to cut “banner fatigue” by exempting some non-risk cookies from consent pop-ups and shifting more control into browser-level settings that apply across sites.
In practice, that would mean fewer consent banners for low-risk uses, such as certain analytics or strictly functional storage, once categories are defined.
The proposal would also require websites to respect standardized, machine-readable privacy signals from browsers when those standards exist.
AI Training & Data Rights
One of the most contested pieces of the Digital Omnibus is how it treats data used to train AI systems.
The package would allow companies including Google, Meta, and OpenAI to use Europeans’ personal data to train AI models under a broadened legal basis.
Privacy groups have argued that this kind of training should rely on explicit opt-in consent, rather than the more flexible approach they see in the proposal.
Noyb warns that long-running behavioral data, such as social media histories, could be used to train AI systems with only an opt-out model that is difficult for people to exercise in practice.
Why This Matters
This proposal is worth keeping on your radar if you’re responsible for analytics, consent, or AI-driven products that reach EU users.
Over time, you might observe smaller, browser-driven consent experiences for EU traffic, along with a different compliance approach for AI features that depend on behavioral data.
For now, nothing in your cookie banners, GA4 setup, or AI workflows needs to change solely because of the Digital Omnibus.
Looking Ahead
The Digital Omnibus is an early signal that the EU is re-balancing its digital rulebook around AI and competitiveness, not privacy and enforcement alone.
Key items to monitor include Parliament’s amendments to AI training and data language, cookie and browser-signal provisions for CMPs and browsers, and changes to AI training and consent for EU users.
Performance Max (like the more upper-funnel Demand Gen) is different enough from other Google Ads campaigns that it requires a different approach, even if the underlying search behavior and marketing principles are the same as they’ve always been.
For what it’s worth, Performance Max is typically not the first campaign to launch in any account. We typically start with Search and/or Shopping before layering on Performance Max when it makes sense, e.g., testing and scaling.
But when the time comes to make it work, it takes a specific mindset. And if your Google Ads methods and principles are still stuck in 2015, you’re not going to get very far.
Here’s how to tailor your approach and become a mentality monster for Performance Max.
Performance Max At Its Most Basic Level
A strong mindset for modern PPC begins with knowledge and education. If you still don’t understand the fundamental differences between Performance Max and legacy campaign types (like Search and Shopping), that’s step one.
The TL;DR is simple: Performance Max is driven by algorithms, not inputs or controls. There’s a certain degree of surrendering to the system that goes with it, and trying to exert control when there’s none to claim will only end up with a large chunk of wasted spend.
If you think you can be the exception to the rule and force Performance Max into traditional campaign structures, all you’ll do is choke the algorithm and spend money on poor-quality conversions. This has a compounding effect where the system then believes those are valid conversions and will try to bring you more of the same.
Here are five core truths to keep in mind:
1. You don’t control targeting. Performance Max simply does not go where you tell it to. At best, you can provide initial direction in the form of audience signals. But it will eventually start to make its own decisions about which channels to show your ads on and which audiences to pursue. Even keywords act more as guidance than as strict rules to be followed.
2. You don’t decide which headlines get paired with which creatives. With Performance Max, you’ll still need to build all the pieces of your ads: responsive search ads, video and static creatives, product feeds with robust descriptors, and so on. But how those get mixed and matched isn’t up to you. Google’s system will test different combinations with different audiences before settling on what works best.
3. You don’t get full visibility into every query or placement. There’s no question that Performance Max is capable of delivering great results. If you want that, then you simply have to accept that you must give up a certain degree of visibility into where your ads show and why. You may not like it, but this campaign only works when you set things up properly and trust the system (while still supervising and verifying its output).
4. Data, not content, is king. Performance Max runs on data, and Google expects you to provide far more data than it will. Accounts with more conversion data will perform better because Google has more user signals to decode. With clearer first-party inputs, Performance Max is more likely to deliver the conversions you want. The clearer your audience signals are, the easier it is to quickly move out of the learning phase. And a more complete and accurate product feed will go a long way in getting your products in front of people who want them.
5. That being said, reporting is getting better but can still be frustrating. We only recently got access to things like asset group reporting, search terms reports and negative keywords for Performance Max. It’s far more visibility than we had a few years ago, but Google is still some distance off the ideal balance. I’d advise you to make peace with the fact that reporting won’t be perfect and attribution will be even murkier than usual.
Fortunately, there’s plenty that you can control. Those factors just happen to be broader marketing principles and strategic direction:
Positioning, offer, and messaging strategy.
Quality and depth of your product feed.
Strength of your audience signals.
Depth of your first-party data inputs, e.g., conversion tracking, customer lists, data feeds.
Relevance of your ad copy, creatives, and landing pages.
Bidding strategy and goals.
Campaign and asset group structure at a high level.
Traits Of PPC Managers Who Struggle With Performance Max
I see PPC managers every day who are so set in their ways that all they can do is complain about some part of Google’s machine learning. While it’s perfectly fine to stick with Search and Shopping, what’s not okay is bringing that mindset to Performance Max and expecting results anyway. And there are some behaviors that show up most frequently.
They require granular control over everything. Wanting to dictate exactly how the system should operate is a red flag when managing Performance Max. These managers have a natural distrust of all things machine learning and want to deploy perfect Exact Match keywords, complicated manual bidding strategies, and specific traffic sculpting techniques.
They believe their experience is a guarantee of success. But they don’t put in the effort to stay up-to-date on market and technological developments. These are typically old school marketers (like me) who haven’t kept up with the modern pace of Google Ads or feel entitled to success because of their tenure (unlike me).
They specialize in Google Ads account management and little else. Modern PPC demands that account managers have a basic level of skill in areas like copywriting, landing page theory, conversion rate optimization, product feed management, market and audience research, and offer positioning. People who refuse to treat Google Ads as one piece of a wider marketing puzzle are learning this the hard way.
They don’t have the diamond hands needed to trust their strategy. “Eyes on, hands off” is our approach. People who push back at the first sign of below-average output tend to make changes that reset the learning period, which only delays Google’s ability to start delivering good conversions. Since it can take three to six weeks (in my experience) to get to a good position with Performance Max, you need to know when not to make changes. Get early buy-in from clients (and the budget needed to ride it out) as you work through this early period.
They take a “set it and forget it” approach to automation and machine learning. Part of exiting the learning period in Performance Max quickly is keeping an eye on early results and providing data inputs so the system learns what you want more/less of. Don’t just ride out the post-launch period without tracking what Google’s bringing to the plate.
They expect the system to magically understand what the client wants. One of the toughest parts of modern PPC is persuading clients to provide access to data that Google needs in order to understand what success looks like on the business side. The flipside is that without this input, Google will simply make guesses until it finds something you like. This is especially true for lead-gen brands like plumbers and contractors.
Quick disclaimer: Some industries require a granular level of control, either due to regulatory and compliance mandates or because Google simply doesn’t have enough search and user volume to make informed decisions in that niche. Accounts operating in areas like pharmaceuticals, legal services, and similar niches need a higher level of control than mass market verticals like apparel or beverages.
The PPC Manager Who Wins With Performance Max
Algorithmic campaigns aren’t suitable for every account. Sometimes, it’s just better to stick to Search and Shopping. But when there is an opportunity to scale with Performance Max, there’s a specific type of person you want in charge of the process.
They know where they’re more useful. Marketers who are willing to hand over control of ad operations to the system are able to focus on impactful areas where machines still struggle to create differentiated output: creative, ad copy, landing pages, and their UX, strategy, data sourcing and interpretation, etc.
They accept that they’re only as good as their last campaign. Good PPC managers in the modern era don’t just treat Performance Max as its own campaign. They understand that just because one campaign worked a certain way doesn’t mean the next one will, too. What you want is someone who’s ready and willing to learn with every new project and iteration.
They understand the value of data and how to source it. Marketers who focus on building an ecosystem of data inputs and learning get better results with Performance Max because they give Google more information to base its decisions on. Someone who knows where to find those and how to convince clients that they’re mission-critical is worth their weight in gold.
They know how to stick to the plan. When you put in work only for a campaign to return poor results in the first week, it’s tempting to burn everything down and try something new. Marketers who build a plan for those first weeks and stick to it have the patience and confidence needed to eventually get Performance Max to a position of power.
They excel at client communication. A lead-gen client that refuses to share its customer data is never going to get good results from Performance Max. Good marketers can see that and will recommend traditional Search instead of creating additional friction by pushing for CRM access. Another underrated trait is proactively setting expectations with clients and communicating with them throughout the campaign.
PPC-Adjacent Skills To Develop For Performance Max Success
With Google Ads demanding a more holistic marketing approach, so much of your success with Performance Max begins outside of the ad account. With the system taking over much of the button-pushing that we used to do, here’s where you should be upskilling in order to cement your future in PPC.
Why I’m Bullish: Performance Max Is The Start Of The Future
The added balance between machine learning and human control is Google telling us that we only have one choice: learn to work together on these algorithmic campaigns. Performance Max has changed significantly from when it was first released, and so has Google’s attitude.
Newer features in Performance Max, like negative keywords and improved reports, help refine campaigns and offer advertisers more of what we’ve been asking for. But this can be dangerous if you don’t make the right decisions – you might see that video ads are not performing as well and remove them, only to find that their role is to push certain conversions down the line.
As it stands, Performance Max today is perfectly viable for virtually any type of business – a far cry from its early use case being limited to big-budget ecommerce and retail (how viable it is for a specific business still depends on factors such as budget, expertise, risk tolerance, and data availability).
So, while you may not necessarily need it today or every day, you should be adapting to this new direction if your top priority is to protect your business, career, and clients.
In this series (here and here), I’ve covered why founder-led marketing works and the systems you need to stay consistent, based on the playbook I co-authored for LinkedIn (my employer).
You’ve built the content engine and the operational frameworks to avoid burnout. Now comes the final, most critical part: proving it works.
Your founder provides the authentic voice. Your job as the marketer is to amplify that voice to the entire market and build the measurement framework that proves to the board, “This is working.”
This is how you turn a content strategy into a scalable, predictable, full-funnel growth loop.
Part 1: Amplify What’s Already Working
Your founder’s organic content is resonating, but it’s only reaching their first-degree network. Why guess what might work when you can use data to amplify what’s already working?
This is the most efficient paid strategy you can run, because paid works better when it’s built on trust. Our playbook data shows that startups whose directors post actively already generate 33% more leads through their paid campaigns.
Your secret weapon is Thought Leader Ads (TLAs).
TLAs are a LinkedIn ad format that lets you promote posts from individuals – founders, employees, even customers – rather than just your company page. They look and feel like organic posts: authentic, human, and scroll-stopping.
In general, TLAs are a high-performing format, resulting in 1.5x higher click-through rates (CTRs), 30% more efficient cost per click (CPC), and 2x follower growth.
Apply them to startups and the impact is even bigger:
7.6x more engagement than any other paid ad format.
5x higher video engagement with video TLAs than regular sponsored video ads.
This isn’t just a top-of-funnel awareness play. You can use TLAs to build a full-funnel machine:
Top-of-Funnel: Amplify your founder’s best “scar story” or “contrarian take” post to your entire Ideal Customer Profile.
Mid-Funnel: Retarget everyone who engaged with that TLA with a more direct offer, like a Conversation Ad or a Lead Gen Form for a webinar.
Bottom-of-Funnel: Add this engaged audience to your nurture sequences and track them as they become sales-qualified leads.
The foundation is your founder’s best organic posts. From there, you can plug them into a full-funnel paid strategy.
Part 2: Build The Measurement Framework
This strategy feels right, but you have to prove it.
The biggest challenge in founder-led marketing is that the most important metrics – trust, reputation, resonance – don’t show up on a simple dashboard. They show up in your deal velocity, your DMs, and the way people talk about you when you’re not in the room.
There are ways you can start to track these on LinkedIn. Let’s break it down.
First 90 Days: Track Leading Indicators
Validate whether your content is resonating before it drives pipeline:
Engagement quality: Comments from ideal customer profiles (ICPs), DMs received, reposts by peers.
Audience growth: Follower count, especially from target segments.
Conversation starters: Number of inbound messages or replies sparked by content.
Profile metrics: Track who’s viewing your profile after seeing your posts.
LinkedIn recently expanded its analytics for individual members, giving you more visibility into how your content performs under the “Analytics” tab.
These expanded metrics help you move beyond vanity metrics to start measuring resonance – what’s landing, with whom, and why.
What not to do: Obsess over engagement metrics, delete underperforming posts, or let your founder compare themself to established thought leaders. These habits will drain motivation before your systems are strong enough to carry them through the dip.
Next 90 Days: Track Momentum
Track how your content is influencing relationships and reputation:
Prospect mentions: Train your sales team to log every time a prospect mentions your founder’s content during calls.
Dark social mentions: Track when your content gets shared in private peer networks like Slack groups or email threads.
Content-influenced deals: Create a CRM field to tag every prospect who mentions your posts.
Scott Albro, TOPO founder, does this in Salesforce by creating a “content-influenced” deal stage and tagging every prospect who mentions posts, comments, or competitor reactions. Then he measures deal velocity and pipeline.
Irina Novoselsky, CEO of Hootsuite, shared her results in the playbook: “I just did the math on my daily LinkedIn commitment over the last 3 months—10M+ impressions generated. But most importantly, 37% of our monthly leads are influenced by my social presence.”
Her team saw measurable business impact:
Executive presence was mentioned more frequently in sales calls in Q1 2025 than in all of 2024.
Deals closed faster when buyers referenced her content.
Enterprise opportunities influenced by her social presence had higher ACV.
Kacie Jenkins, former SVP of Marketing at Sendoso, found that when a prospect followed one of their Director+ executives on LinkedIn, they saw 11% higher win rates and 120% larger closed-won deal sizes.
Peep Laja, CEO of Wynter, tracks self-reported attribution: “About 80% of people signing up for Wynter or scheduling a demo say they found me on LinkedIn.”
6 Months Onwards: Business Impact Metrics
Track your lagging indicators:
Increasing inbound pipeline: Gal Aga’s rule is “if 20%+ of your pipeline mentions your content, you’ve won”.
Increasing deal velocity: Deals with content-influenced leads close faster due to pre-established trust.
Attracting talent: Job applicants cite your posts.
Owning your category: You’re increasingly referenced in industry conversations.
Connect The Paid Loop
This final step connects amplification and measurement. How do you prove your TLA spend is driving revenue?
Use LinkedIn’s Conversions API (CAPI) to connect your CRM and website data directly to LinkedIn. This gives you visibility into offline actions and helps you attribute pipeline.
LinkedIn’s revenue attribution tools let you measure impact at the business, campaign, and company level. One tech company using revenue attribution found 36% higher win rates and 37% shorter deal cycles.
Startup advisor Canberk Beker sums it up: “When founders connect their organic presence to paid strategy – and measure both direct and influenced pipeline – they see outsized ROI. We’ve proven that TLAs lift demo requests and drive cross-channel conversions.”
Your Role As The Growth Multiplier
A founder-led strategy is a game-changer for sales and marketing.
Your founder’s job is to be the authentic voice. Your job as the marketer is to build the machine around them.
By connecting an authentic organic strategy with a high-powered amplification lever and a sophisticated measurement framework, you create a complete growth loop.
This is the modern marketing engine, one that builds trust at scale and proves its impact on the bottom line.
Imagine telling someone that www.mysite.com/blog/myarticle and www.mysite.com/myarticle are actually the same page. To you, they’re the same, but to Google, even a small difference in the URL makes them separate pages. That is where the canonical tag steps in. In this guide, we will walk you through what a canonical URL is, how URL canonicalization works, when to use it, and which mistakes to avoid so that search engines always understand your preferred page version.
What is a canonical URL?
A canonical URL is the main, preferred, or official version of a webpage that you want search engines like Google to crawl and index. It helps search engines determine which version of a page to treat as the primary one when multiple URLs lead to similar or duplicate content. As a result, it avoids duplicate content and protects your SEO ranking signals.
All of the following URLs can show the same page, but you should set only one as the canonical URL:
https://www.mysite.com/product/shoes
https://mysite.com/product/shoes?ref=instagram
https://m.mysite.com/product/shoes
https://www.mysite.com/product/shoes?color=black
What is a canonical tag?
A canonical tag (also called a rel="canonical" tag) is a small HTML snippet placed inside the head section of a webpage to tell search engines which URL is the canonical or master version. It acts like a clear label saying, “Index this page, not the others.” This prevents duplicate content issues, consolidates ranking signals, and supports proper canonicalization across your site.
Here’s an example of a canonical tag in action:
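<link rel="canonical" href="https://www.example.com/preferred-page" />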
This tag should be placed on the alternate or duplicate versions, pointing back to the main page you want indexed.
How does URL canonicalization work?
Canonicalization is the process of selecting the representative or canonical URL of a piece of content. From a group of identical or nearly identical URLs, this is the version that search engines treat as the main page for indexing and ranking.
Once you understand that, canonicalization becomes much easier to visualize. Think of it as a three-step workflow.
How the canonicalization process works
Here’s how it plays out:
Search engines detect duplicate or similar URLs
Google groups URLs that return the same (or almost the same) content. These could come from:
URL parameters
HTTP vs. HTTPS versions
Desktop vs. mobile URLs
Filtered or sorted pages
Regional versions
Accidental duplicates like staging URLs
You signal which URL is canonical
You can guide search engines using canonical signals like:
The rel="canonical" tag
301 redirects
Internal links pointing to one preferred version
Consistent hreflang usage
XML sitemaps listing the preferred URL
HTTPS over HTTP
The strongest and clearest hint is the canonical tag placed in the head of the page.
Google selects one canonical URL
Google uses your signals, along with its own evaluation, to determine the primary URL. While Google typically follows canonical tags, it may override them if it detects stronger signals such as redirects, internal linking patterns, or user behaviour.
Once Google settles on the canonical URL, search engines will:
Consolidate link equity into the canonical page
Index the canonical URL
Treat all non-canonical URLs as duplicates
Reduce crawl waste
Avoid showing similar pages in search results
Canonical tags are a hint, not a directive. Google may still distribute link equity differently if it deems the canonical tag unreliable.
Reasons why canonicalization happens
Canonicalization becomes necessary when different URLs lead to the same content. Some common reasons are:
Region variants
For example, you have one product page for the USA and one for the UK, like: https://example.com/product/shoes-us and https://example.com/product/shoes-uk.
If the content is almost identical, use one canonical link or a clear regional setup to avoid confusion.
Pro tip: For regional variants, combine canonical tags with hreflang to specify language/region targeting.
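As a rough sketch of that combination, the US page from the example above could carry a self-referencing canonical plus hreflang alternates in its head (URLs reused from the example for illustration):

<link rel="canonical" href="https://example.com/product/shoes-us" />
<link rel="alternate" hreflang="en-us" href="https://example.com/product/shoes-us" />
<link rel="alternate" hreflang="en-gb" href="https://example.com/product/shoes-uk" />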
Device variants
When you serve separate URLs for mobile and desktop, such as: https://m.example.com/product/shoes and https://www.example.com/product/shoes.
Canonical tags help search engines understand which URL is the primary version.
Protocol variants
The same page is often reachable over more than one protocol, such as http://example.com/product/shoes and https://example.com/product/shoes. A canonical tag, ideally combined with a redirect to the HTTPS version, tells search engines which one to index.
Parameter variants
Sorting and filtering often create many URLs that show similar content, like:
https://example.com/shoes?sort=price or https://example.com/shoes?color=black&size=7
A single canonical URL, such as https://example.com/shoes, tells search engines which page should carry the main ranking signals.
Accidental duplicates
Maybe a staging or demo version of the site is left crawlable, or both https://example.com/page and https://example.com/page/ return the same content.
Canonical tags and proper URL canonicalization help avoid these unintentional duplicates.
Some duplicate content on a site is normal. The goal of canonicalization in SEO is not to eliminate every duplicate, but to show search engines which URL you want them to treat as the primary one.
In practical terms
In practice, canonicalization comes down to a few key things:
Placement
The canonical tag is placed in the head of the HTML, for example:
<link rel="canonical" href="https://www.example.com/preferred-page" />
Each page should have at most one canonical tag, and it should point to the clean, preferred canonical URL.
Identification
Search engines examine several signals to determine the canonical version of a page. The rel="canonical" tag is important, but they also consider 301 redirects, internal links, sitemaps, hreflang, and whether the page is served on HTTPS. When these signals are consistent, it is easier for Google to pick the right canonicalized URL.
Crawling and indexing
Once search engines understand which URL is canonical, they primarily crawl and index that version, folding duplicates into it. Link equity and other signals are consolidated to the canonical page, which improves stability in rankings and makes your canonical tag SEO setup more effective.
The main rule for canonicalization is simple: if multiple URLs display the same content, choose one, make it your canonical URL, and clearly signal that choice with a proper canonical tag.
Google’s John Mueller puts it simply: ‘I recommend doing this kind of self-referential rel=canonical because it really makes it clear for us which page you want to have indexed or what this URL should be when it’s indexed.’
And that’s exactly why canonical tags matter; they tell search engines which version of a page is the real one. This keeps your SEO signals clean and prevents your site from competing with itself.
They’re important because they:
Avoid duplicate content issues: Canonical tags inform Google which URL should be indexed, preventing similar or duplicate pages from confusing crawlers or diluting rankings
Consolidate link equity: Canonicalization works similarly to internal linking; both are techniques used to direct authority to the page that matters most. Instead of splitting ranking signals across duplicate URLs, all information is consolidated into a single canonical URL
Improve crawl efficiency: Search engines don’t waste time crawling unnecessary duplicate pages, which helps them discover your important content faster
Enhance user experience: Users land on the correct, up-to-date version of your page, not a filtered, parameterized, or accidental duplicate
Canonical tags are useful in many everyday SEO situations. Here are the most common scenarios where you’ll want to use a rel=canonical tag to signal your preferred URL.
URL versions
If your page loads under multiple URL formats, with or without “www,” HTTP vs. HTTPS, and with or without a trailing slash, search engines may index each version separately. A canonical tag helps you standardize the preferred version so Google doesn’t treat them as separate pages.
Duplicate content
Ecommerce sites, blogs with tag archives, and category-driven pages often generate duplicate or near-duplicate content by design. If the same product or article appears under multiple URLs (filters, parameters, tracking codes, etc.), canonical tags help Google understand which canonical URL is the authoritative one. This prevents cannibalization and protects your canonical SEO setup.
Syndicated content
If your content is republished on partner sites or aggregators, always use a canonical tag that points back to your original version. This ensures your page retains the ranking signals, not the syndicated copy, and search engines know exactly where the content was originally published.
If syndication partners don’t honor your canonical tag, consider using noindex or negotiating link attribution.
Paginated pages
Long lists or multi-page articles often create a chain of URLs like /page/2/, /page/3/, and so on. These pages contribute to the same topic but shouldn’t be indexed individually. Adding canonical tags to the paginated sequence (typically pointing to page 1 or a “view-all” version) helps consolidate indexing and keeps rankings focused on the primary page.
Pro tip: For paginated content, use self-referencing canonicals (each page points to itself) unless you have a ‘view-all’ page that loads quickly and is crawlable.
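Following that pro tip, page two of a series would simply declare itself as the canonical, for example:

<link rel="canonical" href="https://www.example.com/blog/page/2/" />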
Site migrations
When you change domains, restructure URLs, or move from HTTP to HTTPS, using consistent canonical tags helps reinforce which pages replace the old ones. It signals to search engines which canonicalized URL should inherit ranking power. During migrations, canonical tags act as a safety net to prevent duplicate versions from competing with each other.
How to implement URL canonicalization
URL canonicalization is all about giving search engines a clear signal about which version of a page is the preferred or canonical URL. You can implement it in several simple steps.
Using the rel=”canonical” tag
The most common way (as shown multiple times in this blog post) to set a canonical URL is by adding a rel="canonical" tag in the head section of your page. It looks like this:
<link rel="canonical" href="https://www.example.com/preferred-url" />
This tag tells search engines which URL should carry all ranking signals and appear in search results. Ensure that every duplicate or alternate version links to the same preferred URL, and that the canonical tag is consistent throughout the site.
You can also use rel="canonical" in HTTP headers for non-HTML content such as PDFs. This is helpful when you cannot place a tag in the page itself.
Pro tip: While supported for PDFs, Google may not always honor canonical HTTP headers. Use them in conjunction with other signals (e.g., sitemaps).
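For reference, a canonical sent as an HTTP header is a single Link line in the response for the duplicate file, pointing at the preferred URL (both URLs here are placeholders):

Link: <https://www.example.com/guides/canonical-urls/>; rel="canonical"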
Also, ensure the canonical tag is as close to the top of the head section as possible so that search engines can see it early. Each page should have only one canonical tag, and it should always point to a clean, accessible URL. Avoid mixing signals. The canonical URL, your internal links, and your sitemap entries should all match.
Setting a preferred domain in Google Search Console
Google lets you choose whether you prefer your URLs to appear with or without www. Setting this preference helps reinforce your canonical signals and prevents search engines from treating www and non-www versions as different URLs.
To set your preferred domain, open your property in Google Search Console, go to Settings, and choose the version you want to treat as your primary domain.
Redirects (301 redirects)
A 301 redirect is one of the strongest signals you can send. It permanently informs browsers and search engines that one URL has been redirected to another and that the new URL should be considered the canonical URL.
Use 301 redirects when:
You merge duplicate URLs
You change your site structure
You migrate to HTTPS
You want to consolidate link equity from outdated pages
The key difference: redirects replace the old URL entirely, while canonical tags suggest a preference without removing the duplicate.
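At the HTTP level, a 301 response for the old URL is just a status line plus a Location header pointing to the new address (URLs are placeholders); how you configure it depends on your server or CMS:

HTTP/1.1 301 Moved Permanently
Location: https://www.example.com/new-page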
With Yoast SEO Premium, you can manage redirects effortlessly right inside your WordPress dashboard. The built-in redirect manager feature of the SEO plugin helps you avoid unnecessary 404s and prevents visitors from landing on dead ends, keeping your site structure clean and your user experience smooth.
Additional canonicalization techniques
There are a few more ways to support your canonical setup.
XML sitemaps: Always include only canonical URLs in your sitemap. This helps search engines understand which URLs you want indexed
Hreflang annotations: For multi-language or multi-region sites, hreflang tags help search engines serve the correct regional version while still respecting your canonical preference
Link HTTP headers: For files like PDFs or other non-HTML content, using a rel="canonical" HTTP header helps you specify the preferred URL server-side
Each of these methods reinforces your canonical signals. When you use them together, search engines have a much clearer understanding of your canonicalized URLs.
Implementing canonicalization in WordPress with Yoast
Manually adding a rel="canonical" tag to the head of every duplicate page can be fiddly and error prone. You need to edit templates or theme files, keep tags consistent with your sitemap and internal linking, and remember special cases, such as PDFs or paginated series. Modifying site code and HTML is risky when you have numerous pages or multiple editors working on the site.
Yoast SEO makes this easier and safer. The plugin automatically generates sensible canonical URL tags for all your pages and templates, eliminating the need for manual theme file edits or code additions. You can still override that choice on a page-by-page basis in the Yoast SEO sidebar: open the post or page, go to Advanced, and paste the full canonical URL in the Canonical URL field, then save.
Automatic coverage: Yoast automatically adds canonical tags to pages and archives by default, which helps prevent many common duplicate content issues
Manual override: For special cases, use the Yoast sidebar > Advanced > Canonical URL field to set a custom canonical. This accepts full URLs and updates when you save the post
Edge cases handled: Yoast will not output a canonical tag on pages set to noindex, and it follows best practices for paginated series and archives
Developer options: If you need custom behavior, you can filter the canonical output programmatically using the wpseo_canonical filter or use Yoast’s developer API
Cross-domain and non-HTML: Yoast supports cross-site canonicals, and you can use rel="canonical" in HTTP headers for non-HTML files when needed
Both Yoast SEO and Yoast SEO Premium include canonical URL handling, and the Premium version adds extra automation and controls to streamline larger sites.
Canonical URLs may seem like a small technical detail, but they play a huge role in helping search engines understand your site. When Google finds multiple URLs displaying the same content, it must select one version to index. If you do not guide that choice, Google will make the decision on its own, and that choice is not always the version you intended. That can lead to split ranking signals, wasted crawl activity, and frustrating drops in visibility.
Using canonical URLs gives you back that control. It tells search engines which page is the primary version, which ones are duplicates, and where all authority signals should be directed. From filtering URLs to regional variants to accidental duplicates that slip through the cracks, canonicals keep everything tidy and predictable.
The good news is that canonicalization does not have to be complicated. A simple rel="canonical" tag, consistent URL handling, smart redirects, and clean sitemap signals are enough to prevent most issues. And if you are working in WordPress, Yoast SEO takes care of almost all of this automatically, so you can focus on creating content instead of wrestling with code.
At the end of the day, canonical URLs are about clarity. Show search engines the version that matters, remove the noise, and keep your authority consolidated in one place. When your signals are clear, your rankings have a solid foundation to grow.
Artificial intelligence is changing ecommerce so quickly that keeping up is daunting.
Consider the prominent payment processors, platforms, and marketplaces that collaborated with OpenAI and Perplexity in the past year.
Perplexity and Shopify in November 2024.
OpenAI with Shopify in April 2025.
Perplexity and PayPal in May 2025.
OpenAI and Shopify again in September 2025.
Perplexity and Stripe in September 2025.
OpenAI and Walmart in October 2025.
Perplexity and PayPal again in November 2025.
OpenAI and Target in November 2025.
Each of these partnerships and integrations pushes the industry toward various forms of AI search, AI-assisted shopping, and agentic commerce. Consumers will shop differently very soon.
Marketplaces
Many mid-market businesses will benefit.
Mark Simon, vice president of strategy at Celigo, an automation platform, told me recently that direct-to-consumer brands are now selling on the Walmart Marketplace and could greatly benefit if it pushes their products into the emerging AI shopping ecosystem.
Yet the product data feeds to those marketplaces, for even a moderate number of SKUs, work only when automated. And not all data-feed integrations are the same.
“There is definitely a way to obtain a competitive advantage,” said Simon. “If you choose a modern technique…a modern method [of integration], you can move quickly. You can shift to an automation-first approach.”
Simon’s perspective is notable given that Celigo is an infrastructure-as-a-service company that connects and automates business systems, including Walmart Marketplace integrations.
An automation-first mindset could help ecommerce businesses more broadly as the race to keep up with AI shopping intensifies.
Automation First
Imagine a repetitive but essential task, such as a workflow for creating AI-generated product descriptions. The workflow can start manually. A marketing specialist develops a prompt, pastes it into an AI, provides feedback on the output, re-generates it, and so on.
An automation-first mindset prioritizes how the workflow functions at scale. It seeks to make automation the default process for most operational, marketing, and business tasks.
For product descriptions, an automation would integrate the catalog, AI, and ecommerce platform. Once connected, it could run a series of tests to improve the output. When launched, the automation works at scale.
Getting Started
To implement an automation-first mindset:
Become proactive. Simon said it like this, “Instead of being reactive around everything that’s changing, think differently and become proactive.” Automate repetitive, time-sensitive, and error-prone operations from the start.
Invest the time. Building an automated process or workflow can take more up-front work and collaboration. Invest the time.
Build for multiple applications. Modern integrations and automations should be mostly agnostic toward companies and software tools. The integration that feeds data to the Walmart Marketplace should easily adapt to Amazon, eBay, and even Mercado Libre.
Find repeatable and scalable tasks. Automation, after all, is the idea of doing something over and over again. So design processes and workflows flexible enough to grow with the business.
Monitor outcomes consistently. Good automations should include feedback loops and regular reports, not a “set and forget” approach.
Adopt strategic alignment and common sense. Finally, automation first does not mean automation always. Ensure it makes sense for the business and passes a common-sense test.
Keeping Up
Given these characteristics, an automation-first mindset could help merchants:
Absorb rapid change.
Add operational margin and flexibility.
Absorbing change
If Walmart or any other marketplace alters how it ingests product data or modifies its discovery algorithm, a good integration takes those changes in stride.
Certainly change is inevitable, but automation makes adapting to it easier.
Operational margin
The automation-first mindset can create something akin to operational margin — the space and time needed to respond thoughtfully rather than reactively.
When the integrations, workflows, and connections run automatically and reliably, managers reclaim hours each week for revenue-generating projects instead of manual updates, error chasing, or feed maintenance.
It has started to get really wintry here in London over the last few days. The mornings are frosty, the wind is biting, and it’s already dark by the time I pick my kids up from school. The darkness in particular has got me thinking about vitamin D, a.k.a. the sunshine vitamin.
At a checkup a few years ago, a doctor told me I was deficient in vitamin D. But he wouldn’t write me a prescription for supplements, simply because, as he put it, everyone in the UK is deficient. Putting the entire population on vitamin D supplements would be too expensive for the country’s national health service, he told me.
But supplementation—whether covered by a health-care provider or not—can be important. As those of us living in the Northern Hemisphere spend fewer of our waking hours in sunlight, let’s consider the importance of vitamin D.
Yes, it is important for bone health. But recent research is also uncovering surprising new insights into how the vitamin might influence other parts of our bodies, including our immune systems and heart health.
Vitamin D was discovered just over 100 years ago, when health professionals were looking for ways to treat what was then called “the English disease.” Today, we know that rickets, a weakening of bones in children, is caused by vitamin D deficiency. And vitamin D is best known for its importance in bone health.
That’s because it helps our bodies absorb calcium. Our bones are continually being broken down and rebuilt, and they need calcium for that rebuilding process. Without enough calcium, bones can become weak and brittle. (Depressingly, rickets is still a global health issue, which is why there is global consensus that infants should receive a vitamin D supplement at least until they are one year old.)
In the decades since then, scientists have learned that vitamin D has effects beyond our bones. There’s some evidence to suggest, for example, that being deficient in vitamin D puts people at risk of high blood pressure. Daily or weekly supplements can help those individuals lower their blood pressure.
A vitamin D deficiency has also been linked to a greater risk of “cardiovascular events” like heart attacks, although it’s not clear whether supplements can reduce this risk; the evidence is pretty mixed.
Vitamin D appears to influence our immune health, too. Studies have found a link between low vitamin D levels and incidence of the common cold, for example. And other research has shown that vitamin D supplements can influence the way our genes make proteins that play important roles in the way our immune systems work.
We don’t yet know exactly how these relationships work, however. And, unfortunately, a recent study that assessed the results of 37 clinical trials found that overall, vitamin D supplements aren’t likely to stop you from getting an “acute respiratory infection.”
Other studies have linked vitamin D levels to mental health, pregnancy outcomes, and even how long people survive after a cancer diagnosis. It’s tantalizing to imagine that a cheap supplement could benefit so many aspects of our health.
But, as you might have gathered if you’ve got this far, we’re not quite there yet. The evidence on the effects of vitamin D supplementation for those various conditions is mixed at best.
In fairness to researchers, it can be difficult to run a randomized clinical trial for vitamin D supplements. That’s because most of us get the bulk of our vitamin D from sunlight. Our skin converts UVB rays into a form of the vitamin that our bodies can use. We get it in our diets, too, but not much. (The main sources are oily fish, egg yolks, mushrooms, and some fortified cereals and milk alternatives.)
The standard way to measure a person’s vitamin D status is to look at blood levels of 25-hydroxycholecalciferol (25(OH)D), which is formed when the liver metabolizes vitamin D. But not everyone can agree on what the “ideal” level is.
Even if everyone did agree on a figure, it isn’t obvious how much vitamin D a person would need to consume to reach this target, or how much sunlight exposure it would take. One complicating factor is that people respond to UV rays in different ways—a lot of that can depend on how much melanin is in your skin. Similarly, if you’re sitting down to a meal of oily fish and mushrooms and washing it down with a glass of fortified milk, it’s hard to know how much more you might need.
There is more consensus on the definition of vitamin D deficiency, though. (It’s a blood level below 30 nanomoles per liter, in case you were wondering.) And until we know more about what vitamin D is doing in our bodies, our focus should be on avoiding that.
For me, that means topping up with a supplement. The UK government advises everyone in the country to take a 10-microgram vitamin D supplement over autumn and winter. That advice doesn’t factor in my age, my blood levels, or the amount of melanin in my skin. But it’s all I’ve got for now.
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
We’re learning more about what vitamin D does to our bodies
At a checkup a few years ago, a doctor told me I was deficient in vitamin D. But he wouldn’t write me a prescription for supplements, simply because, as he put it, everyone in the UK is deficient. Putting the entire population on vitamin D supplements would be too expensive for the country’s national health service, he told me.
But supplementation—whether covered by a health-care provider or not—can be important. As those of us living in the Northern Hemisphere spend fewer of our waking hours in sunlight, let’s consider the importance of vitamin D. Read the full story.
—Jessica Hamzelou
This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.
If you’re interested in other stories from our biotech writers, check out some of their most recent work:
+ Advances in organs on chips, digital twins, and AI are ushering in a new era of research and drug development that could help put a stop to animal testing. Read the full story.
+ Scientists are creating the beginnings of bodies without sperm or eggs. How far should they be allowed to go? Read the full story.
+ This retina implant lets people with vision loss do a crossword puzzle. Read the full story.
Partying at one of Africa’s largest AI gatherings
It’s late August in Rwanda’s capital, Kigali, and people are filling a large hall at one of Africa’s biggest gatherings of minds in AI and machine learning. Deep Learning Indaba is an annual AI conference where Africans present their research and technologies they’ve built, mingling with friends as a giant screen blinks with videos created with generative AI.
The main “prize” for many attendees is to be hired by a tech company or accepted into a PhD program. But the organizers hope to see more homegrown ventures create opportunities within Africa. Read the full story.
—Abdullahi Tsanni
This story is from the latest print issue of MIT Technology Review magazine, which is full of fascinating stories. If you haven’t already, subscribe now to receive future issues once they land.
The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 Google’s new Nano Banana Pro generates convincing propaganda
The company’s latest image-generating AI model seems to have few guardrails. (The Verge)
+ Google wants its creations to be slicker than ever. (Wired $)
+ Google’s new Gemini 3 “vibe-codes” responses and comes with its own agent. (MIT Technology Review)
2 Taiwan says the US won’t punish it with high chip tariffs
In fact, official Wu Cheng-wen says Taiwan will help support the US chip industry in exchange for tariff relief. (FT $)
3 Mental health support is one of the most dangerous uses for chatbots
They fail to recognize psychiatric conditions and can miss critical warning signs. (WP $)
+ AI companies have stopped warning you that their chatbots aren’t doctors. (MIT Technology Review)
4 It costs an average of $17,121 to deport one person from the US
But in some cases it can cost much, much more. (Bloomberg $)
+ Another effort to track ICE raids was just taken offline. (MIT Technology Review)
5 Grok is telling users that Elon Musk is the world’s greatest lover
What’s it basing that on, exactly? (Rolling Stone $)
+ It also claims he’s fitter than basketball legend LeBron James. Sure. (The Guardian)
6 Who’s really in charge of US health policy?
RFK Jr. and FDA commissioner Marty Makary are reportedly at odds behind the scenes. (Vox)
+ Republicans are lightly pushing back on the CDC’s new stance on vaccines. (Politico)
+ Why anti-vaxxers are seeking to discredit Danish studies. (Bloomberg $)
+ Meet Jim O’Neill, the longevity enthusiast who is now RFK Jr.’s right-hand man. (MIT Technology Review)
7 Inequality is worsening in San Francisco
As billionaires thrive, hundreds of thousands of others are struggling to get by. (WP $)
+ A massive airship has been spotted floating over the city. (SF Gate)
8 Donald Trump is thrusting obscure meme-makers into the mainstream
He’s been reposting flattering AI-generated memes by the dozen. (NYT $)
+ MAGA YouTube stars are pushing a boom in politically charged ads. (Bloomberg $)
9 Moss spores survived nine months in space
And they could remain reproductively viable for another 15 years. (New Scientist $)
+ It suggests that some life on Earth has evolved to endure space conditions. (NBC News)
+ The quest to figure out farming on Mars. (MIT Technology Review)
10 Does AI really need a physical shape?
It doesn’t really matter—companies are rushing to give it one anyway. (The Atlantic $)
Quote of the day
“At some point you’ve got to wonder whether the bug is a feature.”
—Alexios Mantzarlis, director of the Security, Trust and Safety Initiative at Cornell Tech, ponders xAI and Grok’s proclivity for surfacing Elon Musk-friendly and/or far-right sources, the Washington Post reports.
One more thing
The AI lab waging a guerrilla war over exploitative AI
Back in 2022, the tech community was buzzing over image-generating AI models, such as Midjourney, Stable Diffusion, and OpenAI’s DALL-E 2, which could follow simple word prompts to depict fantasylands or whimsical chairs made of avocados.
But artists saw this technological wonder as a new kind of theft. They felt the models were effectively stealing and replacing their work.
Ben Zhao, a computer security researcher at the University of Chicago, was listening. He and his colleagues have built arguably the most prominent weapons in an artist’s arsenal against nonconsensual AI scraping: two tools called Glaze and Nightshade that add barely perceptible perturbations to an image’s pixels so that machine-learning models cannot read them properly.
But Zhao sees the tools as part of a battle to slowly tilt the balance of power from large corporations back to individual creators. Read the full story.
—Melissa Heikkilä
We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)
+ If you’re ever tempted to try and recreate a Jackson Pollock painting, maybe you’d be best leaving it to the kids.
+ Scientists have discovered that lions have not one, but two distinct types of roars.
+ The relentless rise of the quarter-zip must be stopped!
+ Pucker up: here’s a brief history of kissing.