The Download: the case for AI slop, and helping CRISPR fulfill its promise

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

How I learned to stop worrying and love AI slop

—Caiwei Chen

If I were to locate the moment AI slop broke through into popular consciousness, I’d pick the video of rabbits bouncing on a trampoline that went viral last summer. For many savvy internet users, myself included, it was the first time we were fooled by an AI video, and it ended up spawning a wave of almost identical generated clips.

My first reaction was that, broadly speaking, all of this sucked. That’s become a familiar refrain, in think pieces and at dinner parties. Everything online is slop now—the internet “enshittified,” with AI taking much of the blame. Initially, I largely agreed. But then friends started sharing AI clips in group chats that were compellingly weird, or funny. Some even had a grain of brilliance. 

I had to admit I didn’t fully understand what I was rejecting—what I found so objectionable. To try to get to the bottom of how I felt (and why), I spoke to the people making the videos, a company creating bespoke tools for creators, and experts who study how new media becomes culture. What I found convinced me that maybe generative AI will not end up ruining everything after all. Read the full story.

A new CRISPR startup is betting regulators will ease up on gene-editing

Here at MIT Technology Review we’ve been writing about the gene-editing technology CRISPR since 2013, calling it the biggest biotech breakthrough of the century. Yet so far, there’s been only one gene-editing drug approved, and it’s been used commercially on only about 40 patients, all with sickle-cell disease.

It’s becoming clear that the impact of CRISPR isn’t as big as we all hoped. In fact, there’s a pall of discouragement over the entire field—with some journalists saying the gene-editing revolution has “lost its mojo.”

So what will it take for CRISPR to help more people? A new startup says the answer could be an “umbrella approach” to testing and commercializing treatments, one that could avoid costly new trials or approvals for every new version. Read the full story.

—Antonio Regalado

America’s new dietary guidelines ignore decades of scientific research

The first days of 2026 have brought big news for health. On Wednesday, health secretary Robert F. Kennedy Jr. and his colleagues at the Departments of Health and Human Services and Agriculture unveiled new dietary guidelines for Americans. And they are causing a bit of a stir.

That’s partly because they recommend products like red meat, butter, and beef tallow—foods that have been linked to cardiovascular disease and that nutrition experts have long recommended people limit in their diets.

These guidelines are a big deal—they influence food assistance programs and school lunches, for example. Let’s take a look at the good, the bad, and the ugly advice being dished up to Americans by their government.

—Jessica Hamzelou

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Grok has switched off its image-generating function for most users
Following a global backlash against its sexualized pictures of women and children. (The Guardian)
+ Elon Musk has previously lamented the “guardrails” around the chatbot. (CNN)
+ xAI has been burning through cash lately. (Bloomberg $)

2 Online sleuths tried to use AI to unmask the ICE agent who killed a woman
The problem is, its results are far from reliable. (WP $)
+ The Trump administration is pushing videos of the incident filmed from a specific angle. (The Verge)
+ Minneapolis is struggling to make sense of the shooting of Renee Nicole Good. (WSJ $)

3 Smartphones and PCs are about to get more expensive
You can thank the memory chip shortage sparked by the AI data center boom. (FT $)
+ Expect delays alongside those price rises, too. (Economist $)

4 NASA is bringing four of the seven ISS crew members back to Earth
It’s not clear exactly why, but it said one of them experienced a “medical situation” earlier this week. (Ars Technica)

5 The vast majority of humanoid robots shipped last year were from China
The country is dominating early supply for the bipedal machines. (Bloomberg $)
+ Why a Chinese robot vacuum firm is moving into EVs. (Wired $)
+ China’s EV giants are betting big on humanoid robots. (MIT Technology Review)

6 New Jersey has banned students’ phones in schools
It’s the latest in a long line of states to restrict devices during school hours. (NYT $)

7 Are AI coding assistants getting worse?
This data scientist certainly seems to think so. (IEEE Spectrum)
+ AI coding is now everywhere. But not everyone is convinced. (MIT Technology Review)

8 How to save wine from wildfires 🍇
Smoke leaves the alcohol with an ashy taste, but a group of scientists is working on a solution. (New Yorker $)

9 Celebrity Letterboxd accounts are good fun
Unsurprisingly, a subset of web users have chosen to hound them. (NY Mag $)

10 Craigslist refuses to die
The old-school classifieds corner of the web still has a legion of diehard fans. (Wired $)

Quote of the day

“Tools like Grok now risk bringing sexual AI imagery of children into the mainstream. The harms are rippling out.”

—Ngaire Alexander, head of the Internet Watch Foundation’s reporting hotline, explains the dangers around low-moderation AI tools like Grok to the Wall Street Journal.

One more thing

How to measure the returns on R&D spending

Given the draconian cuts to US federal funding for science, it’s worth asking some hard-nosed money questions: How much should we be spending on R&D? How much value do we get out of such investments, anyway?

To answer that, economists have approached this issue in clever new ways in several recent papers. And, though they ask slightly different questions, their conclusions share a bottom line: R&D is, in fact, one of the better long-term investments the government can make. Read the full story.

—David Rotman

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Bruno Mars is back, baby!
+ Hmm, interesting: Apple’s new Widow’s Bay show is inspired by both Stephen King and Donald Glover, which is an intriguing combination.
+ Give this man control of the new Lego AI bricks!
+ An Iron Age war trumpet recently uncovered in Britain is the most complete example discovered anywhere in the world.

Ecommerce Success with Fractional Talent

In his previous appearance on the podcast, in November 2022, Jai Dolwani was the CMO of a wine subscription company that would shortly declare bankruptcy. It was his third struggling startup, he says, prompting “serious self-reflection.”

The result? He pivoted to entrepreneurship and launched The Starters, a marketplace for fractional ecommerce talent, in late 2023. The company has thrived, having attracted more than 600 freelancers and 500 client brands.

In our recent conversation, Jai addressed the demand for ecommerce talent, tips for hiring freelancers, and plans for 2026 and beyond.

The full audio of our conversation is embedded below. The transcript is edited for length and clarity.

Eric Bandholz: Tell us what you do.

Jai Dolwani: I’m the founder of The Starters, a company that helps ecommerce brands access top-tier fractional talent through a curated, vetted marketplace. We connect brands with experienced marketers, creatives, and technologists — professionals who’ve helped build some of the world’s best companies — so they can hire flexibly and efficiently.

Before that, I was CMO at Winc, a wine subscription company. We had recently gone public, but behind the scenes, the business struggled. A few weeks after my last conversation with you, Winc declared bankruptcy and was later acquired. I was offered a role, but it marked my third startup in a row to fail financially, prompting some serious self-reflection.

I questioned whether it was bad luck or my own shortcomings. Ultimately, I decided I needed full ownership of outcomes and became an entrepreneur. I bootstrapped the business with a $5,000 personal investment, and it’s grown steadily since. My mission now is twofold: help freelancers find meaningful, flexible work and help ecommerce brands build lean, efficient, and profitable organizations.

Bandholz: You’re building a two-sided marketplace. How did you attract talent early and create traction?

Dolwani: From day one, my philosophy was to attract the best talent first, and brands will follow. Brands are always on the hunt for top talent. To do that, we built what I believe is the most talent-friendly marketplace in the industry.

We don’t charge freelancers a commission. Unlike platforms such as Upwork or Fiverr, 100% of what freelancers bill goes directly to them. We offer private profiles. Many top performers already have jobs and don’t want a public marketplace profile visible to employers, so access is limited to vetted brands. We also avoid the race to the bottom. Most marketplaces become transactional and price-driven, with median rates of $10 to $15 per hour. We stay highly curated and vetted, focusing on strategic fit over cost.

As a result, talent on our platform competes on expertise, not price. The median rate is about $90 per hour, reflecting quality and outcomes. We’ve attracted executives from nine-figure companies and world-class specialists who would never join typical freelance platforms.

Early on, it was hard — sourcing and onboarding the first freelancers myself. Over time, strong experiences generated word-of-mouth from freelancers and brands.

Bandholz: If talent keeps 100%, how does your business make money?

Dolwani: We monetize exclusively on the brand side. Brands pay $295 per month to access the platform and hire talent through us. We charge the fee upfront, before they can view or contact freelancers, which ensures high intent. If a brand isn’t satisfied or we can’t meet its needs, we refund the fee.

This model is simple and fair. We’re not trying to build a billion-dollar, venture-backed company. We’re bootstrapped, and the pricing reflects the value we provide while allowing freelancers to keep 100% of their rates.

Payments to freelancers go through Stripe. We never touch that money — funds move directly from the brand to the freelancer. We orchestrate the experience and facilitate the connection, while the working relationship remains direct.

Most of our clients are ecommerce brands with $1 million to $10 million in revenue that need support but aren’t ready for full-time hires, though we also help pre-launch and nine-figure companies.

Bandholz: How do brands work with talent on your platform, and what types of expertise can they access?

Dolwani: Engagements are flexible and depend on what the brand and freelancer agree on, but I recommend a clear progression. Start with a small, capped hourly trial — around five hours — to evaluate quality. If the work doesn’t impress you immediately, it likely won’t improve. After that, work together for one to two months on an hourly basis, with a cap, to understand the output, speed, and communication.

Once expectations are clear and things are working well, shifting to a monthly retainer makes sense. Retainers reduce time tracking and keep everyone focused on outcomes rather than hours.

We now have just over 600 vetted professionals on the platform. Our core strengths are marketing, creative, and technology — everything from media buyers and fractional CMOs to designers, creative directors, Shopify developers, and heads of data and analytics. We’ve recently expanded into operations, product development, supply chain, finance, and retail expansion for consumer packaged goods brands, with more growth planned for 2026.

We’ve succeeded at our initial goal of helping ecommerce brands access better fractional talent than they’d find elsewhere. But the world is changing. As AI reduces the need for human execution, the value shifts away from task completion toward specialized knowledge and better decision-making.

Companies will win because they have deeper human insight guiding strategy. That’s where I see us going. Beyond a freelancer marketplace, we’re building a home for ecommerce expertise.

That means making knowledge accessible through courses, guides, webinars, live Q&As, consulting calls, and ongoing advisory relationships — hiring being just one option. Long-term, we hope The Starters becomes a destination for ecommerce brands to build their “human advantage” to create differentiated strategies and win.

Bandholz: Where can people follow you? Hire some freelancers?

Dolwani: Our site is Hirethestarters.com. I’m on X and LinkedIn.

Paid Media Marketing: 8 Changes Marketers Should Make In 2026 via @sejournal, @brookeosmundson

Paid media didn’t slow down last year. If anything, the platforms made sure we stayed busy.

Google rolled out more AI-assisted ad creation features, new Performance Max reporting updates, and continued refining how AI-influenced results shape visibility across search.

Microsoft pushed forward with its own set of AI tools inside Ads and Copilot, along with quality updates that changed how some advertisers measure performance. Meta expanded Advantage+ capabilities and tightened its recommendations for creative structure.

We also saw strong momentum from platforms that used to sit on the sidelines. TikTok introduced more search-focused ad placements. Reddit continued improving its targeting and creative tools.

Privacy shifts kept moving as well. Targeting options continued evolving, and some long-standing measurement assumptions started to feel less reliable. Marketers had to adjust how they test, track, and validate results across every channel.

As we head into 2026, the message is familiar but still true. You can’t always rely on what worked a year ago, and you can’t assume the platforms will keep things the same. This list focuses on the changes that matter most right now. These are practical adjustments that help teams stay competitive without rebuilding everything from scratch.

Let’s walk through the strategies worth prioritizing this year and why they deserve your attention.

1. Embrace The Shift To Conversational AI In Ad Creation

Conversational AI tools like Google’s Gemini and Microsoft’s Copilot enable ad creation and optimization in a more fluid, interactive way.

They’re becoming essential for marketers who want to scale ad variations without exhausting creative resources.

If you’re looking to test and scale how this can work for you, start small with AI-generated ad copy tests. Use the conversational AI tools within the Google Ads platform to create a few new ad variations that differ from your standard copy.

For instance, if your current ads are heavily CTA-focused, let the AI suggest more storytelling or benefits-driven language and test these versions in a limited campaign to gauge performance.

Another tip is to start experimenting with ad personalization at scale. AI tools allow you to input audience insights, such as location or interests, to create tailored ad variations.

Create segmented ads that appeal to different demographics or psychographics and use split testing to identify which approach resonates best.
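Before declaring a winner from a split test, it helps to check that the difference in click-through rate isn’t just noise. Here is a minimal sketch of a two-proportion z-test in Python; the click and impression counts are hypothetical, and this is a generic statistical check, not any ad platform’s built-in methodology:

```python
from math import sqrt
from statistics import NormalDist

def ctr_z_test(clicks_a, impressions_a, clicks_b, impressions_b):
    """Two-proportion z-test: is variant B's CTR significantly different from A's?"""
    p_a = clicks_a / impressions_a
    p_b = clicks_b / impressions_b
    # Pooled proportion under the null hypothesis (no difference between variants)
    p_pool = (clicks_a + clicks_b) / (impressions_a + impressions_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / impressions_a + 1 / impressions_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-tailed
    return p_a, p_b, z, p_value

# Hypothetical results: CTA-focused copy (A) vs. storytelling copy (B)
p_a, p_b, z, p = ctr_z_test(clicks_a=120, impressions_a=10_000,
                            clicks_b=165, impressions_b=10_000)
print(f"CTR A={p_a:.2%}, CTR B={p_b:.2%}, z={z:.2f}, p={p:.4f}")
```

With these hypothetical numbers the p-value comes out well under 0.05, so the storytelling variant’s higher CTR would be unlikely to be chance alone; with smaller samples, the same CTR gap often isn’t significant.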

Lastly, whenever you’re using AI-generated content, make sure to set aside time to review those suggestions monthly. Take note of recurring suggestions that could highlight hidden opportunities or adjustments you may not have initially considered.

2. Refine Ad Targeting With Data Privacy In Mind

With third-party cookies growing ever less reliable, the year ahead calls for refined targeting strategies that balance effectiveness with privacy.

Tools like Google’s enhanced privacy features and Microsoft’s predictive audience segmentation help ensure you’re reaching the right users in a compliant way.

Now’s the time to develop a robust first-party data strategy. Start by auditing your first-party data to identify gaps and potential sources for future data.

You can also utilize your customer relationship management (CRM) tools and website data collection to capture behavior-based insights and create audience segments you own.

Additionally, because both Google and Microsoft allow Customer Match solutions, it’s a great time to review those policies.

Use tools like cookie consent managers and transparency banners to build trust and ensure you’re gathering data responsibly. Without proper consent, you risk losing access to the first-party data solutions the ad platforms offer.

When creating a consent-based tracking strategy, it’s also a good idea to proactively share with users how you use their data and offer clear opt-out options. Transparency is key to the buyer-seller relationship.

3. Optimize For AI-Driven Search Ad Placements

AI-generated search summaries, especially in Google’s AI Overviews, are creating new ad placements and impacting traditional ad performance. This trend requires close monitoring and proactive adjustments to stay competitive.

As these new ad placements continue to roll out, here are a few tips to make sure your PPC ads are optimized for this new wave of AI content.

  • Monitor CTRs On AI-Influenced Placements: Start tracking the click-through rates of ads appearing in AI-generated results versus traditional SERPs. This insight can help you understand whether AI-generated placements impact user engagement and identify areas for improvement.
  • Create Specialized Assets For AI Overviews: Use images, headlines, and descriptions designed for short attention spans. For instance, include a compelling image and a clear, concise CTA in your ad to boost appeal in this new placement.
  • Review Performance Max Insights Regularly: Google’s Performance Max campaigns, which include AI-driven placements, provide insights into what combinations work best across channels. Use this data to refine ads in other campaigns where similar placements are available.

4. Lean Into Multi-Channel Campaign Integration

With consumers using multiple platforms interchangeably, paid media strategies must embrace an integrated, omni-channel approach.

Platforms like TikTok and Reddit have built out more robust ad offerings, providing marketers with more cross-platform synergy.

Start by mapping out a cross-platform customer journey. Outline your audience’s touchpoints across different platforms.

For instance, if your customer typically discovers products on TikTok but purchases through Google Shopping, ensure you’re present and active on both channels with consistent messaging.

Another item to keep in mind is utilizing platform-specific metrics to refine your strategy.

Each platform has unique engagement metrics. For example, on TikTok, you can monitor completion rates and engagement (likes, comments) to assess content effectiveness.

LinkedIn, on the other hand, is a place to focus on connection and message response rates.

Tailor your content based on what performs best on each channel. Each channel should have its own content strategy, rather than the same ads pushed across every platform in the hope that one of them clicks with a user.

5. Optimize Creative Customization With AI Image Editing

AI-powered image editing allows for rapid customization across visuals, which is critical for multi-audience campaigns.

Canva’s integration with Google Workspace and Microsoft’s AI image generator simplifies the creative process, enabling customization without extensive design resources.

To make the most of these AI editors and integrations, start with creating templates for faster customization.

Design or download templates on Canva that match your brand guidelines, making it easy to adjust colors, fonts, and messages for different audiences with minimal effort.

The templates can help you maintain visual consistency while catering to different segments.

To take it up a notch, try running A/B tests on custom visuals. Create two or more variations of AI-edited images to test different elements.

When testing creative, make sure to test differences that are noticeable enough. Track which visual styles drive the most engagement, and use those insights to guide future designs.

If you’re targeting multiple locations in your ads, use AI tools to adjust visuals for regional appeal.

For example, if you’re running an ad in New York and California, you can use AI to create images that feature landmarks or seasonal elements relevant to each location.

6. Enhance Attribution Tracking And Adjust KPIs Accordingly

A multi-device world demands better attribution tracking to understand the entire customer journey.

Google’s Enhanced Conversions and Microsoft’s Customer Insights provide more reliable data across touchpoints, helping marketers adjust KPIs to reflect complex engagement patterns.

To start, review enhanced conversions for first-party tracking to determine if this makes sense for your account.

Enhanced Conversions capture data from form fills or purchases to match offline actions back to Google Ads. When setting this up, make sure your campaigns reflect actual conversions, not just clicks, allowing for more accurate reporting.

Additionally, if you’re still using last-click attribution models, you will be left in the dust.

It’s time to move beyond last-click attribution to track the impact of each customer touchpoint. You can use Google Analytics or Microsoft’s attribution reports to assess the role of each ad in a customer’s journey, and allocate credit accordingly.
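The credit-splitting logic behind these attribution models is simple to reason about. Below is a minimal Python sketch of three common heuristics (last-click, linear, and position-based); the journey data and channel names are hypothetical, and real platforms use their own, often data-driven, models:

```python
def assign_credit(touchpoints, model="linear"):
    """Distribute one conversion's worth of credit across channel touchpoints.

    Common heuristic models (not any specific platform's algorithm):
      last_click - all credit to the final touchpoint
      linear     - equal credit to every touchpoint
      position   - 40% first, 40% last, 20% split among the middle
    """
    n = len(touchpoints)
    credit = {ch: 0.0 for ch in touchpoints}
    if model == "last_click":
        credit[touchpoints[-1]] += 1.0
    elif model == "linear":
        for ch in touchpoints:
            credit[ch] += 1.0 / n
    elif model == "position":
        if n == 1:
            credit[touchpoints[0]] += 1.0
        elif n == 2:
            credit[touchpoints[0]] += 0.5
            credit[touchpoints[-1]] += 0.5
        else:
            credit[touchpoints[0]] += 0.4
            credit[touchpoints[-1]] += 0.4
            for ch in touchpoints[1:-1]:
                credit[ch] += 0.2 / (n - 2)
    return credit

# Hypothetical journey: discovery on TikTok, two searches, an email in between
journey = ["tiktok", "google_search", "email", "google_search"]
print(assign_credit(journey, "last_click"))  # tiktok and email get zero credit
print(assign_credit(journey, "linear"))      # each of the four touches gets 0.25
```

The contrast is the point: under last-click, TikTok’s discovery touch gets nothing, while linear and position-based models surface its contribution.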

Lastly, when it comes to measurement, it’s time to evolve your key performance indicators (KPIs). Not every channel in your marketing mix should be measured by direct purchases.

Just last year, in North America, the average person owned 13 devices – a 63% increase from 2018.

Users leverage multiple devices during their purchase journey, accounting for more visits but fewer conversions. No wonder conversion rates are decreasing!

For example, if you’re running a brand awareness campaign on TikTok for an audience who’s never heard of you, your KPIs should not be measuring purchases.

Track meaningful metrics like engagement rates, increase in branded search queries, or time on site to understand how those platforms contribute to long-term brand growth and loyalty.

7. Make Influencers Part Of Your Marketing Model

Influencer marketing still has value. But the rules have changed. What used to feel like a side bet now needs to operate with the same discipline you apply to any other channel.

One of the biggest shifts in 2025 was the rollout of Creator Partnerships inside Google Ads. The new tool lets brands find YouTube creators who already mention or align with their products, request to link their content directly in Ads, and then promote that content as ad assets.

That matters because it addresses many of the traditional challenges of influencer marketing.

Brands no longer have to manage a separate workflow or use external tools to run creator campaigns. Everything can be done natively inside Google Ads. Finding creators, getting permission, promoting videos, building remarketing audiences, and tracking performance – it all happens in the same place as your other media.

This integration changes what influencer marketing should be. Instead of treating creator content as a loose “boost,” treat it as another media channel that you plan, test, track, and optimize.

When you find a creator whose audience overlaps yours, link their video, promote it via “Partnership Ads,” and compare performance against other video or display placements. Use the same ROI expectations, the same reporting discipline, the same budget scrutiny.

That does not mean every influencer partnership needs to run through Creator Partnerships. But for brands that want to take creator content seriously, this is now the clearest path forward.

Influencer marketing can still introduce your brand to new audiences, but only if it becomes part of a broader, data-driven media mix rather than a side experiment.

8. Invest In Brand-Owned And Emerging Media Channels

Paid platforms can shift without much warning, which is why brands need more stability built into their mix. That stability comes from channels you control and channels that offer predictable reach without relying entirely on algorithm changes.

Brand-owned channels like email, SMS, and your CRM audience lists continue to grow in value as privacy rules tighten. These channels help you stay connected with people who have already shown interest, and they support every other part of your media strategy. When your first-party data is strong, your targeting improves across search, social, and display.

At the same time, emerging media channels are becoming easier to test and measure.

Connected TV, podcasts, retail media networks, and social commerce have grown into meaningful sources of reach and intent. Many brands are now seeing that a small, well-planned investment in these channels helps lift branded search, engagement rates, and assisted conversions across their entire account.

You do not need to adopt every new channel. You only need to choose a few that match your audience and test them with clear goals.

Look for indicators like uplift in search demand, stronger remarketing pools, or improvements in cross-channel efficiency. When these channels support your paid campaigns, they earn a long-term place in your strategy.

The brands that put effort into these areas now will be less dependent on any single platform. They will also see more consistent performance as auctions change, costs fluctuate, and targeting evolves throughout the year.

Your 2026 Plan Should Be Evolving

Paid media will keep shifting this year, but the path forward does not need to feel overwhelming.

The changes outlined above reflect what marketers are running into every day across search, social, retail media, and emerging channels.

None of these updates requires a complete rebuild. They simply call for a more intentional approach to testing, measurement, creative, and channel mix.

The advertisers who stay close to the data, spend time understanding how each platform is evolving, and make steady adjustments will see the most consistent results. The year ahead is less about chasing every new feature and more about choosing the changes that actually strengthen performance.

If you focus on the areas that matter, you’ll be in a strong position to keep improving your campaigns as the platforms continue to evolve.


SEO Pulse: Core Update Favors Niche Expertise, AIO Health Inaccuracies & AI Slop via @sejournal, @MattGSouthern

Welcome to this week’s Pulse: updates on rankings from December’s core update, platform responses to AI quality issues, and disputes that reveal tensions in AI-generated health information.

Early analysis of Google’s December core update suggests specialized sites gained visibility in several shared examples. Microsoft and Google executives reframed criticism of AI quality. The Guardian reported concerns about health-related AI Overviews, and Google pushed back on aspects of the testing.

Here’s what matters for you and your work.

December Core Update Favors Specialists Over Generalists

Early analysis of Google’s December core update suggests specialized sites gained visibility in examples shared across publishing, ecommerce, and SaaS.

Key facts: Aleyda Solís’s analysis found sites with narrower, category-specific strength appear to be gaining ground on “best of” and mid-funnel product terms.

In examples shared after the December 11-29 rollout, some publisher sites appeared to lose visibility on broader, top-of-funnel queries, while ecommerce and SaaS brands with direct category expertise appeared to outperform broader review sites and affiliate aggregators.

Why SEOs Should Pay Attention

This update highlights a trend where generalist sites face ranking pressure, especially on queries with commercial intent or specific domain knowledge. Sites covering multiple categories are affected by competition from dedicated category sites.

Google says improvements can take time to show up. Some changes can take effect in a few days, but it can take several months for its systems to confirm longer-term improvement. Google also says it makes smaller core updates that it doesn’t typically announce.

In the examples shared so far, specialization appears to outperform breadth when queries have specific intent.

What SEO Professionals Are Saying

Luke R., founder at Adexa.io, commented on LinkedIn:

“Specialists rise when search stops guessing and starts serving intent. These shifts reward brands that live one problem, one buyer.”

Ayesha Asif, social media manager and content strategist, wrote:

“Generalist pages used to win on authority, but now depth matters more than domain size.”

Thanos Lappas, founder at Datafunc, added:

“This feels like the beginning of a long-anticipated transition in how search evaluates relevance and expertise.”

In that thread, several commenters argued the update favors deep, category-specific content over broad coverage, and suggested domain authority mattered less than focused expertise in the examples being discussed.

Read our full coverage: December Core Update: More Brands Win “Best Of” Queries

Guardian Investigation Claims AI Overview Health Inaccuracies

The Guardian reported that health organizations and experts reviewed examples of AI Overviews for medical queries and raised concerns about inaccuracies. A Google spokesperson said the vast majority of AI Overviews are factual and helpful, and that Google continuously makes quality improvements.

Key facts: The Guardian said it tested health queries and shared AI Overview responses with health groups and experts for review. A Google spokesperson said many examples were “incomplete screenshots,” but added that the results linked “to well-known, reputable sources” and recommended seeking out expert advice.

Why SEOs Should Pay Attention

AI Overviews can appear at the top of results. When the topic is health, errors carry more weight. The Guardian’s reporting also highlights a practical problem. One charity leader told The Guardian the AI summary changed when repeating the same search, pulling from different sources. That can make verification harder.

Publishers have spent years investing in documented medical expertise to meet Google’s expectations around health content. This investigation puts the same spotlight on Google’s own summaries when they appear at the top of results.

What Health Organizations Are Saying

Sophie Randall, director of the Patient Information Forum, told The Guardian:

“Google’s AI Overviews can put inaccurate health information at the top of online searches, presenting a risk to people’s health.”

Anna Jewell, director of support, research, and influencing at Pancreatic Cancer UK, stated:

“If someone followed what the search result told them, they might not take in enough calories … and be unable to tolerate either chemotherapy or potentially life-saving surgery.”

The reactions reveal two concerns. First, that even when AI Overviews link to trusted sources, the summary itself can override that trust by presenting confident but incorrect guidance. Second, some reactions framed Google’s response as addressing individual examples without explaining how these errors happen or how often they occur.

Read our full coverage: Guardian Investigation: AI Overviews Health Accuracy

Microsoft CEO And Google Engineer Reframe AI Quality Criticism

Within one week, Microsoft CEO Satya Nadella published a blog post asking the industry to “get beyond the arguments of slop vs. sophistication,” while Google Principal Engineer Jaana Dogan posted that people are “only anti new tech when they are burned out from trying new tech.”

Key facts: Nadella’s blog post characterized AI as “cognitive amplifier tools” and called for “a new equilibrium” that accounts for humans having these tools. Dogan’s X post framed anti-AI sentiment as burnout from trying new technology. In replies, some people pointed to forced integrations, costs, privacy concerns, and tools that feel less reliable in day-to-day workflows. The timing follows Merriam-Webster naming “slop” its 2025 Word of the Year.

Why SEOs Should Pay Attention

Some readers may interpret these statements as an attempt to move the conversation away from output quality and toward user expectations. When people are urged to move past “slop vs. sophistication” or describe criticism as burnout, the conversation can drift away from accuracy, reliability, and the economic impact on publishers.

The practical concern is how these companies respond to user feedback versus how they frame criticism. Keep an eye out for more messaging that frames AI criticism as a user issue rather than a product- and economics-related one.

What Industry Observers Are Saying

Jez Corden, managing editor at Windows Central, wrote that Nadella’s framing of AI as a “scaffolding for human potential” felt “either naively utopic, or at worse, wilfully dishonest.”

Tom Warren, senior editor at The Verge, wrote on Bluesky that Nadella wants everyone to move beyond the arguments about AI slop, calling 2026 a “pivotal year for AI.”

The commentary reveals a gap between executive messaging about AI as a transformative technology and a user experience of AI products that many find inconsistent or forced. Some reactions suggested the request drew more attention to the term.

Read our full coverage: Microsoft CEO, Google Engineer Deflect AI Quality Complaints

Theme Of The Week: Competing Standards

Each story this week reveals a tension between the quality standards applied to publishers and those applied to platforms’ own AI systems.

In the examples highlighted, the December core update appears to put more weight on category expertise than on broad coverage. The Guardian investigation questions whether AI Overviews meet the accuracy bar Google sets for health content. The Nadella messaging attempts to reframe quality concerns as user adjustment problems rather than product issues.

In short, websites are held to exacting quality standards, while platforms defend their own AI summaries when their accuracy is questioned.


Featured Image: Accogliente Design/Shutterstock

PPC Pulse: Reddit Max Campaigns, Google Creator & Microsoft Targeting Updates via @sejournal, @brookeosmundson

Welcome to the first PPC Pulse of 2026! In this week’s update, Reddit introduces a new automated campaign type, Google expands its Creator Partnerships beta, and Microsoft announces new data-driven targeting capabilities.

Reddit launched Max Campaigns, an automated campaign format designed to simplify setup and expand reach across its ad inventory.

Google rolled out updates to its Creator Partnerships beta, adding creator search and centralized inquiry management inside Google Ads.

Microsoft announced a new partnership with Publicis Media Exchange and Epsilon, bringing Epsilon audience data directly into the Microsoft Advertising platform.

Read on for more details and why they matter for advertisers.

Reddit Ads Introduces Max Campaigns

Reddit has officially introduced Max Campaigns, its new automated campaign type designed to simplify setup and expand reach across the platform’s inventory.

Max Campaigns automate targeting, bidding, and ad delivery with the goal of driving conversions at scale. Advertisers provide a few core inputs, including budget, creative, and optimization goals, and Reddit’s system handles the rest.

If this sounds familiar, it should. Max Campaigns mirror the broader industry shift toward automation-first buying, similar to Google’s Performance Max or Meta’s Advantage+ formats.

What’s notable is the timing. Reddit has spent the last year improving its ad infrastructure, creative formats, and targeting capabilities. Max Campaigns feel like the next logical step in pushing advertisers toward a more consolidated buying experience.

Why This Matters For Advertisers

For advertisers already testing Reddit, Max Campaigns lower the barrier to scaling spend without building complex campaign structures.

This matters most for teams that have struggled with Reddit’s historically manual setup. Instead of managing multiple ad groups or niche targeting layers, Max Campaigns encourage a broader approach that lets Reddit’s system identify where conversions actually come from.

That said, this is not a “set it and forget it” situation.

Advertisers should expect trade-offs, as with any other automated campaign type.

Automation reduces setup friction, but it also limits visibility and control. Early testers will need to pay close attention to conversion quality, placement mix, and creative fatigue, especially since Reddit’s communities behave very differently from traditional social feeds.

The opportunity here is testing, not wholesale replacement. Max Campaigns make Reddit easier to experiment with, but they still need guardrails, realistic expectations, and clear success metrics.

Google Ads Expands Creator Partnership Beta

Google Ads quietly rolled out meaningful improvements to its Creator Partnerships beta, adding tools that make creator discovery and management far more usable for advertisers.

The update was first spotted by Thomas Eccel on LinkedIn.

Screenshot by author on LinkedIn, January 2026

The first update is Creator Search, which lets advertisers search for creators directly using keywords. This replaces the clunky browsing experience that made creator discovery feel disconnected from actual campaign goals.

Advertisers can now filter creators by:

  • Subscriber Count.
  • Average Views.
  • Location.
  • Preferred contact methods.

The second update is a Management Menu that centralizes creator inquiries. Advertisers can view creator names, statuses, subjects, response deadlines, and contact details in one place.

Why This Matters For Advertisers

Google is clearly positioning creators as a more integrated part of paid media strategy, not just a brand add-on.

For advertisers already running Demand Gen or YouTube campaigns, this update closes a workflow gap. Instead of managing creator outreach in spreadsheets or external tools, Google is pulling creator collaboration closer to the ad platform itself.

This also matters for efficiency. Teams can align creator selection more closely with campaign objectives, audience geography, and performance expectations.

It also signals where Google is headed. Creators remain one of the few formats that consistently earn attention instead of getting lost in a sea of generic ads. Google investing in better creator discovery suggests this channel will play a larger role in future campaign types.

The caveat is program maturity. This is still a beta. Measurement, attribution, and scalability remain open questions. Advertisers should approach this as a testing ground, not a replacement for established creator programs.

Microsoft Announces New Data-Driven Targeting Capabilities

Microsoft Advertising used CES to announce a new collaboration with Publicis Media Exchange and Epsilon that brings Epsilon data directly into the Microsoft Advertising platform.

The initiative, called Third-Party Search (3PS), allows Publicis Media clients to activate Epsilon’s identity and audience data across Microsoft’s search, native, and display inventory.

According to Microsoft, early pilots in the travel vertical showed strong results, including higher return on ad spend (ROAS) and access to net-new audiences that were not previously identifiable through standard in-market targeting.

The announcement reinforces Microsoft’s push toward identity-driven personalization while staying compliant with evolving privacy expectations.

Microsoft didn’t provide detail about which specific audience types are included or how advertisers can use these audiences in the platform, but I’m sure more detail will follow in the coming weeks or months.

Why This Matters For Advertisers

This update highlights Microsoft’s long-term strategy: differentiated data partnerships instead of pure scale competition with Google.

For large advertisers and agencies with access to Epsilon data, this unlocks more precise audience activation without relying solely on keyword intent. That’s especially valuable in verticals like travel, finance, and retail, where user intent is fragmented across devices and touchpoints.

It also reflects a broader shift away from traditional in-market audiences. As privacy constraints tighten, platforms are leaning on richer identity frameworks and curated data partnerships to maintain performance.

For advertisers not working with Publicis or Epsilon, this announcement still matters. It signals where Microsoft is investing and how future audience solutions may evolve.

Expect more emphasis on data interoperability, identity resolution, and partnerships that sit outside standard platform-owned audiences.

Theme Of The Week: Platforms Are Simplifying Entry, Not Strategy

This week’s updates all lower the barrier to getting started, but none of them remove the need for thoughtful decision-making.

Reddit’s Max Campaigns make it easier to launch and scale without building complex structures, but advertisers still have to define success, monitor conversion quality, and decide when broader delivery is actually working.

Google’s Creator Partnerships updates streamline discovery and outreach, but they do not solve measurement, creative fit, or long-term performance questions.

Microsoft’s data collaboration expands access to richer audiences, yet advertisers still need a clear plan for how those audiences fit into their overall targeting approach.

The common thread is access, not automation as a substitute for judgment.

As setup gets easier, the real differentiator becomes how clearly advertisers define what they want these systems to achieve, and how disciplined they are about evaluating results.


Featured Image: beast01/Shutterstock

What new legal challenges mean for the future of US offshore wind

For offshore wind power in the US, the new year is bringing new legal battles.

On December 22, the Trump administration announced it would pause the leases of five wind farms currently under construction off the US East Coast. Developers were ordered to stop work immediately.

The cited reason? National security, specifically concerns that turbines can cause radar interference. But that’s a known issue, and developers have worked with the government to deal with it for years.

Companies have been quick to file lawsuits, and the court battles could begin as soon as this week. Here’s what the latest kerfuffle might mean for the struggling offshore wind industry in the US.

This pause affects $25 billion in investment in five wind farms: Vineyard Wind 1 off Massachusetts, Revolution Wind off Rhode Island, Sunrise Wind and Empire Wind off New York, and Coastal Virginia Offshore Wind off Virginia. Together, those projects had been expected to create 10,000 jobs and power more than 2.5 million homes and businesses.

In a statement announcing the move, the Department of the Interior said that “recently completed classified reports” revealed national security risks, and that the pause would give the government time to work through concerns with developers. The statement specifically says that turbines can create radar interference (more on the technical details here in a moment).

Three of the companies involved have already filed lawsuits, and they’re seeking preliminary injunctions that would allow construction to continue. Orsted and Equinor (the developers for Revolution Wind and Empire Wind, respectively) told the New York Times that their projects went through lengthy federal reviews, which did address concerns about national security.

This is just the latest salvo from the Trump administration against offshore wind. On Trump’s first day in office, he signed an executive order stopping all new lease approvals for offshore wind farms. (That order was struck down by a judge in December.)

The administration previously ordered Revolution Wind to stop work last year, also citing national security concerns. A federal judge lifted the stop-work order weeks later, after the developer showed that the financial stakes were high, and that government agencies had previously found no national security issues with the project.

There are real challenges that wind farms introduce for radar systems, which are used in everything from air traffic control to weather forecasting to national defense operations. A wind turbine’s spinning can create complex signatures on radar, resulting in so-called clutter.

Previous government reports, including a 2024 report from the Department of Energy and a 2025 report from the Government Accountability Office (an independent government watchdog), have pointed out this issue.

“To date, no mitigation technology has been able to fully restore the technical performance of impacted radars,” as the DOE report puts it. However, there are techniques that can help, including software that acts to remove the signatures of wind turbines. (Think of this as similar to how noise-canceling headphones work, but more complicated, as one expert told TechCrunch.)

But the most widespread and helpful tactic, according to the DOE report, is collaboration between developers and the government. By working together to site and design wind farms strategically, the groups can ensure that the projects don’t interfere with government or military operations. The 2025 GAO report found that government officials, researchers, and offshore wind companies were collaborating effectively, and that any concerns could be raised and addressed in the permitting process.

This and other challenges threaten an industry that could be a major boon for the grid. On the East Coast, where these projects are located, and in New England specifically, winter can bring tight supplies of fossil fuels and spiking prices because of high demand. It just so happens that offshore winds blow strongest in the winter, so new projects, including the five wrapped up in this fight, could be a major help during the grid’s greatest time of need.

One 2025 study found that if 3.5 gigawatts’ worth of offshore wind had been operational during the 2024-2025 winter, it would have lowered energy prices by 11%. (That’s the combined capacity of Revolution Wind and Vineyard Wind, two of the paused projects, plus two future projects in the pipeline.) Ratepayers would have saved $400 million.

Before Donald Trump was elected, the energy consultancy BloombergNEF projected that the US would build 39 gigawatts of offshore wind by 2035. Today, that expectation has dropped to just 6 gigawatts. These legal battles could push it lower still.

What’s hardest to wrap my head around is that some of the projects being challenged are nearly finished. The developers of Revolution Wind have installed all the foundations and 58 of 65 turbines, and they say the project is over 87% complete. Empire Wind is over 60% done and is slated to deliver electricity to the grid next year.

To hit the pause button so close to the finish line is chilling, not just for current projects but for future offshore wind efforts in the US. Even if these legal battles clear up and more developers can technically enter the queue, why would they want to? Billions of dollars are at stake, and if there’s one word to describe the current state of the offshore wind industry in the US, it’s “unpredictable.”

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

Using unstructured data to fuel enterprise AI success

Enterprises are sitting on vast quantities of unstructured data, from call records and video footage to customer complaint histories and supply chain signals. Yet this invaluable business intelligence, estimated to make up as much as 90% of the data generated by organizations, has historically remained dormant because its unstructured nature makes analysis extremely difficult.

But if managed and centralized effectively, this messy and often voluminous data is not only a precious asset for training and optimizing next-generation AI systems, enhancing their accuracy, context, and adaptability, it can also deliver profound insights that drive real business outcomes.

A compelling example comes from the NBA’s Charlotte Hornets, who successfully leveraged untapped video footage of gameplay—previously too copious to watch and too unstructured to analyze—to identify a new competition-winning recruit. However, before that data could deliver results, analysts working for the team first had to overcome the critical challenge of preparing the raw, unstructured footage for interpretation.

The challenges of organizing and contextualizing unstructured data

Unstructured data presents inherent difficulties due to its widely varying format, quality, and reliability, requiring specialized tools like natural language processing and AI to make sense of it.

Every organization’s pool of unstructured data also contains domain-specific characteristics and terminology that generic AI models may not automatically understand. A financial services firm, for example, cannot simply use a general language model for fraud detection. Instead, it needs to adapt the model to understand regulatory language, transaction patterns, industry-specific risk indicators, and unique company context like data policies.

The challenge intensifies when integrating multiple data sources with varying structures and quality standards, as teams may struggle to distinguish valuable data from noise.

How computer vision gave the Charlotte Hornets an edge 

When the Charlotte Hornets set out to identify a new draft pick for their team, they turned to AI tools including computer vision to analyze raw game footage from smaller leagues, which sit outside the tiers of the game normally visible to NBA scouts and are therefore not as readily available for analysis.

“Computer vision is a tool that has existed for some time, but I think the applicability in this age of AI is increasing rapidly,” says Jordan Cealey, senior vice president at AI company Invisible Technologies, which worked with the Charlotte Hornets on this project. “You can now take data sources that you’ve never been able to consume, and provide an analytical layer that’s never existed before.”

By deploying a variety of computer vision techniques, including object and player tracking, movement pattern analysis, and geometric mapping of points on the court, the team was able to extract kinematic data, such as the coordinates of players during movement, and generate metrics like speed, acceleration, and explosiveness.

This provided the team with rich, data-driven insights about individual players, helping them identify and select a new draft pick whose skills and technique filled a hole in the Charlotte Hornets’ own capabilities. The chosen athlete went on to be named the most valuable player at the 2025 NBA Summer League and helped the team win their first summer championship title.

Annotation of a basketball match

Before data from game footage can be used, it needs to be labeled so the model can interpret it. The x and y coordinates of the individual players, seen here in bounding boxes, as well as other features in the scene, are annotated so the model can identify individuals and track their movements through time.
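The step from annotated coordinates to kinematic metrics can be made concrete with a small sketch. This is a hypothetical illustration, not Invisible’s actual pipeline: the bounding-box format, frame rate, and scale factor are all assumptions. Given one player’s annotated boxes across consecutive frames, a speed estimate falls out of the displacement of box centroids.

```python
# Illustrative sketch only: estimating a player's speed from annotated
# bounding boxes. Box format (x_min, y_min, x_max, y_max), the frame
# rate, and the unit scale are assumptions, not details of the real system.

FPS = 30  # assumed video frame rate


def centroid(box):
    """Center point (x, y) of a bounding box (x_min, y_min, x_max, y_max)."""
    x_min, y_min, x_max, y_max = box
    return ((x_min + x_max) / 2, (y_min + y_max) / 2)


def speeds(boxes_per_frame, meters_per_unit=1.0):
    """Per-frame speed estimates from the displacement of consecutive centroids."""
    centers = [centroid(b) for b in boxes_per_frame]
    result = []
    for (x0, y0), (x1, y1) in zip(centers, centers[1:]):
        dist = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 * meters_per_unit
        result.append(dist * FPS)  # distance per frame -> distance per second
    return result


# One player tracked over three frames, drifting 0.1 court units per frame:
track = [(0, 0, 2, 4), (0.1, 0, 2.1, 4), (0.2, 0, 2.2, 4)]
print(speeds(track))  # two per-frame speed estimates, each ~3.0 units/second
```

In a real pipeline, pixel coordinates would first be projected onto court geometry (the court-mapping step the article mentions) before distances mean anything in physical units.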

Taking AI pilot programs into production 

From this successful example, several lessons can be learned. First, unstructured data must be prepared for AI models through intuitive forms of collection and the right data pipelines and management records. “You can only utilize unstructured data once your structured data is consumable and ready for AI,” says Cealey. “You cannot just throw AI at a problem without doing the prep work.”

For many organizations, this might mean they need to find partners that offer the technical support to fine-tune models to the context of the business. The traditional technology consulting approach, in which an external vendor leads a digital transformation plan over a lengthy timeframe, is not fit for purpose here as AI is moving too fast and solutions need to be configured to a company’s current business reality. 

Forward-deployed engineers (FDEs) are an emerging partnership model better suited to the AI era. Initially popularized by Palantir, the FDE model connects product and engineering capabilities directly to the customer’s operational environment. FDEs work closely with customers on-site to understand the context behind a technology initiative before a solution is built. 

“We couldn’t do what we do without our FDEs,” says Cealey. “They go out and fine-tune the models, working with our human annotation team to generate a ground truth dataset that can be used to validate or improve the performance of the model in production.”

Second, data needs to be understood within its own context, which requires models to be carefully calibrated to the use case. “You can’t assume that an out-of-the-box computer vision model is going to give you better inventory management, for example, by taking that open source model and applying it to whatever your unstructured data feeds are,” says Cealey. “You need to fine-tune it so it gives you the data exports in the format you want and helps your aims. That’s where you start to see high-performative models that can then actually generate useful data insights.” 

For the Hornets, Invisible used five foundation models, which the team fine-tuned to context-specific data. This included teaching the models to understand that they were “looking at” a basketball court as opposed to, say, a football field; to understand how a game of basketball works differently from any other sport the model might have knowledge of (including how many players are on each team); and to spot rule events like “out of bounds.” Once fine-tuned, the models were able to capture subtle and complex visual scenarios, including highly accurate object detection, tracking, postures, and spatial mapping.

Lastly, while the AI technology mix available to companies changes by the day, they cannot eschew old-fashioned commercial metrics: clear goals. Without clarity on the business purpose, AI pilot programs can easily turn into open-ended, meandering research projects that prove expensive in terms of compute, data costs, and staffing. 

“The best engagements we have seen are when people know what they want,” Cealey observes. “The worst is when people say ‘we want AI’ but have no direction. In these situations, they are on an endless pursuit without a map.”

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

The Download: mimicking pregnancy’s first moments in a lab, and AI parameters explained

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Researchers are getting organoids pregnant with human embryos

At first glance, it looks like the start of a human pregnancy: A ball-shaped embryo presses into the lining of the uterus, then grips tight, burrowing in as the first tendrils of a future placenta appear. This is implantation—the moment that pregnancy officially begins.

Only none of it is happening inside a body. These images were captured in a Beijing laboratory, inside a microfluidic chip, as scientists watched the scene unfold.

In three recent papers published by Cell Press, scientists report what they call the most accurate efforts yet to mimic the first moments of pregnancy in the lab. They’ve taken human embryos from IVF centers and let them merge with “organoids” made of endometrial cells, which form the lining of the uterus. Read our story about their work, and what might come next.

—Antonio Regalado

LLMs contain a LOT of parameters. But what’s a parameter?

A large language model’s parameters are often said to be the dials and levers that control how it behaves. Think of a planet-size pinball machine that sends its balls pinging from one end to the other via billions of paddles and bumpers set just so. Tweak those settings and the balls will behave in a different way.  

OpenAI’s GPT-3, released in 2020, had 175 billion parameters. Google DeepMind’s latest LLM, Gemini 3, may have at least a trillion—some think it’s probably more like 7 trillion—but the company isn’t saying. (With competition now fierce, AI firms no longer share information about how their models are built.)

But the basics of what parameters are and how they make LLMs do the remarkable things that they do are the same across different models. Ever wondered what makes an LLM really tick—what’s behind the colorful pinball-machine metaphors? Let’s dive in.
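To make those “dials and levers” a little more concrete, here is a toy sketch (the layer sizes are invented for illustration and are not GPT-3’s real architecture): most of an LLM’s parameters live in weight matrices, and even one small fully connected layer accounts for over a million of them.

```python
# Toy illustration of where parameter counts come from. The layer sizes
# below are made up for illustration; they are not any real model's.

def linear_layer_params(n_in, n_out):
    """A fully connected layer has n_in * n_out weights plus one bias per output."""
    return n_in * n_out + n_out

# A tiny two-layer network: 512 -> 2048 -> 512
total = linear_layer_params(512, 2048) + linear_layer_params(2048, 512)
print(total)  # 2099712 -- about 2.1 million "dials," each set during training
```

Scale the widths into the tens of thousands and stack dozens of such layers, and billions of parameters arrive quickly.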

—Will Douglas Heaven

What new legal challenges mean for the future of US offshore wind

For offshore wind power in the US, the new year is bringing new legal battles.

On December 22, the Trump administration announced it would pause the leases of five wind farms currently under construction off the US East Coast. Developers were ordered to stop work immediately.

The cited reason? Concerns that turbines can cause radar interference. But that’s a known issue, and developers have worked with the government to deal with it for years.

Companies have been quick to file lawsuits, and the court battles could begin as soon as this week. Here’s what the latest kerfuffle might mean for the US’s struggling offshore wind industry.

—Casey Crownhart

This story is from The Spark, our weekly newsletter that explains the tech that could combat the climate crisis. Sign up to receive it in your inbox every Wednesday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Google and Character.AI have agreed to settle a lawsuit over a teenager’s death
It’s one of five lawsuits linked to young people’s deaths that the companies have settled this week. (NYT $)
+ AI companions are the final stage of digital addiction, and lawmakers are taking aim. (MIT Technology Review)

2 The Trump administration’s chief output is online trolling
Witness the Maduro memes. (The Atlantic $)

3 OpenAI has created a new ChatGPT Health feature 
It’s dedicated to analyzing medical results and answering health queries. (Axios)
+ AI chatbots fail to give adequate advice for most questions relating to women’s health. (New Scientist $)
+ AI companies have stopped warning you that their chatbots aren’t doctors. (MIT Technology Review)

4 Meta’s acquisition of Manus is being probed by China
Holding up the purchase gives it another bargaining chip in its dealings with the US. (CNBC)
+ What happened when we put Manus to the test. (MIT Technology Review)

5 China is building humanoid robot training centers
To address a major shortage of the data needed to make them more competent. (Rest of World)
+ The robot race is fueling a fight for training data. (MIT Technology Review)

6 AI still isn’t close to automating our jobs
The technology just fundamentally isn’t good enough yet. (WP $)

7 Weight regain seems to happen within two years of quitting the jabs
That’s the conclusion of a review of more than 40 studies. But dig into the details, and it’s not all bad news. (New Scientist $)

8 This Silicon Valley community is betting on algorithms to find love
Which feels like a bit of a fool’s errand. (NYT $)

9 Hearing aids are about to get really good
You can—of course—thank advances in AI. (IEEE Spectrum)

10 The first 100% AI-generated movie will hit our screens within three years
That’s according to Roku’s founder Anthony Wood. (Variety $)
+ How do AI models generate videos? (MIT Technology Review)

Quote of the day

“I’ve seen the video. Don’t believe this propaganda machine.”

—Minnesota’s governor Tim Walz responds on X to Homeland Security’s claim that ICE’s shooting of a woman in Minneapolis was justified.

One more thing

Inside the strange limbo facing millions of IVF embryos

Millions of embryos created through IVF sit frozen in time, stored in cryopreservation tanks around the world. The number is only growing thanks to advances in technology, the rising popularity of IVF, and improvements in its success rates.

At a basic level, an embryo is simply a tiny ball of a hundred or so cells. But unlike other types of body tissue, it holds the potential for life. Many argue that this endows embryos with a special moral status, one that requires special protections.

The problem is that no one can really agree on what that status is. So while these embryos persist in suspended animation, patients, clinicians, embryologists, and legislators must grapple with the essential question of what we should do with them. What do these embryos mean to us? Who should be responsible for them? Read the full story.

—Jessica Hamzelou

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ I love hearing about musicians’ favorite songs 🎶
+ Here are some top tips for making the most of travelling on your own.
+ Check out just some of the excellent-sounding new books due for publication this year.
+ I could play this spherical version of Snake forever (thanks Rachel!)

America’s new dietary guidelines ignore decades of scientific research

The new year has barely begun, but the first days of 2026 have brought big news for health. On Monday, the US’s federal health agency upended its recommendations for routine childhood vaccinations—a move that health associations worry puts children at unnecessary risk of preventable disease.

There was more news from the federal government on Wednesday, when health secretary Robert F. Kennedy Jr. and his colleagues at the Departments of Health and Human Services and Agriculture unveiled new dietary guidelines for Americans. And they are causing a bit of a stir.

That’s partly because they recommend products like red meat, butter, and beef tallow—foods that have been linked to cardiovascular disease, and that nutrition experts have been recommending people limit in their diets.

These guidelines are a big deal—they influence food assistance programs and school lunches, for example. So this week let’s look at the good, the bad, and the ugly advice being dished up to Americans by their government.

The government dietary guidelines have been around since the 1980s. They are updated every five years, in a process that typically involves a team of nutrition scientists who have combed over scientific research for years. That team will first publish its findings in a scientific report, and, around a year later, the finalized Dietary Guidelines for Americans are published.

The last guidelines covered the period 2020 to 2025, and new ones were expected in the summer of 2025. Work had already been underway for years; the scientific report intended to inform them was published back in 2024. But publication was delayed by last year’s government shutdown, Kennedy said at the time. The guidelines were finally published yesterday.

Nutrition experts had been waiting with bated breath. Nutrition science has evolved slightly over the last five years, and some were expecting to see new recommendations. Research now suggests, for example, that there is no “safe” level of alcohol consumption.

We are also beginning to learn more about health risks associated with some ultraprocessed foods (although we still don’t have a good understanding of what those risks might be, or even what counts as “ultraprocessed.”) And some scientists were expecting to see the new guidelines factor in environmental sustainability, says Gabby Headrick, the associate director of food and nutrition policy at George Washington University’s Institute for Food Safety & Nutrition Security in Washington, DC.

They didn’t.

Many of the recommendations are sensible. The guidelines recommend a diet rich in whole foods, particularly fresh fruits and vegetables. They recommend avoiding highly processed foods and added sugars. They also highlight the importance of dietary protein, whole grains, and “healthy” fats.

But not all of them are, says Headrick. The guidelines open with a “new pyramid” of foods. This inverted triangle is topped with “protein, dairy, and healthy fats” on one side and “vegetables and fruits” on the other.

USDA

There are a few problems with this image. For starters, its shape—nutrition scientists have long moved on from the food pyramids of the 1990s, says Headrick. They’re confusing and make it difficult for people to understand what the contents of their plate should look like. That’s why scientists now use an image of a plate to depict a healthy diet.

“We’ve been using MyPlate to describe the dietary guidelines in a very consumer-friendly, nutrition-education-friendly way for over the last decade now,” says Headrick. (The UK’s National Health Service takes a similar approach.)

And then there’s the content of that food pyramid. It puts a significant focus on meat and whole-fat dairy products. The top left image—the one most viewers will probably see first—is of a steak. Smack in the middle of the pyramid is a stick of butter. That’s new. And it’s not a good thing.

While both red meat and whole-fat dairy can certainly form part of a healthy diet, nutrition scientists have long been recommending that most people try to limit their consumption of these foods. Both can be high in saturated fat, which can increase the risk of cardiovascular disease—the leading cause of death in the US. In 2015, on the basis of limited evidence, the World Health Organization classified red meat as “probably carcinogenic to humans.” 

Also concerning is the document’s definition of “healthy fats,” which includes butter and beef tallow (a MAHA favorite). Neither food is generally considered to be as healthy as olive oil, for example. While olive oil contains around two grams of saturated fat per tablespoon, a tablespoon of beef tallow has around six grams of saturated fat, and the same amount of butter contains around seven grams of saturated fat, says Headrick.

“I think these are pretty harmful dietary recommendations to be making when we have established that those specific foods likely do not have health-promoting benefits,” she adds.

Red meat is not exactly a sustainable food, and neither are dairy products. And the advice on alcohol is relatively vague, recommending that people “consume less alcohol for better overall health” (which might leave you wondering: Less than what?).

There are other questionable recommendations in the guidelines. Americans are advised to include more protein in their diets—at levels between 1.2 and 1.6 grams daily per kilo of body weight, 50% to 100% more than recommended in previous guidelines. There’s a risk that increasing protein consumption to such levels could raise a person’s intake of both calories and saturated fats to unhealthy levels, says José Ordovás, a senior nutrition scientist at Tufts University. “I would err on the low side,” he says.

Some nutrition scientists are questioning why these changes have been made. It’s not as though the new recommendations were in the 2024 scientific report. And the evidence on red meat and saturated fat hasn’t changed, says Headrick.

In reporting this piece, I contacted many contributors to the previous guidelines, and some who had led research for 2024’s scientific report. None of them agreed to comment on the new guidelines on the record. Some seemed disgruntled. One merely told me that the process by which the new guidelines had been created was “opaque.”

“These people invested a lot of their time, and they did a thorough job [over] a couple of years, identifying [relevant scientific studies],” says Ordovás. “I’m not surprised that when they see that [their] work was ignored and replaced with something [put together] quickly, that they feel a little bit disappointed,” he says.

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

Acquiring Customers via Google Ads

Acquiring new customers, rather than retargeting existing ones, is a common goal for advertisers. Google Ads offers account and bid settings to help. It starts with a feature called “New Customer Acquisition” within customer lifecycle optimization, located in the account-level goals section.

Advertisers can set an incremental conversion value for new customers. For example, if converting a repeat customer is worth $20, an incremental value of $10 would elevate the value of a new customer to $30.
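The arithmetic is simple enough to sketch. This is a rough illustration, not a Google Ads API — the function name is ours, and it assumes the bidding system simply adds the incremental value on top of the base conversion value for new customers:

```python
# Hypothetical sketch of how an incremental new-customer value changes
# the conversion value the bidding system optimizes toward.
# Assumption: new-customer value = base value + incremental value.

def effective_conversion_value(base_value: float,
                               incremental_value: float,
                               is_new_customer: bool) -> float:
    """Conversion value used for bidding, per the example above."""
    return base_value + (incremental_value if is_new_customer else 0.0)

repeat = effective_conversion_value(20.0, 10.0, is_new_customer=False)
new = effective_conversion_value(20.0, 10.0, is_new_customer=True)
print(repeat, new)  # 20.0 30.0
```

With a value-based bid strategy, that higher effective value is what lets the system bid more aggressively for likely new customers while staying within the target ROAS.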

Google provides a tool to calculate an incremental value based on average order value. To use it, set a target return on ad spend for new customers. Say an advertiser wants a 100% customer acquisition ROAS — a 1:1 return. In the screen capture below, Google suggested a $46.27 value for new customers based on a given AOV and a $10 current value.


Google suggests a $46.27 value for new customers based on a given AOV, 100% ROAS, and a $10 current value.

The next step is to provide existing audience segments via Customer Match lists, either by specific category or overall. Google uses the lists (of at least 1,000 customers) to identify new prospects, much like how it uses first-party data to build lookalike audiences in Demand Gen campaigns.

Advertisers can deviate from Google’s suggested new-customer value and assign a higher amount. The option is often relevant for an audience segment of high-ticket buyers, provided the segment has at least 1,000 members. But a higher value is not critical in my experience, especially if the goal is to acquire any customer.

Having assigned an incremental value and uploaded one or more match lists, the New Customer Acquisition setup is complete at the account level. Implement it for each campaign by selecting the box labeled “Adjust your bidding to help acquire new customers.”

Google Ads settings screen showing the option ‘Adjust your bidding to help acquire new customers,’ with ‘Bid higher for new customers’ selected. The panel displays an incremental conversion value of $10.00 for new customers and an example showing a $41.90 purchase valued at $51.90 for new customers.

For each campaign, implement account-level acquisition settings by selecting the box labeled “Adjust your bidding to help acquire new customers.”

Google Ads will now bid higher within the target ROAS to acquire new customers, with lower bids for existing ones.

Yet advertisers can opt to bid only for new customers, such as for free trials or samples. Be careful, however, as this option will limit traffic. An alternative tactic is uploading an audience exclusion list of consumers who signed up for the free offer.

To activate New Customer Acquisition, advertisers must first set up value-based bidding. The system can’t bid higher for new customers without a base conversion value. Use a maximize conversion value bid strategy (with or without a target ROAS). Additionally, confirm the conversion classification is “purchase.”

Reporting

It’s worthwhile for reporting purposes to set a nominal new-customer conversion value (even $0.01) regardless of the bid strategy. Absent an incremental value, you can’t see the number of conversions from new vs. returning customers in the segment option view of “Conversions > New vs. returning customers.”


Even a nominal incremental value is worthwhile for distinguishing conversions between new and returning customers.