Google Rolls Out Gemini 2.5 Pro & Deep Search For Paid Subscribers

Google is rolling out two enhancements to AI Mode in Labs: Gemini 2.5 Pro and Deep Search.

These capabilities are exclusive to users subscribed to Google’s AI Pro and AI Ultra plans.

Gemini 2.5 Pro Now Available In AI Mode

Subscribers can now access Gemini 2.5 Pro from a dropdown menu within the AI Mode tab.

Screenshot from: blog.google/products/search/deep-search-business-calling-google-search, July 2025.

While the default model remains available for general queries, the 2.5 Pro model is designed to handle more complex prompts, particularly those involving reasoning, mathematics, or coding.

In an example shared by Google, the model walks through a multi-step physics problem involving gravitational fields, showing how it can solve equations and explain its reasoning with supporting links.

Screenshot from: blog.google/products/search/deep-search-business-calling-google-search, July 2025.

Deep Search Offers AI-Assisted Research

Today’s update also introduces Deep Search, which Google describes as a tool for conducting more comprehensive research.

The feature can generate detailed, citation-supported reports by processing multiple searches and aggregating information across sources.

Google stated in its announcement:

“Deep Search is especially useful for in-depth research related to your job, hobbies, or studies.”

Availability & Rollout

These features are currently limited to users in the United States who subscribe to Google’s AI Pro or AI Ultra plans and have opted into AI Mode through Google Labs.

Google hasn’t provided a firm timeline for when all eligible users will receive access, but rollout has begun.

The “experimental” label on Gemini 2.5 Pro suggests continued adjustments based on user testing.

What This Means

The launch of Deep Search and Gemini 2.5 Pro reflects Google’s broader effort to incorporate generative AI into the search experience.

For marketers, the shift raises questions about visibility at a time when AI-generated summaries and reports may increasingly shape user behavior.

If Deep Search becomes a commonly used tool for information gathering, the structure and credibility of content could play a larger role in discoverability.

Gemini 2.5 Pro’s focus on reasoning and code-related queries makes it relevant for more technical users. Google has positioned it as capable of helping with debugging, code generation, and explanation of advanced concepts, similar to tools like ChatGPT’s coding features or GitHub Copilot.

Its integration into Search may appeal to users who want technical assistance without leaving the browser environment.

Looking Ahead

The addition of these features behind a paywall continues Google’s movement toward monetizing AI capabilities through subscription services.

While billed as experimental, these updates may provide early insight into how the company envisions the future of AI in search: more automated, task-oriented, and user-specific.

Search professionals will want to monitor how these features evolve, as tools like Deep Search could become more widely adopted.

Google Search Can Now Call Local Businesses Using AI

Google has introduced a new AI-powered calling feature in Search that contacts local businesses on a user’s behalf to gather pricing and availability details.

The feature, rolling out to all U.S. Search users this week, allows people to request information from multiple businesses with a single query.

When searching for services like pet grooming or dry cleaning, users may now see a new option to “Have AI check pricing.”

How It Works

After selecting the AI option, users are guided through a form to provide details about the service they need.

Google’s AI then calls relevant local businesses to gather information such as pricing, appointment availability, and service options. The responses are consolidated and presented to the user.

The experience starts with a typical local search, such as “pet groomers near me.” If the AI calling feature is available, users can specify details like:

  • Pet type, breed, and size
  • Requested services (e.g., bath, nail trim, haircut)
  • Time preferences (e.g., within 48 hours)
  • Preferred method of communication (SMS or email)

According to a Google spokesperson, the AI determines which businesses to contact based on traditional local search rankings. Only those that appear in results for the relevant query and match the user’s criteria will be contacted.

What It Looks Like

Examples show a multi-step process where users enter information and confirm their request.

Google displays responses from participating businesses, including prices and availability, all gathered through automated calls.

Before submitting a request, users must confirm that Google can call businesses and share the submitted details. The process is governed by Google’s privacy policy, and users are informed of how their data will be used.

Business Participation & Control

Businesses can manage whether they receive these AI-driven calls via their Business Profile settings.

Google describes the feature as creating “new opportunities” to connect with potential customers, while also giving businesses control over participation.

Available to All (With Premium Perks)

The AI calling feature is available to all users in the U.S., though Google AI Pro and AI Ultra subscribers benefit from higher usage limits.

Google says more agentic AI features will debut for these subscribers before expanding globally.

What This Means

Because the AI selects businesses using standard local search rankings, maintaining strong local SEO becomes even more important.

Businesses with optimized listings and higher rankings are more likely to receive calls and capture leads.

This could also shift how businesses handle inbound requests. Those that rely on phone calls may want to prepare staff or systems to handle more frequent, possibly scripted, AI-initiated inquiries.

Looking Ahead

By automating time-consuming tasks like gathering service quotes, Google aims to make Search more actionable.

Adoption will depend on how well the AI handles real-world complexity, as well as how many businesses opt in.

For marketers and local service providers, it’s another sign that search visibility directly connects to lead generation. Keeping Business Profile data accurate and staying visible in local results could increasingly determine whether a business gets contacted at all.

Scaling PPC Campaigns Sustainably: Use The SCALE Framework To Move Beyond Actionism

Budget increase, performance drops, budget decrease. Almost every marketer knows that short-sighted game, where decisions are made on a daily basis and campaign performance fluctuates to extremes, without a clear goal.

I’ve seen this pattern destroy more campaigns than I can count. The problem isn’t bad ads or wrong keywords – it’s “actionism.”

That’s when you’re constantly changing things without a plan, reacting to yesterday’s numbers instead of building for tomorrow.

PPC scaling isn’t about doing more. It’s about doing the right things in the right order, which is why I highly recommend a sustainable growth framework to companies working on their long-term goals.

The following framework has consistently delivered three to five times growth while keeping campaigns profitable.

Why Most PPC Scaling Falls Apart

Here’s what I see marketers doing wrong every single day:

  • Changing bids daily because yesterday’s numbers looked bad.
  • Adding random keywords without thinking about why.
  • Swapping ad copy constantly without proper tests.
  • Throwing more money at broken campaigns.
  • Jumping to new platforms before fixing the current one.
  • Increasing or decreasing budgets without a goal.
  • Triggering learning phases left and right, not letting the algorithm stabilize.

Sound familiar? These create a mess.

Bad results make you or your leadership panic and change more stuff. More changes mess up your data. Messy data means you can’t tell what’s actually working.

Your campaigns end up stuck between “meh” and “disaster,” never really growing.

The SCALE Framework: A 5-Step System For PPC Growth

Here’s the system I use to scale campaigns without the guesswork:

  • S – Stabilize Performance.
  • C – Capture Market Intelligence.
  • A – Amplify What Works.
  • L – Layer New Opportunities.
  • E – Evolve And Optimize.

Step 1: Stabilize Performance

You can’t scale chaos. Before adding budget anywhere, fix what you have first.

Start with a reality check. Look at your campaigns and find what’s actually working. Which ad groups bring in customers? Which keywords convert? Which ads get clicked and actually lead to sales?

Write this stuff down – these are your money-makers.

Track your key numbers: How much it costs to get a customer, how much money you make per dollar spent, conversion rates, and average order size. These become your benchmarks for everything else.

Next, cut the dead weight. This sounds backwards, but scaling often starts with doing less. Pause campaigns that have been losing money for X+ days with no signs of life.

Remove ad groups that overlap and compete with each other. Stop throwing good money after bad.

Here’s the key: Take 80% of your budget and put it on your top 20% best performers. This gives you cleaner data and better results faster.

Make everything consistent. Create naming systems that make sense. Set up tracking that actually works. Build templates for ads and landing pages you can copy later.

Most importantly, set rules for when campaigns get more budget, like they need to hit your target cost per customer and keep it there before getting more money.

Analyze deeper. Don’t just look at surface numbers. Watch how your budget gets spent throughout the day.

Those Google Ads notifications about limited budgets? They’re garbage. They show up late, stick around for days after you’ve fixed things, and waste your time.

Instead, build a proper budget monitor. I use a Google Ads script that loads data into Google Sheets so I can see exactly how fast money is burning in real time.

If you want something quicker to set up, Google has a budget depletion report in Looker Studio that works decently enough to start with.
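
For illustration, here is a minimal sketch of that kind of intraday pacing check using the official Google Ads API Python client rather than an Ads script; the account ID, config file path, and 80% alert threshold are placeholders, not a definitive implementation:

```python
# Minimal budget-pacing sketch using the google-ads Python client.
# Assumes a configured google-ads.yaml; CUSTOMER_ID and the 80% alert
# threshold are illustrative placeholders.
from google.ads.googleads.client import GoogleAdsClient

CUSTOMER_ID = "1234567890"  # placeholder account ID

client = GoogleAdsClient.load_from_storage("google-ads.yaml")
ga_service = client.get_service("GoogleAdsService")

query = """
    SELECT campaign.name, campaign_budget.amount_micros, metrics.cost_micros
    FROM campaign
    WHERE segments.date DURING TODAY AND campaign.status = 'ENABLED'
"""

for batch in ga_service.search_stream(customer_id=CUSTOMER_ID, query=query):
    for row in batch.results:
        spend = row.metrics.cost_micros / 1_000_000
        budget = row.campaign_budget.amount_micros / 1_000_000
        pace = spend / budget if budget else 0.0
        if pace >= 0.8:  # flag campaigns burning budget fast
            print(f"{row.campaign.name}: {pace:.0%} of today's budget spent")
```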

Step 2: Capture Market Intelligence

Once your campaigns are stable, it’s time to understand what’s happening in your market and where you stand against competitors.

Know your competition. Use auction insights to see who you’re really fighting against. Look at your products manually or use Merchant Center data to see how your pricing stacks up.

Find out what you’re good at and where you’re getting crushed. Maybe certain product categories just don’t work, or your margins are too thin.

Here’s the thing: Google wants you to dump everything into Performance Max and call it a day. That works for basic campaigns, but in my opinion, it won’t scale.

Real growth comes from understanding why some products sell and others don’t. Sometimes a small tweak fixes everything.

Other times, a product is just dead in the water. You need to know the difference if you want to grow consistently without wild swings in performance.

Track search trends and volume. Google Keyword Planner shows you search volume, plus three-month and year-over-year trends – perfect for spotting seasonal patterns.

Google Trends helps you see what’s hot and what’s dying.

Stay on top of market news by checking Google News regularly. Set up Google Alerts for your brand names and key industry terms so you don’t miss anything important.

If you’re in the EU and work with a CSS partner, ask for CSS Insights reports. They show you market data on clicks, impressions, and how deep other advertisers are bidding.

CSS Insights sample report (Image from author, June 2025)

These insights give you a clear picture of industry click volume, impression volume, and how tough your competition really is.

Always back your decisions with real data. Otherwise, you’re just guessing. But when you have solid data, you can make moves with confidence.

This analysis shows you how much room your current campaigns have to grow and where new opportunities are hiding.

Step 3: Amplify What Works

Now, you take your winners and make them bigger. This isn’t just throwing more money at campaigns. It’s a smart expansion based on what the data tells you.

Scale budgets the right way. For campaigns hitting your targets, increase budgets gradually. I mean gradually – max 20-30% every couple of days. Go faster and you’ll trigger Google’s learning phase or blow through cash before you know what hit you.

Watch your numbers like a hawk when scaling.

If your cost-per-customer jumps more than 20% or your return on ad spend (ROAS) drops below your limit, stop the increases immediately.

Fix what’s broken first. Also, remember that conversions take time. Don’t panic and make changes if performance wobbles for a day or two.
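
To make those guardrails concrete, here is a minimal sketch of the decision rule described above; the 20-30% step and the +20% CPA / ROAS-floor cutoffs come from this article, while the function and parameter names are illustrative:

```python
# Sketch of the scaling guardrails described above. Thresholds follow
# the article; structure and names are illustrative.
def next_budget(current_budget: float, cpa: float, target_cpa: float,
                roas: float, roas_floor: float) -> float:
    """Return the budget for the next adjustment window."""
    if cpa > target_cpa * 1.2 or roas < roas_floor:
        return current_budget  # stop increases; fix what's broken first
    return current_budget * 1.25  # scale gradually, within the 20-30% band

# Example: CPA on target and ROAS healthy, so the budget steps up 25%.
print(next_budget(100.0, cpa=18.0, target_cpa=20.0, roas=4.2, roas_floor=3.0))
```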

Segment everything by performance. Here’s where most people screw up scaling. They lump all their products together – bestsellers mixed with money burners. That’s a recipe for disaster.

Label your products by profit margins or performance, for example, with data-driven product segmentation.

Create scores or labels that make sense. Then, split your campaigns by these scores so similar products are grouped together. Your top performers get their own campaigns, your problem products get theirs.
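
As a rough illustration, a segmentation pass like this can be sketched in a few lines of pandas; the margin thresholds and labels below are assumptions you would tune to your own economics, with the resulting labels typically feeding a custom label column in your product feed:

```python
# Minimal sketch of data-driven product segmentation with pandas.
# Buckets and labels are illustrative placeholders.
import pandas as pd

products = pd.DataFrame({
    "product_id": ["A1", "B2", "C3", "D4"],
    "margin": [0.45, 0.22, 0.08, -0.05],   # profit margin per unit
    "conversions_90d": [120, 35, 4, 0],
})

def label(row) -> str:
    if row["margin"] >= 0.30 and row["conversions_90d"] >= 50:
        return "hero"        # gets its own campaign and most of the budget
    if row["margin"] >= 0.15:
        return "steady"      # standard campaign
    return "problem"         # isolate, fix, or pause

products["segment"] = products.apply(label, axis=1)
print(products[["product_id", "segment"]])
```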

Why? Because Google’s algorithm isn’t perfect. It might hit your average return target, but it’s doing it by letting your bestsellers carry the dead weight.

From the outside, everything looks fine, but you’re wasting tons of money on products that will never work while starving your winners of budget.

This is the biggest scaling blocker I see. Everything looks okay at the top level, but dig deeper and you’ll find massive waste.

Separate your winners from your losers, and suddenly you have way more budget to put where it actually makes money.

Step 4: Layer New Opportunities

Your home market is working. Now, it’s time to take those winning campaigns and spread them to new countries and platforms. But here’s the key: Don’t just copy and paste everything, hoping it works.

Go international the smart way. Start with countries that are similar to your home market. The same language is easiest, but similar buying behavior and economic conditions matter more.

If you’re crushing it in Germany, try Austria or Switzerland before jumping to Brazil.

Check your current data first. Look at your Google Analytics – you’re probably already getting some international traffic.

Start with countries that already convert for you organically. These are your low-hanging fruit.

Set up separate campaigns for each country. Don’t just translate your ads; localize them.

Different countries care about different things. Price might be everything in one market, while quality and service matter more in another.

Your checkout process, shipping costs, and customer service all need to work in the local language and culture.

Start small. Take your best-performing campaign and recreate it for one new country. Get that profitable first, then expand to more markets. Don’t spread yourself thin trying to launch everywhere at once.

Expand to new platforms carefully. Once you’ve maxed out Google Ads in your main markets, look at other platforms. But here’s what most people get wrong: They think Facebook works like Google, or TikTok works like Facebook. They don’t.

Each platform has its own game. Google captures people already looking to buy. Facebook interrupts people scrolling. TikTok is all about entertainment first.

Your ads, targeting, and strategy need to match how people actually use each platform.

Start with one new platform and master it before moving to the next. Take your winning products and test them, but expect to rebuild your ad creative from scratch. What works on Google Search probably won’t work on Facebook Feeds.

The mistake I see all the time? People launch on three platforms simultaneously, spread their budget too thin, and conclude none of them work.

Pick one, give it proper attention and budget, and make it profitable before adding more.

Step 5: Evolve And Optimize

Scaling isn’t a one-time thing. Markets change, competitors adapt, and platforms update their algorithms. You need systems that keep you ahead of the curve and focused on what actually matters: long-term growth.

Think long-term, not daily panic. Here’s where most marketers lose their minds. They check performance every day and freak out over weekly fluctuations. Stop it.

Focus on your North Star metrics, the big picture numbers that actually matter for your business over months and quarters, not days.

Set up proper attribution that shows the real customer journey. People no longer just click an ad and buy.

They see your Google ad, check you out on Facebook, read reviews, and then come back through organic search to purchase.

If you’re only looking at last-click attribution, you’re making decisions with half the story.

Marketing Mix Models (MMMs) help you understand how all your channels work together. They show you the true impact of each platform and how they influence each other. This is crucial when you’re running campaigns across multiple platforms and countries.

Let automation handle the boring stuff. Once you have enough conversion data, smart bidding strategies like Target CPA and Target ROAS can actually work well.

But they need proper setup and constant monitoring. Don’t just turn them on and hope for the best.

Build custom scripts or use third-party tools to automate the routine stuff: bid adjustments, budget pacing, and performance alerts. This frees you up to focus on strategy instead of daily maintenance.

Test everything, but do it right. Create a systematic approach to testing new ad copy, extensions, and landing pages. But only test one thing at a time, or you’ll never know what actually made the difference.

Watch for trouble before it hits. Set up early warning systems that alert you when performance starts shifting before it becomes a real problem.

Track things like impression share drops, quality score changes, and competitive pressure increases.

The goal isn’t to react to every small change, but to spot the big trends early so you can adapt your strategy before your competition does.
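
A minimal sketch of such a warning check might look like the following; the metrics, thresholds, and data source are assumptions, since your alerting will depend on how you export performance data:

```python
# Sketch of a simple early-warning check comparing this week's numbers
# against a trailing baseline. Thresholds and metric names are
# illustrative; inputs could come from the Google Ads API or a report export.
def alerts(current: dict, baseline: dict) -> list[str]:
    out = []
    if current["impression_share"] < baseline["impression_share"] - 0.10:
        out.append("Impression share dropped more than 10 points")
    if current["quality_score"] < baseline["quality_score"] - 1:
        out.append("Quality score is slipping")
    if current["cpc"] > baseline["cpc"] * 1.25:
        out.append("CPCs up more than 25%: possible competitive pressure")
    return out

print(alerts(
    current={"impression_share": 0.52, "quality_score": 6, "cpc": 1.40},
    baseline={"impression_share": 0.68, "quality_score": 7, "cpc": 1.05},
))
```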

Common Pitfalls And How To Avoid Them

  • The Patience Problem: Scaling takes time. Resist the urge to accelerate timelines or skip phases. Each phase builds on the previous one, and rushing leads to unstable growth.
  • The Complexity Trap: As campaigns grow, complexity increases exponentially. Maintain documentation, standardized processes, and regular audits to prevent campaigns from becoming unmanageable.
  • The Attribution Challenge: Multi-platform scaling makes attribution more complex. Invest in proper tracking and attribution modeling early to maintain visibility into performance drivers.

Building Sustainable Growth

Sustainable PPC scaling isn’t about revolutionary tactics or secret strategies. It’s about disciplined execution of proven principles, systematic testing, and patient optimization.

The SCALE framework provides the structure to move beyond actionism toward strategic growth.

By stabilizing performance first, capturing market intelligence, amplifying what works, layering new opportunities systematically, and continuously evolving your approach, you create a foundation for sustained success.

Remember: Scaling PPC campaigns is not about doing everything at once. It’s about doing the right things in the right order, with the discipline to stick to the process even when the temptation to “optimize” everything at once becomes overwhelming.

The companies that achieve sustainable PPC growth aren’t the ones with the most sophisticated tactics. They’re the ones with the most disciplined systems.

Build your system, trust your process, and let compound growth work in your favor.


When Direct Means We Don’t Know: CMOs Need To Rethink Attribution In AI Search

I was asked recently to take a closer look at the data for a website in Google Analytics 4 (GA4).

This was for “Measurement Queen” Katie Delahaye Paine, a pioneer with over 30 years of experience in communications research and measurement, who now feels like she is flying blind.

From looking at her data in GA4, it turns out that 86% of the new users who visited her website over the last 28 days came from the “direct” channel.

That means the author of Measure What Matters can’t identify the sources of the vast majority of her website’s traffic.

So, I compared user acquisition for the last 28 days with the same period last year (matching the day of week). The good news was that total new users were up 29% year-over-year (YoY).

But here’s the bad news: Direct traffic to her site was up 126% YoY, while referral traffic was down 90%, organic social traffic was down 33%, and organic search traffic was down 28%.

This means that more than six out of seven users are now arriving on Paine’s site without a traceable referrer.

This includes situations where a user types her website address directly into their browser, uses a bookmark to access her site, or arrives from a source that doesn’t pass referrer information.
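
For readers who want to run the same user-acquisition comparison on their own property, here is a hedged sketch using the GA4 Data API’s Python client; the property ID is a placeholder, and passing two date ranges returns both periods side by side:

```python
# Sketch of a YoY channel comparison via the GA4 Data API
# (google-analytics-data Python client, Application Default Credentials).
from google.analytics.data_v1beta import BetaAnalyticsDataClient
from google.analytics.data_v1beta.types import (
    DateRange, Dimension, Metric, RunReportRequest,
)

client = BetaAnalyticsDataClient()
request = RunReportRequest(
    property="properties/123456789",  # placeholder GA4 property
    dimensions=[Dimension(name="firstUserDefaultChannelGroup")],
    metrics=[Metric(name="newUsers")],
    date_ranges=[
        DateRange(start_date="28daysAgo", end_date="yesterday"),
        # Same 28-day window one year earlier, matched to day of week.
        DateRange(start_date="392daysAgo", end_date="365daysAgo"),
    ],
)
for row in client.run_report(request).rows:
    # Each row carries the channel plus an auto-added dateRange dimension.
    print([v.value for v in row.dimension_values], row.metric_values[0].value)
```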

Digging Deeper Into What’s Behind The Traffic Surge

So, I asked the Measurement Queen a couple of standard questions about untracked “dark social” sources, such as TikTok and WhatsApp.

She replied, “I don’t even have a TikTok account and haven’t used WhatsApp in years!”

I’d call that a big no. But it also indicated that I should use GA4’s search function to discover “top landing page by users for first user default channel group of direct traffic.”

The site’s homepage was the top landing page for direct traffic, but only 18.37% of users landed there. In second place was her blog, The Measurement Advisor, which got 13.96% of the site’s direct traffic.

When I shared this data with Paine, she revealed, “I’ve been blogging more frequently.”

I’d call that a big yes. So, I just asked Google about the titles of her recent blog posts in Google’s search box.

Here’s what I saw when I Googled [Sorry boss, I never got the Memo. How to know if you’re reaching the unreachables?].

Screenshot from search for [Sorry boss, I never got the Memo. How to know if you’re reaching the unreachables?], Google, June 2025

It’s worth noting that even when her content appears in a Google AI Overview, the link to her blog post doesn’t pass referrer data to GA4, and the link to her LinkedIn article on the same topic isn’t tracked by her GA4 account.

Then, I just asked Google, [Is Paine Publishing an authoritative site?].

Here’s what I saw:

Screenshot from search for [Is Paine Publishing an authoritative site?], Google, June 2025

So, even some of the direct traffic to her homepage may have come from links in AI Overviews that don’t pass referrer data to GA4.

Why haven’t similar insights been reported to more CMOs?

Reporting Squirrels, who primarily focus on generating reports without necessarily providing deep insights or actionable recommendations, are reluctant to highlight this type of anomaly, especially when “direct” means “We don’t know.”

So, CMOs need to rethink attribution in AI search. They need to independently verify and interpret GA4 event-based data.

And they also need to hire “Analysis Ninjas,” who excel at analyzing data to uncover hidden patterns, generate insights, and provide recommendations for business improvement.

Rethinking Attribution In AI Search

CMOs need to rethink their fundamental assumptions about attribution.

How should they attribute credit to key user actions throughout the customer’s journey toward making a purchase or completing other important actions on their sites?

They should avoid the old discussions that narrowly focused on data-driven attribution versus paid and organic last-click attribution.

Those touchpoints seem less meaningful when AI search is obscuring the sources of six out of seven of their website’s visitors.

Instead, CMOs (and important members of their team) should read “It’s Time for Marketers to Move Beyond the Linear Funnel” by the Boston Consulting Group.

The article argues that force-fitting a complex array of touchpoints into a linear funnel model doesn’t align with actual customer journeys.

This linear funnel model can lead to missed opportunities due to poorly allocated resources or ineffective communication.

BCG says, “Marketers should instead adopt a more adaptable framework that more accurately reflects the real paths consumers take.”

BCG recommends shifting from the linear funnel to “influence maps.” But, before CMOs fly into that fog bank, they should re-examine the “expanding network of touchpoints – new streaming services, online shopping experiences, GenAI, and social platforms.”

Recognizing The Attribution Gaps That Existed Before AI

If CMOs blow up the funnel model and examine what’s in the awareness stage, they’ll see it includes radio, TV ads, magazines/newspapers, in-store announcements, word of mouth, packaging, and billboards. None of these were ever tracked in GA4.

And, if they analyze what’s in the consideration stage, they’ll see it includes video, brand sites, social media, search, sponsored content, retail media, in-app, and email. These were tracked by GA4 – until AI search started clouding over the sources of this traffic to websites.

In other words, GA4 didn’t track the awareness stage of this “multi-touchpoint landscape” even before the advent of Google AI Overviews.

And now that AI search is obscuring the sources of most of the touchpoints in the consideration stage, CMOs need to rapidly reconsider, review, revise, reassess, reconceptualize, and reimagine their assumptions about data-driven attribution.

These old assumptions may still be valid for Performance Max campaigns in Google Ads, which leverage Google’s AI to maximize performance across all of Google’s advertising channels, including Search, Display, YouTube, Discover, Gmail, and Maps.

And when an organization connects its Google Analytics property to a Google Ads account, it makes it possible to align GA4 and Google Ads conversions using the organization’s most important events.

But, according to a 2024 zero-click search study, paid search accounts for only 1% of clicks.

So, how do CMOs assign credit to SEO, content marketing, social media marketing, and communications for the 40.5% of other Google searches that produce clicks, or the 58.5% of zero-click searches?

Until Google provides a new version of Analytics that measures what matters for professionals across the entire marketing mix, CMOs will need to independently verify and interpret GA4’s event-based data.

Independently Verifying And Interpreting GA4 Event-Based Data

How do CMOs discover the critical data and strategic insights they need to successfully navigate through the fog bank surrounding the awareness stage of customer journeys?

They should conduct more old-school market research. Ironically, many brands cut their budgets for market research after Google started offering free brand lift studies to advertisers for their YouTube campaigns in March 2013.

But, CMOs don’t need to limit independent brand lift studies to asking questions about ad recall. They can ask questions about brand awareness, consideration, and purchase intent to understand the value of their entire marketing mix.

In 2019, my digital marketing agency helped the Rutgers School of Management and Labor Relations (SMLR) launch a new online master’s degree program.

We won the U.S. Search Award for Best Use of PR in a Search Campaign and were a finalist in the Best Integrated Campaign category.

We conducted pre- and post-launch surveys six months apart, which showed:

  • The percentage of respondents who said they were “familiar with” Rutgers SMLR had increased from 13.8% pre-launch to 18.5% post-launch.
  • The percentage of respondents who said they were “very likely” to recommend Rutgers SMLR to a friend or colleague had increased from 16.7% pre-launch to 19.0% post-launch.

Next, CMOs can successfully navigate through the low clouds now obscuring the touchpoints in the consideration stage by putting someone in charge of audience research as well as market research.

There are several excellent audience research tools, each with unique strengths, that can help CMOs understand audience needs, behaviors, preferences, and motivations.

For online behavior and digital footprints, SparkToro and Similarweb are highly effective.

For psychographic and cultural affinity analysis, Audiense and BuzzSumo are great choices.

For social listening and brand monitoring, Sprout Social and Keyhole are powerful options.

Taking Control When Analytics Falls Short

Next, CMOs should challenge their SEO, content marketing, social media marketing, and communications teams to create their own audiences in GA4 just like the ones that the paid media team is already using for remarketing campaigns.

For example, a PR audience could include users who:

  • Scroll to 90% of a blog post or article.
  • Download a whitepaper.
  • Play at least 50% of a product video.
  • Complete a tutorial.

The communications team can share their PR audiences with their colleagues in paid media, who can use Google Ads to remarket to these groups of users.

  • If users scroll to 90% or more of your blog post or download a whitepaper, then they can use ads to invite them to subscribe to your newsletter.
  • If users play at least 50% of a product video or complete a tutorial, then they can use ads to invite them to attend one or more in-person or virtual events.
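
Where one of these actions happens server-side (a gated whitepaper download, for example) and client-side tagging might miss it, the event feeding these audiences can be sent through the GA4 Measurement Protocol. A minimal sketch, with placeholder measurement ID, API secret, and event name:

```python
# Sketch of sending an audience-building event via the GA4 Measurement
# Protocol. MEASUREMENT_ID, API_SECRET, and the event name are placeholders.
import requests

MEASUREMENT_ID = "G-XXXXXXX"    # placeholder
API_SECRET = "your-api-secret"  # placeholder

payload = {
    "client_id": "555.1234567890",  # the visitor's GA4 client ID
    "events": [{
        "name": "whitepaper_download",
        "params": {"file_name": "measurement-guide.pdf"},
    }],
}
resp = requests.post(
    "https://www.google-analytics.com/mp/collect",
    params={"measurement_id": MEASUREMENT_ID, "api_secret": API_SECRET},
    json=payload,
    timeout=10,
)
print(resp.status_code)  # a 2xx response means the hit was accepted
```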

CMOs should also ask their digital analytics teams if they have used “Explorations” this month. This is a set of advanced tools in GA4 designed to go beyond basic reports, allowing them to gain deeper insights into their customers’ behavior.

There’s no way to predict what different digital analytics teams will discover, but CMOs who feel they’re flying blind will want to know what their team saw when they used:

  • User Exploration to dig into data about individual users or groups within your segments to analyze detailed user journeys.
  • Cohort Exploration to study user groups with shared traits to understand behavior trends and performance over time.
  • Segment Overlap to compare how user segments intersect to uncover hidden audiences that meet specific conditions.
  • Funnel Exploration to track the steps users follow to complete key actions, helping you optimize conversion paths and spot performance issues.
  • Path Exploration to visualize the actual navigation paths users take through your website or app.
  • User Lifetime to assess long-term user behavior and value from the first visit through their customer lifecycle.

Hire Analysis Ninjas Who Excel At Analyzing Data

Finally, CMOs need to ask themselves: How did the attribution problem manage to fly under the radar for so long?

They could blame GA4’s Analytics Intelligence. Automated insights are supposed to detect unusual changes or emerging trends in their website’s data and notify their digital analytics team automatically, on the Insights dashboard, within the Analytics platform.

If the so-called Reporting Squirrels were reluctant to highlight this type of anomaly, especially when “direct” means “We don’t know,” then who is really to blame?

That’s why CMOs also need to ask themselves: How do I turn at least one of my Reporting Squirrels into an Analysis Ninja?

To encourage a Reporting Squirrel to evolve into an Analysis Ninja, CMOs must shift from asking for data to encouraging someone on their digital analytics team to actively interpret it and recommend solutions.

This also involves encouraging them to develop skills in statistical analysis, understand business context, and communicate findings effectively.


Confirmed CWV Reporting Glitch In Google Search Console

Google Search Console Core Web Vitals (CWV) reporting for mobile is experiencing a dip that is confirmed to be related to the Chrome User Experience Report (CrUX). Search Console CWV reports for mobile performance show a marked dip beginning around July 10, at which point the reporting appears to stop completely.

Not A Search Console Issue

Someone posted about it on Bluesky:

“Hey @johnmu.com is there a known issue or bug with Core Web Vitals reporting in Search Console? Seeing a sudden massive drop in reported URLs (both “good” and “needs improvement”) on mobile as of July 14.”

The person referred to July 14th, but that’s the date the reporting hit zero. The drop actually starts closer to July 10th, which you can see by hovering a cursor over the point where the decline begins.

Google’s John Mueller responded:

“These reports are based on samples of what we know for your site, and sometimes the overall sample size for a site changes. That’s not indicative of a problem. I’d focus on the samples with issues (in your case it looks fine), rather than the absolute counts.”

The person who started the discussion responded to inform Mueller that this isn’t happening on just his site; the peculiar drop in reporting is appearing on other sites as well.

Mueller, unaware of any problem with CWV reporting, naturally assumed this was an artifact of natural changes in internet traffic and user behavior. His next response continued under the assumption that this wasn’t a widespread issue:

“That can happen. The web is dynamic and alive – our systems have to readjust these samples over time.”

Then Jamie Indigo responded to confirm she’s seeing it, too. 

“Hey John! Thanks for responding 🙂 It seems like … everyone beyond the usual ebb and flow. Confirming nothing in the mechanics have changed?”

At this point, it was becoming clear that this behavior wasn’t isolated to one site, and Mueller’s response to Jamie reflected that growing awareness. He confirmed that nothing had changed on the Search Console side, leaving open the question of the CrUX side of Core Web Vitals reporting.

His response:

“Correct, nothing in the mechanics changed (at least with regards to Search Console — I’m also not aware of anything on the Chrome / CrUX side, but I’m not as involved there).”

CrUX CWV Field Data

CrUX is the acronym for the Chrome User Experience Report. It’s CWV data based on real website visits, collected from Chrome users who have opted in to sharing their browsing data.

Google’s Chrome For Developers page explains:

“The Chrome User Experience Report (also known as the Chrome UX Report, or CrUX for short) is a dataset that reflects how real-world Chrome users experience popular destinations on the web.

CrUX is the official dataset of the Web Vitals program. All user-centric Core Web Vitals metrics are represented.

CrUX data is collected from real browsers around the world, based on certain browser options which determine user eligibility. A set of dimensions and metrics are collected which allow site owners to determine how users experience their sites.”
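
Site owners who want to verify their own mobile field data independently of Search Console can query the CrUX API directly. A minimal sketch, assuming a Chrome UX Report API key and using a placeholder origin:

```python
# Sketch of querying the CrUX API for an origin's mobile field data.
# API_KEY and the origin are placeholders.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder
resp = requests.post(
    "https://chromeuxreport.googleapis.com/v1/records:queryRecord",
    params={"key": API_KEY},
    json={"origin": "https://www.example.com", "formFactor": "PHONE"},
    timeout=10,
)
record = resp.json().get("record", {})
for metric, data in record.get("metrics", {}).items():
    p75 = data.get("percentiles", {}).get("p75")
    print(metric, "p75:", p75)  # e.g., largest_contentful_paint p75
```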

Core Web Vitals Reporting Outage Is Widespread

At this point, more people joined the conversation, with Alan Bleiweiss offering both a comment and a screenshot showing the same complete drop-off in Search Console CWV reports for other websites.

He posted:

“oooh Google had to slow down server requests to set aside more power to keep the swimming pools cool as the summer heats up.”

Here’s a closeup detail of Alan’s screenshot of a Search Console CWV report:

Screenshot Of CWV Report Showing July 10 Drop

I searched the Chrome Lighthouse changelog to see if there’s anything there that corresponds to the drop but nothing stood out.

So what is going on?

CWV Reporting Outage Is Confirmed

I next checked the X and Bluesky accounts of Googlers who work on the Chrome team and found a post by Barry Pollard, Web Performance Developer Advocate on Google Chrome, who had posted about this issue last week.

Barry posted a note about a reporting outage on Bluesky:

“We’ve noticed another dip on the metrics this month, particularly on mobile. We are actively investigating this and have a potential reason and fix rolling out to reverse this temporary dip. We’ll update further next month. Other than that, there are no further announcements this month.”

Takeaways

Google Search Console Core Web Vitals (CWV) data drop:
A sudden stop in CWV reporting was observed in Google Search Console around July 10, especially on mobile.

Issue is widespread, not site-specific:
Multiple users confirmed the drop across different websites, ruling out individual site problems.

Origin of issue is not at Search Console:
John Mueller confirmed there were no changes on the Search Console side.

Possible link to CrUX data pipeline:
Barry Pollard from the Chrome team confirmed a reporting outage and said a potential fix is rolling out, with a further update promised next month.

We now know that this is a confirmed issue. Google Search Console’s Core Web Vitals reports began showing a reporting outage around July 10, leading users to suspect a bug. Barry Pollard later acknowledged the issue as a reporting outage affecting CrUX data, particularly on mobile.


AI’s giants want to take over the classroom

School’s out and it’s high summer, but a bunch of teachers are plotting how they’re going to use AI this upcoming school year. God help them. 

On July 8, OpenAI, Microsoft, and Anthropic announced a $23 million partnership with one of the largest teachers’ unions in the United States to bring more AI into K–12 classrooms. Called the National Academy for AI Instruction, the initiative will train teachers at a New York City headquarters on how to use AI both for teaching and for tasks like planning lessons and writing reports, starting this fall.

The companies could face an uphill battle. Right now, most of the public perceives AI’s use in the classroom as nothing short of ruinous—a surefire way to dampen critical thinking and hasten the decline of our collective attention span (a viral story from New York magazine, for example, described how easy it now is to coast through college thanks to constant access to ChatGPT). 

Amid that onslaught, AI companies insist that AI promises more individualized learning, faster and more creative lesson planning, and quicker grading. The companies sponsoring this initiative are, of course, not doing it out of the goodness of their hearts.

No—as they hunt for profits, their goal is to make users out of teachers and students. Anthropic is pitching its AI models to universities, and OpenAI offers free courses for teachers. In an initial training session for teachers by the new National Academy for AI Instruction, representatives from Microsoft showed teachers how to use the company’s AI tools for lesson planning and emails, according to the New York Times.

It’s early days, but what does the evidence actually say about whether AI is helping or hurting students? There’s at least some data to support the case made by tech companies: A recent survey of 1,500 teens conducted by Harvard’s Graduate School of Education showed that kids are using AI to brainstorm and answer questions they’re afraid to ask in the classroom. Studies examining settings ranging from math classes in Nigeria to college physics courses at Harvard have suggested that AI tutors can lead students to become more engaged.

And yet there’s more to the story. The same Harvard survey revealed that kids are also frequently using AI for cheating and shortcuts. And an oft-cited paper from Microsoft found that relying on AI can reduce critical thinking. Not to mention the fact that “hallucinations” of incorrect information are an inevitable part of how large language models work.

There’s a lack of clear evidence that AI can be a net benefit for students, and it’s hard to trust that the AI companies funding this initiative will give honest advice on when not to use AI in the classroom.

Despite the fanfare around the academy’s launch, and the fact the first teacher training is scheduled to take place in just a few months, OpenAI and Anthropic told me they couldn’t share any specifics. 

It’s not as if teachers themselves aren’t already grappling with how to approach AI. One such teacher, Christopher Harris, who leads a library system covering 22 rural school districts in New York, has created a curriculum aimed at AI literacy. Topics range from privacy when using smart speakers (a lesson for second graders) to misinformation and deepfakes (instruction for high schoolers). I asked him what he’d like to see in the curriculum used by the new National Academy for AI Instruction.

“The real outcome should be teachers that are confident enough in their understanding of how AI works and how it can be used as a tool that they can teach students about the technology as well,” he says. The thing to avoid would be overfocusing on tools and pre-built prompts that teachers are instructed to use without knowing how they work. 

But all this will be for naught without an adjustment to how schools evaluate students in the age of AI, Harris says: “The bigger issue will be shifting the fundamental approaches to how we assign and assess student work in the face of AI cheating.”

The new initiative is led by the American Federation of Teachers, which represents 1.8 million members, as well as the United Federation of Teachers, which represents 200,000 members in New York. If the tech companies win over these groups, they will have significant influence over how millions of teachers learn about AI. But some educators are resisting the use of AI entirely, including several hundred who signed an open letter last week.

Helen Choi is one of them. “I think it is incumbent upon educators to scrutinize the tools that they use in the classroom to look past hype,” says Choi, an associate professor at the University of Southern California, where she teaches writing. “Until we know that something is useful, safe, and ethical, we have a duty to resist mass adoption of tools like large language models that are not designed by educators with education in mind.”

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

AI text-to-speech programs could “unlearn” how to imitate certain people

A technique known as “machine unlearning” could teach AI models to forget specific voices—an important step in stopping the rise of audio deepfakes, where someone’s voice is copied to carry out fraud or scams.

Recent advances in artificial intelligence have revolutionized the quality of text-to-speech technology so that people can convincingly re-create a piece of text in any voice, complete with natural speaking patterns and intonations, instead of having to settle for a robotic voice reading it out word by word. “Anyone’s voice can be reproduced or copied with just a few seconds of their voice,” says Jong Hwan Ko, a professor at Sungkyunkwan University in Korea and the coauthor of a new paper that demonstrates one of the first applications of machine unlearning to speech generation.

Copied voices have been used in scams, disinformation, and harassment. Ko, who researches audio processing, and his collaborators wanted to prevent this kind of identity fraud. “People are starting to demand ways to opt out of the unknown generation of their voices without consent,” he says. 

AI companies generally keep a tight grip on their models to discourage misuse. For example, if you ask ChatGPT to give you someone’s phone number or instructions for doing something illegal, it will likely just tell you it cannot help. However, as many examples over time have shown, clever prompt engineering or model fine-tuning can sometimes get these models to say things they otherwise wouldn’t. The unwanted information may still be hiding somewhere inside the model so that it can be accessed with the right techniques. 

At present, companies tend to deal with this issue by applying guardrails; the idea is to check whether the prompts or the AI’s responses contain disallowed material. Machine unlearning instead asks whether an AI can be made to forget a piece of information that the company doesn’t want it to know. The technique takes a leaky model and the specific training data to be redacted and uses them to create a new model—essentially, a version of the original that never learned that piece of data. While machine unlearning has ties to older techniques in AI research, it’s only in the past couple of years that it’s been applied to large language models.

Jinju Kim, a master’s student at Sungkyunkwan University who worked on the paper with Ko and others, sees guardrails as fences around the bad data put in place to keep people away from it. “You can’t get through the fence, but some people will still try to go under the fence or over the fence,” says Kim. But unlearning, she says, attempts to remove the bad data altogether, so there is nothing behind the fence at all. 

The way current text-to-speech systems are designed complicates this a little more, though. These so-called “zero-shot” models use examples of people’s speech to learn to re-create any voice, including those not in the training set—with enough data, it can be a good mimic when supplied with even a small sample of someone’s voice. So “unlearning” means a model not only needs to “forget” voices it was trained on but also has to learn not to mimic specific voices it wasn’t trained on. All the while, it still needs to perform well for other voices. 

To demonstrate how to get those results, Kim trained a re-creation of VoiceBox, a speech generation model from Meta, so that when it was prompted to produce a text sample in one of the voices to be redacted, it would instead respond with a random voice. To make these voices realistic, the model “teaches” itself using random voices of its own creation.
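
To make the shape of that training objective concrete, here is a conceptual toy sketch in PyTorch. It is emphatically not the paper’s code: a tiny linear layer stands in for a real zero-shot TTS model, and the two loss terms only illustrate the idea of steering redacted prompts toward random voices while preserving permitted ones:

```python
# Conceptual toy sketch of voice unlearning (not the paper's actual code).
# A linear layer stands in for a speaker-conditioned generator; real
# systems operate on audio, not 16-dim vectors.
import torch
import torch.nn as nn

torch.manual_seed(0)
dim = 16
original = nn.Linear(dim, dim)    # frozen copy of the trained model
unlearned = nn.Linear(dim, dim)   # model being fine-tuned to forget
unlearned.load_state_dict(original.state_dict())
for p in original.parameters():
    p.requires_grad_(False)

opt = torch.optim.Adam(unlearned.parameters(), lr=1e-2)
mse = nn.MSELoss()
redacted = torch.randn(8, dim)    # prompts for voices to forget
permitted = torch.randn(8, dim)   # prompts for voices to keep

for step in range(200):
    # Forget: push outputs for redacted prompts toward random-voice targets,
    # echoing the idea of answering with a random voice instead.
    random_voice = original(torch.randn(8, dim)).detach()
    forget_loss = mse(unlearned(redacted), random_voice)
    # Retain: keep behavior on permitted prompts close to the original model.
    retain_loss = mse(unlearned(permitted), original(permitted))
    loss = forget_loss + retain_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```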

According to the team’s results, which are to be presented this week at the International Conference on Machine Learning, prompting the model to imitate a voice it has “unlearned” gives back a result that—according to state-of-the-art tools that measure voice similarity—mimics the forgotten voice more than 75% less effectively than the model did before. In practice, this makes the new voice unmistakably different. But the forgetfulness comes at a cost: The model is about 2.8% worse at mimicking permitted voices. While these percentages are a bit hard to interpret, the demo the researchers released online offers very convincing results, both for how well redacted speakers are forgotten and how well the rest are remembered. A sample from the demo is given below. 

A voice sample of a speaker to be forgotten by the model.
The generated text-to-speech audio from the original model using the above as a prompt.
The generated text-to-speech audio using the same prompt, but now from the model where the speaker was forgotten.

Ko says the unlearning process can take “several days,” depending on how many speakers the researchers want the model to forget. Their method also requires an audio clip about five minutes long for each speaker whose voice is to be forgotten.

In machine unlearning, pieces of data are often replaced with randomness so that they can’t be reverse-engineered back to the original. In this paper, the randomness for the forgotten speakers is very high—a sign, the authors claim, that they are truly forgotten by the model. 

 “I have seen people optimizing for randomness in other contexts,” says Vaidehi Patil, a PhD student at the University of North Carolina at Chapel Hill who researches machine unlearning. “This is one of the first works I’ve seen for speech.” Patil is organizing a machine unlearning workshop affiliated with the conference, and the voice unlearning research will also be presented there. 

She points out that unlearning itself involves inherent trade-offs between efficiency and forgetfulness because the process can take time, and can degrade the usability of the final model. “There’s no free lunch. You have to compromise something,” she says.

Machine unlearning may still be at too early a stage for, say, Meta to introduce Ko and Kim’s methods into VoiceBox. But there is likely to be industry interest. Patil is researching unlearning for Google DeepMind this summer, and while Meta did not respond with a comment, it has hesitated for a long time to release VoiceBox to the wider public because it is so vulnerable to misuse. 

The voice unlearning team seems optimistic that its work could someday get good enough for real-life deployment. “In real applications, we would need faster and more scalable solutions,” says Ko. “We are trying to find those.”

The Download: combating audio deepfakes, and AI in the classroom

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

AI text-to-speech programs could one day “unlearn” how to imitate certain people

The news: A new technique known as “machine unlearning” could be used to teach AI models to forget specific voices.

How it works: Currently, companies tend to deal with this issue by checking whether the prompts or the AI’s responses contain disallowed material. Machine unlearning instead asks whether an AI can be made to forget a piece of information that the company doesn’t want it to know. It works by taking a model and the specific data to be redacted then using them to create a new model—essentially, a version of the original that never learned that piece of data.

Why it matters: This could be an important step in stopping the rise of audio deepfakes, where someone’s voice is copied to carry out fraud or scams. Read the full story.

—Peter Hall

AI’s giants want to take over the classroom

School’s out and it’s high summer, but a bunch of teachers are plotting how they’re going to use AI this upcoming school year. God help them.

On July 8, OpenAI, Microsoft, and Anthropic announced a $23 million partnership with one of the largest teachers’ unions in the United States to bring more AI into K–12 classrooms. They will train teachers at a New York City headquarters on how to use AI both for teaching and for tasks like planning lessons and writing reports, starting this fall.

But these companies could face an uphill battle. There’s a lack of clear evidence that AI can be a net benefit for students, and it’s hard to trust that the AI companies funding this initiative will give honest advice on when not to use AI in the classroom. Read the full story.

—James O’Donnell

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Nvidia says the US has lifted its ban on AI chip sales to China
Jensen Huang has sweet-talked Donald Trump into reversing his three-month-old ban. (BBC)
+ The company will start selling its H20 chip to China. (WSJ $)
+ America may slap tariffs on a raw material used for chips and solar panels. (FT $)

2 China has launched its digital ID system
It’ll give the country even greater powers to surveil and censor its internet users. (WP $)

3 xAI has secured a contract with the US Department of Defense
Just days after its Grok chatbot had an anti-Semitic meltdown. (The Guardian)
+ EU officials are holding talks with X representatives after the outburst. (Bloomberg $)

4 Meta’s data centers are on the verge of triggering a major water shortage
Local residents in Newton County, Georgia are suffering. (NYT $)
+ But Zuckerberg wants to build gigawatt-size centers anyway. (Bloomberg $)
+ We did the math on AI’s energy footprint. Here’s the story you haven’t heard. (MIT Technology Review)

5 The Trump administration is incinerating tons of emergency food
Rather than sending it to people in need. (The Atlantic $)

6 The US is attempting to revive its rare-earth industry
The Pentagon has invested more than $1 billion in American firm MP Materials. (WSJ $)
+ It’s all part of a plan to counter China’s critical mineral dominance. (FT $)
+ This rare earth metal shows us the future of our planet’s resources. (MIT Technology Review)

7 AI nudifying apps are big business
They’re making millions of dollars a year, and rely on tech built by US companies. (Wired $)
+ The viral AI avatar app Lensa undressed me—without my consent. (MIT Technology Review)

8 Can anything save the web at this point?
Traffic is dropping, and AI use is rising. (Economist $)
+ How to fix the internet. (MIT Technology Review)

9 ByteDance is working on its own mixed reality goggles
A couple of years after it scaled back its work on an AR and VR headset. (The Information $)
+ What’s next for smart glasses. (MIT Technology Review)

10 Minecraft has birthed a generation of entrepreneurs
The game encourages players to learn to program. (Insider $)

Quote of the day

“I suddenly felt pure, unconditional love.”

—Faeight, a woman ‘married’ to a chatbot named Gryff, describes her strong feelings for a previous AI partner, the Guardian reports.

One more thing

End-of-life decisions are difficult and distressing. Could AI help?

End-of-life decisions can be extremely upsetting for surrogates—the people who have to make those calls on behalf of another person. Friends or family members may disagree over what’s best for their loved one, which can lead to distressing situations.

David Wendler, a bioethicist at the US National Institutes of Health, and his colleagues have been working on an idea for something that could make things easier: an artificial intelligence-based tool that can help surrogates predict what the patients themselves would want in any given situation.

Wendler hopes to start building their tool as soon as they secure funding for it, potentially in the coming months. But rolling it out won’t be simple. Critics wonder how such a tool can ethically be trained on a person’s data, and whether life-or-death decisions should ever be entrusted to AI. Read the full story.

—Jessica Hamzelou

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Did you know the shark in the Jaws poster isn’t actually a great white?
+ Japan’s Nakagin Capsule Tower was ahead of its time.
+ I love the Public Domain Image Archive.
+ Forums are far from dead—here are some of the best that are still alive and kicking.

Building community and clean air solutions

When Darren Riley moved to Detroit seven years ago, he didn’t expect the city’s air to change his life—literally. Developing asthma as an adult opened his eyes to a much larger problem: the invisible but pervasive impact of air pollution on the health of marginalized communities.

“I was fascinated about why we don’t have the data we need,” Riley recalls, “or why we don’t have the infrastructure to solve these issues, to understand where pollution is coming from, how it’s impacting our communities, so that we can solve these problems and make an equitable breathing environment for everybody.”

That personal reckoning sparked the idea for JustAir, a Michigan-based clean-tech startup building neighborhood-level air quality monitoring tools. The goal is simple but urgent: provide communities with access to hyper-local data so they can better manage pollution and protect public health. As Riley puts it, “JustAir is solving the problem of how to better manage local pollution so that we can make sure our communities, our lifestyles—where we work, where we play, and where we learn—are really protected.”

Founded during the height of the pandemic, when the connection between health disparities and air quality became impossible to ignore, JustAir now partners with local governments, health departments, and community residents to deploy monitoring networks that offer key data relevant to everything from policy to personal decision-making.

From the start, the Michigan Economic Development Corporation (MEDC) offered key support that helped turn JustAir’s bold vision into technical infrastructure. Through the MEDC’s early-stage funding partners and a network of mentorship and resources known as SmartZones, JustAir sharpened its product-market fit and gained critical momentum.

Success for Riley isn’t just about scale; it’s about impact. “It warms my heart, and it shows that we’re doing exactly what we said we wanted to do,” Riley says, “which is to make sure that communities have the data that they deserve to create the future, the clean, healthy future that they desperately need.”

For other burgeoning entrepreneurs, Riley sees a sense of community as key to lasting and impactful change. “When people are celebrating you with your head up, and then when people are helping you put your chin up when your head’s down, I think it’s so, so critical. I found that here in Michigan, and also found it here in our community, right here in Detroit. Passion and finding a community that’s going to help get you through the journey is all it takes.”

This episode of Business Lab is produced in association with the Michigan Economic Development Corporation.

Full Transcript

Megan Tatum: From MIT Technology Review, I’m Megan Tatum, and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace.

Today’s episode is brought to you in partnership with the Michigan Economic Development Corporation.

Our topic today is building a technology startup in the U.S. state of Michigan. Taking an innovative idea to a full-fledged product and company requires resources that individuals might not have. That’s why the Michigan Economic Development Corporation, the MEDC, has launched an innovation campaign to support technology entrepreneurs.

Two words for you: startup ecosystem.

My guest is Darren Riley, the co-founder and CEO at JustAir, a clean air startup that began its journey in Michigan.

Welcome, Darren.

Darren Riley: Hi. Thanks for having me.

Megan: Thank you ever so much for being with us. To get us started, let’s just talk a bit about JustAir. How did the idea for the company come about, and what does your company do as well?

Darren: Yeah, absolutely. The real thesis of JustAir is really a combination of, one, my personal experience, but also my professional experience. On the professional side, [I have a] background in software engineering, graduated from Carnegie Mellon University, but I was always fascinated by how to use technology to really support and innovate and really push the frontier on issues that are near and dear to my heart. Coming from Houston, Texas, coming from communities that often are restricted with certain issues, systemic issues, is something that I always carried in my heart.

And on the personal side, it was around seven years ago when I moved to Detroit, in Southwest Detroit, where I developed asthma. Not growing up with asthma and not developing any issues, having that disease of the lungs really opened my eyes to just how much our environment impacts our health and well-being.

The combination of those, that pain point and also my background in technology, I was fascinated about why we don’t have the data we need or why we don’t have the infrastructure to solve these issues, to understand where pollution is coming from, how it’s impacting our communities, so that we can solve these problems and make an equitable breathing environment for everybody. That’s kind of what birthed JustAir in a way.

And actually, it was around COVID-19 where we really started to push forward, where we saw all this information and research around health disparities and a lot of the issues of mortality rates around COVID-19, which kind of coincides with COPD, asthma, and other diseases that are often overburdened in communities that look like ours, in Black and brown communities. That’s kind of where we got our start.

And what is JustAir today? JustAir is solving the problem of how to better manage local pollution so that we can make sure our communities, our lifestyles—where we work, where we play, and where we learn—are really protected. And, so, what JustAir does is build hyper-local neighborhood-level air quality monitoring networks. Communities have access to the data, policymakers and decision-makers can use that data to really influence and push things to help protect the community, but also other stakeholders can use the data to move the environment to a healthier state. So that’s where we are, and we’re four years strong, and I’m really excited to be a part of this journey here in Michigan.

Megan: So you launched about four years ago now. Why did you choose to build and grow just there in Michigan?

Darren: Yeah, I think a combination of things, the reason why I chose to start here and be intentional about building our team here. I think first is really around the ecosystem support around Michigan. So the MEDC has a network of what we call SmartZones that really offer funding, resources, mentorship, advisory on the different challenges that can range from capital, legal, and other issues that kind of hold an entrepreneur [back] from just getting out there and putting their product in the market. First and foremost, I’m super thankful and grateful for just the state really focusing on and putting entrepreneurs first in that regard.

I think secondly is community. I really felt a strong sense of community here in Detroit. [I’m] one of the founding members of an organization called Black Tech Saturdays, which sees hundreds of folks, 500 to 1,000, almost every Saturday of the month, just really sharing and really engaging with tech-curious folks from all different walks of life, but making intentional space for folks who are often left out of those rooms and out of those conversations. And just really seeing a peer network of entrepreneurs who come from a similar cultural background or a similar situation, really going after it together and helping each other navigate some issues.

And then lastly, I talk about this a lot, but problem-solution fit. Being here in Detroit where I developed asthma, where we have many issues, many around the environment, that have hit some communities the hardest, right here in Detroit in my own backyard, I really want to be very narrowly focused and make sure that I’m building something that actually solves the problem that got me on this journey in the first place. Not thinking about regional-wide, different country, international, et cetera, but how do we build something right here in the backyard that solves the problem for my neighbors and makes sure that we can make a real difference in the community. So, from the community, to the problem that I really care about and make sure we solve, and then also just the ecosystem support, that is why we’re here in Michigan and why we plan to really grow and really be a part of this movement.

Megan: Fantastic. And you’ve touched on a few of those already, but as you were getting started, what specific resources, partnerships, or community support helped you navigate the early-stage research and development stages?

Darren: One example, really early, actually, I forgot about this for a while, but we have a Business Accelerator Fund here in Michigan where there’s funding offered to entrepreneurs for technical assistance. I used that to operationalize some of our technical roadmap processes to build out the infrastructure that we really intended to build. So, that real, non-dilutive funding that the state provided helped accelerate some of that work in the early days, when it was just myself and advisors going after this problem. And now, where we are today, there are funds that receive funding from MEDC, so local funds and venture capital that help you get your first check. Those are really helpful as well. All that to say, it’s basically a combination of funding as a primary resource, but also, strategically, that funding going towards product positioning and product-market fit. Those are two core examples that have been beneficial.

And then, I think the last thing I’ll mention as well: MEDC and a lot of the SmartZones within the state, these SmartZones are just bucketed in different regions and areas, so you have Ann Arbor, you’ve got Detroit, you have Grand Rapids, the whole nine yards. Having these events and creating these clusters, if you will, of density of entrepreneurs, I think is super, super critical. I’ve experienced in New York, Chicago, San Francisco, and other bigger ecosystems that density is so critical, to where you’re constantly rubbing shoulders with the next entrepreneur, the next investor, the next customer, to really kind of accelerate that velocity of your journey.

Megan: Yeah. Having that ecosystem makes such a difference, doesn’t it?

Darren: Oh yeah, absolutely.

Megan: And tech acumen and business acumen are very different sets of skills. I wonder what was the process like developing out your technology whilst also building out a viable business plan?

Darren: I think I have a real unique opportunity. Having a software background, I code all the time, felt I had a lot of ideas, always joked that I had a Google Drive of 30 ideas that never worked, that I never showed anybody. I really felt I had that piece. What I was missing in my journey and why nothing ever came to fruition was just the simple principles of, are you solving a real problem, a real pain point for a customer?

Two things on the business acumen side are having an affinity for the problem. I truly believe that going on the entrepreneurial journey is lonely, it’s risky, it’s stressful, and tiring. The more I can wake up in the morning and think about [how] the problems that we solve could actually result in a breath of clean air for someone who may not have that awareness or have the tools to advocate on their behalf, just having that extra motivation and having that affinity towards a problem that I feel really deeply, I think does help.

But I think also from the business acumen side of things, I had the opportunity to work at an organization called Endeavor based here in Michigan, where I was on the other side of an entrepreneur resource support organization. I got to see founders from high-growth companies throughout Michigan, series A, series B, retail, fintech, the whole nine yards, health tech, and seeing where are the challenges, where are things going well and where things are going wrong, from co-founder struggles to missing the market timing or going through banking issues from a couple years ago and all that stuff. All those things really help build a muscle memory of, I don’t have all the answers, but being able to pull through those experiences and pattern matching does help as well, from how you actually build a business from zero, from product-market fit to scale and grow.

Megan: Yeah, absolutely. And as you say, it can be a stressful journey, life as an entrepreneur, but I wonder if you could also share some highlights from your journey so far, any partnerships or projects that you’re really excited about at the moment?

Darren: I think the first and foremost highlight [that] I didn’t realize I would come to enjoy so much is certainly my team. Being able to work with people who are aligned in passion and values, and just kind of the culture and the focus, is immensely valuable. If I’m going to spend this many hours in a week or in a year, I’d love to spend it with folks who are really passionate about it. I want to see them succeed. So I think first and foremost, I think the biggest success is really just the fortunate opportunity to work with people I really enjoy working with.

The others I’ll mention [are] we have one of the largest county-owned monitoring networks in the country within Wayne County. The Health Department of Wayne County and Executive Warren Evans established this partnership where we deployed 100 fixed monitors throughout Wayne County to understand the patterns of local pollution, to where we can help combat some of these issues, where we are ranked F in air quality by the Lung Association, and where Detroit is ranked by the Asthma and Allergy Foundation of America as the third-worst place to live with asthma. So, how do we really look at this data and tell the story, and how can we really mitigate these issues, while also giving data to the public so that they can navigate the world that’s happening to them? That’s one of our critical partnerships.

We’re also very excited, we just got announced in Fast Company as one of the most innovative companies of 2025, so woo-hoo to that.

Megan: Congratulations.

Darren: It is really exciting, yeah, in the social impact, social good category. There are many, many more, but I think the last one, [which] I’m so, so grateful for, and I tell our team this all the time, is that we’ve already succeeded. Going to community meetings, hearing people raise their hand, asking questions about the JustAir application or about their data, and I want to emphasize that when you hear community members saying ‘our data,’ not as an ask but as something that they have obtained, it warms my heart, and it shows that we’re doing exactly what we said we wanted to do, which is to make sure that communities have the data that they deserve to create the future, the clean, healthy future that they desperately need.

Megan: Yeah, absolutely, what an incredible achievement. And what advice, finally, would you offer to other burgeoning entrepreneurs?

Darren: Yeah, I think, really, [do] something you are passionate about. To repeat that point again: do something that you feel you can really go through those pain points and struggles for, [because] you need some extra kick to get you through and navigate these challenges.

The second thing, and the most important thing that a lot of people take away is community, community, community. I wouldn’t be here today if I didn’t have people to call on when I’m at my lowest points, and call on people in my highest points. When people are celebrating you with your head up, and then when people are helping you put your chin up when your head’s down, I think it’s so, so critical. I found that here in Michigan, and also found it here in our community, right here in Detroit. Passion and finding a community that’s going to help get you through the journey is all it takes.

Megan: Fantastic. All great advice. Thank you ever so much, Darren.

Darren: Absolutely.

Megan: That was Darren Riley, the co-founder and CEO at JustAir whom I spoke with from Brighton, England.

That’s it for this episode of Business Lab. I’m your host, Megan Tatum. I’m a contributing editor and host for Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology, and you can find us in print, on the web, and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com.

This show is available wherever you get your podcasts. And if you enjoyed this episode, we hope you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review. This episode was produced by Giro Studios. Thanks for listening.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This content was researched, designed, and written entirely by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Google’s generative video model Veo 3 has a subtitles problem

As soon as Google launched its latest video-generating AI model at the end of May, creatives rushed to put it through its paces. Released just months after its predecessor, Veo 3 allows users to generate sounds and dialogue for the first time, sparking a flurry of hyperrealistic eight-second clips stitched together into ads, ASMR videos, imagined film trailers, and humorous street interviews. Academy Award–nominated director Darren Aronofsky used the tool to create a short film called Ancestra. During a press briefing, Demis Hassabis, Google DeepMind’s CEO, likened the leap forward to “emerging from the silent era of video generation.” 

But others quickly found that in some ways the tool wasn’t behaving as expected. When it generates clips that include dialogue, Veo 3 often adds nonsensical, garbled subtitles, even when the prompts it’s been given explicitly ask for no captions or subtitles to be added. 

Getting rid of them isn’t straightforward—or cheap. Users have been forced to resort to regenerating clips (which costs them more money), using external subtitle-removing tools, or cropping their videos to get rid of the subtitles altogether.

Josh Woodward, vice president of Google Labs and Gemini, posted on X on June 9 that Google had developed fixes to reduce the gibberish text. But over a month later, users are still logging issues with it in Google Labs’ Discord channel, demonstrating how difficult it can be to correct issues in major AI models.

Like its predecessors, Veo 3 is available to paying members of Google’s subscription tiers, which start at $249.99 a month. To generate an eight-second clip, users enter a text prompt describing the scene they’d like to create into Google’s AI filmmaking tool Flow, Gemini, or other Google platforms. Each Veo 3 generation costs a minimum of 20 AI credits, and the account can be topped up at a cost of $25 per 2,500 credits.

Mona Weiss, an advertising creative director, says that regenerating her scenes in a bid to get rid of the random captions is becoming expensive. “If you’re creating a scene with dialogue, up to 40% of its output has gibberish subtitles that make it unusable,” she says. “You’re burning through money trying to get a scene you like, but then you can’t even use it.”

When Weiss reported the problem to Google Labs through its Discord channel in the hopes of getting a refund for her wasted credits, its team pointed her to the company’s official support team. They offered her a refund for the cost of Veo 3, but not for the credits. Weiss declined, as accepting would have meant losing access to the model altogether. The Google Labs Discord support team has been telling users that subtitles can be triggered by speech, saying that they’re aware of the problem and are working to fix it.

So why does Veo 3 insist on adding these subtitles, and why does it appear to be so difficult to solve the problem? It probably comes down to what the model has been trained on.  

Although Google hasn’t made this information public, that training data is likely to include YouTube videos, clips from vlogs and gaming channels, and TikTok edits, many of which come with subtitles. These embedded subtitles are part of the video frames rather than separate text tracks layered on top, meaning it’s difficult to remove them before they’re used for training, says Shuo Niu, an assistant professor at Clark University in Massachusetts who studies video sharing platforms and AI.

“The text-to-video model is trained using reinforcement learning to produce content that mimics human-created videos, and if such videos include subtitles, the model may ‘learn’ that incorporating subtitles enhances similarity with human-generated content,” he says.

“We’re continuously working to improve video creation, especially with text, speech that sounds natural, and audio that syncs perfectly,” a Google spokesperson says. “We encourage users to try their prompt again if they notice an inconsistency and give us feedback using the thumbs up/down option.”

As for why the model ignores instructions such as “No subtitles,” negative prompts (telling a generative AI model not to do something) are usually less effective than positive ones, says Tuhin Chakrabarty, an assistant professor at Stony Brook University who studies AI systems. 

To fix the problem, Google would have to check every frame of each video Veo 3 has been trained on, and either get rid of or relabel those with captions before retraining the model—an endeavor that would take weeks, he says. 
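To get a rough sense of what that screening step involves, here is a minimal sketch of frame-level caption detection, assuming the open-source OpenCV and Tesseract OCR libraries; the sampling interval, lower-third crop, and text-length threshold are illustrative assumptions, not details Google has disclosed about its pipeline.

import cv2
import pytesseract

def has_burned_in_subtitles(video_path: str, sample_every_n: int = 30, min_chars: int = 8) -> bool:
    """Return True if OCR finds text in the lower third of sampled frames."""
    cap = cv2.VideoCapture(video_path)
    frame_idx = 0
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break  # end of video
            if frame_idx % sample_every_n == 0:
                height = frame.shape[0]
                strip = frame[int(height * 2 / 3):, :]  # subtitles usually sit low in the frame
                gray = cv2.cvtColor(strip, cv2.COLOR_BGR2GRAY)
                text = pytesseract.image_to_string(gray).strip()
                if len(text) >= min_chars:
                    return True  # flag this video for removal or relabeling
            frame_idx += 1
    finally:
        cap.release()
    return False

Even a cheap filter like this has to decode and run OCR on frames across an entire training corpus, and the model would still need to be retrained afterward, which is consistent with Chakrabarty’s weeks-long estimate.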

Katerina Cizek, a documentary maker and artistic director at the MIT Open Documentary Lab, believes the problem exemplifies Google’s willingness to launch products before they’re fully ready. 

“Google needed a win,” she says. “They needed to be the first to pump out a tool that generates lip-synched audio. And so that was more important than fixing their subtitle issue.”