The cheapest way to supercharge America’s power grid

US electricity consumption is rising faster than it has in decades, thanks in part to the boom in data center development, the resurgence in manufacturing, and the increasing popularity of electric vehicles. 

Accommodating that growth will require building wind turbines, solar farms, and other power plants faster than we ever have before—and expanding the network of wires needed to connect those facilities to the grid.


Heat Exchange

MIT Technology Review’s guest opinion series, offering expert commentary on legal, political and regulatory issues related to climate change and clean energy. You can read the rest of the pieces here.


But one major problem is that it’s expensive and slow to secure permits for new transmission lines and build them across the country. This challenge has created one of the biggest obstacles to getting more electricity generation online, reducing investment in new power plants and stranding others in years-long “interconnection queues” while they wait to join the grid.

Fortunately, there are some shortcuts that could expand the capacity of the existing system without requiring completely new infrastructure: a suite of hardware and software tools known as advanced transmission technologies (ATTs), which can increase both the capacity and the efficiency of the power sector.

ATTs have the potential to radically reduce timelines for grid upgrades, avoid tricky permitting issues, and yield billions in annual savings for US consumers. They could help us quickly bring online a significant portion of the nearly 2,600 gigawatts of backlogged generation and storage projects awaiting pathways to connect to the electric grid. 

The opportunity to leverage advanced transmission technologies to update the way we deliver and consume electricity in America is as close to a $20 bill sitting on the sidewalk as policymakers may ever encounter. Promoting the development and use of these technologies should be a top priority for politicians in Washington, DC, as well as electricity market regulators around the country.

That includes the new Trump administration, which has clearly stated that building greater electricity supply and keeping costs low for consumers are high priorities. 

In the last month, Washington has been consumed by the Trump team’s efforts to test the bounds of executive power, fire civil servants, and disrupt the basic workings of the federal government. But when or if the White House and Congress get around to enacting new energy policies, they would be wise to pick up the $20 bill by enacting bipartisan measures to accelerate the rollout of these innovative grid technologies.

ATTs generally fall into four categories: dynamic line ratings, which combine local weather forecasts and measurements on or near transmission lines to safely increase their capacity when conditions allow; high-performance conductors, which are advanced wires that use carbon fiber, composite cores, or superconducting materials to carry more electricity than traditional steel-core conductors; topology optimization, which uses software to model fluctuating conditions across the grid and identify the most efficient routes to distribute electricity from moment to moment; and advanced power flow control devices, which redistribute electricity to lines with available capacity. 


“This would allow utilities to earn a profit for saving money, not just spending it, and could save consumers billions on their electricity bills every year.”


Other countries from Belgium to India to the United Kingdom are already making large-scale use of these technologies. Early projects in the United States have been remarkably successful as well. One recent deployment of dynamic line ratings increased capacity by more than 50% for only $45,000 per mile—roughly 1% of the price of building new transmission.

So why are we not seeing an explosion in ATT investment and deployment in the US? Because despite their potential to unlock 21st-century technology, the 20th-century structure of the nation’s electricity markets discourages adoption of these solutions. 

For one thing, under the current regulatory system, utilities generally make money by passing the cost of big new developments along to customers (earning a fixed annual return on their investment). That comes in the form of higher electricity rates, which local public utility commissions often approve after power companies propose such projects.

That means utilities have financial incentives to make large and expensive investments, but not to save consumers money. When ATTs are installed in place of building new transmission capacity, the smaller capital costs mean that utilities make lower profits. For example, utilities might earn $600,000 per year after building a new mile of transmission, compared with about $4,500 per mile annually after installing the equipment and software necessary for line ratings. While these state regulatory agencies are tasked with ensuring that utilities act in the best interest of consumers, they often lack the necessary information to identify the best approach for doing so.

Overcoming these structural barriers will require action from both state and federal governments, and it should appeal to Democrats and Republicans alike. We’ve already seen some states, including Minnesota and Montana, move in this direction, but policy interventions to date remain insufficient. In a recent paper, we propose a new approach for unlocking the potential of these technologies.

First, we suggest requiring transmission providers to use ATTs in some “no regrets” contexts, where possible downsides are minor or nonexistent. The Federal Energy Regulatory Commission, for example, is already considering requiring dynamic line ratings on certain highly congested lines. Given the low cost of dynamic line ratings, and their clear benefit in cases of congestion, we believe that FERC should quickly move forward with, and strengthen, such a rule. Likewise, the Department of Energy or Congress should adopt an efficiency standard for the wires that carry electricity around the country. Every year, approximately 5% of electricity generated is lost in the transmission and distribution process. The use of high-performance conductors can reduce those losses by 30%.

In addition, federal agencies and state lawmakers should require transmission providers to evaluate the potential for using ATTs on their grid, or provide support to help them do so. FERC has recently taken steps in this direction, and it should continue to strengthen those actions. 

Regulators should also provide financial incentives to transmission providers to encourage the installation of ATTs. The most promising approach is a “shared savings” incentive, such as that proposed in the recent Advancing GETS Act. This would allow utilities to earn a profit for saving money, not just spending it, and could save consumers billions on their electricity bills every year.

Finally, we should invest in building digital tools so transmission owners can identify opportunities for these technologies and so regulators can hold them accountable. Developing these systems will require transmission providers to share information about electricity supply and demand as well as grid infrastructure. Ideally, with such data in hand, researchers can develop a “digital twin” of the current transmission system to test different configurations of ATTs and help improve the performance and efficiency of our grids. 

We are all too aware that the world often faces difficult policy trade-offs. But laws or regulations that facilitate the use of ATTs can quickly expand the grid and save consumers money. They should be an easy yes on both sides of the aisle.

Brian Deese is an innovation fellow at the Massachusetts Institute of Technology and served as director of the White House National Economic Council from 2021 to 2023. Rob Gramlich is founder and president of Grid Strategies and was economic advisor to the chairman of the Federal Energy Regulatory Commission during the George W. Bush administration.

The Download: supercharging the power grid, and a new Chinese AI agent

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

The cheapest way to supercharge America’s power grid

—Brian Deese is an innovation fellow at the Massachusetts Institute of Technology and served as director of the White House National Economic Council from 2021 to 2023. Rob Gramlich is founder and president of Grid Strategies and was economic advisor to the chairman of the Federal Energy Regulatory Commission during the George W. Bush administration.

US electricity consumption is rising faster than it has in decades. Accommodating that growth will require building wind turbines, solar farms, and other power plants faster than we ever have before—and expanding the network of wires needed to connect those facilities to the grid.

But one major problem is that it’s expensive and slow to secure permits for new transmission lines and build them across the country. Fortunately, there are some shortcuts that could expand the capacity of the existing system without requiring completely new infrastructure: a suite of hardware and software tools known as advanced transmission technologies (ATTs), which can increase both the capacity and the efficiency of the power sector.

ATTs have the potential to radically reduce timelines for grid upgrades, avoid tricky permitting issues, and yield billions in annual savings for US consumers. So why are we not seeing an explosion in ATT investment and deployment in the US? Read the full story.

Interested in learning more about this topic? Read more of our stories:

+ What’s driving electricity demand? It isn’t just AI and data centers.

+ That said, AI’s search for energy is growing more urgent

+ Why this developer won’t quit fighting to connect the US’s grids. 

+ Here are four ways AI is making the power grid faster and more resilient. 

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 China claims to have created the world’s first fully autonomous AI agent 
The agent, called Manus, can allegedly operate fully free of human intervention. (Forbes)
+ But it’s not clear if the hype can be justified at this stage. (TechCrunch)
+ Two former DeepMind researchers are chasing superintelligence. (Bloomberg $)
+ Four Chinese AI startups to watch beyond DeepSeek. (MIT Technology Review)

2 Meta went to extreme lengths to win China’s approval
Including developing a censorship system to comply with the CCP. (WP $)
+ However, its attempts to curry favor with the party did not bear fruit. (Gizmodo)

3 Anonymous Chinese investors are quietly funding Elon Musk’s ventures
They’re happy to invest tens of millions—so long as their identities remain under wraps. (FT $)
+ Despite the influx of cash, SpaceX isn’t having a great year. (NYT $)
+ Starlink is reaping the benefits of its founder’s proximity to the White House. (NBC News)

4 Ukraine doesn’t have minable rare earths
And even if it did, it would take at least 15 years to reach them. (IEEE Spectrum)
+ The country is preparing to hold negotiations with the US this week. (Economist $)

5 Farewell, the Athena lunar lander
It landed sideways in a crater and has been officially written off. (The Register)
+ Intuitive Machines, the company behind it, is contracted for another two landings. (AP News)
+ Firefly Aerospace, another private firm, had better luck. (Economist $)

6 The American public really doesn’t like DOGE
And Donald Trump is starting to pay attention. (The Atlantic $)
+ Musk represents the problem he is claiming he wants to solve. (Wired $)
+ The Trump administration is threatening scientific progress. (New Yorker $)
+ Anti-Musk protestors are targeting Tesla stores and infrastructure. (WP $)

7 Wikipedia is struggling to document the war in the Middle East
Certain editors have been forbidden from working on related pages. (Bloomberg $)

8 How to store the world’s data
Hard discs seem the obvious choice—for now. (WSJ $)
+ Music labels are going after the Internet Archive for copyright infringement. (Ars Technica)
+ The race to save our online lives from a digital dark age. (MIT Technology Review)

9 YouTube bros are peddling Taliban tourism
Inside the depressing rise of videos purporting to show “another side to Afghanistan.” (Insider $)

10 Amazon and Google’s AI calls Mein Kampf “a true work of art” 
That’s what happens when you search for positive reviews of the Nazi manifesto. (404 Media)
+ Want AI that flags hateful content? Build it. (MIT Technology Review)

Quote of the day

“The US won the internet and the US should win crypto.”

—Tyler Winklevoss, who runs crypto exchange Gemini with his twin brother Cameron, could not be happier with the outcome of Donald Trump’s crypto summit, according to a post on X.

The big story

How this Turing Award–winning researcher became a legendary academic advisor

October 2023

Every academic field has its superstars. But a rare few achieve superstardom not just by demonstrating individual excellence but also by consistently producing future superstars.

Computer science has its own such figure: Manuel Blum, who won the 1995 Turing Award—the Nobel Prize of computer science. He is the inventor of the captcha—a test designed to distinguish humans from bots online.

Three of Blum’s students have also won Turing Awards, and many have received other high honors in theoretical computer science. More than 20 hold professorships at top computer science departments. So what’s the formula to his success? Read the full story.

—Sheon Han

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Looking for some books to make you laugh out loud? Look no further.
+ What can’t White Lotus star Walton Goggins live without? An orange pen and 22-year old sand, apparently.
+ When it’s time to take a break, here’s how to recharge properly.
+ $40 for “magic” yogurt? What the hell, sure.

Create Google Ads Scripts via GenAI

I’ve addressed the benefits of Google Ads scripts for automating aspects of account management. Producing custom scripts is now much easier with generative AI platforms.

Scripts can go beyond automation to solve problems. For example, it’s not always evident when an account stops serving ads. The payment method may fail, or Google could reject the ads. The “Account Down” script runs hourly and sends an email when an account hasn’t recorded an impression in a designated period.
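The core of such a check is simple. Here’s a minimal sketch of the alert condition in plain JavaScript — the function name and the mocked hourly data are illustrative; a real Google Ads script would pull impression stats from the account’s reporting tools instead.

```javascript
// Illustrative check: alert when a lookback window contains no impressions.
// hourlyImpressions is a most-recent-first array of hourly impression counts.
function accountIsDown(hourlyImpressions, lookbackHours) {
  var window = hourlyImpressions.slice(0, lookbackHours);
  var total = window.reduce(function (sum, n) { return sum + n; }, 0);
  return total === 0; // no impressions in the window -> send the alert email
}

// The last three hours saw no impressions, so a 3-hour lookback alerts:
console.log(accountIsDown([0, 0, 0, 120, 95], 3)); // true
console.log(accountIsDown([0, 0, 0, 120, 95], 5)); // false
```

The designated period becomes a single parameter, so the same logic covers a cautious 24-hour check or an aggressive 1-hour one.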

I’ve struggled with campaigns that set a higher target cost per acquisition than the budget. It’s not always obvious. A budget might be $500, but the tCPA $600. In my experience, the budget should be at least three times the tCPA.

AI for Ad Scripts

Here’s where AI comes in. Tools such as ChatGPT, Gemini, and Claude (my favorite) can generate scripts quickly.


Generative AI platforms such as Claude can produce scripts quickly.

Here’s my prompt in Claude to create a script:

Please write a Google Ads script that reviews all campaigns and sends an email alert when the tCPA is higher than the campaign budget.

Here’s Claude’s response:

Google Ads tCPA Alert Script

This script:

1. Iterates through all enabled campaigns in your Google Ads account
2. Identifies campaigns using Target CPA bidding strategy
3. Compares each campaign’s tCPA value against its daily budget
4. Creates a list of campaigns where tCPA exceeds the budget
5. Sends you an email alert with details if any campaigns meet this condition
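The steps above can be sketched in plain JavaScript. This is a mock, not Claude’s actual script: the campaign objects and field names stand in for the values a real Google Ads script would read through its campaign iterators, and the email step is left as a placeholder.

```javascript
// Sketch of the listed steps over mocked campaign data.
function findTcpaConflicts(campaigns) {
  return campaigns
    .filter(function (c) { return c.enabled; })                   // step 1: enabled only
    .filter(function (c) { return c.strategy === 'TARGET_CPA'; }) // step 2: tCPA bidding
    .filter(function (c) { return c.tCpa > c.dailyBudget; });     // steps 3-4: tCPA > budget
}

var campaigns = [
  { name: 'Brand',   enabled: true,  strategy: 'TARGET_CPA', tCpa: 600, dailyBudget: 500 },
  { name: 'Generic', enabled: true,  strategy: 'TARGET_CPA', tCpa: 40,  dailyBudget: 150 },
  { name: 'Paused',  enabled: false, strategy: 'TARGET_CPA', tCpa: 900, dailyBudget: 100 }
];

var conflicts = findTcpaConflicts(campaigns);
// Step 5 would email this list; here we just print the offenders.
console.log(conflicts.map(function (c) { return c.name; })); // [ 'Brand' ]
```

Only the $600-tCPA, $500-budget campaign is flagged; the paused campaign is skipped even though its numbers also conflict.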

I pasted the script into Google Ads and received an error.

3/5/2025 3:47:20 PM TypeError: campaign.bidding(...).get is not a function
at main (Code:39:40)
at Object.<anonymous> (adsapp_compiled:20555:54)

I then prompted Claude to fix the error. Here’s Claude’s response:

Google Ads tCPA Alert Script – Fixed

I’ve fixed the error by correcting how the script retrieves the target CPA value. The issue was with the line `var bidding = campaign.bidding().get();` which was using an incorrect method call.

Key changes made:

1. Replaced the problematic method chain with a direct call to `campaign.bidding().getTargetCpa()`

2. Added error handling with try/catch blocks to gracefully handle different API versions and bidding strategy configurations

3. Added support for retrieving tCPA from shared bidding strategies (which have a different access method)

4. Added validation to ensure tCPA was successfully retrieved before comparing it to the budget
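The fallback pattern Claude described looks roughly like this. The mock below is hypothetical — the accessor names and the shared-strategy path are illustrative stand-ins, not verified Google Ads API calls — but it shows the try/catch-plus-validation shape of the fix.

```javascript
// Try the direct tCPA accessor, fall back to a shared-strategy accessor,
// and validate the result before using it (mirroring fixes 1-4 above).
function getTargetCpaSafely(campaign) {
  var tCpa = null;
  try {
    tCpa = campaign.bidding().getTargetCpa();                     // direct accessor
  } catch (e) {
    try {
      tCpa = campaign.bidding().getStrategy().getTargetCpa();     // shared strategy
    } catch (e2) {
      tCpa = null;                                                // give up gracefully
    }
  }
  return (typeof tCpa === 'number' && tCpa > 0) ? tCpa : null;    // validation
}

// Mock campaign whose direct accessor throws, like the original error:
var sharedStrategyCampaign = {
  bidding: function () {
    return {
      getTargetCpa: function () { throw new TypeError('not a function'); },
      getStrategy: function () {
        return { getTargetCpa: function () { return 600; } };
      }
    };
  }
};
console.log(getTargetCpaSafely(sharedStrategyCampaign)); // 600
```

The point is the structure: each retrieval path is isolated in its own try/catch, so one bad method call no longer kills the whole run.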

The script — download it here — now runs in Google Ads. I solved a problem through a series of prompts.

Not perfect

My next AI task was to obtain a script that identified all active ad groups in live campaigns with no impressions in 21 days. The alert would trigger me to investigate the cause.

Claude’s script worked, but it had two errors:

  • It defaulted to any period, not 21 days.
  • It included custom experiments that ended 21 or more days previously. Experiments run only for a designated period, but the script included them as normal campaigns.
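For reference, here’s what the two missing checks amount to, sketched over hypothetical ad-group records (the field names are mine, not the script’s): a strict lookback window and an experiment exclusion.

```javascript
var DAY_MS = 24 * 60 * 60 * 1000;

// Flag an ad group only if it is not an experiment (fix for error 2)
// and its last impression predates the lookback cutoff (fix for error 1).
function needsInvestigation(adGroup, now, lookbackDays) {
  if (adGroup.isExperiment) return false;
  var cutoff = now - lookbackDays * DAY_MS;
  return adGroup.lastImpressionAt < cutoff;
}

var now = Date.parse('2025-03-05');
var groups = [
  { name: 'Stale',   isExperiment: false, lastImpressionAt: now - 30 * DAY_MS },
  { name: 'Active',  isExperiment: false, lastImpressionAt: now - 2 * DAY_MS },
  { name: 'OldTest', isExperiment: true,  lastImpressionAt: now - 40 * DAY_MS }
];

var flagged = groups.filter(function (g) { return needsInvestigation(g, now, 21); });
console.log(flagged.map(function (g) { return g.name; })); // [ 'Stale' ]
```

With both conditions in place, the ended experiment is ignored and only the genuinely stale ad group triggers the alert.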

After I shared these errors, Claude’s replacement scripts produced new errors for invalid inputs. I ended the session without securing the script I wanted.

It’s a handy reminder that AI is not always as productive as humans. A developer could likely set up this script without the extended back-and-forth.

Claude’s script for tCPA conflicts worked. The no-impressions script did not.

Monitor the unexpected

The final script I created addressed rejected ads. Advertisers should monitor not just rejected ads but also rejected keywords and audiences, which occur occasionally.

Any script in Google Ads times out after 30 to 60 minutes. I first prompted Claude to identify disapproved ad assets (sitelinks, callouts, and more). Fearing a timeout, I opted for two separate scripts.


The follow-up script and response focused on “limited” statuses.

Here’s the prompt for the non-asset disapproval.

Please update this script so it only looks at disapproved ads, keywords, and audiences. Entities with “limited” status should not be included.

And Claude’s response:

I’ll update the script to only include items with “DISAPPROVED” status and exclude those with “LIMITED” or any other non-disapproved status. Here’s the modified version:

Google Ads Script for Disapproved Items Only. Click to open code.

The key changes I made to the script:

1. Changed the condition checks for ads from:

```javascript
if (ad.getPolicyApprovalStatus() != 'APPROVED') {
```

to:

```javascript
if (ad.getPolicyApprovalStatus() == 'DISAPPROVED') {
```

Download the script.
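The status filter itself is a one-liner. Here’s a self-contained sketch over hypothetical entity records — a real script would read the status from each ad, keyword, or audience through the Google Ads interfaces, but the exclusion logic is the same.

```javascript
// Keep only DISAPPROVED entities; LIMITED and APPROVED ones are skipped,
// matching the prompt's instruction to exclude "limited" statuses.
function disapprovedOnly(entities) {
  return entities.filter(function (e) { return e.status === 'DISAPPROVED'; });
}

var entities = [
  { id: 'ad-1',  status: 'DISAPPROVED' },
  { id: 'kw-7',  status: 'LIMITED' },    // excluded, per the prompt
  { id: 'aud-3', status: 'APPROVED' }
];
console.log(disapprovedOnly(entities).map(function (e) { return e.id; })); // [ 'ad-1' ]
```

Checking for equality with one status, rather than inequality with another, is exactly the change Claude made: `!= 'APPROVED'` would have swept in the “limited” items too.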

No limits

My example scripts above alert advertisers via email, but I could have prompted Claude to output to a spreadsheet instead. I could have included paused entities. The possibilities are seemingly limitless.

U.S. DOJ Antitrust Filing Proposes 4 Ways To Break Google’s Monopoly via @sejournal, @martinibuster

The plaintiffs in an antitrust lawsuit against Google filed a revised proposed final judgment for the judge in the case to consider. The proposal comes after a previous ruling where the court determined that Google broke antitrust laws by illegally maintaining its monopoly.

The legal filing by the plaintiffs, the United States Department of Justice and State Attorneys General, argues that Google has maintained monopolies in search services and text advertising through anticompetitive practices.

The filing proposes four ways to loosen Google’s monopolistic hold on search and advertising.

  1. Requiring Google to separate Chrome from its business—this could mean selling it or spinning it off into an independent company.
  2. Limiting Google’s payments to companies like Apple for making Google the default search engine, reducing its ability to secure exclusive deals.
  3. Stopping Google from favoring its own products over competitors in search results and other services, ensuring a more level playing field.
  4. Increasing transparency in Google’s advertising and data practices so competitors have fairer access to key information.

The proposal asks that Google be subjected to continuous oversight through mandatory reporting to ensure transparency in Google’s advertising and data practices:

“Google must provide to the Technical Committee and Plaintiffs a monthly report outlining any changes to its search text ads auction and its public disclosure of those changes.”

It also suggests ongoing enforcement to guarantee that Google doesn’t impose new restrictions that undermine transparency requirements:

“Google must not limit the ability of advertisers to export in real time (by downloading through an interface or API access) data or information relating to their entire portfolio of ads or advertising campaigns bid on, placed through, or purchased through Google.”

The goal of the above section is to increase transparency in Google’s advertising system and make it easier for advertisers to analyze their ad performance.

Real-time access ensures advertisers can make immediate adjustments to their campaigns instead of waiting for delayed reports, and it keeps advertisers from being locked into Google’s advertising system by their own historical data.

The legal filing proposes government-imposed restrictions and changes to Google’s advertising business practices: remedies for how Google should be regulated or restructured following the court’s earlier ruling that it engaged in monopolistic practices. This is not the final judgment, however; the court must still decide whether to adopt, modify, or reject the proposed remedies.

YouTube’s Creator Liaison Shares Advice For Mid-Roll Ad Changes via @sejournal, @MattGSouthern

YouTube Creator Liaison Rene Ritchie has advised content creators on adapting to YouTube’s upcoming mid-roll advertising changes.

These changes take effect on May 12 and will alter how ads appear within videos.

Background

Starting May 12, YouTube will implement a new system prioritizing mid-roll ad placements during natural content breaks rather than at potentially disruptive moments.

YouTube will automatically place ads at natural transitions in videos, but creators can manually control ad placements if they prefer.

This update introduces a hybrid approach, allowing creators to use automatic and manual mid-roll placements simultaneously.

According to YouTube’s early testing, channels adopting this combined approach have seen an average increase in ad revenue of 5%.

Ritchie’s Adaptation Strategy

Sharing his approach on X, Ritchie outlined specific steps he’s taking with his own YouTube channel:

“I’m turning on auto mid-rolls, since that system will continue to be improved and optimized by launch and over time. For new videos, I’m manually inserting additional slots if and as needed where I think it’ll provide the best experience for viewers.”

For existing content, Ritchie recommends a prioritized approach, stating:

“For back catalog, I’m sorting by current watch time and doing the same for the top 20-50 most-watched videos.”

Maintaining Creator Control

Ritchie addressed concerns about YouTube potentially removing manual placement options:

“No one is taking away manual mid-roll placements. Creators can still put slots wherever and whenever we want.”

He reminded creators that designated ad slots don’t guarantee ad placement but indicate where ads can potentially appear.

Ritchie drew a parallel to YouTube’s retention analytics and explained how the new ad feedback tool provides valuable insights.

“In the days before the retention graph in Analytics, my 10-second long intro might have caused a ton of people to dip from the video and I never knew it. Similarly, I can still put that mid-roll slot anywhere I want, but now I’m getting data about how it will perform.”

Ongoing Improvements

YouTube is actively refining the automatic detection system and will continue improving it after the May launch.

Ritchie notes there’s a mutual interest in making mid-rolls more effective:

“YouTube and creators share revenue, so it’s in everyone’s best interest to make mid-rolls work better.”

What Creators Should Do Now

Based on both YouTube’s official guidance and Ritchie’s recommendations, creators should:

  • Enable automatic mid-roll placement while maintaining manual control where needed
  • Review high-performing back catalog content first
  • Use the new feedback tool to identify potentially disruptive ad placements

  • Continue providing feedback to YouTube as the system develops

This interaction with Ritchie shows the team is listening.


Featured Image: Alejo Bernal/Shutterstock

Why Google May Adopt Vibe Coding For Search Algorithms via @sejournal, @martinibuster

A new trend in Silicon Valley, Vibe Coding, is driving an exponential acceleration in how quickly engineers can develop products and algorithms. This approach aligns with principles outlined by Google co-founder Sergey Brin in a recent email to DeepMind engineers.

Top Silicon Valley insiders call Vibe Coding the “dominant way to code,” and Brin’s message suggests that Google will embrace it to dramatically speed up AI development. Given its potential, this approach may also extend to Google’s search algorithms, leading to more changes to how search results are ranked.

Vibe Coding Is Here To Stay

Four Y Combinator executives, discussing the trend on the accelerator’s podcast, agreed that vibe coding is a very big deal but were surprised at how fast it has overtaken the industry. Jared Friedman observed that it’s like something out of the fairy tale Jack and the Beanstalk, where world-changing magic beans sprout into gigantic beanstalks overnight.

Garry Tan agreed, saying:

“I think our sense right now is this isn’t a fad. This isn’t going away. This is actually the dominant way to code, and if you’re not doing it, you might be left behind. This is here to stay.”

What Is Vibe Coding?

Vibe coding is software engineering with AI:

  • Software engineers use AI to generate code rather than writing it manually.
  • Rely on natural language prompts to guide software development.
  • Prioritize speed and iteration.
  • Time isn’t spent on debugging as code is simply regenerated until it works.
  • Vibe coding shifts software engineering focus from writing code to choosing what kinds of problems to solve.
  • Leverage AI for rapid code regeneration instead of traditional debugging.
  • It is exponentially speeding up coding.

Vibe coding is a way of creating code with AI, with an emphasis on speed. That means it’s increasingly less necessary to debug code, because an engineer can simply re-roll the code generation multiple times until the AI gets it right.

A recent tweet by Andrej Karpathy kicked off a wave of excitement in Silicon Valley. Karpathy, a prominent AI researcher and former director of AI at Tesla, described what Vibe Coding is and explained why it’s the fastest way to code with AI. It’s so reliable that he doesn’t even check the modifications the AI makes (referred to as “diffs”).

Karpathy tweeted:

“There’s a new kind of coding I call “vibe coding”, where you fully give in to the vibes, embrace exponentials, and forget that the code even exists. It’s possible because the LLMs (e.g. Cursor Composer w Sonnet) are getting too good.

Also I just talk to Composer with SuperWhisper so I barely even touch the keyboard. I ask for the dumbest things like “decrease the padding on the sidebar by half” because I’m too lazy to find it. I “Accept All” always, I don’t read the diffs anymore.

When I get error messages I just copy paste them in with no comment, usually that fixes it. The code grows beyond my usual comprehension, I’d have to really read through it for a while.

Sometimes the LLMs can’t fix a bug so I just work around it or ask for random changes until it goes away. It’s not too bad for throwaway weekend projects, but still quite amusing.

I’m building a project or webapp, but it’s not really coding – I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works.”

Sergey Brin Emphasizes Vibe Coding Principles

A recent email from Google co-founder Sergey Brin to DeepMind engineers emphasized the need to integrate AI into their workflow to reduce time spent on coding. The email states that code matters most and that AI will improve itself, advising that if it’s simpler to prompt an AI for a solution, then that’s preferable to training an entirely new model. Brin describes this as highly important for becoming efficient coders. These principles align with Vibe Coding, which prioritizes speed, simplicity, and AI-driven development.

Brin also recommends using first-party code (code developed by Google) instead of relying on open-source or third-party software. This strongly suggests that Google intends to keep its AI advancements proprietary rather than open-source. That may mean any advancements created by Google will not be open-sourced and may not show up in research papers but instead may be discoverable through patent filings.

Brin’s message de-emphasizes the use of LoRA, a machine learning technique used to fine-tune AI models efficiently. This implies that he wants DeepMind engineers to prioritize efficient workflows rather than spending excessive time fine-tuning models. This also suggests that Google is shifting focus toward simpler, more scalable approaches like vibe coding which rely on prompt engineering.

Sergey Brin wrote:

“Code matters most — AGI will happen with takeoff, when the AI improves itself. Probably initially it will be with a lot of human help so the most important is our code performance. Furthermore this needs to work on our own 1p code. We have to be the most efficient coder and AI scientists in the world by using our own AI.

Simplicity — Lets use simple solutions where we can. Eg if prompting works, just do that, don’t posttrain a separate model. No unnecessary technical complexities (such as lora). Ideally we will truly have one recipe and one model which can simply be prompted for different uses.

Speed — we need our products, models, internal tools to be fast. Can’t wait 20 minutes to run a bit of python on borg.”

Those statements align with the principles of vibe coding so it’s important to understand what it is and how it may affect how Google develops search algorithms and AI which may be used for the purposes of ranking websites.

Software Engineers Transitioning To Product Engineers

A recent podcast by Y Combinator, a Silicon Valley startup accelerator company, discussed how vibe coding is changing what it means to be a software engineer and how it will affect hiring practices.

The podcast hosts quoted multiple people:

Leo Paz, Founder of Outlit observed:

“I think the role of Software Engineer will transition to Product Engineer. Human taste is now more important than ever as codegen tools make everyone a 10x engineer.”

Abhi Aiyer of Mastra shared how their coding practices changed:

“I don’t write code much. I just think and review.”

One of the podcast hosts, Jared Friedman, managing partner at Y Combinator, said:

“This is a super technical founder whose last company was also a dev tool. He’s extremely able to code, and so it’s fascinating to have people like that saying things like this.”

They next quoted Abhi Balijepalli of Copycat:

“I am far less attached to my code now, so my decisions on whether we decide to scrap or refactor code are less biased. Since I can code 3 times as fast, it’s easy for me to scrap and rewrite if I need to.”

Garry Tan, President & CEO, Y Combinator commented:

“I guess the really cool thing about this stuff is it actually parallelizes really well.”

He quoted Yoav Tamir of Casixty:

“I write everything with Cursor. Sometimes I even have two windows of Cursor open in parallel and I prompt them on two different features.”

Tan commented on how much sense that makes, and asked why not have three instances of Cursor open to accomplish even more.

The panelists on the podcast then cited Jackson Stokes of Trainloop who explains the exponential scale of how fast coding has become:

“How coding has changed: six months ago to one month ago, a 10X speedup. One month ago to now, a 100X speedup. Exponential acceleration. I’m no longer an engineer, I’m a product person.”

Garry Tan commented:

“I think that might be something that’s happening broadly. You know, it really ends up being two different roles you need. It actually maps to how engineers sort of self assign today, in that either you’re front-end or backend. And then backend ends up being about actually infrastructure and then front-end is so much more actually being a PM (product manager)…”

Harj Taggar, Managing Partner, Y Combinator observed that the LLMs are going to push people to the role of making choices, that the actual writing of the code will become less important.

Why Debugging With AI Is Unnecessary

An interesting wrinkle in vibe coding is that it changes how engineers deal with bugs: rather than spending long hours debugging, they can often just regenerate the code from scratch. This lets them push code out the door faster than ever before.

Tan commented on how poor AI is at debugging:

“…one thing the survey did indicate is that this stuff is terrible at debugging. And so… the humans have to do the debugging still. They have to figure out well, what is the code actually doing?

There doesn’t seem to be a way to just tell it, debug. You were saying that you have to be very explicit, like as if giving instructions to a first time software engineer.”

Jared offered his observation on AI’s ability to debug:

“I have to really spoon feed it the instructions to get it to debug stuff. Or you can kind of embrace the vibes. I’d say Andrej Karpathy style, sort of re-roll, just like tell it to try again from scratch.

It’s wild how your coding style changes when actually writing the code becomes a 1000x cheaper. Like, as a human you would never just like blow away something that you’d worked on for a very long time and rewrite from scratch because you had a bug. You’d always fix the bug. But for the LLM, if you can just rewrite a thousand lines of code in just six seconds, like why not?”

Tan observed that it’s like how people use AI image generators: if there’s something they don’t like, they don’t even change the prompt; they simply click re-roll a few times until, on the fifth try, it works.

Vibe Coding And Google’s Search Algorithms

While Sergey Brin’s email does not explicitly mention search algorithms, it advocates AI-driven, prompt-based development at scale and high speed. As vibe coding becomes increasingly common, it is likely that Google will adopt this methodology across its projects, including the development of future search algorithms.

Watch the Y Combinator Video Roundtable

Vibe Coding Is The Future

Featured Image by Shutterstock/bluestork

AI Writing Fingerprints: How To Spot (& Fix) AI-Generated Content via @sejournal, @MattGSouthern

New research shows that ChatGPT, Claude, and other AI systems leave distinctive “fingerprints” in their writing.

Here’s how you can use this knowledge to identify AI content and improve your AI-assisted output.

The AI Fingerprint: What You Need to Know

Researchers have discovered that different AI writing systems produce text with unique, identifiable patterns.

Analyzing these patterns, researchers achieved 97.1% accuracy in determining which AI wrote a particular piece of content.

The study (PDF link) reads:

“We find that a classifier based upon simple fine-tuning text embedding models on LLM outputs is able to achieve remarkably high accuracy on this task. This indicates the clear presence of idiosyncrasies in LLMs.”

This matters for two reasons:

  • For readers: As the web becomes increasingly saturated with AI-generated content, knowing how to spot it helps you evaluate information sources.
  • For writers: Understanding these patterns can help you better edit AI-generated drafts to sound more human and authentic.

How To Spot AI-Generated Content By Model

Each major AI system has specific writing habits that give it away.

The researchers discovered these patterns remain even in rewritten content:

“These patterns persist even when the texts are rewritten, translated, or summarized by an external LLM, suggesting that they are also encoded in the semantic content.”

1. ChatGPT

Characteristic Phrases

  • Frequently uses transition words like “certainly,” “such as,” and “overall.”
  • Sometimes begins answers with phrases like “Below is…” or “Sure!”
  • Periodically employs qualifiers (e.g., “typically,” “various,” “in-depth”).

Formatting Habits

  • Utilizes bold or italic styling, bullet points, and headings for clarity.
  • Often includes explicit step-by-step or enumerated lists to organize information.

Semantic/Stylistic Tendencies

  • Provides more detailed, explanatory, and context-rich answers.
  • Prefers a somewhat formal, “helpful explainer” tone, often giving thorough background details.

2. Claude

Characteristic Phrases

  • Uses language like “according to the text,” “based on,” or “here is a summary.”
  • Tends to include shorter transitions: “while,” “both,” “the text.”

Formatting Habits

  • Relies on simple bullet points or minimal lists rather than elaborate markdown.
  • Often includes direct references back to the prompt or text snippet.

Semantic/Stylistic Tendencies

  • Offers concise and direct explanations, focusing on the key point rather than lengthy detail.
  • Adopts a practical, succinct voice, prioritizing clarity over elaboration.

3. Grok

Characteristic Phrases

  • May use words like “remember,” “might,” “but also,” or “helps in.”
  • Occasionally starts with “which” or “where,” creating direct statements.

Formatting Habits

  • Uses headings or enumerations but may do so sparingly.
  • Less likely to embed rich markdown elements compared to ChatGPT.

Semantic/Stylistic Tendencies

  • Often thorough in explanations but uses a more “functional” style, mixing direct instructions with reminders.
  • Doesn’t rely heavily on nuance phrases like “certainly” or “overall,” but rather more factual connectors.

4. Gemini

Characteristic Phrases

  • Known to use “below,” “example,” “for instance,” sometimes joined with “in summary.”
  • Might employ exclamation prompts like “certainly! below.”

Formatting Habits

  • Integrates short markdown-like structures, such as bullet points and occasional headers.
  • Occasionally highlights key instructions in enumerated lists.

Semantic/Stylistic Tendencies

  • Balances concise summaries with moderately detailed explanations.
  • Prefers a clear, instructional tone, sometimes with direct language like “here is how…”

5. DeepSeek

Characteristic Phrases

  • Uses words like “crucial,” “key improvements,” “here’s a breakdown,” “essentially,” “etc.”
  • Sometimes includes transitional phrases like “at the same time” or “also.”

Formatting Habits

  • Frequently employs enumerations and bullet points for organization.
  • May have inline emphasis (e.g., “key improvements”) but not always.

Semantic/Stylistic Tendencies

  • Generally thorough responses that highlight the main takeaways or “breakdowns.”
  • Maintains a relatively explanatory style but can be more succinct than ChatGPT.

6. Llama (Instruct Version)

Characteristic Phrases

  • “Including,” “such as,” “explanation the,” “the following,” which signal examples or expansions.
  • Sometimes references step-by-step guides or “how-tos” within text.

Formatting Habits

  • Levels of markdown usage vary; often places important points in numbered lists or bullet points.
  • Can include simple headers (e.g., “## Topic”) but less likely to use intricate formatting than ChatGPT.

Semantic/Stylistic Tendencies

  • Maintains a somewhat formal, academic tone but can shift to more conversational for instructions.
  • Sometimes offers deeper analysis or context (like definitions or background) embedded in the response.

7. Gemma (Instruct Version)

Characteristic Phrases

  • Phrases like “let me,” “know if,” or “remember” often appear.
  • Tends to include “below is,” “specific,” or “detailed” within clarifications.

Formatting Habits

  • Similar to Llama, frequently uses bullet points, enumerations, and occasionally bold headings.
  • May incorporate transitions (e.g., “## Key Points”) to segment content.

Semantic/Stylistic Tendencies

  • Blends direct instructions with explanatory detail.
  • Often partial to a more narrative approach, referencing how or why a task is done.

8. Qwen (Instruct Version)

Characteristic Phrases

  • Includes “certainly,” “in summary,” or “title” for headings.
  • May appear with transitions like “comprehensive,” “based,” or “example use.”

Formatting Habits

  • Uses lists (sometimes nested) for clarity.
  • Periodically includes short code blocks or snippet-like formatting for technical explanations.

Semantic/Stylistic Tendencies

  • Detailed, with emphasis on step-by-step instructions or bullet-labeled points.
  • Paraphrase-friendly structure, meaning it can rephrase or re-organize content extensively if prompted.

9. Mistral (Instruct Version)

Characteristic Phrases

  • Words like “creating,” “absolutely,” “subject,” or “yes” can appear early in responses.
  • Tends to rely on direct verbs for commands (e.g., “try,” “build,” “test”).

Formatting Habits

  • Usually applies straightforward bullet points without heavy markdown.
  • Occasionally includes headings but often keeps the structure minimal.

Semantic/Stylistic Tendencies

  • Prefers concise, direct instructions or overviews.
  • Focuses on brevity while still aiming to be thorough, giving core details in an organized manner.

How to Make AI-Generated Content More Human

The study revealed that word choice is a primary identifier of AI-generated text:

“After randomly shuffling words in the LLM-generated responses, we observe a minimal decline in classification accuracy. This suggests that a substantial portion of distinctive features is encoded in the word-level distribution.”
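A toy sketch (not the paper’s actual method, and using an invented sample sentence) shows why shuffling changes so little: a bag-of-words profile depends only on word counts, so word order never enters into it.

```python
# Toy illustration: a bag-of-words profile ignores word order entirely,
# so shuffling the words leaves the "fingerprint" unchanged.
import random
from collections import Counter

text = "certainly this guide provides an in-depth overview overall"
profile = Counter(text.split())

words = text.split()
random.shuffle(words)          # destroy the word order
shuffled_profile = Counter(words)

print(profile == shuffled_profile)  # True: word-level stats survive shuffling
```

Any classifier built on such word-level distributions would therefore score shuffled text almost as accurately as the original, which is exactly what the researchers observed.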

If you’re using AI writing tools, here are practical steps to reduce these telltale patterns:

  • Vary your beginnings: The research found that first words are highly predictable in AI content. Edit opening sentences to avoid typical AI starters.
  • Replace characteristic phrases: Watch for and replace model-specific phrases mentioned above.
  • Adjust formatting patterns: Each AI has distinct formatting preferences. Modify these to break recognizable patterns.
  • Restructure content: AI tends to follow predictable organization. Rearrange sections to create a more unique flow.
  • Add personal elements: Incorporate your own experiences, opinions, and industry-specific insights that an AI couldn’t generate.

Top Takeaway

While this research focuses on distinguishing different AI models, it also demonstrates how AI-generated text differs from human writing.

As search engines improve their ability to spot AI content, heavily templated AI writing may lose value.

By understanding how to identify AI text, you can create content that rises above the average chatbot output, appealing to both readers and search engines.

Combining AI’s efficiency with human creativity and expertise is the best approach.

Featured Image: Pixel-Shot/Shutterstock

The Top 3 YouTube Trends To Pay Attention To Right Now via @sejournal, @gregjarboe

The search and marketing community should pay closer attention to YouTube.

According to the latest Global Digital Report, YouTube.com is the second most visited website in the world, behind Google.com but ahead of Facebook.com in third place.

Perhaps more importantly, YouTube is one of the top referrers of traffic to other websites: 18.73% of the traffic to the top 10,000 websites comes from Google.com, while 6.44% comes from YouTube.com.

This means we should also pay closer attention to the top three YouTube trends.

1. BrightEdge Data Reveals YouTube’s Growing Influence On AI Search Results

The first YouTube trend that we should focus on comes from new data that says AI Overviews are increasingly citing YouTube content, particularly instructional and how-to videos.

BrightEdge’s Generative Parser has uncovered a significant upward trend in YouTube citations within AI-generated search results, showing a 25.21% increase since January 1 and a 36.66% month-over-month growth from January to February.

Analysis of keywords used in AI Overviews that cite YouTube reveals a strong emphasis on:

  • Instructional content with 35.6%, including “how-to” queries with 22.4%.
  • Visual demonstrations with 32.5%, including physical techniques and style guides.
  • Verification/examples with 22.5%, including product comparisons and visual proof.
  • Current events with 9.4%, including breaking news and live coverage.

This highlights a clear shift towards AI Overviews leveraging video content for visual demonstrations, step-by-step tutorials, product comparisons, and real-world examples.

However, the use of YouTube citations varies across industries.

Healthcare leads with 41.97% of AI Overviews citing YouTube, followed by ecommerce at 30.87% and B2B tech at 18.68%.

Other industries like finance (9.52%), travel (8.65%), insurance (8.62%), and education (3.87%) also utilize YouTube in their AI Overviews, though at lower rates.

These patterns suggest that AI Overviews actively integrate video content to provide richer and more comprehensive answers, especially in areas requiring visual or practical demonstrations.

BrightEdge’s analysis suggests prioritizing product demos, step-by-step tutorials, and comparison content in video strategies to align with these citation patterns.

Since the full rollout of AI Overviews in May 2024, BrightEdge has tracked their impact across various industries.

While presence varies over time, healthcare and education continue to show notable trends.

Healthcare maintains a strong AIO presence, appearing in 63% of queries, rising to 80% for question-based searches. In education, only 14% of queries trigger an AIO, but this jumps to 75% when the keyword includes “skills.”

Significant increases in AIO presence have also been observed in B2B tech, ecommerce, and insurance.

Entertainment has seen a decrease in AIO presence, possibly due to the strong presence of knowledge graphs and universal results, which already fulfill user intent.

With the introduction of Gemini 2, further growth in AIO presence is expected.

Furthermore, BrightEdge’s research indicates a trend toward AI Overviews favoring authoritative brands.

In healthcare, where accuracy is crucial, 72% of AI Overview answers come from established medical research centers, up from 54% in January.

Similarly, in B2B technology, 15-22% of search queries are answered by content from leading companies like Amazon, IBM, and Microsoft.

This suggests that AI Overviews prioritize trustworthy sources to enhance the quality and reliability of their responses.

“This growth in YouTube is significant for two reasons,” Jim Yu, CEO of BrightEdge, said in an email.

“First, as a user generated content platform, it provides businesses with an additional platform to either leverage their community’s voice or directly build content that could function as a citation. This means brands have additional resource that could function as an AI Citation,” he said.

“The second significant reason is that sourcing YouTube is a clear demonstration of Gemini’s (the AI powering AIO’s) multi-modal capabilities.

Videos have been part of a search result for years, but in these instances, text such as the descriptions or transcripts, if available, are used to rank the video.

Now, with Gemini, we’re seeing that actual content in the video can be cited to help generate an answer to a query. This means that details in the video product review that may not be available via text can be used to generate an answer,” he added.

2. YouTube Shorts Creators Get AI Boost With Veo 2-Powered Dream Screen

The second YouTube trend we should focus on is from a YouTube announcement that it enhanced its Dream Screen feature on Shorts by integrating its new Veo 2 video generation model.

This upgrade significantly improves the process of creating AI-powered backgrounds and even standalone video clips for Shorts.

Dream Screen, which allows users to generate AI-driven backgrounds using text prompts, now benefits from Veo 2’s faster generation speeds, higher-quality visuals, and expanded capabilities.

Veo 2 enables more detailed visual output and supports a wider range of subjects. Beyond backgrounds, users can now generate entire short video clips.

To create a video background, users select the Shorts camera, choose the Green Screen option, then Dream Screen, and input their desired background description.

Dream Screen then provides AI-generated options.

For standalone clips, users open the Shorts camera and media picker, tap “Create,” enter a prompt, select the format (image or video), edit the length, and add it to their Short.

Google will apply SynthID watermarks and clear labels to all AI-generated content.

Veo 2 is currently available in the United States, Canada, Australia, and New Zealand, with a global rollout planned for the future. This should accelerate the growth of YouTube Shorts.

According to a Pixability report, the number of daily Shorts views has skyrocketed since 2021, reaching over 70 billion in 2024. That much viewing equals more than 100,000 years of watch time every single day.

While YouTube Shorts are primarily designed for viewing on mobile devices, they are accessible on any device with a screen, including televisions.

Interestingly, TV viewership of Shorts is increasing at a fast pace, suggesting that it will soon account for a significant portion of all Shorts views.

The Pixability report also provided the following advice:

  • Boasting over 70 billion daily views, YouTube Shorts are a crucial platform that every brand needs to consider in their marketing strategy.
  • Brands should actively create organic Shorts content for their YouTube channels, alongside their longer videos, to boost audience engagement.
  • Even when not specifically targeting Shorts content, using vertical video ads can be surprisingly effective within standard horizontal ad placements.
  • However, if your main objective is video completion, then advertising on Shorts content may not be the most effective approach.
  • Maintaining brand suitability on Shorts requires careful use of placement targeting and exclusion.
  • YouTube offers a unique opportunity to combine long-form video ads on TV screens with short-form ads on mobile devices within a single campaign, allowing brands to maximize their reach across different viewing experiences.

According to data from Tubular Intelligence, 30.4 million accounts uploaded 1.1 billion videos to YouTube worldwide over the last 365 days. These videos got a total of 31.5 trillion views and 947 billion engagements (e.g., likes, comments, and shares).

Of this total, 20.2 million accounts uploaded 516 million Shorts during that period. And these Shorts got a total of 24.6 trillion views and 768 billion engagements.

So, close to half of the videos uploaded to YouTube in the last year were Shorts, and they got 78% of the views and 81% of the engagements on the social video platform.
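Those shares follow directly from the Tubular figures quoted above:

```python
# Derive the Shorts shares from the Tubular Intelligence totals above.
total = {"videos": 1.1e9, "views": 31.5e12, "engagements": 947e9}
shorts = {"videos": 516e6, "views": 24.6e12, "engagements": 768e9}

shares = {k: shorts[k] / total[k] for k in total}
print({k: f"{v:.0%}" for k, v in shares.items()})
# videos land at roughly 47% ("close to half"), views ~78%, engagements ~81%
```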

3. US Viewers Now Favor TVs For Watching YouTube Content

This brings us to the third YouTube trend, where YouTube CEO Neal Mohan announced a significant shift in viewing habits: TV screens have become the most popular way to watch YouTube in the United States, surpassing mobile devices.

This milestone was supported by data showing over 1 billion hours of daily YouTube consumption on TVs, and Nielsen data confirming YouTube’s dominance in streaming watch time for the past two years.

Mohan attributed this growth to YouTube’s seamless integration with smart TVs and streaming devices.

This change in audience behavior presents marketers with dramatically new advertising opportunities.

For example, this shift towards TV viewing opens doors for innovative ad formats.

QR codes displayed on TV screens can bridge the gap between big-screen viewing and mobile engagement, allowing viewers to instantly connect with content on their phones.

Pause ads can offer a non-intrusive way to deliver targeted messages when viewers pause a video.

In addition, YouTube is exploring “second screen experiences” that enable viewers to interact with TV content using their mobile devices, such as leaving comments, sharing videos, or making purchases.

These interactive tools can significantly enhance viewer engagement and provide measurable results, offering a more dynamic approach compared to traditional TV advertising.

However, this evolving landscape requires marketers to adapt their strategies.

For example, longer watch times on TV suggest opportunities for deeper engagement with ads and branded content.

The “big-screen mindset” necessitates high-quality, broadcast-like production for YouTube ads designed for a living room environment.

Accurate performance measurement will be crucial, requiring sophisticated cross-platform analytics and attribution models to track conversions across TV and mobile devices.

Beyond TV, YouTube has also become a leading platform for podcast consumption.

Mohan stated that YouTube is now the most frequently used service for listening to podcasts in the U.S.

This trend allows marketers to leverage the popularity of both TV and podcasts by creating video podcasts, which can effectively connect with audiences through engaging long-form content on larger screens.

This strategy aligns with the increasing preference for longer videos viewed in the living room setting.

In other words, YouTube’s transformation into a TV-centric platform demands that marketers adapt their strategies to capitalize on these new trends.

Key actions include leveraging new ad formats like QR codes and pause ads, creating high-quality content optimized for big screens, and exploring opportunities like video podcasts.

With longer watch times and increased TV engagement, brands have a unique opportunity to connect with audiences more effectively.

However, success will depend on implementing robust cross-platform analytics and thoughtfully integrating mobile and TV experiences to stay ahead of this evolving digital landscape.

According to Pixability, 56% of advertising agencies reported that their clients had increased their YouTube advertising spend in 2024.

Looking ahead to 2025, 44% of agencies anticipated further increases in YouTube investment, although 42% believed budgets would remain at the same level after the substantial growth seen in 2024.

Only 9% of agencies observed a decrease in YouTube spending in 2024, and just 3% predicted a decline in 2025.

One Of These Top 3 YouTube Trends May Change This Year

Later this year, YouTube will celebrate its 20th birthday. And I learned a long time ago that the video-sharing site changes about one-third of its major features every year.

So, the search and marketing community needs to stay on the lookout for the next new trend to keep pace with the creators who are focused on building their community, growing their channel, and making money on YouTube.

The very first YouTube video, “Me at the zoo,” was uploaded on April 23, 2005. So, we might anticipate another significant announcement around the end of April.

The IAB NewFronts will be held May 5-8, 2025, and YouTube might make significant announcements then, or the online video platform may wait until their annual YouTube Brandcast event during the TV Upfronts, which will be held May 12-14, 2025, to announce their biggest news.

And we will want to pay exceptionally close attention.

More Resources:


Featured Image: Roman Samborskyi/Shutterstock

3 Ways AI Is Changing PPC Reporting (With Examples To Streamline Your Reporting) via @sejournal, @siliconvallaeys

PPC reporting has always been both essential and frustrating. It’s essential to keep clients engaged by informing them of the results you’re driving.

But it’s also frustrating because of data discrepancies, cumbersome analysis, and the time required to share understandable, jargon-free reports with different stakeholders.

Fortunately, AI is turning these obstacles into opportunities by filling in gaps left by privacy-compliant tracking, surfacing insights hidden in overwhelming data sets, and automating reporting so it meets the needs of every stakeholder.

In this article, I’ll walk you through some of the technology used by modern marketers and share examples of how I’ve used AI to streamline my PPC reporting.

1. Collect Complete And High-Quality PPC Data

We need data to guide us before we can optimize accounts and share our wins, so let’s start there.

The Problems With Data Before AI

Inconsistent and missing data plague PPC efforts.

Google, Meta, Microsoft, and Amazon operate in their own silos, each taking credit for all conversions that have any touchpoint with their platforms. This leads to double counting, making it difficult to decide where to allocate budgets for optimal results.

In other words, the data between the various ad platforms is inconsistent. Specifically, the conversion value advertisers see in their business data may be lower than the sum of all conversion values reported by the ad platforms.
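A hypothetical example makes the double counting concrete; the order values and platform names here are invented for illustration:

```python
# Hypothetical illustration of cross-platform double counting.
# An order touched by ads on two platforms gets claimed in full by both.
orders = [
    {"order_id": 1, "value": 100.0, "touchpoints": ["google", "meta"]},
    {"order_id": 2, "value": 60.0, "touchpoints": ["google"]},
]

# Revenue the business actually booked.
actual_revenue = sum(o["value"] for o in orders)

# Conversion value each platform reports when it takes full credit.
platform_reported = {}
for o in orders:
    for platform in o["touchpoints"]:
        platform_reported[platform] = platform_reported.get(platform, 0.0) + o["value"]

print(actual_revenue)                   # 160.0
print(sum(platform_reported.values()))  # 260.0 -- exceeds real revenue
```

The platforms collectively claim $260 of conversion value against only $160 of real revenue, which is why budget decisions based on summed platform reports can go wrong.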

Add to this the challenge of missing data. Privacy regulations like GDPR and Apple’s iOS changes limit tracking capabilities, which causes data loss, incomplete conversion paths, and gaps in attribution.

Marketers who rely heavily on pixel-based or third-party cookie tracking, both of which have become unreliable due to browser restrictions and user opt-outs, see a continuous decline in the quality of the data they need to operate.

While AI can’t magically give us perfect data, it can fill in gaps and restore insights, so let’s take a look at some of the solutions in this space.

AI-Driven Solutions For Data Hygiene And Compliance

1. Data Clean Rooms And Privacy-First Measurement

Clean rooms like Amazon Marketing Cloud (AMC) and Google Ads Data Hub allow advertisers to securely analyze anonymized cross-channel performance data without violating privacy laws.

These platforms aggregate data from multiple sources, giving marketers a comprehensive view of the customer journey.

Example:

A retail brand can use AMC to evaluate how its Google and Facebook ads influence Amazon purchases. Based on what they find, they can re-allocate budgets between platforms to maximize overall return on investment (ROI).

Clean rooms themselves aren’t an AI innovation; however, they benefit significantly from several AI capabilities.

For example, Meta’s Advantage+ uses clean room insights to build lookalike audiences while staying privacy-compliant.

2. Modeled Conversions

While clean rooms are great for unifying cross-platform data, their usefulness is predicated on data completeness.

When privacy regulations make it impossible to get all the data, clean rooms like Google Ads Data Hub and Amazon Marketing Cloud use AI-powered modeled conversions to estimate user journeys that can’t be fully tracked.

Modeled data is also used by tools like Smart Bidding, which leverages machine learning to predict conversions for users who opted out of tracking.

For users who opt out of tracking, Consent Mode still allows the collection of anonymized signals, which machine learning models can then use to predict conversion likelihood.

Example:

Google’s Smart Bidding leverages machine learning to optimize bids for conversions or conversion value.

In cases where conversion data is incomplete due to user consent choices or other factors, Smart Bidding can use modeled conversions to fill in gaps and make good bidding decisions.

The models do this by identifying patterns and correlations between user attributes, actions, and conversion outcomes.

While modeled conversions offer significant benefits in their ease of use (they’re basically provided without any extra effort by the ad platforms), it’s important to remember that they are only estimates and may not be perfectly accurate in all cases.

Advertisers should consider using modeled conversions in conjunction with other ways to get a more complete picture of campaign performance.

For example, advertisers can use Media Mix Models (MMM), a Marketing Efficiency Ratio (MER), or incrementality lift tests to validate that the data they are using is directionally correct.
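Of these, the blended MER is the simplest sanity check: total revenue from the business’s own books divided by total ad spend across every channel, which by construction cannot double count. A minimal sketch with invented figures:

```python
# Minimal sketch: blended Marketing Efficiency Ratio (MER) as a
# directional sanity check. All figures are invented for illustration.
total_revenue = 500_000.0   # from the business's own books, not ad platforms
ad_spend = {"google": 80_000.0, "meta": 50_000.0, "amazon": 20_000.0}

mer = total_revenue / sum(ad_spend.values())
print(round(mer, 2))  # 3.33 -- every $1 of ad spend coincides with $3.33 revenue
```

If platform-reported ROAS figures imply far more revenue than the blended MER supports, the modeled or attributed numbers are probably overstated.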

3. Server-Side Tagging And First-Party Data Integration

Server-side tagging lets marketers control data collection on their servers, bypassing cookie restrictions.

Platforms like Google Tag Manager now support server-side implementations that improve tracking accuracy while maintaining privacy compliance.

Server-side tagging captures anonymous pings even when cookies are declined, feeding better signals into Google’s AI models for more accurate conversion modeling.

This gives AI more complete data when doing things like data-driven attribution (DDA) or automated bidding.

Illustration by author, February 2025

Example:

An ecommerce company transitions to server-side tagging to retain high-quality data even when technologies like Safari’s Intelligent Tracking Prevention (ITP) break JavaScript-based tracking.

As a result, the advertiser sees a complete picture of all the conversions driven by digital marketing and can now justify higher bids, which makes them more competitive in the ad auction and boosts total sales for their brand.

Actionable Tips:

  • Implement GA4 Consent Mode and server-side tagging to maintain accurate performance data.
  • Leverage data clean rooms to analyze cross-platform conversions securely.
  • Use modeled conversions to fill tracking gaps caused by privacy restrictions.

2. Extract Data Insights And Make Smarter Decisions

Now that we’ve covered technologies that can stem the decline in access to data, let’s examine how AI can help make sense of it all.

The Problem With Data Analysis Before AI

Marketers may struggle to extract actionable insights when looking at a mountain of PPC data.

Humans simply aren’t as good as machines at detecting patterns or spotting anomalies in large data sets.

While statistical methods have long been used to find these patterns, many marketing teams lack the expertise to do it themselves or have no access to a qualified analyst to help them.

As a result, teams miss opportunities or spend more time than they can afford looking for signals to guide optimization efforts.

AI Solutions For Data Analysis And Attribution

1. Data-Driven Attribution Models (DDA)

DDA isn’t the newest solution in attribution modeling, but it exists largely because AI has become cheaper and more accessible.

It solves the problem of assigning values to different parts of the consumer journey when users take a multitude of paths from discovery to purchase.

Static attribution models lack the sophistication to account for this and cause advertisers to bid incorrectly.

Google’s data-driven attribution (DDA) uses machine learning to analyze conversion paths and assign credit based on a more complete analysis of a user’s consumer journey.

Unlike static models, DDA dynamically adjusts credit allocation to reflect the many ways consumers behave.

Machine learning, a form of AI, is what enabled Google to make this more advanced attribution model available to all advertisers and what has driven the steady improvement in results from Smart Bidding.
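To make the contrast with static models concrete, here is a toy Shapley-style credit calculation in Python: each channel is credited with its average marginal lift across orderings. This is a simplified illustration of the counterfactual idea behind data-driven attribution, not Google's actual algorithm, and the conversion rates are invented:

```python
from itertools import permutations

# Hypothetical conversion rates observed for each combination of channels a
# user was exposed to (invented numbers for illustration only).
conv_rate = {
    frozenset(): 0.01,
    frozenset({"search"}): 0.05,
    frozenset({"display"}): 0.02,
    frozenset({"search", "display"}): 0.08,
}

def shapley_credit(channels, value):
    """Average each channel's marginal lift over all orderings (Shapley value)."""
    credit = {c: 0.0 for c in channels}
    orderings = list(permutations(channels))
    for order in orderings:
        seen = set()  # rebuilt for each ordering
        for c in order:
            before = value[frozenset(seen)]
            seen.add(c)
            credit[c] += value[frozenset(seen)] - before
    return {c: v / len(orderings) for c, v in credit.items()}

print(shapley_credit(["search", "display"], conv_rate))
```

With these made-up numbers, search earns more credit than display because removing it costs more conversions on average, which is exactly the dynamic adjustment a static last-click or linear model cannot make.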

2. Automating Auction Insights Visualization

Generative AI is not only enhancing attribution but also automating repetitive tasks.

Recently, I tested GPT Operator to streamline several PPC reporting workflows.

Operator is OpenAI’s tool that lets the AI use a web browser to complete tasks. It goes beyond searching the web: it can follow links, fill in forms, and interact intelligently with websites.

In one task, I asked Operator to download auction insights, visualize the data using Optmyzr’s Auction Insights Visualizer, and email a report.

It handled the data transfer and visualization steps flawlessly, though it struggled with taking a clean screenshot instead of attempting to attach HTML.

Illustration by author, February 2025

This illustrates how AI agents can help when data lives in disparate places with no APIs to move it, as is the case with Google’s auction insights data.

While Operator still needs too much hand-holding to be helpful today, it seems likely that we’re less than a year away from when it can do many tedious tasks for us.

3. Advanced Statistical Analysis Available To Anyone

Before AI advancements, conducting a statistical analysis could be a labor-intensive process requiring specialized software or data science expertise.

But today, generative AI enables marketers to explore areas that were previously firmly outside their realm of expertise.

For example, GPT can explain and execute a process like a seasonality decomposition. AI can quickly write Python code that breaks down campaign data into trend, seasonal, and residual components, helping marketers uncover patterns they can act on.
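The kind of code an LLM might produce for this looks like the following stdlib-only sketch: a classical additive decomposition on an invented weekly clicks series with a four-week cycle (in practice, generated code would more likely call statsmodels' seasonal_decompose, which does the same thing in one line):

```python
# Classical additive decomposition: series = trend + seasonal + residual.
# The data is a made-up weekly clicks series with a 4-week cycle.
clicks = [100, 120, 90, 110, 104, 124, 94, 114, 108, 128, 98, 118]
PERIOD = 4

def moving_average(xs, window):
    """Centered moving average; None where the window doesn't fit."""
    half = window // 2
    out = [None] * len(xs)
    for i in range(half, len(xs) - half):
        if window % 2 == 0:
            # Even window: average two overlapping windows (classical 2xN MA).
            w1 = sum(xs[i - half:i + half]) / window
            w2 = sum(xs[i - half + 1:i + half + 1]) / window
            out[i] = (w1 + w2) / 2
        else:
            out[i] = sum(xs[i - half:i + half + 1]) / window
    return out

trend = moving_average(clicks, PERIOD)

# Seasonal component: average detrended value at each position in the cycle.
detrended = [c - t if t is not None else None for c, t in zip(clicks, trend)]
seasonal_means = []
for pos in range(PERIOD):
    vals = [d for i, d in enumerate(detrended)
            if d is not None and i % PERIOD == pos]
    seasonal_means.append(sum(vals) / len(vals))
mean_adj = sum(seasonal_means) / PERIOD  # center the seasonal term on zero
seasonal = [seasonal_means[i % PERIOD] - mean_adj for i in range(len(clicks))]

# Whatever trend and seasonality don't explain is the residual.
residual = [c - t - s if t is not None else None
            for c, t, s in zip(clicks, trend, seasonal)]
```

On this clean synthetic series the residuals come out at zero; on real campaign data, the residual is where anomalies worth investigating show up.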

How AI Automates Seasonal Analysis

In one of my PPC Town Hall podcast episodes, Cory Lindholm demonstrated how GPT can handle complex seasonality analysis in minutes.

Inspired by this, I used GPT’s Advanced Data Analysis feature to upload weekly Google Ads data and run a full decomposition.

GPT efficiently cleaned the data, identified issues like formatting errors, and generated a breakdown of trends, seasonal variations, and residual fluctuations.

In the analysis, GPT flagged recurring trends, allowing me to pinpoint peak demand periods and optimize bid strategies ahead of time. Tasks that previously took hours now take just a few minutes.

On a side note, I have found large language models (LLMs) so helpful with coding that I now use v0.dev almost weekly to create apps, browser extensions, and scripts.

3. Communicate Results Effectively Across Teams

With solid data in place and AI-fueled ways to speed up analysis, we should have some great results to share with stakeholders.

But sharing results through reports has traditionally been one of the most time-consuming and least loved tasks that fall on the plate of the typical account manager. And there were other problems, too.

The Problem With Sharing Reports Before AI

Reports were often static, one-size-fits-all documents that failed to meet the needs of different stakeholders.

Executives required high-level summaries focused on ROI, marketing strategists needed cross-channel insights, and PPC specialists required detailed campaign data.

Customizing reports for each audience was time-consuming and prone to error.

AI Solutions For Tailored Reporting

1. LLM Report Summarization

LLMs such as Claude, Gemini, and ChatGPT can quickly generate different explanations of reports from the same underlying data, enabling efficient customization for each audience.

For example, ChatGPT can produce a concise executive summary alongside a more detailed keyword-level report for PPC teams.

But that customization can and should be taken even further. In ChatGPT, it’s possible to create custom GPTs, each with its own instructions, effectively creating a different ChatGPT flavor for every client.

Whereas agencies today depend on their people to remember how each client likes to get their reports, a custom GPT can be trained to remember these preferences: how well the client knows PPC, what jargon they tend to use at their company, and even what the year’s strategic initiatives are.

Then, the LLM can word the summary in a way that resonates with the reader and even explain how the search marketing campaign’s results are key to the company’s strategic objectives for the year.
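The same idea can be approximated outside ChatGPT by composing a per-client system prompt before each LLM call. A minimal sketch, where the client profiles and wording are entirely hypothetical:

```python
# Build a per-client system prompt that an LLM call could use to tailor the
# same report data to different readers. The profiles are invented examples.

CLIENT_PROFILES = {
    "acme": {
        "ppc_fluency": "beginner",
        "jargon": "calls paid search 'SEM'",
        "initiative": "grow ecommerce revenue 20% this year",
    },
    "globex": {
        "ppc_fluency": "expert",
        "jargon": "prefers tROAS and nCAC terminology",
        "initiative": "expand into two new regional markets",
    },
}

def build_system_prompt(client: str) -> str:
    p = CLIENT_PROFILES[client]
    return (
        "You summarize PPC reports for this client.\n"
        f"- PPC fluency: {p['ppc_fluency']} (adjust depth accordingly).\n"
        f"- House jargon: {p['jargon']}.\n"
        f"- Tie results to their strategic initiative: {p['initiative']}."
    )

print(build_system_prompt("acme"))
```

Feeding the same report data through prompts like these yields summaries worded for each audience, which is the substance of what a custom GPT's instructions do.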

2. Interactive Dashboards For Real-Time Transparency

AI-driven dashboards provide live, customizable views of campaign performance. Stakeholders can explore data interactively, filtering by date ranges, platforms, or key performance indicators (KPIs), reducing the need for frequent manual report updates.

And while dashboards have been around for a long time, AI can be used to quickly highlight the most salient insights.

For example, Amazon Marketing Cloud (AMC) lets marketers use AI to generate SQL and explore the data using natural language.

At my company, Optmyzr, we deployed Sidekick, which can instantly answer questions about the data in any account, for example, surfacing the biggest optimization opportunities or wins of the last month.

Before AI, these insights might have remained hidden in the data.

Actionable Tips:

  • Set up custom GPTs for every client you work with.
  • Implement reporting tools that use natural language to explore the data.

Conclusion: From Reporting To Strategic Decision-Making With Generative AI

Generative AI has redefined PPC reporting, transforming a once fragmented and time-consuming process into a streamlined, insight-driven workflow.

It doesn’t just automate data collection and report generation; it also surfaces hidden trends, correlations, and anomalies that might otherwise go unnoticed.

This enables marketers to make smarter, faster, and more strategic decisions based on real-time insights.

With AI-driven tools, marketers can see beyond surface-level metrics, discovering patterns and opportunities that traditional reporting might take hours or days to uncover.

This improved understanding of performance empowers teams to refine budget allocation, creative strategy, and campaign targeting more effectively, leading to more substantial outcomes and greater profitability.

The conclusion is simple: with generative AI, PPC managers have more complete data, leading to better insights and better decisions, all of which can be shared more meaningfully with stakeholders.


The Pricing Strategy of Temu Sellers

Temu sellers show massive discounts to boost perceived savings and win customers. The strategy is working.

Launched in September 2022, Temu has grown rapidly. By December 2023, Temu had captured 17% of the U.S. dollar store market (Five Below, Dollar General, and others), according to Reuters.

Certainly Temu’s prices are low. A recent comparison from Omnisend showed shoppers can often save 40% on Temu versus similar products on Amazon (but not Amazon Haul).

However, seller success on Temu relies as much on discount psychology as the savings.

Temu’s home page. Temu’s success relies in part on the psychology of discounts.

Cheap Headphones

Consider a hypothetical contract manufacturer that sells on Temu. Among the many products the company produces for various brands are noise-canceling, over-the-ear headphones that retail for $99.

This manufacturer-turned-Temu seller has the materials, models, and expertise to build a nearly identical set of headphones. Stopping short of using the original brand’s patented and expensive noise-canceling technology, the company tweaks the earpieces.

The “nearly identical” headphones cost $5 to manufacture. This company lists the headphones on Temu at a regular price of $99 with a 75% discount. That $24.75 price could produce more profit per unit than the company earns building the client’s high-end version.

This made-up scenario is extreme but instructive.
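The arithmetic behind the hypothetical headphone example above is worth spelling out:

```python
# Hypothetical headphone listing from the scenario above.
list_price = 99.00   # "regular" price shown on the listing
discount = 0.75      # advertised markdown
unit_cost = 5.00     # manufacturing cost

sale_price = list_price * (1 - discount)   # what the shopper actually pays
profit = sale_price - unit_cost
margin = profit / sale_price

print(f"${sale_price:.2f} sale price, ${profit:.2f} profit, {margin:.0%} margin")
```

Even after a 75% markdown, the seller clears roughly an 80% margin on the sale price, which is why the headline discount costs the seller far less than it appears to.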

Temu Prices

Remember that Temu is a marketplace with thousands of sellers. Temu does not buy and sell inventory. It connects manufacturers and brands to shoppers.

Thus the prices are not Temu’s but those of sellers. Yet Temu’s success relies on low-cost and, perhaps, low-quality alternatives to popular and brand-name products.

For example, the Omnisend comparison found that Temu listed an item that “matched” a product in Amazon’s camera and photo category about 96% of the time. But none (0.0%) of those Temu items were the same brand or known quality as on Amazon. Instead, the items were unbranded and generic and “closely resemble well-known brands, often with blurred logos or modified packaging,” per Omnisend.

For identical items and brands, Temu’s prices are roughly the same as Amazon’s.

Temu Discounts

Sellers on Amazon and Temu differ significantly in the frequency and size of discounts.

“While Amazon offers discounts on around 47% of its products, 65% of Temu’s listings are marked down. In addition, some items on Temu see discounts as high as 98%, compared to Amazon’s highest discount being 67%,” according to Omnisend.

Collectively, Temu’s deep discounts make products feel significantly cheaper. This is the psychology part.

For example, a February 2018 study in the academic journal “Fashion and Textiles” found that deep discounts can enhance shoppers’ perception of savings, making them feel like they are getting a better deal.

The study noted that discounts elicit an emotional response impacting shoppers’ perception of savings, quality, and value.

These findings are more or less common sense to seasoned marketers and precisely what is happening with Temu.

Product Perception

Yet deep and frequent discounting can be dangerous. Substantial markdowns may lead to doubts about product quality or authenticity, and rightly so.

Imagine a machine with three dials:

  • Quality,
  • Savings,
  • Value.

Ecommerce sellers can manipulate the dials, but turning one up often pushes another down. The right mix leads to an ecommerce sale.

Many Temu sellers have learned that turning up the perceived savings dial overcomes the associated decline in perceived quality. Temu customers presumably realize they are buying what could be a subpar item, but the perceived savings make it worthwhile.
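One way to make the dial metaphor concrete is a toy purchase-intent score in which cranking perceived savings can offset a drop in perceived quality. The linear form and the weights are invented for illustration, not drawn from any research:

```python
def purchase_intent(quality: float, savings: float, value: float) -> float:
    """Toy linear score over 0-1 'dials'; the weights are invented."""
    return 0.4 * quality + 0.35 * savings + 0.25 * value

# Name-brand listing: high perceived quality, modest discount.
brand = purchase_intent(quality=0.9, savings=0.2, value=0.6)

# Temu-style listing: lower perceived quality, extreme perceived savings.
temu = purchase_intent(quality=0.4, savings=0.95, value=0.7)

print(round(brand, 3), round(temu, 3))
```

Under these made-up weights, the deep-discount listing edges out the brand-name one, which is the trade many Temu sellers appear to be making deliberately.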

To be sure, Temu’s success does not rely solely on psychology. “Many of Temu’s products ship directly from manufacturers to consumers so that product prices can remain low by bypassing distributors and other third-party handling fees,” explained Greg Zakowicz, senior ecommerce expert at Omnisend, in an email to Practical Ecommerce. “They also use the least expensive shipping method to reduce prices further.”

Nonetheless, the quality-savings-value exercise is a lesson for all ecommerce sellers. Every merchant has access to the purchase intent machine with its various dials, knobs, and switches. Good marketing is tuning the mix for success.