Waabi says its virtual robotrucks are realistic enough to prove the real ones are safe

The Canadian robotruck startup Waabi says its super-realistic virtual simulation is now accurate enough to prove the safety of its driverless big rigs without having to run them for miles on real roads. 

The company uses a digital twin of its real-world robotruck, loaded up with real sensor data, and measures how the twin’s performance compares with that of real trucks on real roads. Waabi says they now match almost exactly. The company claims its approach is a better way to demonstrate safety than just racking up real-world miles, as many of its competitors do.

“It brings accountability to the industry,” says Raquel Urtasun, Waabi’s firebrand founder and CEO (who is also a professor at the University of Toronto). “There are no more excuses.”

After quitting Uber, where she led the ride-sharing firm’s driverless-car division, Urtasun founded Waabi in 2021 with a different vision for how autonomous vehicles should be made. The firm, which has partnerships with Uber Freight and Volvo, has been running real trucks on real roads in Texas since 2023, but it carries out the majority of its development inside a simulation called Waabi World. Waabi is now taking its sim-first approach to the next level, using Waabi World not only to train and test its driving models but to prove their real-world safety.

For now, Waabi’s trucks drive with a human in the cab. But the company plans to go human-free later this year. To do that, it needs to demonstrate the safety of its system to regulators. “These trucks are 80,000 pounds,” says Urtasun. “They’re really massive robots.”

Urtasun argues that it is impossible to prove the safety of Waabi’s trucks just by driving on real roads. Unlike robotaxis, which often operate on busy streets, many of Waabi’s trucks drive for hundreds of miles on straight highways. That means they won’t encounter enough dangerous situations by chance to vet the system fully, she says.  

But before using Waabi World to prove the safety of its real-world trucks, Waabi first has to prove that the behavior of its trucks inside the simulation matches their behavior in the real world under the exact same conditions.

Virtual reality

Inside Waabi World, the same driving model that controls Waabi’s real trucks gets hooked up to a virtual truck. Waabi World then feeds that model simulated video, radar, and lidar inputs mimicking those that real trucks receive. The simulation can re-create a wide range of weather and lighting conditions. “We have pedestrians, animals, all that stuff,” says Urtasun. “Objects that are rare—you know, like a mattress that’s flying off the back of another truck. Whatever.”

Waabi World also simulates the properties of the truck itself, such as its momentum and acceleration, and its different gear shifts. And it simulates the truck’s onboard computer, including the microsecond time lags between receiving and processing inputs from different sensors in different conditions. “The time it takes to process the information and then come up with an outcome has a lot of impact on how safe your system is,” says Urtasun.

To show that Waabi World’s simulation is accurate enough to capture the exact behavior of a real truck, Waabi then runs it as a kind of digital twin of the real world and measures how much they diverge.


Here’s how that works. Whenever its real trucks drive on a highway, Waabi records everything—video, radar, lidar, the state of the driving model itself, and so on. It can rewind that recording to a certain moment and clone the freeze-frame with all the various sensor data intact. It can then drop that freeze-frame into Waabi World and press Play.

The scenario that plays out, in which the virtual truck drives along the same stretch of road as the real truck did, should match the real world almost exactly. Waabi then measures how far the simulation diverges from what actually happened in the real world.

No simulator can re-create the complex interactions of the real world for long. So Waabi takes snippets of its timeline every 20 seconds or so. It then runs many thousands of such snippets, exposing the system to many different scenarios, such as lane changes, hard braking, oncoming traffic, and more.

Waabi claims that Waabi World is 99.7% accurate. Urtasun explains what that means: “Think about a truck driving on the highway at 30 meters per second,” she says. “When it advances 30 meters, we can predict where everything will be within 10 centimeters.”
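Waabi hasn’t published the exact metric behind that figure, but the gist of a per-snippet comparison is straightforward to sketch. What follows is a minimal illustration, not Waabi’s actual code: it assumes both trajectories are sampled at matching timestamps and borrows Urtasun’s 10-centimeter figure as the pass threshold.

// Illustrative only: compare a simulated trajectory against the recorded
// real-world one, sample by sample. The trajectory format and the 0.10 m
// threshold are assumptions based on Urtasun's description.
function maxDivergenceMeters(realTrack, simTrack) {
  // Each track is an array of {x, y} positions (in meters) at matched timestamps.
  let worst = 0;
  const n = Math.min(realTrack.length, simTrack.length);
  for (let i = 0; i < n; i++) {
    const dx = realTrack[i].x - simTrack[i].x;
    const dy = realTrack[i].y - simTrack[i].y;
    worst = Math.max(worst, Math.hypot(dx, dy));
  }
  return worst;
}

// A 20-second snippet counts as a match if the virtual truck stays within
// 10 centimeters of the real truck's recorded position throughout.
const snippetMatches = (real, sim) => maxDivergenceMeters(real, sim) <= 0.10;

Aggregated over many thousands of snippets, the share that match would yield a headline figure like the 99.7% Waabi reports.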

Waabi plans to use its simulation to demonstrate the safety of its system when seeking the go-ahead from regulators to remove humans from its trucks this year. “It is a very important part of the evidence,” says Urtasun. “It’s not the only evidence. We have the traditional Bureau of Motor Vehicles stuff on top of this—all the standards of the industry. But we want to push those standards much higher.”

“A 99.7% match in trajectory is a strong result,” says Jamie Shotton, chief scientist at the driverless-car startup Wayve. But he notes that Waabi has not shared any details beyond the blog post announcing the work. “Without technical details, its significance is unclear,” he says.

Shotton says that Wayve favors a mix of real-world and virtual-world testing. “Our goal is not just to replicate past driving behavior but to create richer, more challenging test and training environments that push AV capabilities further,” he says. “This is where real-world testing continues to add crucial value, exposing the AV to spontaneous and complex interactions that simulation alone may not fully replicate.”

Even so, Urtasun believes that Waabi’s approach will be essential if the driverless-car industry is going to succeed at scale. “This addresses one of the big holes that we have today,” she says. “This is a call to action in terms of, you know—show me your number. It’s time to be accountable across the entire industry.”

Everyone in AI is talking about Manus. We put it to the test.

Since the general AI agent Manus was launched last week, it has spread online like wildfire. And not just in China, where it was developed by the Wuhan-based startup Butterfly Effect. It’s made its way into the global conversation, with influential voices in tech, including Twitter cofounder Jack Dorsey and Hugging Face product lead Victor Mustar, praising its performance. Some have even dubbed it “the second DeepSeek,” comparing it to the earlier AI model that took the industry by surprise for its unexpected capabilities as well as its origin.

Manus claims to be the world’s first general AI agent, leveraging multiple AI models (such as Anthropic’s Claude 3.5 Sonnet and fine-tuned versions of Alibaba’s open-source Qwen) and various independently operating agents to act autonomously on a wide range of tasks. (This makes it different from AI chatbots, including DeepSeek, which are based on a single large language model family and are primarily designed for conversational interactions.) 

Despite all the hype, very few people have had a chance to use it. Currently, under 1% of the users on the wait list have received an invite code. (It’s unclear how many people are on this list, but for a sense of how much interest there is, Manus’s Discord channel has more than 186,000 members.)

MIT Technology Review was able to obtain access to Manus, and when I gave it a test-drive, I found that using it feels like collaborating with a highly intelligent and efficient intern: While it occasionally lacks understanding of what it’s being asked to do, makes incorrect assumptions, or cuts corners to expedite tasks, it explains its reasoning clearly, is remarkably adaptable, and can improve substantially when provided with detailed instructions or feedback. Ultimately, it’s promising but not perfect.

Just like its parent company’s previous product, an AI assistant called Monica that was released in 2023, Manus is intended for a global audience. English is set as the default language, and its design is clean and minimalist.

To get in, a user has to enter a valid invite code. Then the system directs users to a landing page that closely resembles those of ChatGPT or DeepSeek, with previous sessions displayed in a left-hand column and a chat input box in the center. The landing page also features sample tasks curated by the company—ranging from business strategy development to interactive learning to customized audio meditation sessions.

Like other reasoning-based agentic AI tools, such as ChatGPT DeepResearch, Manus is capable of breaking tasks down into steps and autonomously navigating the web to get the information it needs to complete them. What sets it apart is the “Manus’s Computer” window, which allows users not only to observe what the agent is doing but also to intervene at any point. 

To put it to the test, I gave Manus three assignments: (1) compile a list of notable reporters covering China tech, (2) search for two-bedroom property listings in New York City, and (3) nominate potential candidates for Innovators Under 35, a list created by MIT Technology Review every year. 

Here’s how it did:

Task 1: The first list of reporters that Manus gave me contained only five names, with five “honorable mentions” below them. I noticed that it listed some journalists’ notable work but didn’t do this for others. I asked Manus why. The reason it offered was hilariously simple: It got lazy. It was “partly due to time constraints as I tried to expedite the research process,” the agent told me. When I insisted on consistency and thoroughness, Manus responded with a comprehensive list of 30 journalists, noting their current outlet and listing notable work. (I was glad to see I made the cut, along with many of my beloved peers.) 

I was impressed that I was able to make top-level suggestions for changes, much as someone would with a real-life intern or assistant, and that it responded appropriately. And while it initially overlooked changes in some journalists’ employer status, when I asked it to revisit some results, it quickly corrected them. Another nice feature: The output was downloadable as a Word or Excel file, making it easy to edit or share with others. 

Manus hit a snag, though, when accessing journalists’ news articles behind paywalls; it frequently encountered captcha blocks. Since I was able to follow along step by step, I could easily take over to complete these, though many media sites still blocked the tool, citing suspicious activity. I see potential for major improvements here—and it would be useful if a future version of Manus could proactively ask for help when it encounters these sorts of restrictions.

Task 2: For the apartment search, I gave Manus a complex set of criteria, including a budget and several parameters: a spacious kitchen, outdoor space, access to downtown Manhattan, and a major train station within a seven-minute walk. Manus initially interpreted vague requirements like “some kind of outdoor space” too literally, completely excluding properties without a private terrace or balcony access. However, after more guidance and clarification, it was able to compile a broader and more helpful list, giving recommendations in tiers and neat bullet points. 

The final output felt like it came straight from Wirecutter, with subtitles like “best overall,” “best value,” and “luxury option.” This task (including the back-and-forth) took less than half an hour—a lot less time than compiling the list of journalists (which took a little over an hour), likely because property listings are more openly available and well-structured online.

Task 3: This was the largest in scope: I asked Manus to nominate 50 people for this year’s Innovators Under 35 list. Producing this list is an enormous undertaking, and we typically get hundreds of nominations every year. So I was curious to see how well Manus could do. It broke the task into steps, including reviewing past lists to understand selection criteria, creating a search strategy for identifying candidates, compiling names, and ensuring a diverse selection of candidates from all over the world.

Developing a search strategy was the most time-consuming part for Manus. While it didn’t explicitly outline its approach, the Manus’s Computer window revealed the agent rapidly scrolling through websites of prestigious research universities, announcements of tech awards, and news articles. However, it again encountered obstacles when trying to access academic papers and paywalled media content.

After three hours of scouring the internet—during which Manus (understandably) asked me multiple times whether I could narrow the search—it was only able to give me three candidates with full background profiles. When I pressed it again to provide a complete list of 50 names, it eventually generated one, but certain academic institutions and fields were heavily overrepresented, reflecting an incomplete research process. After I pointed out the issue and asked it to find five candidates from China, it managed to compile a solid five-name list, though the results skewed toward Chinese media darlings. Ultimately, I had to give up after the system warned that Manus’s performance might decline if I kept inputting too much text.

My assessment: Overall, I found Manus to be a highly intuitive tool suitable for users with or without coding backgrounds. On two of the three tasks, it provided better results than ChatGPT DeepResearch, though it took significantly longer to complete them. Manus seems best suited to analytical tasks that require extensive research on the open internet but have a limited scope. In other words, it’s best to stick to the sorts of things a skilled human intern could do during a day of work.

Still, it’s not all smooth sailing. Manus can suffer from frequent crashes and system instability, and it may struggle when asked to process large chunks of text. The message “Due to the current high service load, tasks cannot be created. Please try again in a few minutes” flashed on my screen a few times when I tried to start new requests, and occasionally Manus’s Computer froze on a certain page for a long period of time. 

It has a higher failure rate than ChatGPT DeepResearch—a problem the team is addressing, according to Manus’s chief scientist, Peak Ji. That said, the Chinese media outlet 36Kr reports that Manus’s per-task cost is about $2, which is just one-tenth of DeepResearch’s cost. If the Manus team strengthens its server infrastructure, I can see the tool becoming a preferred choice for individual users, particularly white-collar professionals, independent developers, and small teams.

Finally, I think it’s really valuable that Manus’s working process feels relatively transparent and collaborative. It actively asks questions along the way and retains key instructions as “knowledge” in its memory for future use, allowing for an easily customizable agentic experience. It’s also really nice that each session is replayable and shareable.

I expect I will keep using Manus for all sorts of tasks, in both my personal and professional lives. While I’m not sure the comparisons to DeepSeek are quite right, it serves as further evidence that Chinese AI companies are not just following in the footsteps of their Western counterparts. Rather than just innovating on base models, they are actively shaping the adoption of autonomous AI agents in their own way.

‘Do’ Queries Are an SEO Priority

Search engine optimizers have long segmented searchers’ intent into three types: to gain information, make a purchase, or locate a business or a person.

Google’s latest quality raters’ guidelines offer a different approach to intent: “know simple,” “know,” and “do.” Google’s method helps prioritize optimization efforts for today’s AI-driven search results and features.

Here’s how to adjust your organic search strategy based on the new guidelines.

‘Know simple’ queries

Per Google, searchers needing a quick answer have a “know simple” intent. Examples include “weather,” “when was the Empire State Building constructed,” or “how much protein in an egg.”

Those are low-priority keywords because Google now provides very satisfying answers in search results, removing the need to click (especially with AI Overviews).

That doesn’t mean ignoring “know simple” questions on your site. Respond to visitor queries to keep them from leaving, but don’t expect the answers to drive traffic.

‘Know’ queries

“Know” queries demand longer, more detailed answers. Before AI Overviews, Google served only “featured snippets”: quick answers that required searchers to click links for the full explanation.

AI Overviews provide much more detail than featured snippets, and there’s often no need to click elsewhere. Consider, for example, the AI Overviews response to “why take probiotics.” The response includes links, but its thoroughness suggests no need for further research.

Nonetheless, creating and optimizing content for “know” queries could generate relevant traffic and, like “know simple” answers, help visitors.


‘Do’ queries

“Do” searches imply an action and represent huge organic search potential. The action could be to purchase an item (“magnesium online”) or, say, to take a vacation (“what to see in Hawaii”).

Google states some “do” queries are “open-ended,” meaning many types of content could help. For example, searchers of “bathroom organization ideas” may want to browse images and videos, read how-to guides, or both.

Many “do” queries are impossible to answer in an AI Overview or featured snippet. Searchers will likely click off the results to perform the action.

That is why such queries have solid SEO potential.

The intent of “do” queries could be commercial or informational; both are important for SEO. Informational queries offer an opportunity to show products in context, such as a cabinet seller responding to the “bathroom organization ideas” search. This problem-solving content can drive sales while helping AI understand your items.

For example, Home Depot produces many how-to guides that list and link to relevant products. For a “drywall repair” query, Home Depot’s tutorial ranks number 1 in organic search on Google, explaining how to patch and repair drywall and linking to those products.

The tutorial includes a video, difficulty level, and duration to complete. Users can add products without leaving the page. The tutorial responds to a “do” query and generates organic search traffic that drives conversions.


Is Google’s Use Of Compressibility An SEO Myth? via @sejournal, @martinibuster

I recently came across an SEO test that attempted to verify whether compression ratio affects rankings. It seems there may be some who believe that higher compression ratios correlate with lower rankings. Understanding compressibility in the context of SEO requires reading both the original source on compression ratios and the research paper itself before drawing conclusions about whether or not it’s an SEO myth.

Search Engines Compress Web Pages

Compressibility, in the context of search engines, refers to how much web pages can be compressed. Shrinking a document into a zip file is an example of compression. Search engines compress indexed web pages because it saves space and results in faster processing. It’s something that all search engines do.

Websites & Host Providers Compress Web Pages

Web page compression is a good thing because it helps search crawlers access pages quickly, which in turn signals to Googlebot that the server isn’t strained and that it’s okay to grab even more pages for indexing.

Compression speeds up websites, providing site visitors a high-quality user experience. Most web hosts automatically enable compression because it’s good for websites and site visitors, and it’s also good for web hosts because it saves bandwidth. Everybody wins with website compression.

High Levels Of Compression Correlate With Spam

Researchers at a search engine discovered that highly compressible web pages correlated with low-quality content. The study, called Spam, Damn Spam, and Statistics: Using Statistical Analysis to Locate Spam Web Pages (PDF), was conducted in 2006 by two of the world’s leading researchers, Marc Najork and Dennis Fetterly.

Najork currently works at DeepMind as a Distinguished Research Scientist. Fetterly, a software engineer at Google, is an author of many important research papers related to search, content analysis, and other related topics. This isn’t just any research paper; it’s an important one.

The research paper shows that 70% of web pages that compress at a ratio of 4.0 or higher tended to be low-quality pages with a high level of redundant word usage. The average compression ratio of normal pages was around 2.0.

Here are the averages of normal web pages listed by the research paper:

  • Compression ratio of 2.0: the most frequently occurring ratio in the dataset (the mode).
  • Compression ratio of 2.1: half of the pages have a ratio below 2.1 and half above it (the median).
  • Compression ratio of 2.11: the average compression ratio of the pages analyzed (the mean).
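For the curious, the ratio itself is simple to compute: divide a page’s uncompressed size by its compressed size. Here is a minimal Node.js sketch; note that the paper’s exact preprocessing (such as whether markup was stripped before compressing) isn’t reproduced here, so the numbers are only illustrative.

const zlib = require('zlib');

// Compression ratio = uncompressed bytes / gzip-compressed bytes.
function compressionRatio(text) {
  const raw = Buffer.byteLength(text, 'utf8');
  const compressed = zlib.gzipSync(Buffer.from(text, 'utf8')).length;
  return raw / compressed;
}

// Highly redundant, keyword-stuffed text compresses far more readily than
// normal prose, pushing the ratio well above the ~2.0 average.
const stuffed = 'best cheap blue widgets '.repeat(400);
console.log(compressionRatio(stuffed).toFixed(2));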

Compressibility would be an easy first-pass way to filter out obvious content spam, so it makes sense that search engines would use it to weed out heavy-handed spam. But weeding out spam is more complicated than any single simple solution, which is why search engines use multiple signals: it results in a higher level of accuracy.

The researchers reported that 70% of sites with a compression ratio of 4.0 or higher were spam. That means the other 30% were not spam sites. There are always outliers in statistics, and that 30% of non-spam sites is why search engines tend to use more than one signal.

Do Search Engines Use Compressibility?

It’s reasonable to assume that search engines use compressibility to identify heavy-handed, obvious spam. But it’s also reasonable to assume that if search engines employ it, they use it together with other signals to increase accuracy. Nobody knows for certain whether Google uses compressibility.

Is There Proof That Compression Is An SEO Myth?

Some SEOs have published research analyzing the rankings of thousands of sites for hundreds of keywords. They found that both the top-ranking and bottom-ranked sites had a compression ratio of about 2.4. The difference between their compression ratios was just 2%, meaning the scores were essentially equal. Those results are close to the normal average range of 2.11 reported in the 2006 scientific study.

The SEOs claimed that the mere 2% higher compression levels of the top-ranked sites over the bottom-ranked sites prove that compressibility is an SEO myth. Of course, that claim is incorrect. The average compression ratio of normal sites in 2006 was 2.11, which means the average 2.4 ratio in 2025 falls well within the range of normal, non-spam websites.

The ratio for spam sites is 4.0, so the fact that both the top- and bottom-ranked sites have ratios of about 2.4 is meaningless, since both scores fall within the range of normal.

If we assume that Google is using compressibility, a site would have to produce a compression ratio of 4.0, plus send other low-quality signals, to trigger an algorithmic action. If that happened, those sites wouldn’t be in the search results at all because they wouldn’t be in the index, so there is no way to test for it in the SERPs, right?

It would be reasonable to assume that sites with high 4.0 compression ratios were removed. But we don’t know that; it’s not a certainty.

Is Compressibility An SEO Myth?

Compressibility may not be an SEO myth. But it’s probably not anything publishers or SEOs should worry about, as long as they’re avoiding heavy-handed tactics like keyword stuffing or repetitive cookie-cutter pages.

Google uses de-duplication, which removes duplicate pages from its index and consolidates PageRank signals to whichever page it chooses as the canonical one (if it chooses one). Publishing duplicate pages will likely not trigger any kind of penalty, including anything related to compression ratios, because, as was already mentioned, search engines don’t use signals in isolation.

Deep SEO: The Potential Impact Of AI Mode And Deep Search Models via @sejournal, @Kevin_Indig


Last week, Google officially launched AI Mode, an AI Overview on steroids, in beta.

Almost one year after the launch of AI Answers in the search results, we have enough data to show that the net impact on the open web is as positive as Trump’s tariffs on the U.S. economy. Spoiler: Not very positive.

AI Overviews take more clicks away from classic search results than they give back to cited sources. AI Mode has the potential to rip wide open the traffic deficit that AI Overviews are already causing.

I argue that the appearance of AI Mode is the natural response to Deep Research, and a glimpse of the potential future of AI search.

Image Credit: Lyna ™

Deep Research Threatens Google

Deep Research is an AI agent by OpenAI that creates lengthy reports about a subject of your choice: “An agent that uses reasoning to synthesize large amounts of online information and complete multi-step research tasks for you.”

The most obvious application is market research, but the agent can also deliver rich insights into consumer topics like buying a car, booking a trip, or taking out a loan.

Deep research is built for people who do intensive knowledge work in areas like finance, science, policy, and engineering and need thorough, precise, and reliable research. It can be equally useful for discerning shoppers looking for hyper-personalized recommendations on purchases that typically require careful research, like cars, appliances, and furniture.

Image Credit: Kevin Indig

Deep Research performs tens to hundreds of searches to compile a report. I’ve tried prompts for purchase decisions.

When I asked for “the best hybrid family car with 7 seats in the price range of $50,000 to $80,000,” Deep Research browsed through 41 search results and reasoned its way through the content.

Some examples of search reasoning:

I found a Kelley Blue Book article on 7-seat hybrids. It’s a good starting point, though not all details are always included.

I’m digging into cargo capacities for the Toyota Highlander and Grand Highlander. The Grand Highlander is proving to be more spacious, making it a solid option for larger families.

I’m looking at the XC90 Recharge’s electric and combined ranges, and the discrepancies in MPGe figures from different sources.

The report took 10 minutes to put together but probably saved hours of human research and at least 41 clicks – clicks that could’ve gone to Google ads.

The Case For Deep Search

Are Deep Research agents a threat to Google? I think so.

Here’s why:

  1. The results are impressive, and the time savings are massive. Early on, Google bragged about how quickly it put search results together. But that was speed to results, not speed to answers. Today, Deep Search agents take minutes to get to an answer, but that answer is all you need.
  2. There is massive potential for personalization, from sources to search criteria.
  3. Conversational back-and-forth, just like with a salesperson in a store. Deep Research agents provide a concise summary that users can expand and explore at their own pace.
  4. It turns out that every search engine or AI chatbot either already has a Deep Search agent or is working on one. It could truly be the future of Search for complex queries.
Image Credit: Kevin Indig

Bing has had a “Deep Search” feature since December 2023! And it does exactly what the name promises, just faster and not as deep as ChatGPT’s agent.

Today’s search engines are powerful tools that help us find information on the web, but sometimes they fall short of our expectations. When we have complex, nuanced, or specific questions, we often struggle to find the answers we need. We ourselves know what we’re looking for, but the search engine just doesn’t seem to understand.

That’s why we created deep search, a new Microsoft Bing feature that provides even more relevant and comprehensive answers to the most complex search queries. Deep search is not a replacement for Bing’s existing web search, but an enhancement that offers the option for a deeper and richer exploration of the web.1

I didn’t think I’d live long enough to see the day that Google copies Bing … But they’re not alone.

Grok has “Deep Search” and Gemini and Perplexity have “Deep Research.” Everyone is copying each other, and they’re not even putting in the effort to choose a different name. What a strong sign of commoditization.

Google’s AI Mode

My theory: Google modeled AI Mode after Bing’s Deep Search after seeing what ChatGPT’s Deep Research can do.

Using a custom version of Gemini 2.0, AI Mode is particularly helpful for questions that need further exploration, comparisons and reasoning. You can ask nuanced questions that might have previously taken multiple searches — like exploring a new concept or comparing detailed options — and get a helpful AI-powered response with links to learn more.2

Interestingly, AI Mode has the opposite effect of AI Overviews: In Google’s Q3 earnings announcement, Sundar Pichai said Google sees an “increase in search usage among people who use the new AI overviews”.3

So, AI Overviews lead to more searches, but AI Mode saves users time and queries:

You can ask nuanced questions that might have previously taken multiple searches — like exploring a new concept or comparing detailed options — and get a helpful AI-powered response with links to learn more.4

I don’t think we’ll ever go back to the pre-AI way of search. The universal key challenge of AI answers, whatever their form, is trust. The obvious problem is hallucination.

It’s ironic that ChatGPT Deep Research tells me it browsed through 29 sources, but when I counted, I found 41.

However, reasoning models are getting better at solving this problem with raw computing, i.e., by “thinking harder” about their answers.

The bigger solvable problem for Deep Search agents is source selection.

Untrustworthy sources are the microplastics of AI answers. There is a good reason why all reasoning models openly show their reasoning.

Even though we might pay as much attention to the reasoning details as to any Terms of Service, they make us feel like a lot is happening in the background.

Perception is important for trust. However, source selection is a very solvable problem: Users can simply tell the model to ignore the sources they don’t want, and the model memorizes that behavior over time.

Two less solvable problems remain:

  • Bias: In my analysis of AI chatbot research, I pointed out that LLMs have a bias towards global brands, luxury brands, corporate sources and prompt sentiment.
  • Access: Information needs to be on the internet for Deep Search agents to find it (that’s where Google and Bing have a big competitive advantage).

The biggest question, of course, is whether Deep Search Agents will find broad adoption or stay in the knowledge worker bubble.

AI Mode could bring it to the masses and drive the stake deeper into the heart of informational clicks.

The Impact On SEO

AI Overviews spiked in November ‘24 and February ‘25 (Image Credit: Kevin Indig)

The impact of AI Overviews on SEO traffic is negative.

In my meta-analysis of 19 studies about AI Overviews, I found that AIOs reduce click-through rates across the board. Will AI Mode make it worse? Most likely. But there is hope.

First of all, Deep Search agents are very transparent with their sources and sometimes queries.

ChatGPT’s Deep Research literally calls out what it’s searching for, so we can hopefully track and optimize for these queries. So far, LLMs still rely on search results a lot.

Second, just because searchers get answers before clicking through to websites doesn’t mean their purchase intent goes away.

What goes away for marketers is the ability to influence buyers on their website before they buy – as long as AI chatbots don’t offer a direct checkout.

We’ll need to find other ways to influence buyers: brand marketing, Reddit, YouTube, social media, advertising.

Third, there is a chance that AI Mode shows up predominantly for informational keywords, just like AI Overviews. In that case, a lot of weight will fall on high-intent keywords, like “buy x” or “order y.”

Fourth, Bing doesn’t separate the Deep Search answer but parks it in the middle of organic and paid results, garnished with links to sources. Hopefully, users will still click outside the deep answer.

I wonder how Google plans to monetize AI Mode, which must be more costly and resource-intensive.

To be fair, Google reduced the cost of an AI Overview by 90%, which tells me they figured out the unit economics. So, it’s possible.

But could this be an opportunity to bring the idea of monetizing Search partially with subscriptions back on the table?

According to a report by The Information, OpenAI is considering charging “up to $20,000 per month for specialized AI agents” that could perform PhD-level research, $10,000 for a software developer agent, and $2,000 for a knowledge worker agent.5

Still a long way to go, but it brings up a nice theory about AI Mode: What if Google users could pay for better models that give better answers, or have better skills?


1 Introducing deep search

2 Expanding AI Overviews and introducing AI Mode

3 Q3 earnings call: CEO’s remarks

4 Expanding AI Overviews and introducing AI Mode

5 OpenAI Plots Charging $20,000 a Month For PhD-Level Agents


Featured Image: Paulo Bobita/Search Engine Journal

Ask An SEO: How Can You Distinguish Yourself In This Era Of AI Search Engines? via @sejournal, @HelenPollitt1

Today’s question comes from FC, who asks:

“As an SEO specialist for over 6 years now, what and where does one need to focus with regard to SEO in this current dispensation.

How can you distinguish yourself and standout as an SEO specialist in this era of generative AI and AI search engines?”

This is an excellent question because it goes right to the heart of concerns I hear from a lot of SEO professionals. They have managed to build a solid career and name for themselves as an SEO specialist, but it now feels like the game has changed.

They worry that the skills and experience that got them to this point will not be enough to keep them excelling.

I want to address those concerns, both from the perspective of job seekers and those looking to make an impression in their current role.

What’s Changed

Up until a couple of years ago, it felt like there were clear career choices for SEO specialists to make.

Employed or self-employed? In-house or agency? Technical SEO or content SEO? Small business or enterprise sites? People manager or hands-on practitioner?

This series of decisions, or simply the circumstances we found ourselves in, shaped our career paths.

There were central components to SEO. Primarily, you would be working with Google. You would be measured on key performance indicators (KPIs) like clicks and conversions.

You could impress stakeholders by linking your work directly to revenue.

It doesn’t seem as simple as that now, though.

LLMs And Social Media

More recently, there has been a focus on looking at optimizing brands’ presence in other search platforms, not just Bing, Yandex, Baidu, and other regionally relevant search engines.

It now includes platforms not traditionally thought of as belonging to the purview of SEO: TikTok, Perplexity AI, and app stores.

KPIs And Metrics

Google’s walled garden is growing larger, and proving the worth of SEO is getting harder. It’s increasingly difficult to show growth in your share of organic clicks when the pot is getting smaller.

With more answers being given in the search results themselves, and a reduction in the need for clicks off the SERPs, tracking the impact of SEO isn’t straightforward.

With potential – and current – employers still looking at year-on-year clicks, impressions, and revenue growth as their measure of an SEO’s success, this makes standing out quite challenging.

The Skills That Remain Important

I fundamentally believe that the foundational principles of SEO remain unchanged.

However, how we apply them may change with the advent of LLMs and other search platforms.

Technical SEO

A crawl issue that is preventing Googlebot smartphone from accessing the key pages on your site will likely also affect PerplexityBot and OpenAI’s OAI-SearchBot.

As SEOs, we will need to be able to identify where these bots are struggling to crawl pages. We will need to find solutions that enable them to access the pages we want served in their search results.
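As a quick illustration, it’s worth confirming that robots.txt isn’t shutting these crawlers out. The user-agent tokens below are the ones the vendors document publicly; treat this as a sketch and verify the names against each platform’s current documentation:

User-agent: Googlebot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: OAI-SearchBot
Allow: /

A crawler that is blocked here never reaches the indexing stage, no matter how good the content is.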

To stand out, make sure you are not just thinking Google-first with your technical solutions.

Consider the other sources of traffic, like LLMs and social media, which might be impacted by the decisions you are making.

Ensure you are also tracking and reporting on the impact of these changes across these other platforms.

Content SEO

Understanding what content searchers are looking for, how search engines perceive it, and what they are choosing to serve as search results is a fundamental aspect of SEO. This won’t change.

However, how you discuss it and the actions you take will change.

From now on, not only are the Google algorithms important for how you create and optimize content, but so are a host of other algorithms.

You will need to consider how searchers are surfacing content through other search platforms. You will also need to know how to make sure your content is served as the result.

Make sure you are moving away from Google as the only algorithm to optimize for and towards the other drivers of traffic and visibility.

Digital PR

I would suggest that digital PR is becoming even more important.

As the search engines we are optimizing for become more numerous, the key factor that seems to unite them is a reward of “authority.”

That is, to give your content a chance of being served as a result in any search engine, it needs to be perceived as authoritative on the subject.

These newer search platforms will still need to use similar methods to Google in identifying expertise and authoritativeness. Digital PR will be key in that.

I do feel that we need to stop making backlinks the main priority of digital PR, however.

Instead, we need to start focusing on how we report on mentions, citations, and conversations about brands and products.

For example, we can look at social media engagement metrics as an indicator of authority. Brand perception may well be formed through forum discussions, reviews, and comments on social media sites.

Just because we know that Googlebot discounts links from some social media platforms in attributing authority doesn’t mean that the newer search engines will. Indeed, they may well rely heavily on social media sites to understand brands.

For now, set yourself apart by rethinking the purpose of digital PR for SEO. Look at the benefits to the brand as a whole and start factoring this into your strategies.

“Soft” Skills

I maintain that the most successful SEO professionals are those who have mastered the non-SEO-specific skills that make businesses work.

Strategic thinking, stakeholder management, and leadership skills are all critical to success not only in SEO, but also in any career.

To really stand out in the changing SEO industry, focus on how these skills will need to be applied.

For example, factor social media and LLMs into your SEO strategies. Make sure you are not just focusing on Google, but introducing the idea that SEO is broader than that.

Make sure you are liaising with development teams to loop them into your ideas for how to make the site accessible to AI bots. Work on being a thought leader in LLMs and new search platforms for your company.

These sorts of skills are those that will really make you stand out, but you need to apply them with the future of SEO in mind. Future-proof your careers as well as your websites!

Cross-Platform Knowledge

This is probably the hardest one for some SEO specialists to do. Stop looking at Google as the source of all SEO performance and widen the net.

Get comfortable with the other AI search platforms that are beginning to send traffic to your site. Use them yourself, and get familiar with what sort of content they serve and why.

Use social media sites and forums that are where your audience discusses brands like yours. Make sure that you are aware of how they work, and how to participate in those discussions without negative backlash.

Stand out by looking outside of the narrow “Google is SEO” box.

Being An Expert In The New Era Of SEO

How, then, can you guarantee that you are still perceived as an expert in SEO while the goalposts are changing?

What will make you stand out when you are applying for new jobs right now?

How can you prove that your skillset is still relevant whilst others are proclaiming “SEO is dead” (again)?

Demonstrate Impact Through Other Channels

Look at how you can collaborate more with adjacent channels.

For example, I’ve mentioned that social media and forums will be key areas where LLMs will discern brand relevancy and trustworthiness. Work with your teams who are already on those platforms.

Start helping them in areas where you are already an expert, for example: understanding algorithms, creating optimized content, and measuring brand authority.

Drive impact in those areas and report on it alongside your more traditional SEO metrics.

Demonstrate Impact Through Other Metrics That Still Line Up With Corporate Goals

Although we are used to reporting on metrics like clicks, rankings, and impressions for SEO, we may need to start looking at other metrics if we want to continue showing the worth of SEO.

For example, consider utilizing tools like Otterly and Goodie to measure visibility in AI search platforms. Or, at the very least, some of the more traditional search engine rankings tools also cover Google’s AI Overview visibility.

Use these tools to demonstrate how the work you are doing is impacting the brand’s performance in AI search platforms.

Continue to relate all work you do back to revenue, or other core conversion goals for your business. Don’t forget to show how traffic from LLMs is converting on your site.

Continue Learning

A key way to stand out in your SEO career at the moment is to show a willingness to upskill and diversify your skillset.

The SEO landscape is shifting, and as such, it’s important to stay on top of new platforms and how they work.

Make sure you are utilizing training that is available on LLM optimization. Use the platforms yourself so you can understand what search real estate is available on them.

Share your findings in interviews and in discussions with colleagues to highlight what you’ve learned.

Although this may seem basic, you may find there are a lot of SEO professionals out there with their heads still buried in the sand when it comes to the evolution of the discipline.

Stand Out By Being Adaptable

At the end of the day, SEO is changing. That doesn’t mean that the skills we’ve developed over the past years are obsolete.

Instead, they are even more in demand as new platforms promise new avenues to reach prospective audiences.

The best way to stand out as an SEO in the current era of SEO is by being adaptable.

Learn how to apply your SEO skills to these emerging platforms and track your success.



Featured Image: Paulo Bobita/Search Engine Journal

The cheapest way to supercharge America’s power grid

US electricity consumption is rising faster than it has in decades, thanks in part to the boom in data center development, the resurgence in manufacturing, and the increasing popularity of electric vehicles. 

Accommodating that growth will require building wind turbines, solar farms, and other power plants faster than we ever have before—and expanding the network of wires needed to connect those facilities to the grid.


Heat Exchange

MIT Technology Review’s guest opinion series, offering expert commentary on legal, political and regulatory issues related to climate change and clean energy.


But one major problem is that it’s expensive and slow to secure permits for new transmission lines and build them across the country. This challenge has created one of the biggest obstacles to getting more electricity generation online, reducing investment in new power plants and stranding others in years-long “interconnection queues” while they wait to join the grid.

Fortunately, there are some shortcuts that could expand the capacity of the existing system without requiring completely new infrastructure: a suite of hardware and software tools known as advanced transmission technologies (ATTs), which can increase both the capacity and the efficiency of the power sector.

ATTs have the potential to radically reduce timelines for grid upgrades, avoid tricky permitting issues, and yield billions in annual savings for US consumers. They could help us quickly bring online a significant portion of the nearly 2,600 gigawatts of backlogged generation and storage projects awaiting pathways to connect to the electric grid. 

The opportunity to leverage advanced transmission technologies to update the way we deliver and consume electricity in America is as close to a $20 bill sitting on the sidewalk as policymakers may ever encounter. Promoting the development and use of these technologies should be a top priority for politicians in Washington, DC, as well as electricity market regulators around the country.

That includes the new Trump administration, which has clearly stated that building greater electricity supply and keeping costs low for consumers are high priorities. 

In the last month, Washington has been consumed by the Trump team’s efforts to test the bounds of executive power, fire civil servants, and disrupt the basic workings of the federal government. But when or if the White House and Congress get around to enacting new energy policies, they would be wise to pick up the $20 bill by enacting bipartisan measures to accelerate the rollout of these innovative grid technologies.

ATTs generally fall into four categories: dynamic line ratings, which combine local weather forecasts and measurements on or near the transmission line to safely increase their capacity when conditions allow; high-performance conductors, which are advanced wires that use carbon fiber, composite cores, or superconducting materials to carry more electricity than traditional steel-core conductors; topology optimization, which uses software to model fluctuating conditions across the grid and identify the most efficient routes to distribute electricity from moment to moment; and advanced power flow control devices, which redistribute electricity to lines with available capacity. 




Other countries from Belgium to India to the United Kingdom are already making large-scale use of these technologies. Early projects in the United States have been remarkably successful as well. One recent deployment of dynamic line ratings increased capacity by more than 50% for only $45,000 per mile—roughly 1% of the price of building new transmission.

So why are we not seeing an explosion in ATT investment and deployment in the US? Because despite their potential to unlock 21st-century technology, the 20th-century structure of the nation’s electricity markets discourages adoption of these solutions. 

For one thing, under the current regulatory system, utilities generally make money by passing the cost of big new developments along to customers (earning a fixed annual return on their investment). That comes in the form of higher electricity rates, which local public utility commissions often approve after power companies propose such projects.

That means utilities have financial incentives to make large and expensive investments, but not to save consumers money. When ATTs are installed in place of building new transmission capacity, the smaller capital costs mean that utilities make lower profits. For example, utilities might earn $600,000 per year after building a new mile of transmission, compared with about $4,500 per mile annually after installing the equipment and software necessary for line ratings. While these state regulatory agencies are tasked with ensuring that utilities act in the best interest of consumers, they often lack the necessary information to identify the best approach for doing so.

Overcoming these structural barriers will require action from both state and federal governments, and it should appeal to Democrats and Republicans alike. We’ve already seen some states, including Minnesota and Montana, move in this direction, but policy interventions to date remain insufficient. In a recent paper, we propose a new approach for unlocking the potential of these technologies.

First, we suggest requiring transmission providers to use ATTs in some “no regrets” contexts, where possible downsides are minor or nonexistent. The Federal Energy Regulatory Commission, for example, is already considering requiring dynamic line ratings on certain highly congested lines. Given the low cost of dynamic line ratings, and their clear benefit in cases of congestion, we believe that FERC should quickly move forward with, and strengthen, such a rule. Likewise, the Department of Energy or Congress should adopt an efficiency standard for the wires that carry electricity around the country. Every year, approximately 5% of electricity generated is lost in the transmission and distribution process. The use of high-performance conductors can reduce those losses by 30%, cutting total losses from roughly 5% to about 3.5% of generation.

In addition, federal agencies and state lawmakers should require transmission providers to evaluate the potential for using ATTs on their grid, or provide support to help them do so. FERC has recently taken steps in this direction, and it should continue to strengthen those actions. 

Regulators should also provide financial incentives to transmission providers to encourage the installation of ATTs. The most promising approach is a “shared savings” incentive, such as that proposed in the recent Advancing GETS Act. This would allow utilities to earn a profit for saving money, not just spending it, and could save consumers billions on their electricity bills every year.

Finally, we should invest in building digital tools so transmission owners can identify opportunities for these technologies and so regulators can hold them accountable. Developing these systems will require transmission providers to share information about electricity supply and demand as well as grid infrastructure. Ideally, with such data in hand, researchers can develop a “digital twin” of the current transmission system to test different configurations of ATTs and help improve the performance and efficiency of our grids. 

We are all too aware that the world often faces difficult policy trade-offs. But laws or regulations that facilitate the use of ATTs can quickly expand the grid and save consumers money. They should be an easy yes on both sides of the aisle.

Brian Deese is an innovation fellow at the Massachusetts Institute of Technology and served as director of the White House National Economic Council from 2021 to 2023. Rob Gramlich is founder and president of Grid Strategies and was economic advisor to the chairman of the Federal Energy Regulatory Commission during the George W. Bush administration.

The Download: supercharging the power grid, and a new Chinese AI agent

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

The cheapest way to supercharge America’s power grid

—Brian Deese is an innovation fellow at the Massachusetts Institute of Technology and served as director of the White House National Economic Council from 2021 to 2023. Rob Gramlich is founder and president of Grid Strategies and was economic advisor to the chairman of the Federal Energy Regulatory Commission during the George W. Bush administration.

US electricity consumption is rising faster than it has in decades. Accommodating that growth will require building wind turbines, solar farms, and other power plants faster than we ever have before—and expanding the network of wires needed to connect those facilities to the grid.

But one major problem is that it’s expensive and slow to secure permits for new transmission lines and build them across the country. Fortunately, there are some shortcuts that could expand the capacity of the existing system without requiring completely new infrastructure: a suite of hardware and software tools known as advanced transmission technologies (ATTs), which can increase both the capacity and the efficiency of the power sector.

ATTs have the potential to radically reduce timelines for grid upgrades, avoid tricky permitting issues, and yield billions in annual savings for US consumers. So why are we not seeing an explosion in ATT investment and deployment in the US? Read the full story.

Interested in learning more about this topic? Read more of our stories:

+ What’s driving electricity demand? It isn’t just AI and data centers.

+ That said, AI’s search for energy is growing more urgent

+ Why this developer won’t quit fighting to connect the US’s grids. 

+ Here are four ways AI is making the power grid faster and more resilient. 

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 China claims to have created the world’s first fully autonomous AI agent 
The agent, called Manus, can allegedly operate fully free of human intervention. (Forbes)
+ But it’s not clear if the hype can be justified at this stage. (TechCrunch)
+ Two former DeepMind researchers are chasing superintelligence. (Bloomberg $)
+ Four Chinese AI startups to watch beyond DeepSeek. (MIT Technology Review)

2 Meta went to extreme lengths to win China’s approval
Including developing a censorship system to comply with the CCP. (WP $)
+ However, its attempts to curry favor with the party did not bear fruit. (Gizmodo)

3 Anonymous Chinese investors are quietly funding Elon Musk’s ventures
They’re happy to invest tens of millions—so long as their identities remain under wraps. (FT $)
+ Despite the influx of cash, SpaceX isn’t having a great year. (NYT $)
+ Starlink is reaping the benefits of its founder’s proximity to the White House. (NBC News)

4 Ukraine doesn’t have minable rare earths
And even if it did, it would take at least 15 years to reach them. (IEEE Spectrum)
+ The country is preparing to hold negotiations with the US this week. (Economist $)

5 Farewell, the Athena lunar lander
It landed sideways in a crater and has been officially written off. (The Register)
+ Intuitive Machines, the company behind it, is contracted for another two landings. (AP News)
+ Firefly Aerospace, another private firm, had better luck. (Economist $)

6 The American public really doesn’t like DOGE
And Donald Trump is starting to pay attention. (The Atlantic $)
+ Musk represents the problem he is claiming he wants to solve. (Wired $)
+ The Trump administration is threatening scientific progress. (New Yorker $)
+ Anti-Musk protestors are targeting Tesla stores and infrastructure. (WP $)

7 Wikipedia is struggling to document the war in the Middle East
Certain editors have been forbidden from working on related pages. (Bloomberg $)

8 How to store the world’s data
Hard disks seem the obvious choice—for now. (WSJ $)
+ Music labels are going after the Internet Archive for copyright infringement. (Ars Technica)
+ The race to save our online lives from a digital dark age. (MIT Technology Review)

9 YouTube bros are peddling Taliban tourism
Inside the depressing rise of videos purporting to show “another side to Afghanistan.” (Insider $)

10 Amazon and Google’s AI calls Mein Kampf “a true work of art” 
That’s what happens when you search for positive reviews of the Nazi manifesto. (404 Media)
+ Want AI that flags hateful content? Build it. (MIT Technology Review)

Quote of the day

“The US won the internet and the US should win crypto.”

—Tyler Winklevoss, who runs crypto exchange Gemini with his twin brother Cameron, could not be happier with the outcome of Donald Trump’s crypto summit, according to a post on X.

The big story

How this Turing Award–winning researcher became a legendary academic advisor

October 2023

Every academic field has its superstars. But a rare few achieve superstardom not just by demonstrating individual excellence but also by consistently producing future superstars.

Computer science has its own such figure: Manuel Blum, who won the 1995 Turing Award—the Nobel Prize of computer science. He is a co-inventor of the CAPTCHA—a test designed to distinguish humans from bots online.

Three of Blum’s students have also won Turing Awards, and many have received other high honors in theoretical computer science. More than 20 hold professorships at top computer science departments. So what’s the formula behind his success? Read the full story.

—Sheon Han

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Looking for some books to make you laugh out loud? Look no further.
+ What can’t White Lotus star Walton Goggins live without? An orange pen and 22-year-old sand, apparently.
+ When it’s time to take a break, here’s how to recharge properly.
+ $40 for “magic” yogurt? What the hell, sure.

Create Google Ads Scripts via GenAI

I’ve previously addressed the benefits of Google Ads scripts for automating aspects of account management. Generative AI platforms now make producing custom scripts much easier.

Scripts can go beyond automation to solve problems. For example, it’s not always evident when an account stops serving ads: the payment method may fail, or Google could reject the ads. The “Account Down” script runs hourly and sends an email when the account hasn’t recorded an impression in a designated period.
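
As a rough illustration, here’s a minimal sketch of that kind of check (not the exact script; the today-only lookback window and the email address are placeholder assumptions):

```javascript
// Minimal sketch of an "account down" alert. Schedule it to run hourly
// in the Google Ads Scripts interface.
function main() {
  // Impressions recorded so far today; a longer lookback could be used.
  var stats = AdsApp.currentAccount().getStatsFor('TODAY');
  if (stats.getImpressions() === 0) {
    MailApp.sendEmail(
        'you@example.com',  // placeholder address
        'Account Down: no impressions recorded today',
        'The account has zero impressions so far today. ' +
        'Check billing and ad approval status.');
  }
}
```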

I’ve also struggled with campaigns whose target cost per acquisition is higher than the budget. It’s not always obvious: a budget might be $500 while the tCPA is $600. In my experience, the budget should be at least three times the tCPA.

AI for Ad Scripts

Here’s where AI comes in. Tools such as ChatGPT, Gemini, and Claude (my favorite) can generate scripts quickly.

Sample prompt in Claude for the tCPA mismatch, and Claude’s response.

Here’s my prompt in Claude to create a script:

Please write a Google Ads script that reviews all campaigns and sends an email alert when the tCPA is higher than the campaign budget.

Here’s Claude’s response:

Google Ads tCPA Alert Script

This script:

1. Iterates through all enabled campaigns in your Google Ads account
2. Identifies campaigns using Target CPA bidding strategy
3. Compares each campaign’s tCPA value against its daily budget
4. Creates a list of campaigns where tCPA exceeds the budget
5. Sends you an email alert with details if any campaigns meet this condition

I pasted the script into Google Ads and received an error:

3/5/2025 3:47:20 PM TypeError: campaign.bidding(...).get is not a function
at main (Code:39:40)
at Object.<anonymous> (adsapp_compiled:20555:54)

I then prompted Claude to fix the error. Here’s Claude’s response:

Google Ads tCPA Alert Script – Fixed

I’ve fixed the error by correcting how the script retrieves the target CPA value. The issue was with the line `var bidding = campaign.bidding().get();` which was using an incorrect method call.

Key changes made:

1. Replaced the problematic method chain with a direct call to `campaign.bidding().getTargetCpa()`

2. Added error handling with try/catch blocks to gracefully handle different API versions and bidding strategy configurations

3. Added support for retrieving tCPA from shared bidding strategies (which have a different access method)

4. Added validation to ensure tCPA was successfully retrieved before comparing it to the budget

The script — download it here — now runs in Google Ads. I solved the problem through a series of prompts.
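
To give a sense of the final shape, here’s a simplified sketch of the core logic after the fix, built around the `campaign.bidding().getTargetCpa()` call Claude settled on. The selector condition and the email address are placeholder assumptions, not the downloadable script itself:

```javascript
// Simplified sketch of the fixed tCPA-vs-budget alert. The full script
// also handles shared bidding strategies, per Claude's change list.
function main() {
  var offenders = [];
  // AWQL-style condition; newer script versions may expect
  // 'campaign.status = ENABLED' instead.
  var campaigns = AdsApp.campaigns()
      .withCondition('Status = ENABLED')
      .get();
  while (campaigns.hasNext()) {
    var campaign = campaigns.next();
    var tcpa;
    try {
      tcpa = campaign.bidding().getTargetCpa();  // the corrected call
    } catch (e) {
      continue;  // not a Target CPA campaign, or tCPA unavailable
    }
    var budget = campaign.getBudget().getAmount();
    if (tcpa !== null && tcpa > budget) {
      offenders.push(campaign.getName() +
          ': tCPA $' + tcpa + ' > budget $' + budget);
    }
  }
  if (offenders.length > 0) {
    MailApp.sendEmail(
        'you@example.com',  // placeholder address
        'tCPA exceeds budget in ' + offenders.length + ' campaign(s)',
        offenders.join('\n'));
  }
}
```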

Not perfect

My next AI task was a script to identify all active ad groups in live campaigns with no impressions in the past 21 days. The alert would prompt me to investigate the cause.

Claude’s script ran, but it had two errors:

  • It defaulted to any period, not 21 days.
  • It included custom experiments that had ended 21 or more days earlier. Experiments run only for a designated period, but the script treated them as normal campaigns.

I shared these errors and received replacement scripts, but each new version failed on invalid inputs. I ended the session without the script I wanted.

It’s a handy reminder that AI is not always as productive as a human. A developer could likely have built this script without the extended back-and-forth.

Claude’s script for tCPA conflicts worked. The no-impressions script did not.
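
For anyone who wants to try it, here’s a rough, untested sketch of what such a check might look like. The date handling and selector conditions are assumptions, and it deliberately leaves the experiment edge case unsolved:

```javascript
// Hypothetical sketch: enabled ad groups in enabled campaigns with zero
// impressions over the last 21 days. Note it does NOT filter out campaign
// experiments, the edge case described above.
function main() {
  var tz = AdsApp.currentAccount().getTimeZone();
  var now = new Date();
  var start = new Date(now.getTime() - 21 * 24 * 60 * 60 * 1000);
  var fmt = function(d) { return Utilities.formatDate(d, tz, 'yyyyMMdd'); };

  var quiet = [];
  var adGroups = AdsApp.adGroups()
      .withCondition('Status = ENABLED')
      .withCondition('CampaignStatus = ENABLED')
      .get();
  while (adGroups.hasNext()) {
    var adGroup = adGroups.next();
    // Stats over a custom yyyyMMdd date range.
    var stats = adGroup.getStatsFor(fmt(start), fmt(now));
    if (stats.getImpressions() === 0) {
      quiet.push(adGroup.getCampaign().getName() + ' > ' + adGroup.getName());
    }
  }
  if (quiet.length > 0) {
    MailApp.sendEmail(
        'you@example.com',  // placeholder address
        quiet.length + ' ad group(s) with no impressions in 21 days',
        quiet.join('\n'));
  }
}
```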

Monitor the unexpected

The final script I created addressed rejected ads. Advertisers should monitor not just rejected ads but also rejected keywords and audiences, which occur occasionally.

Any script in Google Ads times out after 30 to 60 minutes. I first prompted Claude for a script to identify disapproved ad assets (sitelinks, callouts, and more). Fearing a timeout, I opted for two separate scripts: one for assets, another for everything else.

Dialog showing the prompt and Claude’s response. The follow-up script and response focused on “limited” statuses.

Here’s the prompt for the second script, which covers non-asset disapprovals.

Please update this script so it only looks at disapproved ads, keywords, and audiences. Entities with “limited” status should not be included.

And Claude’s response:

I’ll update the script to only include items with “DISAPPROVED” status and exclude those with “LIMITED” or any other non-disapproved status. Here’s the modified version:

Google Ads Script for Disapproved Items Only.

The key changes I made to the script:

1. Changed the condition checks for ads from:

```javascript
if (ad.getPolicyApprovalStatus() != 'APPROVED') {
```

to:

```javascript
if (ad.getPolicyApprovalStatus() == 'DISAPPROVED') {
```

Download the script.
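
For context, here’s a minimal, hypothetical sketch of the disapproved-ads half of that check. Keyword and audience checks are omitted because their status accessors differ, and the email address is a placeholder:

```javascript
// Minimal, hypothetical sketch of the disapproved-ads portion only.
function main() {
  var disapproved = [];
  var ads = AdsApp.ads()
      .withCondition('Status = ENABLED')
      .withCondition('CampaignStatus = ENABLED')
      .get();
  while (ads.hasNext()) {
    var ad = ads.next();
    // The key condition from the revised script: DISAPPROVED only, so
    // "limited" and other statuses are ignored.
    if (ad.getPolicyApprovalStatus() == 'DISAPPROVED') {
      disapproved.push(ad.getAdGroup().getName() + ' / ad ID ' + ad.getId());
    }
  }
  if (disapproved.length > 0) {
    MailApp.sendEmail(
        'you@example.com',  // placeholder address
        disapproved.length + ' disapproved ad(s) found',
        disapproved.join('\n'));
  }
}
```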

No limits

My example scripts above alert advertisers via email, but I could have prompted Claude to output the results to a spreadsheet instead, or to include paused entities. The possibilities are seemingly limitless.

U.S. DOJ Antitrust Filing Proposes 4 Ways To Break Google’s Monopoly

The plaintiffs in an antitrust lawsuit against Google have filed a revised proposed final judgment for the judge in the case to consider. The proposal follows an earlier ruling in which the court determined that Google broke antitrust laws by illegally maintaining its monopoly.

The legal filing by the plaintiffs, the United States Department of Justice and state attorneys general, argues that Google has maintained monopolies in search services and text advertising through anticompetitive practices.

The filing proposes four ways to loosen Google’s monopolistic hold on search and advertising.

  1. Requiring Google to separate Chrome from its business—this could mean selling it or spinning it off into an independent company.
  2. Limiting Google’s payments to companies like Apple for making Google the default search engine, reducing its ability to secure exclusive deals.
  3. Stopping Google from favoring its own products over competitors in search results and other services, ensuring a more level playing field.
  4. Increasing transparency in Google’s advertising and data practices so competitors have fairer access to key information.

The proposal asks that Google be subjected to continuous oversight through mandatory reporting to ensure transparency in Google’s advertising and data practices:

“Google must provide to the Technical Committee and Plaintiffs a monthly report outlining any changes to its search text ads auction and its public disclosure of those changes.”

It also suggests ongoing enforcement to guarantee that Google doesn’t impose new restrictions that undermine transparency requirements:

“Google must not limit the ability of advertisers to export in real time (by downloading through an interface or API access) data or information relating to their entire portfolio of ads or advertising campaigns bid on, placed through, or purchased through Google.”

The goal of the above section is to increase transparency in Google’s advertising system and make it easier for advertisers to analyze their ad performance.

Real-time access ensures advertisers can make immediate adjustments to their campaigns instead of waiting for delayed reports. It also assures that advertisers aren’t locked into Google’s advertising system by being held hostage to their historical data.

The legal filing proposes government-imposed restrictions and changes to Google’s advertising business practices: remedies for how Google should be regulated or restructured following the court’s earlier ruling that Google engaged in monopolistic practices. However, this is not the final judgment; the court must still decide whether to adopt, modify, or reject these proposed remedies.