Google has updated its guidelines on faceted navigation, turning what began as a 2014 blog post into an official help document.
This change reflects the complexity of ecommerce and content-heavy websites, as many sites adopt advanced filtering systems for larger catalogs.
Faceted Navigation Issues
Ever used filters on an ecommerce site to narrow down products by size, color, and price?
That’s faceted navigation – the system allowing users to refine search results using multiple filters simultaneously.
While this feature is vital for users, it can create challenges for search engines, prompting Google to release new official documentation on managing these systems.
Modern Challenges
The challenge with faceted navigation lies in the mathematics of combinations: each additional filter option multiplies the potential URLs a search engine might need to crawl.
For example, a simple product page with options for size (5 choices), color (10 choices), and price range (6 ranges) could generate 300 unique URLs – for just one product.
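For illustration, here is a minimal Python sketch of that arithmetic, using the hypothetical facet counts from the example above:

```python
from math import prod

# Hypothetical facet dimensions for one product page: the number of
# selectable values per filter, mirroring the example above.
facets = {"size": 5, "color": 10, "price_range": 6}

# Each combination of one value per facet can produce a distinct URL.
unique_urls = prod(facets.values())  # 5 * 10 * 6 = 300
print(f"{unique_urls} crawlable URLs for a single product")
```

Adding a single new facet with eight values would multiply that figure again, to 2,400 URLs.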
According to Google Analyst Gary Illyes, this multiplication effect makes faceted navigation the leading cause of overcrawling issues reported by website owners.
The impact includes:
Wasting Server Resources: Many websites use too much computing power on unnecessary URL combinations.
Inefficient Crawl Budget: Crawlers may take longer to find important new content because they are busy with faceted navigation.
Weakening SEO Performance: Having several URLs for the same content can hurt a website’s SEO.
Google has updated its Generative AI Prohibited Use Policy to clarify the proper use of its generative AI products and services.
The update simplifies the language and lists prohibited behaviors with examples of unacceptable conduct.
Key Updates To Policy
The updated policy clarifies existing rules without adding new restrictions.
It specifically bans using Google’s AI tools to create or share non-consensual intimate images or to conduct security breaches through phishing or malware.
The policy states:
“We expect you to engage with [generative AI models] in a responsible, legal, and safe manner.”
Prohibited activities include dangerous, illegal, sexually explicit, violent, hateful, or deceptive actions, as well as content related to child exploitation, violent extremism, self-harm, harassment, and misinformation.
Prohibited Activities
The policy prohibits using Google’s generative AI for an expansive range of dangerous, illegal, and unethical activities:
Illegal Activities: Engaging in or facilitating child exploitation, violent extremism, terrorism, non-consensual intimate imagery, self-harm, or other illegal activities.
Security Violations: Compromising security through phishing, malware, spam, infrastructure abuse, or circumventing safety protections.
Explicit and Harmful Content: Generating sexually explicit content, hate speech, harassment, violence incitement, or other abusive content.
Deception and Misinformation: Impersonation without disclosure, misleading claims of expertise, misrepresenting content provenance, or spreading misinformation related to health, governance, and democratic processes.
Exceptions Allowed
New language in the policy carves out exceptions for some restricted activities in particular contexts.
Educational, documentary, scientific, artistic, and journalistic uses may be permitted, as well as other cases “where harms are outweighed by substantial benefits to the public.”
Why This Matters
The policy update addresses the rapid advancement of generative AI technologies that create realistic text, images, audio, and video.
This progress raises concerns about ethics, misuse, and societal impact.
Looking Ahead
Google’s updated policy is now in effect, and the old and new versions are publicly available.
Leading AI companies like OpenAI and Microsoft have released their own usage rules. However, awareness of these policies and consistent enforcement remain areas for improvement.
As generative AI becomes more common, creating clear usage guidelines is essential to ensure responsible practices and reduce harm.
Former Google CEO Eric Schmidt said that the trajectory of AI is both “enticing” and “frightening.” He emphasized that AI is not just an evolution of technology, it’s about shaping the future of humanity. His comments reflect how the highest levels of technology leaders think about AI and carry implications for how this will play out for SEO.
Tech Companies Shouldn’t Be Making The Decisions
Asked if the decisions about the future of technology should be left to people like him, Eric Schmidt responded no. He cited Henry Kissinger, who said ten years ago that people like Schmidt should not be making those decisions, and used the example of social media to explain why.
“Let’s look at social media. We’ve now arrived at a situation where we have these huge companies in which I was part of. And they all have this huge positive implication for entertainment and culture, but they have significant negative implications in terms of tribalism, misinformation, individual harm, especially against young people, and especially against young women.
None of us foresaw that. Maybe if we’d had some non-technical people doing this with us, we would have foreseen the impact on society. I don’t want us to make that mistake again with a much more powerful tool.”
AI Is Both Frightening & Enticing
Eric Schmidt has been an active participant in the development of computer technology from 1975 to the present. The awe he expresses for this moment is something everyone at every level of search marketing, from publishing and SEO to advertising and ecommerce, should be aware of. The precipice we find ourselves at should not be underestimated; at this point it barely seems possible to overestimate it.
Given that Sundar Pichai, Google’s current CEO, has stated that search will change in profound ways in 2025, and given the revelation that Google Gemini 2.0 will play a role in powering AI search, Schmidt’s declarations about the mind-boggling scale of computing capabilities should be of high importance to search marketers, both for the enticing capabilities they offer and for the frightening realities of what Google will be doing.
Schmidt observed:
“There are two really big things happening right now in our industry. One is the development of what are called agents, where agents can do something. So you can say I want to build a house so you find the architect, go through the land use, buy the houses. Can all be done by computer not just by humans.
And then the other thing is the ability for the computer to write code. So if I say to you I wanted sort of study the audience for this show and I want you to figure out how to make a variant of my show for each and every person who’s watching it. The computer can do that. That’s how powerful the programming capabilities of AI are.
In my case, I’ve managed programmers my whole life and they typically don’t do what I want. You know, they do whatever they want.
With a computer, it’ll do exactly what you say. And the gains in computer programming from the AI systems are frightening, they’re both enticing because they will change the slope. Right now, the slope of AI is like this…”
Screenshot Of Schmidt Illustrating The Slope Of AI
He continued his answer:
“…and when you have AI scientists, that is computers developing AI, the slope will go this… it will go wham! But that development puts an awful lot of power in the hands of an awful lot of people.”
Screenshot Of Eric Schmidt Illustrating The Future AI Slope
Embedding The Intrinsic Goodness Of Humanity In AI
The interview ended with a question and answer around the possibility of embedding positive human values and ethical principles into AI systems during their development.
Some people complain about the ethical guardrails placed on AI, claiming the guardrails are based on political or ideological values. That complaint reflects the tension between those who feel entitled to use AI to whatever ends they desire and those who fear that AI may be used for evil purposes.
Eric Schmidt addresses this tension by saying that machines can be embedded with the best of human goodness.
The interviewer noted that Schmidt, in his book, expressed confidence that machines will reflect “the intrinsic goodness in humanity” and asked whether humanity can truly be considered inherently good, especially when some people clearly aren’t.
Schmidt acknowledged that there is a certain percentage of people who are evil. But he also expressed that in general people tend to be good and that humans can put ethical rules into AI machines.
He explained:
“The good news is the vast majority of humans on the planet are well meaning, they’re social creatures. They want themselves to do well and they want their neighbors and especially their tribe, to do well.
I see no reason to think that we can’t put those rules into the computers.
One of the tech companies started its training of its model by putting in the Constitution and the Constitution was embedded inside of the model of how you treat things.
Now, of course, we can disagree on what the Constitution is. But these systems are under our control.
There are humans who are making decisions to train them, and furthermore, the systems that you use, whether it’s ChatGPT or Gemini or Claude or what have you, have all been carefully examined after they were produced to make sure they don’t have any really horrific rough edges.
So humans are directly involved in the creation of these models, and they have a responsibility to make sure that nothing horrendous occurs as a result of them.”
That statement seems to presume that people like him shouldn’t be making the decisions alone but in consultation with outsiders, as he said at the beginning of the interview. Nevertheless, the decisions are always made by corporations.
People Mean Well But Corporations Answer To Profits
The question that wasn’t asked: given that corporations, with a few exceptions (like the outdoor clothing company Patagonia), generally aren’t motivated by “human goodness” or guided by ethics in their decisions, can they be trusted to imbue machines with human goodness?
Despite clickbait articles to the contrary, Google still publishes its “don’t be evil” motto on its Code of Conduct page; it simply moved it to the bottom of the page. Nevertheless, Google’s corporate decisions, including those about search, are strongly driven by profit.
On the issue of whether AI Search is strip mining Internet websites out of existence, Sundar Pichai, the current Google CEO, struggled to say what Google does to preserve the web ecosystem. That’s the outcome of a system that prioritizes profits.
Is that evil, or is it just the banality of a corporate system that prioritizes profit over everything else, leading to harmful outcomes? What does that say about the future of AI Search and the web ecosystem?
Screenshot of Google’s De-Prioritized Don’t Be Evil Motto
OpenAI has updated ChatGPT to make web search available to all registered users. The update also includes voice search and maps integration.
With voice search, you can ask questions about current events and local information in a natural way. This feature works in multiple languages and allows for real-time queries.
Additionally, ChatGPT’s mobile apps now include maps, which can help you find businesses and restaurants near you.
Lastly, for those using ChatGPT as their default search provider, OpenAI has improved its handling of navigational queries.
Search Available For Free
OpenAI announced that the web search feature of ChatGPT, which was previously only available to Plus subscribers, is now accessible to all logged-in users worldwide.
🌐ChatGPT search🌐 is starting to roll out to all Free users today.
This service can be accessed through chatgpt.com as well as the mobile and desktop applications.
Advanced Voice Search Integration
A key improvement with this update is advanced voice search.
This lets you find current web information through natural conversation.
The system can now handle complex questions, including travel planning and local events. It also supports multiple languages and provides real-time information.
In a video about the advanced voice mode, an OpenAI representative demonstrates how you can have natural conversations with ChatGPT to get information about events and activities.
For instance, when asked about festive activities in Zurich, Switzerland, for the week of December 23rd, 2024, ChatGPT provided details on Christmas markets, singing Christmas tree concerts, and Circus Kinelli.
The video also shows that ChatGPT can give specific information, like the days and hours of the Christkindlmarkt at Zurich’s main station.
It easily switches to answer questions about family-friendly events in New York City during the same week, mentioning the New York Botanical Garden’s Holiday Train Show and the Bank of America Winter Village at Bryant Park.
Navigational Searches
OpenAI has improved the user experience when using ChatGPT as the default search engine in web browsers.
In another video, representatives from OpenAI explained that the company has prioritized making it faster to navigate directly to websites from the browser’s address bar.
Now, by simply typing in keywords such as “Netflix” or “hotel booking sites,” users can quickly access the most relevant links without needing to sift through lengthy AI-generated responses.
When you use ChatGPT as the default search engine in your browser, we’ve made it faster to get to where you want to go on the web.
OpenAI has added maps to the ChatGPT mobile apps to help you find local restaurants and businesses.
This feature gives you up-to-date information, so you can easily search for and discuss options while you’re on the go.
We’re also adding maps to ChatGPT in our mobile apps, so you can search for and chat about local restaurants and businesses with up-to-date information.
ChatGPT’s search features – previously Plus-only – are now free for all users.
The update adds voice search and maps, plus better direct navigation to websites.
To use these tools on the web or mobile, you only need a ChatGPT account. Voice search works in multiple languages, and the maps feature helps with local searches.
Google’s Developer Advocate, Martin Splitt, warns website owners to be cautious of traffic that appears to come from Googlebot. Many requests pretending to be Googlebot are actually from third-party scrapers.
He shared this in the latest episode of Google’s SEO Made Easy series, emphasizing that “not everyone who claims to be Googlebot actually is Googlebot.”
Why does this matter?
Fake crawlers can distort analytics, consume resources, and make it difficult to assess your site’s performance accurately.
Here’s how to distinguish between legitimate Googlebot traffic and fake crawler activity.
Googlebot Verification Methods
You can distinguish real Googlebot traffic from fake crawlers by looking at overall traffic patterns rather than unusual requests.
Real Googlebot traffic tends to have consistent request frequency, timing, and behavior.
If you suspect fake Googlebot activity, Splitt advises using the following Google tools to verify it:
URL Inspection Tool (Search Console)
Confirms that Googlebot can successfully access the page when specific content appears in the rendered HTML
Provides live testing capability to verify current access status
Rich Results Test
Acts as an alternative verification method for Googlebot access
Shows how Googlebot renders the page
Can be used even without Search Console access
Crawl Stats Report
Shows detailed server response data specifically from verified Googlebot requests
Helps identify patterns in legitimate Googlebot behavior
There’s a key limitation worth noting: These tools verify what real Googlebot sees and does, but they don’t directly identify impersonators in your server logs.
To fully protect against fake Googlebots, you would need to:
Compare server logs against Google’s official IP ranges
Implement reverse DNS lookup verification (see the sketch after this list)
Use the tools above to establish baseline legitimate Googlebot behavior
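As an illustration of the reverse DNS step, here is a minimal Python sketch of the two-way lookup Google documents for verifying its crawlers. The sample IP address is from Google’s published Googlebot range and is used only as an example:

```python
import socket

def is_real_googlebot(ip: str) -> bool:
    """Verify a claimed Googlebot IP with the documented two-step check:
    reverse DNS must resolve to a googlebot.com/google.com hostname,
    and forward DNS on that hostname must return the original IP."""
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)  # reverse lookup
    except OSError:
        return False
    if not hostname.endswith((".googlebot.com", ".google.com")):
        return False
    try:
        # Forward lookup must map the hostname back to the same IP.
        return ip in socket.gethostbyname_ex(hostname)[2]
    except OSError:
        return False

# Example: an address from Google's published Googlebot range.
print(is_real_googlebot("66.249.66.1"))  # expected: True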
Monitoring Server Responses
Splitt also stressed the importance of monitoring server responses to crawl requests, particularly:
500-series errors
Fetch errors
Timeouts
DNS problems
These issues can significantly impact crawling efficiency and search visibility for larger websites hosting millions of pages.
Splitt says:
“Pay attention to the responses your server gave to Googlebot, especially a high number of 500 responses, fetch errors, timeouts, DNS problems, and other things.”
He noted that while some errors are transient, with persistent issues you “might want to investigate further.”
Splitt suggested using server log analysis to make a more sophisticated diagnosis, though he acknowledged that it’s “not a basic thing to do.”
However, he emphasized its value, noting that “looking at your web server logs… is a powerful way to get a better understanding of what’s happening on your server.”
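As a first pass, such an analysis can be as simple as tallying the status codes your server returned to requests that claim to be Googlebot. A minimal Python sketch, assuming a hypothetical access log path in the common combined format (adjust the path and pattern to your server):

```python
import re
from collections import Counter

# Hypothetical log location; the regex matches the common/combined
# access log format (client IP ... timestamp ... request ... status).
LOG_PATH = "/var/log/nginx/access.log"
LINE_RE = re.compile(r'^(?P<ip>\S+) \S+ \S+ \[[^\]]+\] "[^"]*" (?P<status>\d{3})')

statuses = Counter()
with open(LOG_PATH) as log:
    for line in log:
        match = LINE_RE.match(line)
        # Tally only requests whose user-agent string claims Googlebot.
        if match and "Googlebot" in line:
            statuses[match.group("status")] += 1

total = sum(statuses.values())
errors = sum(n for code, n in statuses.items() if code.startswith("5"))
if total:
    print(f"{errors} of {total} Googlebot-claimed requests returned 5xx "
          f"({errors / total:.1%})")
```

A persistently high 5xx share for these requests is exactly the signal Splitt says warrants further investigation.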
Potential Impact
Beyond security, fake Googlebot traffic can impact website performance and SEO efforts.
Splitt emphasized that website accessibility in a browser doesn’t guarantee Googlebot access, citing various potential barriers, including:
Robots.txt restrictions
Firewall configurations
Bot protection systems
Network routing issues
Looking Ahead
Fake Googlebot traffic can be annoying, but Splitt says you shouldn’t worry too much about rare cases.
If fake crawler activity becomes a problem or consumes too much server capacity, you can take steps like limiting the rate of requests, blocking specific IP addresses, or using better bot detection methods.
WP Engine regained control of their Advanced Custom Fields plugin and login access to WordPress.org. Matt Mullenweg responded by expressing that he is “disgusted and sickened.”
Mullenweg tweeted about how he felt about how things turned out:
“I’m disgusted and sickened by being legally forced to provide free labor and services to @wpengine, a dangerous precedent that should chill every open source maintainer. While I disagree with the court’s decision, I’ve fully complied with its order. You can see most changes on the site. They have access to ACF slug but haven’t changed it… must not have been the emergency they claimed.”
One person responded:
“I’m disgusted and sickened that you released software as GPL, made it intimately dependent on a private website+APIs you personally own and then you’re shocked when you learn you can’t discriminate against users”
Another accused Mullenweg of tricking the WordPress community:
“And what about all of the free labor that you, @photomatt , tricked the WordPress community into providing to your personal .org website that the community believed was owned by the Foundation?”
Despite the compliance, Mullenweg pointed out that WP Engine had yet to change the plugin slug, questioning their claim of urgency. The ACF team subsequently reclaimed the plugin slug and tweeted an announcement about it.
On December 13, 2024, WP Engine’s official Advanced Custom Fields account confirmed on X (formerly Twitter) that they had regained access. The WordPress.org plugin directory now displays the original ACF plugin instead of Mullenweg’s forked version, Secure Custom Fields.
“We’re pleased to share that our team has had account access restored on WordPress dot org along with control of the ACF plugin repo. This means all ACF users can rest assured that the ACF team you trust is once again maintaining the plugin. There’s no action required if you have installed ACF directly from the ACF website or you are an ACF PRO user.”
Members of the WordPress community congratulated WP Engine.
Matt Mullenweg claims that WP Engine does not contribute enough to the WordPress ecosystem. He has also raised concerns about WP Engine’s use of the word “WordPress” and has written about his years-long attempt to get WP Engine to pay a “fair share” back into the WordPress open source project. On September 20, 2024, Matt Mullenweg publicly denounced WP Engine at the United States WordCamp conference after WP Engine declined to agree to his demands for $30 million.
WP Engine sued Automattic and Matt Mullenweg in federal court, obtaining a preliminary injunction that required them to restore WP Engine’s access to WordPress.org, the plugin repository, and logins, and to remove a WP Engine customer list from a website Mullenweg created to encourage customers to leave WP Engine.
Mullenweg’s History Of Disputes
There is some history of Mullenweg engaging in disputes related to GPL licensing of code and trademarks. In 2010 Mullenweg rightfully challenged Chris Pearson and his theme company Thesis over software licensing. Chris Pearson himself has acknowledged that he was ignorant at the time about software licensing.
Mullenweg escalated his dispute with Pearson by offering Thesis customers any premium theme of their choice in exchange for abandoning their use of the Thesis theme. These disputes caused Pearson to lose a significant amount of business and gain a negative reputation in the WordPress community, which he described in a blog post:
“…I was woefully ignorant about software licensing, and I felt as though I was being backed into a corner and asked to accept something I didn’t fully understand. Instead of handling it in a measured, polite manner, I was a jerk.
I made a mistake, and I paid dearly for it. The WordPress community’s reaction towards me was incredibly negative, but on top of that, Matt did whatever he could to further damage what was left of my business. His most blatant effort in this regard was making a public offer to buy Thesis customers the premium, GPL-licensed Theme of their choice if they quit using Thesis.”
Three years later, Mullenweg purchased the Thesis.com domain name, which began another dispute with Pearson that Mullenweg also won. His motivation for going after the Thesis.com domain name was never fully acknowledged, but the WordPress community largely understood it as “retribution” against Pearson.
The comments in a WP Tavern report about Automattic were largely negative, with one person’s comment representative of the negative sentiment:
“I don’t think anyone is saying what Automattic did was illegal, they’re saying it was unethical.
It’s possible to be a jerk without breaking the law, but that doesn’t make it acceptable behavior.”
In 2016 Matt Mullenweg initiated a dispute with Wix in relation to GPL licensing. Wix’s CEO responded with his own blog post showing how Wix had contributed over 224 open source projects, writing:
“Yes, we did use the WordPress open source library for a minor part of the application (that is the concept of open source right?), and everything we improved there or modified, we submitted back as open source, see here in this link – you should check it out, pretty cool way of using it on mobile native. I really think you guys can use it with your app (and it is open source, so you are welcome to use it for free). And, by the way, the part that we used was in fact developed by another and modified by you.”
The court’s ruling emphasizes the importance of adherence to legal agreements within the WordPress ecosystem. WP Engine’s victory may bolster its chances of prevailing in the ongoing federal lawsuit. Automattic’s response to the loss signals its intention to challenge the outcome during a full trial, stating:
“We look forward to prevailing at trial as we continue to protect the open-source ecosystem during full-fact discovery and a full review of the merits.”
Matt Mullenweg continues to provoke WP Engine, only this time using humor. Automattic removed a checkbox from the WordPress.org login page that previously required users to affirm that they are not associated with WP Engine. Today there’s a checkbox asking users to affirm that pineapple on pizza is delicious.
Google announced:
“The Dec. 2024 core update is rolling out, and we expect it will complete in two weeks.
If you’re wondering why there’s a core update this month after one last month, we have different core systems we’re always improving. This past blog post explains more,”
Google’s post included a link to a blog post from November 2023 titled “A Q&A on Google Search updates.”
The blog post provides context around the company’s cadence of algorithm updates.
Multiple Ranking Systems
According to the announcement, Google uses “multiple ranking systems that do different things” and is “always looking at ways to improve these systems to show better results.”
The company said it generally shares information about “notable” updates that it thinks might produce noticeable changes in search results.
Regarding the proximity of the November and December updates, Google explained that while it tries to separate notable updates, “it’s not always possible” given the large number of updates the company implements overall. The post stated:
“If we have updates that can improve Search, that have been developed over the course of several months, we release them when they’re ready.”
Advice For Websites
As with previous core updates, the December update’s specific changes are unknown. However, Google has consistently advised that the best way for creators to succeed through these updates is to remain focused on creating helpful, reliable, people-first content.
Site owners who notice changes in traffic following an update are advised to look closely at Google’s update-specific guidance, which can be found via the Google Search Status Dashboard. The dashboard also allows users to check the status of an update rollout and subscribe to an RSS feed for alerts.
Wrapping Up A Year Of Algorithm Updates
The December core update caps off a busy year of algorithm changes for Google Search.
We will closely watch traffic patterns and search rankings to assess the impact as the December update rolls out over the coming weeks.
Search Engine Journal will continue to monitor the situation and provide updates as they become available.
Automattic removed a spreadsheet containing the domain names of WP Engine customers from the WP Engine Tracker website. The removal is in response to a preliminary injunction granted to WP Engine, ordering Automattic and Matt Mullenweg to remove the spreadsheet within 72 hours.
The preliminary injunction was warmly received on X (formerly Twitter), with a tweet by Joe Youngblood representative of the general sentiment:
“The ruling was a gigantic win for small businesses and entrepreneurs that rely on open source keeping it’s promises. That includes allowing webhosts to host and not stealing code repositories.
I am hopeful the full outcome of this looks much the same.”
Another user wrote:
“Unbiased parties watching on the sidelines think the court got it right. This was obvious from day one.
Next step for you guys is to try to settle out of court to prevent further embarrassment and reduce potential risk in damages.”
Mullenweg’s Dispute With WP Engine
Matt Mullenweg began an attack against WP Engine on September 20, 2024, after WP Engine declined to pay tens of millions of dollars – what WP Engine’s attorneys called “extortionate monetary demands” in a cease and desist letter sent to Automattic’s Chief Legal Officer on September 23rd.
On November 6th, Automattic intensified the pressure on WP Engine by launching a website called WP Engine Tracker, which offered a list of WP Engine customers that other web hosts could use to solicit those customers with offers to leave WP Engine.
Solicitations of WP Engine customers apparently followed, as related by a Redditor in a discussion about the WP Engine Tracker website:
“I was out of the office for some medical procedures, so I missed the WPE Tracker thing. However, this explains why I’ve received unsolicited hosting calls from certain operations. Clearly, someone is mining it to solicit business. Absolutely aggravating and also completely expected.
All this does is further entrench me on WP Engine. Good work, Matt, you dweeb.”
The WP Engine Tracker website became evidence of the harm Mullenweg was causing to WP Engine and was cited in the request for a preliminary injunction.
The judge sided with WP Engine and granted the preliminary injunction, requiring among many other things that Automattic and Mullenweg take down the list of WP Engine customers.
The court order states:
“Within 72 hours, Defendants are ORDERED to:
…(a) remove the purported list of WPEngine customers contained in the “domains.csv” file linked to Defendants’ wordpressenginetracker.com website (which was launched on or about November 7, 2024) and stored in the associated GitHub repository located at https://github.com/wordpressenginetracker/wordpressenginetracker.github.io.”
The CSV file was subsequently removed, although the site still displayed a link to the now non-existent file, with a count showing zero:
Screenshot Of WP Engine Tracker Website
Clicking the link leads to a 404 error response message.
Screenshot Of 404 Error Response For CSV Download
A pull request on GitHub shows that a request was made to remove the CSV file on December 11th.
“Remove CTA to download list of sites #29
wordpressenginetracker commented: This PR removes the text and download link to download the list of sites that have are still using WPE”
Screenshot Of GitHub Pull Request
Advanced Custom Fields Plugin
Automattic removed WP Engine’s Advanced Custom Fields (ACF) plugin from the official WordPress.org plugin repository and replaced it with Automattic’s cloned version, renamed as Secure Custom Fields (SCF).
The preliminary injunction orders Automattic to also restore access to the Advanced Custom Fields (ACF) plugin repository:
“Within 72 hours, Defendants are ORDERED to:
…(v) returning and restoring WPEngine’s access to and control of its Advanced Custom Fields (“ACF”) plugin directory listing at https://wordpress.org/plugins/advanced-customfields, as it existed as of September 20, 2024.”
The cloned SCF plugin currently exists at that URL, although Automattic still has time to take it down.
Screenshot Of SCF Plugin In The ACF Directory Listing
Google CEO Sundar Pichai announced the update, stating:
“Our AI Overviews now reach 1 billion people, enabling them to ask entirely new types of questions — quickly becoming one of our most popular Search features ever.”
With Gemini 2.0, AI Overviews will soon handle complex topics and multi-step questions, including advanced math, multimodal queries, and coding.
Pichai explained:
“We’re bringing the advanced reasoning capabilities of Gemini 2.0 to AI Overviews to tackle more complex topics and multi-step questions, including advanced math equations, multimodal queries and coding.”
Google is testing these updates and plans to roll out the improved AI Overviews in early 2025, expanding to more countries and languages within the next year.
Gemini 2.0
Gemini 2.0, particularly the Gemini 2.0 Flash model, is key to the recent Search updates.
As described by Google DeepMind’s leadership:
“2.0 Flash even outperforms 1.5 Pro on key benchmarks, at twice the speed.”
This model improves performance and can handle different types of inputs and outputs.
The announcement states:
“In addition to supporting multimodal inputs like images, video and audio, 2.0 Flash now supports multimodal output like natively generated images mixed with text and steerable text-to-speech (TTS) multilingual audio.”
Additionally, Gemini 2.0 Flash can use tools like Google Search and run code to access user-defined functions from other sources.
New Possibilities For Search
Google is developing new features for Search, including Project Mariner, which aims to improve user interaction with agents in web browsers.
The company describes it as:
“… an early research prototype built with Gemini 2.0 that explores the future of human-agent interaction, starting with your browser.”
Looking Ahead
Integrating Gemini 2.0 into Google Search could be a key step in improving users’ experience with AI Overviews.
The success of these updates will depend on how well Google implements them while maintaining safety and responsibility.
As the updates roll out, we will see how users respond and whether these changes enhance the search experience.