Google AI Overviews Now Powered By Gemini 3 via @sejournal, @MattGSouthern

Google is making Gemini 3 the default model for AI Overviews in markets where the feature is available and adding a direct path into AI Mode conversations.

The updates, shared in a Google blog post, bring Gemini 3’s reasoning capabilities to AI Overviews. Google says the feature now reaches over one billion users.

What’s New

Gemini 3 For AI Overviews

The Gemini 3 upgrade brings the same reasoning capabilities to AI Overviews that previously powered AI Mode.

Robby Stein, VP of Product for Google Search, wrote:

“We’re rolling out Gemini 3 as the default model for AI Overviews globally, so even more people will be able to access best-in-class AI responses, directly in the results page for questions where it’s helpful.”

Gemini 3 launched in November, and Google shipped it to AI Mode on release day. This expands Gemini 3 from AI Mode into AI Overviews as the default.

AI Overview To AI Mode Transition

You can now ask a follow-up question right from an AI Overview and continue into AI Mode. The context from the original response carries into the conversation, so you don’t start over.

Stein described the thinking behind the change:

“People come to Search for an incredibly wide range of questions – sometimes to find information quickly, like a sports score or the weather, where a simple result is all you need. But for complex questions or tasks where you need to explore a topic deeply, you should be able to seamlessly tap into a powerful conversational AI experience.”

He called the result “one fluid experience with prominent links to continue exploring.”

An earlier test of this flow ran globally on mobile back in December.

In testing, Google found people prefer this kind of natural flow into conversation. The company also found that keeping AI Overview context in follow-ups makes Search more helpful.

Why This Matters

The pattern has held since AI Overviews launched. Each update makes it easier to stay within AI-powered responses.

When Gemini 3 arrived in AI Mode, it brought deeper query fan-out and dynamic response layouts. AI Overviews running on the same model could produce different citation patterns.

That makes today’s update an important one to monitor. Model changes can affect which pages get cited and how responses are structured.

Looking Ahead

Google says the updates are rolling out starting today, though availability may vary by market.

Google previously indicated plans to add automatic model selection that routes complex questions to Gemini 3 while using faster models for simpler tasks. Whether that affects AI Overviews beyond today’s default model change isn’t specified.


Featured Image: Darshika Maduranga/Shutterstock

WP Go Maps Plugin Vulnerability Affects Up To 300K WordPress Sites via @sejournal, @martinibuster

A security advisory was published about a vulnerability affecting the WP Go Maps plugin for WordPress, which is installed on over 300,000 websites. The flaw enables authenticated subscribers to modify map engine settings.

WP Go Maps Plugin

The WP Go Maps plugin is used by local business WordPress sites to display customizable maps on pages and posts, including contact page maps, delivery areas, and store locations. Site owners can manage map markers and map settings without writing code.

The plugin had four vulnerabilities in 2025 and seven in 2024. Vulnerabilities were also discovered in earlier years, stretching back to 2019, though less frequently.

Vulnerability

The vulnerability can be exploited by authenticated attackers with Subscriber-level access or higher. The Subscriber role is the lowest WordPress permission level. This means an attacker needs only a basic user account to exploit the issue, though only on sites that offer that account level to users.

The vulnerability is caused by a missing capability check in the plugin’s processBackgroundAction() function. A capability check is used to verify whether a logged-in user is allowed to perform a specific action. Because this check is missing, the function processes requests from users who do not have permission to change plugin settings.
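
To make the missing check concrete, here is a minimal sketch of the pattern in general terms. This is hypothetical illustration code, not WP Go Maps' actual source; the action name and option name are invented for the example.

```php
<?php
// Hypothetical sketch of the vulnerable pattern (not WP Go Maps' actual code).
// An AJAX handler that changes plugin settings without verifying permissions:
add_action( 'wp_ajax_example_update_map_engine', function () {
	// VULNERABLE: any logged-in user, including a Subscriber, reaches this point.
	update_option( 'example_map_engine', sanitize_text_field( $_POST['engine'] ?? '' ) );
	wp_send_json_success();
} );

// The expected pattern verifies the request and the user's capability first:
add_action( 'wp_ajax_example_update_map_engine_safe', function () {
	check_ajax_referer( 'example_map_settings' );      // reject forged requests
	if ( ! current_user_can( 'manage_options' ) ) {    // administrators only
		wp_send_json_error( 'Insufficient permissions', 403 );
	}
	update_option( 'example_map_engine', sanitize_text_field( $_POST['engine'] ?? '' ) );
	wp_send_json_success();
} );
```

In WordPress, current_user_can() is the standard capability check; when a handler omits it, the request is processed for any authenticated user regardless of role.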

As a result, authenticated attackers with Subscriber-level access can modify global map engine settings used by the plugin. These settings apply site-wide and affect how the plugin functions across the website.

Wordfence described the vulnerability as an unauthorized modification of data caused by a missing capability check. In practice, this means the plugin allows low-privileged users to change global settings that should be restricted to administrators.

The Wordfence advisory explains:

“The WP Go Maps (formerly WP Google Maps) plugin for WordPress is vulnerable to unauthorized modification of data due to a missing capability check on the processBackgroundAction() function in all versions up to, and including, 10.0.04. This makes it possible for authenticated attackers, with Subscriber-level access and above, to modify global map engine settings”

Any site running an affected version of the plugin with subscriber-level registration enabled is exposed to authenticated attackers.

The vulnerability affects all versions of WP Go Maps up to and including version 10.0.04. A patch is available. Site owners should update the WP Go Maps plugin to version 10.0.05 or newer to fix the vulnerability.

Featured Image by Shutterstock/Dean Drobot

Sam Altman Says OpenAI “Screwed Up” GPT-5.2 Writing Quality via @sejournal, @MattGSouthern

Sam Altman said OpenAI “screwed up” GPT-5.2’s writing quality during a developer town hall Monday evening.

When asked about user feedback that GPT-5.2 produces writing that’s “unwieldy” and “hard to read” compared to GPT-4.5, Altman was blunt.

He said:

“I think we just screwed that up. We will make future versions of GPT 5.x hopefully much better at writing than 4.5 was.”

Altman explained that OpenAI made a deliberate choice to focus GPT-5.2’s development on technical capabilities:

“We did decide, and I think for good reason, to put most of our effort in 5.2 into making it super good at intelligence, reasoning, coding, engineering, that kind of thing. And we have limited bandwidth here, and sometimes we focus on one thing and neglect another.”

How OpenAI Positioned Each Model

The contrast between GPT-4.5 and GPT-5.2 shows where OpenAI focused its resources.

When OpenAI introduced GPT-4.5 in February 2025, the company emphasized natural interaction and writing. OpenAI said interacting with GPT-4.5 “feels more natural” and called it “useful for tasks like improving writing.”

GPT-5.2’s announcement took a different direction. OpenAI positioned it as the most capable model series yet for professional knowledge work, with improvements in creating spreadsheets, building presentations, writing code, and handling complex, multi-step projects.

The release post spotlights spreadsheets, presentations, tool use, and coding. Writing appears more briefly, with technical writing noted as an improvement for GPT-5.2 Instant. But Altman’s comments suggest the overall writing experience still fell short for users comparing it to GPT-4.5.

Why This Matters

We’ve covered the iterative changes to ChatGPT since GPT-5 launched in August, including updates to warmth and tone and the GPT-5.1 instruction-following improvements. OpenAI regularly adjusts model behavior based on user feedback, and regressions in one area while improving another aren’t new.

What’s unusual is hearing Altman acknowledge a tradeoff this directly. For anyone using ChatGPT output in client-facing work, drafts, or polished writing, this explains why outputs may have changed. Model upgrades don’t guarantee improvement across every capability.

If you rely on ChatGPT for writing, treat model updates like any other dependency change. Re-test your prompts when defaults change, and keep a fallback if output quality matters for your workflow.

Looking Ahead

Altman said he believes “the future is mostly going to be about very good general purpose models” and that even coding-focused models should “write well, too.”

No timeline was given for when GPT-5.x writing improvements will ship. OpenAI typically iterates on model behavior through point releases, so changes could arrive gradually rather than in a single update.

Hear Altman’s full statement in the video below:


Featured Image: FotoField/Shutterstock

Why Google Gemini Has No Ads Yet: ‘Trust In Your Assistant’ via @sejournal, @MattGSouthern

Google DeepMind CEO Demis Hassabis said Google doesn’t have any current plans to introduce advertising into its Gemini AI assistant, citing unresolved questions about user trust.

Speaking at the World Economic Forum in Davos, Hassabis said AI assistants represent a different product than search. He believes Gemini should be built for users first.

“In the realm of assistants, if you think of the chatbot as an assistant that’s meant to be helpful and ideally in my mind, as they become more powerful, the kind of technology that works for you as the individual,” Hassabis said in an interview with Axios. “That’s what I’d like to see with these systems.”

He said no one in the industry has figured out how advertising fits into that model.

“There is a question about how does ads fit into that model, where you want to have trust in your assistant,” Hassabis said. “I think no one’s really got a full answer to that yet.”

When asked directly about Google’s plans, Hassabis said: “We don’t have any current plans to do it ourselves.”

What Hassabis Said About OpenAI

The comments came days after OpenAI said it plans to begin testing ads in ChatGPT in the coming weeks for logged-in adults in the U.S. on free and Go tiers.

Hassabis said he was “a little bit surprised they’ve moved so early into that.”

He acknowledged advertising has funded much of the consumer internet and can be useful to users when done well. But he warned that poor execution in AI assistants could damage user relationships.

“I think it can be done right, but it can also be done in a way that’s not good,” Hassabis said. “In the end, what we want to do is be the most useful we can be to our users.”

Search Is Different

Hassabis drew a line between AI assistants and search when discussing advertising.

When asked whether his comments applied to Google Search, where the company already shows ads in AI Overviews, he said the two products work differently.

“But there it’s completely different use case because you’ve already just like how it’s always worked with search, you’ve already, you know, we know what your intent is basically and so we can be helpful there,” Hassabis said. “That’s a very different construct.”

Google began rolling out ads in AI Overviews in October 2024 and has continued expanding them since. The company claims AI Overviews generate ad revenue equal to traditional search results.

Why This Matters

This is the second time in two months that a Google executive has said Gemini ads aren’t currently planned.

In December, Google Ads VP Dan Taylor disputed an Adweek report claiming the company had told advertisers to expect Gemini ads in 2026. Taylor called that report “inaccurate” and said Google has “no current plans” to monetize the Gemini app.

Hassabis’s comments reinforce that position but go further by explaining the reasoning. His “technology that works for you” framing suggests Google sees a tension between advertising and the assistant relationship it wants Gemini to build.

Looking Ahead

Google is comfortable expanding ads where user intent is explicit, like search queries triggering AI Overviews. The company is holding back where intent is less defined and the relationship is more personal.

How long Google maintains its current position depends in part on how users respond to advertising in rival assistants.


Featured Image: Screenshot from: youtube.com/@axios, January 2026. 

Google’s New User Intent Extraction Method via @sejournal, @martinibuster

Google published a research paper on how to extract user intent from user interactions so that it can be used by autonomous agents. The method uses small on-device models that do not need to send data back to Google, which protects user privacy.

The researchers discovered they could solve the problem by splitting it into two tasks. Their solution worked well enough to beat the baseline performance of multimodal large language models (MLLMs) running in massive data centers.

Smaller Models On Browsers And Devices

The research focuses on identifying user intent from the series of actions a user takes on a mobile device or in a browser, while keeping that information on the device so that nothing is sent back to Google. That means the processing must happen on the device.

They accomplished this in two stages.

  1. In the first stage, a model on the device summarizes what the user was doing.
  2. The sequence of summaries is then sent to a second model that identifies the user intent.

The researchers explained:

“…our two-stage approach demonstrates superior performance compared to both smaller models and a state-of-the-art large MLLM, independent of dataset and model type.
Our approach also naturally handles scenarios with noisy data that traditional supervised fine-tuning methods struggle with.”

Intent Extraction From UI Interactions

Intent extraction from screenshots and text descriptions of user interactions was proposed in 2025 using multimodal large language models (MLLMs). The researchers say they followed this approach for their problem, but with an improved prompt.

The researchers explained that extracting intent is not a trivial problem to solve and that multiple errors can occur along the way. They use the word trajectory to describe a user journey within a mobile or web application, represented as a sequence of interactions.

The user journey (trajectory) is turned into a formula where each interaction step consists of two parts:

  1. An Observation
    This is the visual state of the screen (screenshot) of where the user is at that step.
  2. An Action
    The specific action that the user performed on that screen (like clicking a button, typing text, or clicking a link).
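
Paraphrasing the paper's framing in compact notation (this formula is an illustration, not a quotation from the paper), a trajectory is a sequence of observation–action pairs:

τ = ((o₁, a₁), (o₂, a₂), …, (oₙ, aₙ))

where each oᵢ is the screenshot at step i and each aᵢ is the action the user performed on that screen. Intent extraction is the task of mapping τ to a natural-language description of the user's goal.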

They described three qualities of a good extracted intent:

  • “faithful: only describes things that actually occur in the trajectory;
  • comprehensive: provides all of the information about the user intent required to re-enact the trajectory;
  • and relevant: does not contain extraneous information beyond what is needed for comprehensiveness.”

Challenging To Evaluate Extracted Intents

The researchers explain that grading extracted intent is difficult because user intents contain complex details (like dates or transaction data) and are inherently subjective and ambiguous, which makes evaluation a hard problem. Trajectories are subjective because the underlying motivations are ambiguous.

For example, did a user choose a product because of the price or the features? The actions are visible, but the motivations are not. Previous research found that intents written by different people matched only 80% of the time on web trajectories and 76% on mobile trajectories, so a given trajectory cannot always indicate one specific intent.

Two-Stage Approach

After ruling out other methods like Chain of Thought (CoT) reasoning (because small language models struggled with the reasoning), they chose a two-stage approach that emulated Chain of Thought reasoning.

The researchers explained their two-stage approach:

“First, we use prompting to generate a summary for each interaction (consisting of a visual screenshot and textual action representation) in a trajectory. This stage is
prompt-based as there is currently no training data available with summary labels for individual interactions.

Second, we feed all of the interaction-level summaries into a second stage model to generate an overall intent description. We apply fine-tuning in the second stage…”

The First Stage: Screenshot Summary

For the first stage, the summary of each screenshot interaction is divided into two parts, with a third component added:

  1. A description of what’s on the screen.
  2. A description of the user’s action.

The third component, labeled “speculative intent,” captures the model’s guess about what the user is trying to do, and it is then discarded. Surprisingly, allowing the model to speculate and then removing that speculation leads to a higher-quality result.

The researchers cycled through multiple prompting strategies and this was the one that worked the best.

The Second Stage: Generating Overall Intent Description

For the second stage, the researchers fine-tuned a model to generate an overall intent description. They fine-tuned the model with training data made up of two parts:

  1. Summaries that represent all interactions in the trajectory
  2. The matching ground truth that describes the overall intent for each of the trajectories.

The model initially tended to hallucinate because the first part (the input summaries) is potentially incomplete, while the “target intents” are complete. That caused the model to learn to fill in the missing parts in order to make the input summaries match the target intents.

They solved this problem by “refining” the target intents by removing details that aren’t reflected in the input summaries. This trained the model to infer the intents based only on the inputs.

The researchers compared four different approaches and settled on this approach because it performed so well.

Ethical Considerations And Limitations

The research paper ends by summarizing potential ethical issues, such as an autonomous agent taking actions that are not in the user’s interest, and stresses the need to build proper guardrails.

The authors also acknowledged limitations in the research that might limit generalizability of the results. For example, the testing was done only on Android and web environments, which means that the results might not generalize to Apple devices. Another limitation is that the research was limited to users in the United States in the English language.

There is nothing in the research paper or the accompanying blog post that suggests these processes for extracting user intent are currently in use. The blog post closes by describing how the approach could become useful:

“Ultimately, as models improve in performance and mobile devices acquire more processing power, we hope that on-device intent understanding can become a building block for many assistive features on mobile devices going forward.”

Takeaways

Neither the blog post about this research nor the research paper itself describes the results of these processes as something that might be used in AI search or classic search. Both do mention the context of autonomous agents.

The research paper explicitly describes the context of an autonomous agent on the device that observes how the user interacts with a user interface and then infers the goal (the intent) of those actions.

The paper lists two specific applications for this technology:

  1. Proactive Assistance:
    An agent that watches what a user is doing for “enhanced personalization” and “improved work efficiency”.
  2. Personalized Memory:
    The process enables a device to “remember” past activities as an intent for later.

Shows The Direction Google Is Heading In

While this might not be used right away, it shows the direction that Google is heading, where small models on a device will be watching user interactions and sometimes stepping in to assist users based on their intent. Intent here is used in the sense of understanding what a user is trying to do.

Read Google’s blog post here:

Small models, big results: Achieving superior intent extraction through decomposition

Read the PDF research paper:

Small Models, Big Results: Achieving Superior Intent Extraction through Decomposition (PDF)

Featured Image by Shutterstock/ViDI Studio

BuddyPress WordPress Vulnerability May Impact Up To 100,000 Sites via @sejournal, @martinibuster

A newly disclosed security vulnerability affects the BuddyPress plugin, a WordPress plugin installed on over 100,000 websites. The vulnerability, given a threat level rating of 7.3 (high), enables unauthenticated attackers to execute arbitrary shortcodes.

BuddyPress WordPress Plugin

The BuddyPress plugin enables WordPress sites to create community features such as user profiles, activity streams, private messaging, and groups. It is commonly used on membership sites and online communities and is installed on more than 100,000 WordPress websites.

BuddyPress has a good track record with regard to vulnerabilities. Only one vulnerability was reported for all of 2025, a medium-severity issue rated 5.3 on a scale of 1 to 10.

Unauthenticated Arbitrary Shortcode Execution

The vulnerability can be exploited by unauthenticated attackers. An attacker does not need a WordPress account or any level of user access to trigger the issue.

The BuddyPress plugin is vulnerable to arbitrary shortcode execution in all versions up to and including 14.3.3. That means that an attacker can execute shortcodes on the website. Shortcodes are used by WordPress to add dynamic functionality to pages and posts. Because the plugin does not properly validate input before executing shortcodes, attackers can cause the site to run shortcodes they are not authorized to use.

The vulnerability is caused by missing validation before user-supplied input is passed to the do_shortcode function.
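
As a general illustration of the fix pattern (a hypothetical sketch, not BuddyPress’s actual code; the shortcode name is invented), validation means checking a value against an expected set before handing it to do_shortcode():

```php
<?php
// Hypothetical sketch (not BuddyPress's actual code).
$raw = wp_unslash( $_POST['content'] ?? '' );

// VULNERABLE: runs whatever registered shortcode the visitor supplies.
echo do_shortcode( $raw );

// Safer pattern: only run shortcodes from an explicit allowlist.
$allowed = array( 'example_member_card' ); // invented shortcode name
if ( preg_match( '/^\[([a-z0-9_-]+)/i', $raw, $matches )
	&& in_array( strtolower( $matches[1] ), $allowed, true ) ) {
	echo do_shortcode( $raw );
}
```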

Wordfence described the issue:

“The BuddyPress plugin for WordPress is vulnerable to arbitrary shortcode execution in all versions up to, and including, 14.3.3. This is due to the software allowing users to execute an action that does not properly validate a value before running do_shortcode. This makes it possible for unauthenticated attackers to execute arbitrary shortcodes.”

This means that attackers can trigger a shortcode, which in turn carries out whatever action it is designed to run. In the worst-case scenario, this could expose restricted site features or functionality. Depending on the shortcodes available on a site, attackers may be able to access sensitive information, modify site content, or interact with other plugins in unintended ways.

The vulnerability does not depend on special server settings or optional configurations. Any site running a vulnerable version of the plugin is affected.

The issue was patched in BuddyPress version 14.3.4. Users of the plugin should update to version 14.3.4 or newer to fix the vulnerability.

Featured Image by Shutterstock/Login

TikTok US Deal Closes After Years Of Regulatory Uncertainty via @sejournal, @MattGSouthern

A White House official said the US and China have finalized a deal to spin off TikTok’s US business to a consortium led by Oracle and Silver Lake, Fox Business reported Thursday. CNN reported the joint venture has been formally established and announced its leadership team.

The closing comes ahead of a January 23 deadline created by Trump’s September executive order, which set a 120-day enforcement pause on the divest-or-ban law.

What’s New

TikTok said Adam Presser, previously the company’s head of operations and trust and safety, will be CEO. Will Farrell, who led privacy and security for the effort, will serve as Chief Security Officer.

TikTok CEO Shou Chew outlined the ownership structure in a December internal memo to employees after signing binding agreements with investors.

Under the new ownership structure, ByteDance retains just under 20% of the US business. Oracle, Silver Lake, and MGX, an Abu Dhabi-based AI investment firm, will each hold 15% stakes. Other investors in the consortium include Susquehanna, Dragoneer, and DFO, Michael Dell’s family office.

A new seven-member board of directors with an American majority will govern the entity. The board will oversee data protection, content moderation, and algorithm security for US operations.

Vice President JD Vance said in September the deal would value TikTok’s US operations at roughly $14 billion, though the final amount ByteDance received remains unclear.

The algorithm question remains murky in public reporting. TikTok’s recommendation algorithm has been the central point of contention between the US and Chinese governments throughout the negotiations. The September executive order described US oversight of the technology, including requirements for algorithm retraining and monitoring, but specific implementation terms have not been publicly disclosed.

Background

The deal closes a chapter that spans two presidential administrations and multiple reversal points.

President Biden signed a law in 2024 requiring ByteDance to divest TikTok’s US business or face a ban. The Supreme Court upheld that law in 2025. TikTok briefly went dark two days later before President Trump, on his first day in office, signed an executive order keeping the app running while his administration negotiated a sale.

The current deal structure emerged from a framework announced in September, when the White House outlined terms that would create a US entity with majority American ownership while allowing ByteDance to maintain a minority stake.

Why This Matters

This should end more than five years of regulatory uncertainty for the 170 million Americans the White House says use TikTok and the businesses that depend on the platform for marketing and commerce.

We first covered the TikTok ban timeline when the original executive order gave ByteDance 45 days to sell in August 2020. Then it was a potential Oracle deal that looked promising before falling apart. The pattern repeated through multiple administrations, executive orders, and court cases.

For marketers who built strategies around TikTok, the resolution removes a persistent source of planning uncertainty. TikTok Shop, creator partnerships, and advertising campaigns can proceed without the backdrop of a potential shutdown.

The ownership structure also creates a new dynamic. Oracle, which already provides data and computing services for TikTok’s US operations through Project Texas, now holds an equity stake and board-level oversight. That deeper integration could affect how the platform handles data practices and content policies going forward.

Looking Ahead

TikTok’s US operations will function as an independent entity responsible for data protection, algorithm security, and content moderation.

TikTok has told employees that users and advertisers should see no immediate changes to the platform experience. Chew’s December memo indicated Americans would continue using TikTok as before and advertisers would maintain access to global audiences, according to multiple outlets that reviewed the document.

The deal removes a sticking point in US-China relations at a time when tensions remain elevated on trade and technology issues. Whether this model becomes a template for other Chinese-owned platforms operating in the US remains to be seen.

10Web WordPress Photo Gallery Plugin Vulnerability via @sejournal, @martinibuster

A security advisory was published about a vulnerability in the Photo Gallery by 10Web plugin that has over 200,000 installations. The vulnerability affects how the plugin handles image comments, exposing some sites to unauthorized data modification by unauthenticated attackers (meaning that attackers do not need to register with the site).

The Photo Gallery by 10Web plugin is used by WordPress sites to create and display image galleries, slideshows, and albums in a variety of layouts. It is used by photography sites, portfolios, and businesses that rely on visual content.

About The Vulnerability

The flaw can be exploited by unauthenticated visitors, meaning anyone can trigger the issue without logging in. This significantly increases exposure because there is no barrier to entry such as having to register with the website or attain a higher permission level.

It is important to note that image comments, where the vulnerability exists, are only available in the Pro version of the plugin. Sites that do not use the comments feature are not affected by this specific issue.

What Went Wrong

The vulnerability is caused by a missing capability check in the plugin’s delete_comment() function.

The plugin does not verify whether a request to delete an image comment comes from someone who is allowed to perform that action. Normally, WordPress plugins are expected to confirm that a user has the appropriate permissions before modifying site content. That check is missing from this plugin.

Because the plugin fails to perform this verification, it accepts deletion requests even when they come from unauthenticated users.
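
The general shape of the fix (a hypothetical sketch, not the plugin’s actual code; the action and function names are invented) is to require both a logged-in user and an appropriate capability before deleting anything:

```php
<?php
// Hypothetical sketch (not the plugin's actual code).
// Registering a handler on wp_ajax_nopriv_ exposes it to visitors who are
// not logged in, which is how unauthenticated requests can reach delete logic:
add_action( 'wp_ajax_nopriv_example_delete_comment', 'example_delete_comment' );
add_action( 'wp_ajax_example_delete_comment', 'example_delete_comment' );

function example_delete_comment() {
	// Expected pattern: require a logged-in user with moderation rights.
	if ( ! is_user_logged_in() || ! current_user_can( 'moderate_comments' ) ) {
		wp_send_json_error( 'Insufficient permissions', 403 );
	}
	$comment_id = absint( $_POST['comment_id'] ?? 0 );
	// ... look up and delete the image comment record here ...
	wp_send_json_success();
}
```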

What Attackers Can Do

An attacker can delete arbitrary image comments from a site. The vulnerability has a severity rating of 5.3, a medium threat level. It does not enable a full website takeover or any other server compromise, but it does allow unauthorized deletion of image comments. For sites that rely on image comments for engagement, moderation history, or user interaction, this can result in data loss and disruption.

The official Wordfence advisory explains the vulnerability:

“The Photo Gallery by 10Web – Mobile-Friendly Image Gallery plugin for WordPress is vulnerable to unauthorized modification of data due to a missing capability check on the delete_comment() function in all versions up to, and including, 1.8.36. This makes it possible for unauthenticated attackers to delete arbitrary image comments. Note: comments functionality is only available in the Pro version of the plugin.”

Which Versions Can Be Exploited

The vulnerability affects all versions of the plugin up to and including version 1.8.36. The issue is tied specifically to the comment deletion functionality. Since image comments are only available in the Pro version of the plugin, exploitation is limited to sites running that version with comments enabled.

No special server configuration or user interaction is required beyond the plugin being active and vulnerable.

What Site Owners Should Do

A patch is available. Site owners should update the Photo Gallery by 10Web plugin to version 1.8.37 or later, which includes a security fix addressing this issue. If updating is not possible, disabling the plugin or the comments feature will prevent exploitation until the site can be patched.

Keeping the plugin up to date is the only direct fix for this vulnerability.

Featured Image by Shutterstock/Roman Samborskyi

Google Launches Personal Intelligence In AI Mode via @sejournal, @MattGSouthern

Google is rolling out Personal Intelligence, a feature that connects Gmail and Google Photos to AI Mode in Search, delivering personalized responses based on users’ own data.

The feature, announced in a blog post by Robby Stein, VP of Product at Google Search, is available to Google AI Pro and AI Ultra subscribers who opt in.

What’s New

Personal Intelligence lets AI Mode reference information from a user’s Gmail and Google Photos to tailor search responses. Google describes it as connecting the dots across Google apps to unlock search results that fit individual context.

The feature rolls out as a Labs experiment for eligible subscribers in the U.S. in English. It is available for personal Google accounts only, not for Workspace business, enterprise, or education users.

To enable Personal Intelligence, users can:

  1. Open Search and tap their profile
  2. Click on Search personalization
  3. Select Connected Content Apps
  4. Connect Gmail and Google Photos

In the settings menu, the Gmail connection appears under “Workspace,” though the feature itself is not available to Workspace business, enterprise, or education accounts.

Subscribers may also see an invitation to try the feature directly in AI Mode as the rollout progresses over the next few days.

How It Works

Personal Intelligence uses Gemini 3 to process queries alongside connected account data. When enabled, AI Mode may reference email confirmations, travel bookings, and photo memories to inform responses.

Stein offered examples in the announcement. A user searching for trip activities could receive recommendations based on hotel bookings in Gmail and past travel photos. Someone shopping for a coat could get suggestions that account for preferred brands, upcoming travel destinations from flight confirmations, and expected weather conditions.

Stein wrote:

“With Personal Intelligence, recommendations don’t just match your interests — they fit seamlessly into your life. You don’t have to constantly explain your preferences or existing plans, it selects recommendations just for you, right from the start.”

See an example in the screenshots below:

Screenshots from: blog.google/products-and-platforms/products/search/personal-intelligence-ai-mode-search/, January 2026.

Privacy Controls

Google emphasizes that connecting Gmail and Google Photos is opt-in. Users choose whether to enable the connections and can turn them off at any time.

Google says AI Mode does not train directly on users’ Gmail inbox or Google Photos library. The company says training is limited to specific prompts in AI Mode and the model’s responses, used to improve functionality over time.

Google acknowledges that Personal Intelligence may make mistakes, including incorrectly connecting unrelated topics or misunderstanding context. Users can correct errors through follow-up responses or by providing feedback with the thumbs down button.

Why This Matters

This is the personal context feature Google teased at I/O in May 2025. Seven months later, in December, Google SVP Nick Fox confirmed in an interview that the feature was still in internal testing with no public timeline. Today’s rollout delivers what was delayed.

For AI Mode’s 75 million daily active users, a figure Fox shared in that December interview, this could reduce how much context you need to type to get tailored responses.

For publishers, the implications depend on how personalization affects which content surfaces in AI Mode responses. If the system prioritizes user-specific context over general search results, some informational queries may resolve without a click to external sites. Google has not shared data on how Personal Intelligence affects citation patterns or traffic flow.

The feature is currently limited to paid subscribers on personal accounts. Whether Google expands it to free users or Workspace accounts would change its reach.

Looking Ahead

Personal Intelligence is rolling out as a Labs feature over the next few days. Google says eligible AI Pro and AI Ultra subscribers in the U.S. will automatically have access as it becomes available.

Watch for whether Google provides analytics or attribution tools that let publishers track how personalized AI Mode responses affect visibility and traffic patterns.

A Breakdown Of Microsoft’s Guide To AEO & GEO via @sejournal, @martinibuster

Microsoft published a sixteen-page explainer guide about optimizing for AI search and chat. While many of the suggestions can be classified as SEO, others relate exclusively to AI search surfaces. Here are the most helpful takeaways.

What AEO and GEO Are And Why They Matter

Microsoft explains that AI search surfaces have created an evolution from “ranking for clicks” to “being understood and recommended by AI.” Traditional SEO still provides a foundation for being cited in AI, but AEO and GEO determine whether content gets surfaced inside AI-driven experiences.

Here is how Microsoft distinguishes AEO and GEO. The first thing to notice is that they define AEO as Agentic Engine Optimization. That’s different from Answer Engine Optimization, which is how AEO is commonly understood.

  • AEO (Answer/Agentic Engine Optimization) focuses on making content and product information easy for AI assistants and agents to retrieve, interpret, and present as direct answers.
  • GEO (Generative Engine Optimization) focuses on making your content discoverable and persuasive inside generative AI systems by increasing clarity, trustworthiness, and authoritativeness.

Microsoft views AEO and GEO as not limited to marketing but as the responsibility of multiple teams within an organization.

The guide says:

“This shift impacts every part of the organization. Marketing teams must rethink brand differentiation, growth teams need to adapt to AI-driven journeys, ecommerce teams must measure success differently, data teams must surface richer signals, and engineering teams must ensure systems are AI-readable and reliable.”

AI shopping is not one channel; it’s a set of overlapping systems.

Microsoft describes AI shopping as three overlapping consumer touchpoints:

  1. AI browsers that interpret what’s on a page and surface context while users browse.
  2. AI assistants that answer questions and guide decisions in conversation.
  3. AI agents that can take actions, like navigating, selecting options, and completing purchases.

The AI touchpoint matters less than whether the system can access accurate, structured, and trustworthy product information.

SEO Still Plays A Role

Microsoft’s guide says that with AEO and GEO, the competition shifts from discovery to influence. SEO is still important, but it is no longer the whole game.

The new competition is about influencing the AI recommendation layer, not just showing up in rankings.

Microsoft describes it like this:

  • SEO helps the product get found.
  • AEO helps the AI explain it clearly.
  • GEO helps the AI trust it and recommend it.

Microsoft explains:

“Competition is shifting from discovery to influence (SEO to AEO/GEO).

If SEO focused on driving clicks, AEO is focused on driving clarity with enriched, real-time data, while GEO focuses on building credibility and trust so AI systems can confidently recommend your products.

SEO remains foundational, but winning in AI-powered shopping experiences requires helping AI systems understand not just what your product is, but why it should be chosen.”

How AI Systems Decide What To Recommend

Microsoft explains how an AI assistant, in this case Copilot, handles a user’s request. When a user asks for a recommendation, the AI assistant goes into a reasoning phase where the query is broken down using a combination of web and product feed data.

The web data provides:

  • “General knowledge
  • Category understanding
  • Your brand positioning”

Feed data provides:

  • “Current prices
  • Availability
  • Key specs”

The AI assistant may, based on the feed data, choose to surface the product with the lowest price that is also in stock. When the user clicks through to the website, the AI assistant scans the page for information that provides context.

Microsoft lists these as examples of context:

  • Detailed reviews
  • Videos that explain the product
  • Current promotions
  • Delivery estimates

The agent aggregates this information and provides guidance based on what it discovered about the product’s context (delivery times, etc.).

Microsoft brings it all together like this:

First, there’s crawled data:
The information AI systems learned during training and retrieve from indexed web pages, which shapes your brand’s baseline perception and provides grounding for AI responses, including your product categories, reputation, and market position.

Second, there’s product feeds and APIs:
The structured data you actively push to AI platforms, giving you control over how your products are represented in comparisons and recommendations. Feeds provide accuracy, details and consistency.

Third, there’s live website data:
The real-time information AI agents see when they visit your actual site, from rich media and user reviews to dynamic pricing and transaction capabilities. Each data source plays a distinct role in the shopping journey — traditional SEO remains essential because AI systems perform real-time web searches frequently throughout the shopping journey, not just at purchase time, and your site must rank well to be discovered, evaluated, and recommended.

Microsoft Recommends A Three-Part Action Plan

Strategy 1: Technical Foundations

The core idea for this strategy is that your product catalog must be machine-readable, consistent everywhere, and up to date.

Key actions:

  • Use structured data (schema) for products, offers, reviews, lists, FAQs, and brand (see the example after this list).
  • Include dynamic fields like pricing and availability.
  • Keep feed data and on-page structured data aligned with what users actually see.
  • Avoid mismatches between visible content and what is served to crawlers.
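
To make the first action concrete, here is a minimal schema.org Product snippet in JSON-LD. The product, brand, and values are invented for illustration; real markup should mirror what is actually visible on the page:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Trail Jacket",
  "description": "Waterproof shell for three-season hiking.",
  "brand": { "@type": "Brand", "name": "ExampleBrand" },
  "offers": {
    "@type": "Offer",
    "price": "129.99",
    "priceCurrency": "USD",
    "availability": "https://schema.org/InStock",
    "priceValidUntil": "2026-12-31"
  },
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.6",
    "reviewCount": "212"
  }
}
</script>
```

Keeping fields like price and availability in sync with the live page and the product feed is what the alignment and no-mismatch bullets above refer to.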

Strategy 2: Optimize Content For Intent And Clarity

This strategy is about optimizing product content so that it answers typical user questions and is easy for AI to reuse.

Key actions:

  • Write product descriptions that start with benefits and real use-case value.
  • Use headings and phrasing that match how people ask questions.

Add modular content blocks:

  • FAQs
  • specs
  • key features
  • comparisons

Add contextual information:

  • Support multi-modal interpretation (good alt text, transcripts for video content, structured image metadata).
  • Add complementary product context (pairings, bundles, “goes well with”).

Strategy 3: Trust Signals (Authority And Credibility)

The takeaway for this strategy is that AI assistants and agents prioritize content that looks verified and reputable.

Key actions:

  • Strengthen review credibility (verified reviews, strong volumes, clear sentiment).
  • Reinforce brand authority through real-world signals (press, certifications, partnerships).
  • Keep claims grounded and consistent to avoid trust degradation.
  • Use structured data to clarify legitimacy and identity.

Microsoft explains it like this:

“AI assistants prioritize content from sources they can trust. Signals such as verified reviews, review volume, and clear sentiment help establish credibility and influence recommendations.

Brand authority is reinforced through consistent identity, real-world validation such as press coverage, certifications, and partnerships, and the use of structured data to clearly define brand entities.

Claims should be factual, consistent, and verifiable, as exaggerated or misleading information can reduce trust and limit visibility in AI-powered experiences”

Takeaways

AI search changes the goal from winning rankings to earning recommendations. SEO still matters, but AEO and GEO determine how well content is interpreted, explained, and chosen inside AI assistants and agents.

AI shopping is not a single channel but an ecosystem of assistants, browsers, and agents that rely on authoritative signals across crawled content, structured feeds, and live site experiences. The brands that win are the ones with consistent, machine-readable data and clear content that contains useful contextual information that can be easily summarized.

Microsoft published a blog post that is accompanied by a link to the downloadable explainer guide: From Discovery to Influence: A Guide to AEO and GEO.

Featured Image by Shutterstock/Kues