Help! My therapist is secretly using ChatGPT

In Silicon Valley’s imagined future, AI models are so empathetic that we’ll use them as therapists. They’ll provide mental-health care for millions, unimpeded by the pesky requirements for human counselors, like the need for graduate degrees, malpractice insurance, and sleep. Down here on Earth, something very different has been happening. 

Last week, we published a story about people finding out that their therapists were secretly using ChatGPT during sessions. In some cases it wasn’t subtle; one therapist accidentally shared his screen during a virtual appointment, allowing the patient to see his own private thoughts being typed into ChatGPT in real time. The model then suggested responses that his therapist parroted. 

It’s my favorite AI story of late, probably because it captures so well the chaos that can unfold when people actually use AI the way tech companies have all but told them to.

As the writer of the story, Laurie Clarke, points out, it’s not a total pipe dream that AI could be therapeutically useful. Early this year, I wrote about the first clinical trial of an AI bot built specifically for therapy. The results were promising! But the secretive use by therapists of AI models that are not vetted for mental health is something very different. I had a conversation with Clarke to hear more about what she found. 

I have to say, I was really fascinated that people called out their therapists after finding out they were covertly using AI. How did you interpret the reactions of these therapists? Were they trying to hide it?

In all the cases mentioned in the piece, the therapist hadn’t disclosed to their patients in advance how they were using AI. So whether or not they were explicitly trying to conceal it, that’s how it ended up looking when it was discovered. I think for this reason, one of my main takeaways from writing the piece was that therapists should absolutely disclose when they’re going to use AI and how (if they plan to use it). If they don’t, it raises all these really uncomfortable questions for patients when it’s uncovered and risks irrevocably damaging the trust that’s been built.

In the examples you’ve come across, are therapists turning to AI simply as a time-saver? Or do they think AI models can genuinely give them a new perspective on what’s bothering someone?

Some see AI as a potential time-saver. I heard from a few therapists that notes are the bane of their lives. So I think there is some interest in AI-powered tools that can support this. Most I spoke to were very skeptical about using AI for advice on how to treat a patient. They said it would be better to consult supervisors or colleagues, or case studies in the literature. They were also understandably very wary of inputting sensitive data into these tools.

There is some evidence AI can deliver more standardized, “manualized” therapies like CBT [cognitive behavioral therapy] reasonably effectively. So it’s possible it could be more useful for that. But that is AI specifically designed for that purpose, not general-purpose tools like ChatGPT.

What happens if this goes awry? What attention is this getting from ethics groups and lawmakers?

At present, professional bodies like the American Counseling Association advise against using AI tools to diagnose patients. There could also be more stringent regulations preventing this in future. Nevada and Illinois, for example, have recently passed laws prohibiting the use of AI in therapeutic decision-making. More states could follow.

OpenAI’s Sam Altman said last month that “a lot of people effectively use ChatGPT as a sort of therapist,” and that to him, that’s a good thing. Do you think tech companies are overpromising on AI’s ability to help us?

I think that tech companies are subtly encouraging this use of AI because clearly it’s a route through which some people are forming an attachment to their products. I think the main issue is that what people are getting from these tools isn’t really “therapy” by any stretch. Good therapy goes far beyond being soothing and validating everything someone says. I’ve never in my life looked forward to a (real, in-person) therapy session. They’re often highly uncomfortable, and even distressing. But that’s part of the point. The therapist should be challenging you and drawing you out and seeking to understand you. ChatGPT doesn’t do any of these things. 

Read the full story from Laurie Clarke

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

AI is changing the grid. Could it help more than it harms?

The rising popularity of AI is driving an increase in electricity demand so significant it has the potential to reshape our grid. Energy consumption by data centers has gone up by 80% from 2020 to 2025 and is likely to keep growing. Electricity prices are already rising, especially in places where data centers are most concentrated. 

Yet many people, especially in Big Tech, argue that AI will be, on balance, a positive force for the grid. They claim that the technology could help get more clean power online faster, run our power system more efficiently, and predict and prevent failures that cause blackouts. 


This story is a part of MIT Technology Review’s series “Power Hungry: AI and our energy future,” on the energy demands and carbon costs of the artificial-intelligence revolution.


There are early examples where AI is helping already, including AI tools that utilities are using to help forecast supply and demand. The question is whether these big promises will be realized fast enough to outweigh the negative effects of AI on local grids and communities. 

A delicate balance

One area where AI is already being used for the grid is in forecasting, says Utkarsha Agwan, a member of the nonprofit group Climate Change AI.

Running the grid is a balancing act: Operators have to understand how much electricity demand there is and turn on the right combination of power plants to meet it. They optimize for economics along the way, choosing the sources that will keep prices lowest for the whole system.

That makes it necessary to look ahead hours and in some cases days. Operators consider factors such as historical data (holidays often see higher demand) and the weather (a hot day means more air conditioners sucking up power). These predictions also consider what level of supply is expected from intermittent sources like solar panels.

There’s little risk in using AI tools in forecasting; it’s often not as time sensitive as other applications, which can require reactions within seconds or even milliseconds. A grid operator might use a forecast to determine which plants will need to turn on. Other groups might run their own forecasts as well, using AI tools to decide how to staff a plant, for example. The tools also can’t physically control anything. Rather, they can be used alongside more conventional methods to provide more data.  

Today, grid operators make a lot of approximations to model the grid, because the system is so incredibly complex that it’s impossible to truly know what’s going on in every place at every time. Not only are there a whole host of power plants and consumers to think about, but there are considerations like making sure power lines don’t get overloaded.

Working with those estimates can lead to some inefficiencies, says Kyri Baker, a professor at the University of Colorado Boulder. Operators tend to generate a bit more electricity than the system uses, for example. Using AI to create a better model could reduce some of those losses and allow operators to make decisions about how to control infrastructure in real time to reach a closer match of supply and demand.

She gives the example of a trip to the airport. Imagine there’s a route you know will get you there in about 45 minutes. There might be another, more complicated route that could save you some time in ideal conditions—but you’re not sure whether it’s better on any particular day. What the grid does now is the equivalent of taking the reliable route.

“So that’s the gap that AI can help close. We can solve this more complex problem, fast enough and reliably enough that we can possibly use it and shave off emissions,” Baker says. 

In theory, AI could be used to operate the grid entirely without human intervention. But that work is largely still in the research phase. Grid operators are running some of the most critical infrastructure in this country, and the industry is hesitant to mess with something that’s already working, Baker says. If this sort of technology is ever used in grid operations, there will still be humans in the loop to help make decisions, at least when it’s first deployed.  

Planning ahead

Another fertile area for AI is planning future updates to the grid. Building a power plant can take a very long time—the typical time from an initial request to commercial operation in the US is roughly four years. One reason for the lengthy wait is that new power plants have to demonstrate how they might affect the rest of the grid before they can connect. 

An interconnection study examines whether adding a new power plant of a particular type in a particular place would require upgrades to the grid to prevent problems. After regulators and utilities determine what upgrades might be needed, they estimate the cost, and the energy developer generally foots the bill. 

Today, those studies can take months. They involve trying to understand an incredibly complicated system, and because they rely on estimates of other existing and proposed power plants, only a few can happen in an area at any given time. This has helped create the years-long interconnection queue, a long line of plants waiting for their turn to hook up to the grid in markets like the US and Europe. The vast majority of projects in the queue today are renewables, which means there’s clean power just waiting to come online. 

AI could help speed this process, producing these reports more quickly. The Midcontinent Independent System Operator, a grid operator that covers 15 states in the central US, is currently working with a company called Pearl Street to help automate these reports.

AI won’t be a cure-all for grid planning; there are other steps to clearing the interconnection queue, including securing the necessary permits. But the technology could help move things along. “The sooner we can speed up interconnection, the better off we’ll be,” says Rob Gramlich, president of Grid Strategies, a consultancy specializing in transmission and power markets.

There’s a growing list of other potential uses for AI on the grid and in electricity generation. The technology could monitor and plan ahead for failures in equipment ranging from power lines to gearboxes. Computer vision could help detect everything from wildfires to faulty lines. AI could also help balance supply and demand in virtual power plants, systems of distributed resources like EV chargers or smart water heaters. 

While there are early examples of research and pilot programs for AI from grid planning to operation, some experts are skeptical that the technology will deliver at the level some are hoping for. “It’s not that AI has not had some kind of transformation on power systems,” Climate Change AI’s Agwan says. “It’s that the promise has always been bigger, and the hope has always been bigger.”

Some places are already seeing higher electricity prices because of power needs from data centers. The situation is likely to get worse. Electricity demand from data centers is set to double by the end of the decade, reaching 945 terawatt-hours, roughly the annual demand from the entire country of Japan. 

The infrastructure growth needed to support AI load growth has outpaced the promises of the technology, “by quite a bit,” says Panayiotis Moutis, an assistant professor of electrical engineering at the City College of New York. Higher bills caused by the increasing energy needs of AI aren’t justified by existing ways of using the technology for the grid, he says. 

“At the moment, I am very hesitant to lean on the side of AI being a silver bullet,” Moutis says. 

Correction: This story has been updated to correct Moutis’s affiliation.

Three big things we still don’t know about AI’s energy burden

Earlier this year, when my colleague Casey Crownhart and I spent six months researching the climate and energy burden of AI, we came to see one number in particular as our white whale: how much energy the leading AI models, like ChatGPT or Gemini, use up when generating a single response. 

This fundamental number remained elusive even as the scramble to power AI escalated to the White House and the Pentagon, and as projections showed that in three years AI could use as much electricity as 22% of all US households. 

The problem with finding that number, as we explain in our piece published in May, was that AI companies are the only ones who have it. We pestered Google, OpenAI, and Microsoft, but each company refused to provide its figure. Researchers we spoke to who study AI’s impact on energy grids compared it to trying to measure the fuel efficiency of a car without ever being able to drive it, making guesses based on rumors of its engine size and what it sounds like going down the highway.


But then this summer, after we published, a strange thing started to happen. In June, OpenAI’s Sam Altman wrote that an average ChatGPT query uses 0.34 watt-hours of energy. In July, the French AI startup Mistral didn’t publish a number directly but released an estimate of the emissions generated. In August, Google revealed that answering a question with Gemini uses about 0.24 watt-hours of energy. The figures from Google and OpenAI were similar to what Casey and I estimated for medium-size AI models. 

So with this newfound transparency, is our job complete? Did we finally harpoon our white whale, and if so, what happens next for people studying the climate impact of AI? I reached out to some of our old sources, and some new ones, to find out.

The numbers are vague and chat-only

The first thing they told me is that there’s a lot missing from the figures tech companies published this summer. 

OpenAI’s number, for example, did not appear in a detailed technical paper but rather in a blog post by Altman that leaves lots of unanswered questions, such as which model he was referring to, how the energy use was measured, and how much it varies. Google’s figure, as Crownhart points out, refers to the median amount of energy per query, which doesn’t give us a sense of the more energy-demanding Gemini responses, like when it uses a reasoning model to “think” through a hard problem or generates a really long response. 

The numbers also refer only to interactions with chatbots, not the other ways that people are becoming increasingly reliant on generative AI. 

“As video and image becomes more prominent and used by more and more people, we need the numbers from different modalities and how they measure up,” says Sasha Luccioni, AI and climate lead at the AI platform Hugging Face. 

This is also important because the figures for asking a chatbot a question are, as expected, small—about as much electricity as a microwave uses in a few seconds. That’s part of the reason AI and climate researchers don’t suggest that any one individual’s AI use creates a significant climate burden. 

A full accounting of AI’s energy demands—one that goes beyond what’s used to answer an individual query to help us understand its full net impact on the climate—would require application-specific information on how all this AI is being used. Ketan Joshi, an analyst for climate and energy groups, acknowledges that researchers don’t usually get such specific information from other industries but says it might be justified in this case.

“The rate of data center growth is inarguably unusual,” Joshi says. “Companies should be subject to significantly more scrutiny.”

We have questions about energy efficiency

Companies making billion-dollar investments into AI have struggled to square this growth in energy demand with their sustainability goals. In May, Microsoft said that its emissions have soared by over 23% since 2020, owing largely to AI, while the company has promised to be carbon negative by 2030. “It has become clear that our journey towards being carbon negative is a marathon, not a sprint,” Microsoft wrote.

Tech companies often justify this emissions burden by arguing that soon enough, AI itself will unlock efficiencies that will make it a net positive for the climate. Perhaps the right AI system, the thinking goes, could design more efficient heating and cooling systems for a building, or help discover the minerals required for electric-vehicle batteries. 

But there are no signs that AI has been usefully used to do these things yet. Companies have shared anecdotes about using AI to find methane emission hot spots, for example, but they haven’t been transparent enough to help us know if these successes outweigh the surges in electricity demand and emissions that Big Tech has produced in the AI boom. In the meantime, more data centers are planned, and AI’s energy demand continues to rise and rise. 

The ‘bubble’ question

One of the big unknowns in the AI energy equation is whether society will ever adopt AI at the levels that figure into tech companies’ plans. OpenAI has said that ChatGPT receives 2.5 billion prompts per day. It’s possible that this number, and the equivalent numbers for other AI companies, will continue to soar in the coming years. Projections released last year by the Lawrence Berkeley National Laboratory suggest that if they do, AI alone could consume as much electricity annually as 22% of all US households by 2028.

But this summer also saw signs of a slowdown that undercut the industry’s optimism. OpenAI’s launch of GPT-5 was largely considered a flop, even by the company itself, and that flop led critics to wonder if AI may be hitting a wall. When a group at MIT found that 95% of businesses are seeing no return on their massive AI investments, stocks floundered. The expansion of AI-specific data centers might be an investment that’s hard to recoup, especially as revenues for AI companies remain elusive. 

One of the biggest unknowns about AI’s future energy burden isn’t how much a single query consumes, or any other figure that can be disclosed. It’s whether demand will ever reach the scale companies are building for or whether the technology will collapse under its own hype. The answer will determine whether today’s buildout becomes a lasting shift in our energy system or a short-lived spike.

The Download: meet our AI innovators, and what happens when therapists use AI covertly

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Meet the AI honorees on our 35 Innovators Under 35 list for 2025

Each year, we select 35 outstanding individuals under the age of 35 who are using technology to tackle tough problems in their respective fields.

Our AI honorees include people who steer model development at Silicon Valley’s biggest tech firms and academic researchers who develop new techniques to improve AI’s performance.

Check out all of our AI innovators here, and the full list—including our innovator of the year—here.

How Yichao “Peak” Ji became a global AI app hitmaker

When Yichao Ji—also known as “Peak”—appeared in a launch video for Manus in March, he didn’t expect it to go viral. Speaking in fluent English, the 32-year-old introduced the AI agent built by Chinese startup Butterfly Effect, where he serves as chief scientist. 

The video was not an elaborate production, but something about Ji’s delivery, and the vision behind the product, cut through the noise. The product, then still an early preview available only through invite codes, spread across the Chinese internet to the world in a matter of days. Within a week of its debut, Manus had attracted a waiting list of around 2 million people.

Despite his relative youth, Ji has over a decade of experience building products that merge technical complexity with real-world usability. That earned him credibility—and put him at the forefront of a rising class of Chinese technologists with global ambitions. Read the full story.

—Caiwei Chen

Help! My therapist is secretly using ChatGPT

In Silicon Valley’s imagined future, AI models are so empathetic that we’ll use them as therapists. They’ll provide mental-health care for millions, unimpeded by the pesky requirements for human counselors, like the need for graduate degrees, malpractice insurance, and sleep. Down here on Earth, something very different has been happening. 

Last week, we published a story about people finding out that their therapists were secretly using ChatGPT during sessions. In some cases it wasn’t subtle; one therapist accidentally shared his screen during a virtual appointment, allowing the patient to see his own private thoughts being typed into ChatGPT in real time.

As the writer of the story, Laurie Clarke, points out, it’s not a total pipe dream that AI could be therapeutically useful. But the secretive use by therapists of AI models that are not vetted for mental health is something very different. James O’Donnell, our senior AI reporter, had a conversation with Clarke to hear more about what she found.

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

What’s next in tech: the breakthroughs that matter

Some technologies reshape industries, whether we’re ready or not.

Join us for our next LinkedIn Live event on September 10 as our editorial team explores the breakthroughs defining this moment and the ones on the horizon that demand our attention. 

From quantum computing to humanoid robotics, AI agents to climate tech, we’ll explore the innovations that excite us, the challenges they may bring, and why they’re worth watching now. It kicks off at 12:30pm ET tomorrow—register here to join us.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The US is abandoning its international push against disinformation 
The State Department will no longer collaborate with Europe to combat malicious information spread by foreign governments. (FT $)
+ It comes as Russia is increasing its efforts to interfere overseas. (NYT $)

2 The judge overseeing Anthropic’s copyright case isn’t happy
Judge William Alsup says a $1.5 billion out-of-court settlement may not be in the authors’ best interests. (Bloomberg $)

3 WhatsApp’s former head of security is suing Meta
Attaullah Baig is accusing the company of failing to protect user data. (WP $)
+ He claims he uncovered systemic security failures, but was ignored. (Bloomberg $)
+ Meta maintains that Baig was dismissed for poor performance, not whistleblowing. (NYT $)

4 DOGE’s acting head is urging the US government to start hiring again 
Following months of widespread firings and resignations. (Fast Company $)
+ How DOGE wreaked havoc in Social Security. (ProPublica)
+ DOGE’s tech takeover threatens the safety and stability of our critical data. (MIT Technology Review)

5 OpenAI is weighing up leaving California
It’s worried that state regulators could derail its efforts to convert to a for-profit entity. (WSJ $)
+ Rival Anthropic is backing California governor Gavin Newsom’s AI bill. (Politico)

6 ICE spends millions on facial recognition tech
In an effort to pinpoint people it suspects have assaulted officers. (404 Media)
+ The Supreme Court has given ICE the go-ahead to target people based on race. (Vox)
+ ICE directors were told to triple their daily arrests for undocumented immigrants. (NY Mag $)

7 AI researchers are training AI to replace them
They’re recording every detail of their working days to help AI grasp their jobs. (The Information $)
+ People are worried that AI will take everyone’s jobs. We’ve been here before. (MIT Technology Review)

8 What comes after the smartphone?
The rise of AI agents means we may not be staring at glass slabs forever. (NYT $)
+ What’s next for smart glasses. (MIT Technology Review)

9 Social media’s obsession with ‘locking in’ needs to die
Hustle culture and maximizing productivity at all costs are the aims of the game. (Insider $)

10 What it’s like to receive a massage from a robot
While it may not be quite as relaxing, it’s relatively cheap. (The Guardian)
+ Will we ever trust robots? (MIT Technology Review)

Quote of the day

“It was hell on Earth.”

—Duncan Okindo, who was enslaved in a Myanmar cyberscam compound and beaten for missing his targets, tells the Guardian about his harrowing experience.

One more thing

AI means the end of internet search as we’ve known it

We all know what it means, colloquially, to google something. You pop a few words in a search box and in return get a list of blue links to the most relevant results. Fundamentally, it’s just fetching information that’s already out there on the internet and showing it to you, in a structured way.

But all that is up for grabs. We are at a new inflection point. The biggest change to the way search engines deliver information to us since the 1990s is happening right now, thanks to generative AI.

Not everyone is excited for the change. Publishers are completely freaked out. And people are also worried about what these new LLM-powered results will mean for our fundamental shared reality. Read the full story.

—Mat Honan

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Stephen King’s list of favorite movies doesn’t feature a whole lot of horror.
+ Tune into a breathtaking livestream of Earth, beamed live from the International Space Station.
+ Rodent thumbnails are way more important than I gave them credit for 🐿
+ Mark our words, actor Wagner Moura is going to be the next big thing.

Adapting to new threats with proactive risk management

In July 2024, a botched update to the software defenses managed by cybersecurity firm CrowdStrike caused more than 8 million Windows systems to fail. From hospitals to manufacturers, stock markets to retail stores, the outage caused parts of the global economy to grind to a halt. Payment systems were disrupted, broadcasters went off the air, and flights were canceled. In all, the outage is estimated to have caused direct losses of more than $5 billion to Fortune 500 companies. For US air carrier Delta Air Lines, the error exposed the brittleness of its systems. The airline suffered weeks of disruptions, leading to $500 million in losses and 7,000 canceled flights.

The magnitude of the CrowdStrike incident revealed just how interconnected digital systems are, and the extensive vulnerabilities in some companies when confronted with an unexpected occurrence. “On any given day, there could be a major weather event or some event like what happened…with CrowdStrike,” said then-US secretary of transportation Pete Buttigieg on announcing an investigation into how Delta Air Lines handled the incident. “The question is, is your airline prepared to absorb something like that and get back on its feet and take care of customers?”

Unplanned downtime poses a major challenge for organizations, and is estimated to cost Global 2000 companies on average $200 million per year. Beyond the financial impact, it can also erode customer trust and loyalty, decrease productivity, and even result in legal or privacy issues.

A 2024 ransomware attack on Change Healthcare, the medical-billing subsidiary of industry giant UnitedHealth Group—the biggest health and medical data breach in US history—exposed the data of around 190 million people and led to weeks of outages for medical groups. Another ransomware attack in 2024, this time on CDK Global, a software firm that works with nearly 15,000 auto dealerships in North America, led to around $1 billion worth of losses for car dealers as a result of the three-week disruption.

Managing risk and mitigating downtime is a growing challenge for businesses. As organizations become ever more interconnected, the expanding surface of networks and the rapid adoption of technologies like AI are exposing new vulnerabilities—and more opportunities for threat actors. Cyberattacks are also becoming increasingly sophisticated and damaging as AI-driven malware and malware-as-a-service platforms turbocharge attacks.

To meet these challenges head on, companies must take a more proactive approach to security and resilience. “We’ve had a traditional way of doing things that’s actually worked pretty well for maybe 15 to 20 years, but it’s been based on detecting an incident after the event,” says Chris Millington, global cyber resilience technical expert at Hitachi Vantara. “Now, we’ve got to be more preventative and use intelligence to focus on making the systems and business more resilient.”

Download the report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written entirely by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Quick SEO: 6 Key Elements, 3 Free Tools

A page’s visibility on search engines and generative AI platforms depends on six key elements:

  • Title
  • Meta tags
  • HTML headings
  • Links, internal and external
  • Images
  • Structured data

When optimizing a page, I rely on three free browser extensions to quickly reveal those components: Devaka Tools, Site Inspector, and SEO Meta in 1 Click.

What follows is my explanation of the six elements, followed by a side-by-side table comparing the three extensions.

Title of page

The title tag is the most important on-page optimization element because search engines use it to understand the page’s purpose. Descriptive and keyword-focused page titles improve and diversify organic rankings.

The title tag appears in the browser tab and is not necessarily visible on the page.
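As a quick illustration (the page name and store are hypothetical, not from the article), a descriptive, keyword-focused title sits in the page’s head:

```html
<head>
  <!-- Hypothetical example: a descriptive, keyword-focused title tag -->
  <title>Handmade Leather Wallets – Free US Shipping | Example Store</title>
</head>
```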

Meta tags

Two types of meta tags are key for search engine optimization:

  • A meta description does not directly impact rankings, but it may appear in search snippets and thus affect click-throughs.
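A minimal sketch of a meta description (the content text is a hypothetical example):

```html
<!-- Hypothetical example: a meta description doesn't directly affect rankings,
     but search engines may display it in the snippet, influencing click-throughs -->
<meta name="description"
      content="Compare handmade leather wallets: full-grain leather, free US shipping, 30-day returns.">
```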

HTML headings

HTML headings such as H1, H2, and H3 organize on-page content. This article includes HTML headings: “Title of page,” “Meta tags,” “HTML headings,” et cetera. The headings, while optional, help readers digest the content and assist crawlers in identifying relevant info for searchers’ queries. Using keywords in HTML headings serves both purposes.
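The hierarchy described above can be sketched using this article’s own section titles:

```html
<!-- One H1 for the page topic, H2s for the sections beneath it -->
<h1>Quick SEO: 6 Key Elements, 3 Free Tools</h1>
<h2>Title of page</h2>
<h2>Meta tags</h2>
<h2>HTML headings</h2>
```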

Links, internal and external

Internal links signal to search engines the importance of a page: the more internal links pointing to a page, the higher its significance.

Internal links also help search engines understand the linked page. The link’s anchor text is the strongest signal, although its surrounding words also send relevancy signals, per Google. Improving internal linking structure is often a quick way to streamline crawlability and increase organic search visibility.

External links to authoritative sites add credibility to the page, especially in “Your Money Your Life” niches.
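Both link types look like this in markup (the URLs and anchor text are hypothetical; note how the anchor text describes the target page):

```html
<!-- Internal link: descriptive anchor text tells search engines what the target page is about -->
<a href="/leather-care-guide">how to care for a leather wallet</a>

<!-- External link to an authoritative site adds credibility -->
<a href="https://schema.org/Product">Schema.org's Product reference</a>
```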

Images

Images enhance visitor engagement, a Google ranking factor, and improve visibility in image search results.

Image alt tags are essential for both visually impaired visitors and search engines. Compressing images enhances page speed and thus Core Web Vitals, another ranking factor.
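
Missing alt text is easy to audit programmatically. A minimal Python sketch that flags images without an alt attribute (the markup is illustrative):

```python
from html.parser import HTMLParser

class AltAudit(HTMLParser):
    """Flags <img> tags whose alt attribute is missing or empty."""
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            d = dict(attrs)
            if not d.get("alt"):
                self.missing.append(d.get("src", "(no src)"))

body = '<img src="widget.jpg" alt="Blue widget"><img src="hero.png">'
aa = AltAudit()
aa.feed(body)
print(aa.missing)  # ['hero.png']
```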

Structured data

Structured data helps search engines and AI platforms understand a site, its pages, and the owner. Schema.org’s “vocabulary” of structured data is the most popular, and both Google and Bing recognize it, along with other markup methods.
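
As an illustration, here is what a minimal Schema.org block looks like when embedded as JSON-LD, sketched in Python (the organization details are invented):

```python
import json

# A minimal JSON-LD payload using Schema.org vocabulary (names are illustrative).
org = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Acme Widgets",
    "url": "https://example.com",
    "sameAs": ["https://www.linkedin.com/company/acme-widgets"],
}

# Embedded in a page as a script tag:
snippet = '<script type="application/ld+json">' + json.dumps(org) + "</script>"

# Round-trip check: the payload must stay valid JSON.
payload = snippet.split(">", 1)[1].rsplit("<", 1)[0]
print(json.loads(payload)["name"])  # Acme Widgets
```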

Browser Extensions

You can quickly see all of these elements using one or more of the following browser extensions:

| Feature | Devaka Tools | Site Inspector | SEO Meta in 1 Click |
| --- | --- | --- | --- |
| Browser | Many | Chrome | Chrome |
| Title of page | Yes | Yes | Yes |
| Meta tags | Yes | Yes | Yes |
| HTML headings | Yes | Yes | Yes |
| Links: internal, external | Yes | Yes | Yes (plus highlighting nofollow links) |
| Images | Yes | Yes | Yes |
| Structured data | Yes | Yes | Yes |
| Excel export | Yes | Yes | Yes |
| Notes | Can highlight keywords, show image alt text, reveal hidden text. Provides links to tools such as Schema.org validator and Search Console. | Can keep sidebar open to automatically load data. Provides a page summary with word count, headings, images. | Can export a page’s copy. Includes a page preview and Schema.org validator. |

Google Gemini Adds Audio File Uploads After Being Top User Request via @sejournal, @MattGSouthern

Google’s Gemini app now accepts audio file uploads, answering what the company acknowledges was its most requested feature.

For marketers and content teams, it means you can push recordings straight into Gemini for analysis, summaries, and repurposed content without jumping between tools.

Josh Woodward, VP at Google Labs and Gemini, announced the change on X:

“You can now upload any file to @GeminiApp. Including the #1 request: audio files are now supported!”

What’s New

Gemini can now ingest audio files in the same multi-file workflow you already use for documents and images.

You can attach up to 10 files per prompt, and files inside ZIP archives are supported, which helps when you want to upload raw tracks or several interview takes together.

Limits

  • Free plan: total audio length up to 10 minutes per prompt; up to 5 prompts per day.
  • AI Pro and AI Ultra: total audio length up to 3 hours per prompt.
  • Per prompt: up to 10 files across supported formats. Details are listed in Google’s Help Center.
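
Those caps lend themselves to a quick pre-flight check before batching uploads. A hedged Python sketch with the limits above hard-coded (they may change, so confirm against Google's Help Center):

```python
# Per-prompt caps as reported at the time of writing; verify before relying on them.
LIMITS = {
    "free": {"max_minutes": 10, "max_files": 10},
    "pro": {"max_minutes": 180, "max_files": 10},
    "ultra": {"max_minutes": 180, "max_files": 10},
}

def fits_in_prompt(plan, audio_minutes):
    """True if the listed audio durations fit one prompt on the given plan."""
    caps = LIMITS[plan]
    return (sum(audio_minutes) <= caps["max_minutes"]
            and len(audio_minutes) <= caps["max_files"])

print(fits_in_prompt("free", [4, 7]))   # False: 11 minutes over the 10-minute cap
print(fits_in_prompt("pro", [45, 60]))  # True: 105 minutes within 3 hours
```

Note the 10-file cap applies across all supported formats in a prompt, not just audio.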

Why This Matters

If your team works with podcasts, webinars, interviews, or customer calls, this closes a gap that often forced a separate transcription step.

You can upload a full interview and turn it into show notes, pull quotes, or a working draft in one place. It also helps meeting-heavy teams: a recorded strategy session can become action items and a brief without exporting to another tool first.

For agencies and networks, batching multiple episodes or takes into one prompt reduces friction in weekly workflows.

The practical win is fewer handoffs: source audio goes in, and the outlines, summaries, and excerpts you need come out, inside the same system you already use for text prompting.

Quick Tip

Upload your audio together with any supporting context in the same prompt. That gives Gemini the grounding it needs to produce cleaner summaries and more accurate excerpts.

If you’re testing on the free tier, plan around the 10-minute ceiling; longer content is best on AI Pro or Ultra.

Looking Ahead

Google’s limits pages do change, so keep an eye on total length, file-count rules, and any new guardrails that affect longer recordings or larger teams. Also watch for deeper Workspace tie-ins (for example, easier handoffs from Meet recordings) that would streamline getting audio into Gemini without manual uploads.


Featured Image: Photo Agency/Shutterstock

Google Drops Search Console Reporting For Six Structured Data Types via @sejournal, @MattGSouthern

Google will stop reporting six deprecated structured data types in Search Console and remove them from the Rich Results Test and appearance filters.

  • Search Console and Rich Results Test stop reporting on deprecated structured data types.
  • Rankings are unaffected; you can keep the markup, but it will no longer trigger rich results.
  • The API continues to return the data through December.

Structured Data’s Role In AI And AI Search Visibility via @sejournal, @marthavanberkel

The way people find and consume information has shifted. We, as marketers, must think about visibility across AI platforms and Google.

The challenge is that we don’t have the same ability to control and measure success as we do with Google and Microsoft, so it feels like we’re flying blind.

Earlier this year, Google, Microsoft, and ChatGPT each commented on how structured data can help LLMs better understand your digital content.

Structured data can give AI tools the context they need to understand content through entities and relationships. In this new era of search, you could say that context, not content, is king.

Schema Markup Helps To Build A Data Layer

By translating your content into Schema.org and defining the relationships between pages and entities, you are building a data layer for AI. This schema markup data layer, or what I like to call your “content knowledge graph,” tells machines what your brand is, what it offers, and how it should be understood.

This data layer is how your content becomes accessible and understood across a growing range of AI capabilities, including:

  • AI Overviews
  • Chatbots and voice assistants
  • Internal AI systems

Through grounding, structured data can contribute to visibility and discovery across Google, ChatGPT, Bing, and other AI platforms. It also prepares your web data to accelerate your internal AI initiatives.

The same week that Google and Microsoft announced they were using structured data for their generative AI experiences, Google and OpenAI announced their support of the Model Context Protocol.

What Is Model Context Protocol?

In November 2024, Anthropic introduced Model Context Protocol (MCP), “an open protocol that standardizes how applications provide context to LLMs” and was subsequently adopted by OpenAI and Google DeepMind.

You can think of MCP as the USB-C connector for AI applications and agents or an API for AI. “MCP provides a standardized way to connect AI models to different data sources and tools.”

Since we are now thinking of structured data as a strategic data layer, the problem Google and OpenAI need to solve is how to scale their AI capabilities efficiently and cost-effectively. Combining the structured data on your website with MCP could enable both accurate inferencing and the ability to scale.

Structured Data Defines Entities And Relationships

LLMs generate answers based on the content they are trained on or connected to. While they primarily learn from unstructured text, their outputs can be strengthened when grounded in clearly defined entities and relationships, for example, via structured data or knowledge graphs.

Structured data can be used as an enhancer that allows enterprises to define key entities and their relationships.

When implemented using Schema.org vocabulary, structured data:

  • Defines the entities on a page: people, products, services, locations, and more.
  • Establishes relationships between those entities.
  • Can reduce hallucinations when LLMs are grounded in structured data through retrieval systems or knowledge graphs.

When schema markup is deployed at scale, it builds a content knowledge graph, a structured data layer that connects your brand’s entities across your site and beyond. 
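
To make the idea concrete, here is a toy two-entity knowledge graph in JSON-LD, sketched in Python: the Product points back at the Organization by @id, which is exactly the kind of relationship a machine can follow (all names and URLs are invented):

```python
import json

# A minimal "content knowledge graph": two entities linked by a
# Schema.org relationship property (all identifiers are illustrative).
graph = {
    "@context": "https://schema.org",
    "@graph": [
        {
            "@id": "https://example.com/#org",
            "@type": "Organization",
            "name": "Acme Widgets",
        },
        {
            "@id": "https://example.com/products/blue-widget#product",
            "@type": "Product",
            "name": "Blue Widget",
            "manufacturer": {"@id": "https://example.com/#org"},
        },
    ],
}

# Entities reference each other by @id, so an edge can be resolved mechanically.
ids = {node["@id"] for node in graph["@graph"]}
ref = graph["@graph"][1]["manufacturer"]["@id"]
assert ref in ids  # the Product's manufacturer resolves to a defined entity
print(ref)  # https://example.com/#org
```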

A recent study by BrightEdge demonstrated that schema markup improved brand presence and perception in Google’s AI Overviews, noting higher citation rates on pages with robust schema markup.

Structured Data As An Enterprise AI Strategy

Enterprises can shift their view of structured data beyond the basic requirements for rich result eligibility to managing a content knowledge graph.

According to Gartner’s 2024 AI Mandates for the Enterprise Survey, participants cite data availability and quality as the top barrier to successful AI implementation.

By implementing structured data and developing a robust content knowledge graph, you can contribute to both external search performance and internal AI enablement.

A scalable schema markup strategy requires:

  • Defined relationships between content and entities: Schema markup properties connect all content and entities across the brand. All page content is connected in context.
  • Entity Governance: Shared definitions and taxonomies across marketing, SEO, content, and product teams.
  • Content Readiness: Ensuring your content is comprehensive, relevant, representative of the topics you want to be known for, and connected to your content knowledge graph.
  • Technical Capability: Cross-functional tools and processes to manage schema markup at scale and ensure accuracy across thousands of pages.

For enterprise teams, structured data is a cross-functional capability that prepares web data to be consumed by internal AI applications.

What To Do Next To Prepare Your Content For AI

Enterprise teams can align their content strategies with AI requirements. Here’s how to get started:

1. Audit your current structured data to identify gaps in coverage and whether schema markup is defining relationships within your website. This context is critical for AI inferencing.

2. Map your brand’s key entities, such as products, services, people, and core topics, and ensure they are clearly defined and consistently marked up with schema markup across your content. This includes identifying the main page that defines an entity, known as the entity home.

3. Build or expand your content knowledge graph by connecting related entities and establishing relationships that AI systems can understand.

4. Integrate structured data into AI budgeting and planning alongside other AI investments, whether the content is intended for AI Overviews, chatbots, or internal AI initiatives.

5. Operationalize schema markup management by developing repeatable workflows for creating, reviewing, and updating schema markup at scale.
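
Step 1 can begin with a rough scan of rendered HTML for JSON-LD blocks. A hedged Python sketch that reports which Schema.org types a page declares (a regex is adequate for a first-pass audit, not for production parsing):

```python
import json
import re

# Matches the JSON-LD script blocks a page embeds.
JSONLD = re.compile(
    r'<script[^>]*type="application/ld\+json"[^>]*>(.*?)</script>',
    re.DOTALL | re.IGNORECASE,
)

def declared_types(html):
    """Return the set of @type values found in a page's JSON-LD blocks."""
    types = set()
    for block in JSONLD.findall(html):
        try:
            data = json.loads(block)
        except ValueError:
            continue  # malformed markup is itself an audit finding
        nodes = data.get("@graph", [data]) if isinstance(data, dict) else data
        for node in nodes:
            if isinstance(node, dict) and "@type" in node:
                types.add(node["@type"])
    return types

page = ('<html><script type="application/ld+json">'
        '{"@context":"https://schema.org","@type":"Article"}'
        '</script></html>')
print(declared_types(page))  # {'Article'}
```

Running this across a crawl quickly shows which templates carry markup and which pages are gaps.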

By taking these steps, enterprises can ensure that their data is AI-ready, inside and outside the enterprise.

Structured Data Provides A Machine-Readable Layer

Structured data doesn’t assure placement in AI Overviews or directly control what large language models say about your brand. LLMs are still primarily trained on unstructured text, and AI systems weigh many signals when generating answers.

What structured data does provide is a strategic, machine-readable layer. When used to build a knowledge graph, schema markup defines entities and the relationships between them, creating a reliable framework that AI systems can draw from. This reduces ambiguity, strengthens attribution, and makes it easier to ground outputs in fact-based content when structured data is part of a connected retrieval or grounding system.

By investing in semantic, large-scale schema markup and aligning it across teams, organizations position themselves to be as discoverable in AI experiences as possible.

Featured Image: Koto Amatsukami/Shutterstock

New: From longform to key takeaways, in seconds. Meet Yoast AI Summarize

Today, we’re excited to welcome Yoast AI Summarize to our growing family of AI features. Just like our other AI tools, this new feature is designed to make your publishing process faster and easier by putting powerful, practical AI right where you work, in the WordPress Block Editor. 

Yoast AI Summarize is perfect for bloggers, content teams, agencies, and publishers who want to give readers instant value while also making sure their posts clearly communicate the intended message. 

What does Yoast AI Summarize do? 

You’ve finished drafting your post, great! But before you hit “Publish,” wouldn’t it be helpful to instantly see the core points your content is actually conveying? That’s exactly what Yoast AI Summarize does. 

With one click, you can insert a Key Takeaways block into your content. Yoast AI Summarize scans your post’s main body and creates a short, bullet-point summary, giving your readers a quick, scannable snapshot, and giving you a chance to check if your post is truly saying what you want it to. 

How you can access the new feature 

Yoast AI Summarize is automatically available to all Yoast SEO Premium customers. Just make sure you’ve updated to the latest version and granted consent to use AI. 

Once enabled, simply: 

  1. Open your post in the WordPress Block Editor
  2. Add the new block from the “Yoast AI Blocks” section 
  3. Click to generate summary, and watch your Key Takeaways section appear in seconds. 

Where you can use Yoast AI Summarize 

Right now, Yoast AI Summarize works in the WordPress Block Editor on posts and pages. The block is fully editable: you can change the title, rewrite bullet points, or move it anywhere in your content flow. 

Pricing and usage 

There are no hidden costs for Yoast AI Summarize; it’s included in Yoast SEO Premium. Like our other AI features, it uses our spark counter to track usage. 

  • A spark is a single click on an AI feature. 
  • Generating one summary = one spark. 
  • Your spark counter resets at the start of each month. 
  • There’s currently no hard limit, so you can experiment freely. 

Limitations 

Yoast AI Summarize is currently in beta. That means you may notice a few restrictions: 

  • Only available in the WordPress Block Editor
  • Summaries are excluded from Yoast SEO and Readability Analysis to protect your scores. 
  • Currently works only on published or drafted content within supported blocks. 
  • For very long posts, it may take a few seconds for the summary to generate. 

Try out Yoast AI Summarize today 

Upgrade to Yoast SEO Premium to unlock this and all our AI features, including the award-nominated Yoast AI Generate and the powerful Yoast AI Optimize. With Yoast AI Summarize, you can work faster, keep your content aligned with your intent, and give your readers instant value with clear, scannable takeaways. 

Update to the latest version and try it out today!