The Texas-based startup Quidnet Energy just completed a test showing it can store energy for up to six months by pumping water underground.
Using water to store electricity is hardly a new concept—pumped hydropower storage has been around for over a century. But the company hopes its twist on the technology could help bring cheap, long-duration energy storage to new places.
In traditional pumped hydro storage facilities, electric pumps move water uphill, into a natural or manmade body of water. Then, when electricity is needed, that water is released and flows downhill past a turbine, generating electricity. Quidnet’s approach instead pumps water down into impermeable rock formations and keeps it under pressure so it flows up when released. “It’s like pumped hydro, upside down,” says CEO Joe Zhou.
Quidnet started a six-month test of its technology in late 2024, pressurizing the system. In June, the company was able to discharge 35 megawatt-hours of energy from the well. There was virtually no self-discharge, meaning no energy loss, Zhou says.
Inexpensive forms of energy storage that can store electricity for weeks or months could help inconsistent electricity sources like wind and solar go further for the grid. And Quidnet’s approach, which uses commercially available equipment, could be deployed quickly and qualify for federal tax credits to help make it even cheaper.
However, there’s still a big milestone ahead: turning the pressurized water back into electricity. The company is currently building a facility with the turbines and support equipment to do that—all the components are available to purchase from established companies. “We don’t need to invent new things based on what we’ve already developed today,” Zhou says. “We can now start just deploying at very, very substantial scales.”
That process will come with energy losses. Energy storage systems are typically measured by their round-trip efficiency: how much of the electricity that’s put into the system is returned at the end as electricity. Modeling suggests that Quidnet’s technology could reach a maximum efficiency of about 65%, Zhou says, though some design choices made to optimize for economics will likely cause the system to land at roughly 50%.
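The round-trip arithmetic is simple to make concrete. Below is a minimal sketch in Python; only the 35 MWh discharge and the ~50% and ~65% efficiency estimates come from the article, and the 70 MWh input figure is hypothetical, chosen to illustrate the ratio:

```python
def round_trip_efficiency(energy_in_mwh: float, energy_out_mwh: float) -> float:
    """Fraction of the electricity put into storage that comes back out."""
    return energy_out_mwh / energy_in_mwh

# At the ~50% efficiency the article cites, returning the 35 MWh
# discharged in Quidnet's test would take about 70 MWh of input
# electricity (70 MWh is a hypothetical figure, for illustration).
print(round_trip_efficiency(70.0, 35.0))   # 0.5

# The modeled ceiling of ~65% implies roughly 54 MWh in for 35 MWh out.
print(round(35.0 / 0.65, 1))               # 53.8
```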
That’s less efficient than lithium-ion batteries, but long-duration systems, if they’re cheap enough, can operate at low efficiencies and still be useful for the grid, says Paul Denholm, a senior research fellow at the National Renewable Energy Laboratory.
“It’s got to be cost-competitive; it all comes down to that,” Denholm says.
Lithium-ion batteries, the fastest-growing technology in energy storage, are the target that new forms of energy storage, like Quidnet’s, must chase. Lithium-ion batteries are about 90% cheaper today than they were 15 years ago. They’ve become a price-competitive alternative to building new natural-gas plants, Denholm says.
When it comes to competing with batteries, one potential differentiator for Quidnet could be government subsidies. While the Trump administration has clawed back funding for clean energy technologies, there’s still an energy storage tax credit, though recently passed legislation added new supply chain restrictions.
Starting in 2026, new energy storage facilities hoping to qualify for tax credits will need to prove that at least 55% of the value of a project’s materials is not from foreign entities of concern. That rules out sourcing batteries from China, which dominates battery production today. Quidnet has a “high level of domestic content” and expects to qualify for tax credits under the new rules, Zhou says.
The facility Quidnet is building is a project with utility partner CPS Energy, and it should come online in early 2026.
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
This startup wants to use the Earth as a massive battery
Texas-based startup Quidnet Energy just completed a test showing it can store energy for up to six months by pumping water underground.
Using water to store electricity is hardly a new concept—pumped hydropower storage has been around for over a century. But the company hopes its twist on the technology could help bring cheap, long-duration energy storage to new places. Read the full story.
—Casey Crownhart
What you may have missed about Trump’s AI Action Plan
The executive orders and announcements coming from the White House since Donald Trump returned to office have painted an ambitious vision for America’s AI future, but the details have been sparse.
The White House’s AI Action Plan, released last week, is meant to fix that. Trump wants to boost the buildout of data centers by slashing environmental rules; withhold funding from states that pass “burdensome AI regulations”; and contract only with AI companies whose models are “free from top-down ideological bias.”
This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.
The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 Democrats aren’t happy about Trump’s China chip U-turn They’re worried about the security implications of approving Nvidia chip exports. (WP $) + They claim the Trump administration is using export controls as a bargaining chip. (The Hill) + Meanwhile, both parties are planning new bills targeting China. (Reuters)
2 US tariffs are at their highest level since before WWII Trump’s tariff wall appears likely to trigger a global reordering of trade. (FT $) + But who picks up the bill? (The Guardian) + Sweeping tariffs could threaten the US manufacturing rebound. (MIT Technology Review)
3 Utility companies want Big Tech to pay more for their data centers Otherwise, rates may end up rising for regular customers. (WSJ $) + The data center boom in the desert. (MIT Technology Review)
4 Citizen science is on the rise across the US Platform iNaturalist is playing a key role in helping to identify new species. (NYT $) + How nonprofits and academia are stepping up to salvage US climate programs. (MIT Technology Review)
5 Anthropic is cracking down on Claude power users Some of its customers are running its AI coding tool 24/7. (TechCrunch) + That’s seriously bad news for the environment. (Engadget) + We did the math on AI’s energy footprint. Here’s the story you haven’t heard. (MIT Technology Review)
6 MAHA might resurrect psychedelic therapy Last year, the FDA rejected MDMA therapy. Now, it might get thrown a lifeline. (Wired $) + People are using AI to ‘sit’ with them while they trip on psychedelics. (MIT Technology Review)
7 Waymo is launching its robotaxi service in Dallas In a new partnership with car rental firm Avis, not Uber. (Reuters) + It’s expanding steadily, unlike its rival Tesla. (Forbes $)
8 How a promising young coder wound up at DOGE Luke Farritor has assessed, slashed, and dismantled at least 10 departments. (Bloomberg $) + The foundations of America’s prosperity are being dismantled. (MIT Technology Review)
9 This Californian startup’s robot kills fish the Japanese way The method is considered the most humane way to kill them. (Semafor)
10 AI is making online shopping hyper-personalized By serving up results for searches like “revenge dress to wear to a party in Sicily.” (CNN)
Quote of the day
“Now I’ll click the ‘Verify you are human’ checkbox…this step is necessary to prove I’m not a bot.”
—OpenAI’s new ChatGPT Agent explains how it passes a common internet security checkpoint designed to catch bots just like it, Ars Technica reports.
One more thing
How gamification took over the world
It’s a thought that occurs to every video-game player at some point: What if the weird, hyper-focused state I enter when playing in virtual worlds could somehow be applied to the real one?
Often pondered during especially challenging or tedious tasks in meatspace (writing essays, say, or doing your taxes), it’s an eminently reasonable question to ask. Life, after all, is hard. And while video games are too, there’s something almost magical about the way they can promote sustained bouts of superhuman concentration and resolve.
For some, this phenomenon leads to an interest in flow states and immersion. For others, it’s simply a reason to play more games. For a handful of consultants, startup gurus, and game designers in the late 2000s, it became the key to unlocking our true human potential. But instead of liberating us, gamification turned out to be just another tool for coercion, distraction, and control. Read the full story.
—Bryan Gardiner
We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)
+ USPS is taking votes from the public to bring back their favorite stamps (thanks Amy!)
+ Here’s how to make your morning toast that bit more interesting.
+ The long-awaited Madonna biopic is still happening, apparently.
+ Bad news for matcha fans—there’s a global shortage.
OpenAI is launching Study Mode, a version of ChatGPT for college students that it promises will act less like a lookup tool and more like a friendly, always-available tutor. It’s part of a wider push by the company to get AI more embedded into classrooms when the new academic year starts in September.
A demonstration for reporters from OpenAI showed what happens when a student asks Study Mode about an academic subject like game theory. The chatbot begins by asking what the student wants to know and then attempts to build an exchange, where the pair work methodically toward the answer together. OpenAI says the tool was built after consulting with pedagogy experts from over 40 institutions.
A handful of college students who were part of OpenAI’s testing cohort—hailing from Princeton, Wharton, and the University of Minnesota—shared positive reviews of Study Mode, saying it did a good job of checking their understanding and adapting to their pace.
The learning approaches that OpenAI has programmed into Study Mode, which are based partially on Socratic methods, appear sound, says Christopher Harris, an educator in New York who has created a curriculum aimed at AI literacy. They might grant educators more confidence about allowing, or even encouraging, their students to use AI. “Professors will see this as working with them in support of learning as opposed to just being a way for students to cheat on assignments,” he says.
But there’s a more ambitious vision behind Study Mode. As demonstrated in OpenAI’s recent partnership with leading teachers’ unions, the company is currently trying to rebrand chatbots as tools for personalized learning rather than cheating. Part of this promise is that AI will act like the expensive human tutors that currently only the most well-off students’ families can typically afford.
“We can begin to close the gap between those with access to learning resources and high-quality education and those who have been historically left behind,” says OpenAI’s head of education, Leah Belsky.
But painting Study Mode as an education equalizer obscures one glaring problem. Under the hood, it is not a tool trained exclusively on academic textbooks and other approved materials—it’s more like the same old ChatGPT, tuned with a new conversation filter that simply governs how it responds to students, encouraging fewer answers and more explanations.
This AI tutor, therefore, more resembles what you’d get if you hired a human tutor who has read every required textbook, but also every flawed explanation of the subject ever posted to Reddit, Tumblr, and the farthest reaches of the web. And because of the way AI works, you can’t expect it to distinguish right information from wrong.
Professors encouraging their students to use it run the risk of it teaching them to approach problems in the wrong way—or worse, being taught material that is fabricated or entirely false.
Given this limitation, I asked OpenAI if Study Mode is limited to particular subjects. The company said no—students will be able to use it to discuss anything they’d normally talk to ChatGPT about.
It’s true that access to human tutors—which for certain subjects can cost upward of $200 an hour—is typically for the elite few. The notion that AI models can spread the benefits of tutoring to the masses holds an allure. Indeed, it is backed up by at least some early research that shows AI models can adapt to individual learning styles and backgrounds.
But this improvement comes with a hidden cost. Tools like Study Mode, at least for now, take a shortcut by using large language models’ humanlike conversational style without fixing their inherent flaws.
OpenAI also acknowledges that this tool won’t prevent a student who’s frustrated and wants an answer from simply going back to normal ChatGPT. “If someone wants to subvert learning, and sort of get answers and take the easier route, that is possible,” Belsky says.
However, one thing going for Study Mode, the students say, is that it’s simply more fun to study with a chatbot that’s always encouraging you along than to stare at a textbook on Bayes’ theorem for the hundredth time. “It’s like the reward signal of like, oh, wait, I can learn this small thing,” says Maggie Wang, a student from Princeton who tested it. The tool is free for now, but Praja Tickoo, a student from Wharton, says it wouldn’t have to be for him to use it. “I think it’s absolutely something I would be willing to pay for,” he says.
A baby boy born over the weekend holds the new record for the “oldest baby.” Thaddeus Daniel Pierce, who arrived on July 26, developed from an embryo that had been in storage for 30 and a half years.
“We had a rough birth but we are both doing well now,” says Lindsey Pierce, his mother. “He is so chill. We are in awe that we have this precious baby!”
Lindsey and her husband, Tim Pierce, who live in London, Ohio, “adopted” the embryo from a woman who had it created in 1994. She says her family and church family think “it’s like something from a sci-fi movie.”
“The baby has a 30-year-old sister,” she adds. Tim was a toddler when the embryos were first created.
“It’s been pretty surreal,” says Linda Archerd, 62, who donated the embryo. “It’s hard to even believe.”
Three little hopes
The story starts back in the early 1990s. Archerd had been trying—and failing—to get pregnant for six years. She and her husband decided to try IVF, a fairly new technology at the time. “People were [unfamiliar] with it,” says Archerd. “A lot of people were like, what are you doing?”
They did it anyway, and in May 1994, they managed to create four embryos. One of them was transferred to Linda’s uterus. It resulted in a healthy baby girl. “I was so blessed to have a baby,” Archerd says. The remaining three embryos were cryopreserved and kept in a storage tank.
That was 31 years ago. The healthy baby girl is now a 30-year-old woman who has her own 10-year-old daughter. But the other three embryos remained frozen in time.
Archerd originally planned to use the embryos herself. “I always wanted another baby desperately,” she says. “I called them my three little hopes.” Her then husband felt differently, she says. Archerd went on to divorce him, but she won custody of the embryos and kept them in storage, still hopeful she might use them one day, perhaps with another partner.
That meant paying annual storage fees, which increased over time and ended up costing Archerd around a thousand dollars a year, she says. To her, it was worth it. “I always thought it was the right thing to do,” she says.
Things changed when she started going through menopause, she says. She considered her options. She didn’t want to discard the embryos or donate them for research. And she didn’t want to donate them to another family anonymously—she wanted to meet the parents and any resulting babies. “It’s my DNA; it came from me … and [it’s] my daughter’s sibling,” she says.
Then she found out about embryo “adoption.” This is a type of embryo donation in which donors and recipients each have a say in whom they “place” their embryos with or “adopt” them from. It is overseen by agencies—usually explicitly religious ones—that believe an embryo is morally equivalent to a born human. Archerd is Christian.
There are several agencies that offer these adoption services in the US, but not all of them accept embryos that have been stored for a very long time. That’s partly because those embryos will have been frozen and stored in unfamiliar, old-fashioned ways, and partly because old embryos are thought to be less likely to survive thawing and transfer and successfully develop into a baby.
“So many places wouldn’t even take my information,” says Archerd. Then she came across the Snowflakes program run by the Nightlight Christian Adoptions agency. The agency was willing to accept her embryos, but it needed Archerd’s medical records from the time the embryos had been created, as well as the embryos’ lab records.
So Archerd called the fertility doctor who had treated her decades before. “I still remembered his phone number by heart,” she says. That doctor, now in his 70s, is still practicing at a clinic in Oregon. He dug Archerd’s records out from his basement, she says. “Some of [them] were handwritten,” she adds. Her embryos entered Nightlight’s “matching pool” in 2022.
Making a match
“Our matching process is really driven by the preferences of the placing family,” says Beth Button, executive director of the Snowflakes program. Archerd’s preference was for a married Caucasian, Christian couple living in the US. “I didn’t want them to go out of the country,” says Archerd. “And being Christian is very important to me, because I am.”
It took a while to find a match. Most of the “adopting parents” signed up for the Snowflakes program were already registered at fertility clinics that wouldn’t have accepted the embryos, says Button. “I would say that over 90% of clinics in the US would not have accepted these embryos,” she says.
Lindsey and Tim Pierce at Rejoice Fertility.
COURTESY LINDSEY PIERCE
Archerd’s embryos were assigned to the agency’s Open Hearts program for embryos that are “hard to place,” along with others that have been in storage for a long time or are otherwise thought to be less likely to result in a healthy birth.
Lindsey and Tim Pierce had also signed up for the Open Hearts program. The couple, aged 35 and 34, respectively, had been trying for a baby for seven years and had seen multiple doctors.
Lindsey was researching child adoption when she came across the Snowflakes program.
When the couple were considering their criteria for embryos they might receive, they decided that they’d be open to any. “We checkmarked anything and everything,” says Tim. That’s how they ended up being matched with Archerd’s embryos. “We thought it was wild,” says Lindsey. “We didn’t know they froze embryos that long ago.”
Lindsey and Tim had registered with Rejoice Fertility, an IVF clinic in Knoxville, Tennessee, run by John Gordon, a reproductive endocrinologist who prides himself on his efforts to reduce the number of embryos in storage. The huge number of embryos left in storage tanks was weighing on his conscience, he says, so around six years ago, he set up Rejoice Fertility with the aim of doing things differently.
“Now we’re here in the belt buckle of the Bible Belt,” says Gordon, who is Reformed Presbyterian. “I’ve changed my mode of practice.” IVF treatments performed at the clinic are designed to create as few excess embryos as possible. The clinic works with multiple embryo adoption agencies and will accept any embryo, no matter how long it has been in storage.
COURTESY LINDA ARCHERD
It was his clinic that treated the parents who previously held the record for the longest-stored embryo—in 2022, Rachel and Philip Ridgeway had twins from embryos created more than 30 years earlier. “They’re such a lovely couple,” says Gordon. When we spoke, he was making plans to meet the family for breakfast. The twins are “growing like weeds,” he says with a laugh.
“We have certain guiding principles, and they’re coming from our faith,” says Gordon, although he adds that he sees patients who hold alternative views. One of those principles is that “every embryo deserves a chance at life and that the only embryo that cannot result in a healthy baby is the embryo not given the opportunity to be transferred into a patient.”
That’s why his team will endeavor to transfer any embryo they receive, no matter the age or conditions. That can be challenging, especially when the embryos have been frozen or stored in unusual or outdated ways. “It’s scary for people who don’t know how to do it,” says Sarah Atkinson, lab supervisor and head embryologist at Rejoice Fertility. “You don’t want to kill someone’s embryos if you don’t know what you’re doing.”
Cumbersome and explosive
In the early days of IVF, embryos earmarked for storage were slow-frozen. This technique involves gradually lowering the temperature of the embryos. But because slow freezing can cause harmful ice crystals to form, clinics switched in the 2000s to a technique called vitrification, in which the embryos are placed in thin plastic tubes called straws and lowered into tanks of liquid nitrogen. This rapidly freezes the embryos and converts them into a glass-like state.
The embryos can later be thawed by removing them from the tanks and rapidly—within two seconds—plunging them into warm “thaw media,” says Atkinson. Thawing slow-frozen embryos is more complicated. And the exact thawing method required varies, depending on how the embryos were preserved and what they were stored in. Some of the devices need to be opened while they are inside the storage tank, which can involve using forceps, diamond-bladed knives, and other tools in the liquid nitrogen, says Atkinson.
Sarah Atkinson, lab supervisor and head embryologist at Rejoice Fertility, directly injects sperm into two eggs to fertilize them.
COURTESY OF SARAH ATKINSON AT REJOICE FERTILITY.
Recently, she was tasked with retrieving embryos that had been stored inside a glass vial. The vial was made from blown glass and had been heat-sealed with the embryo inside. Atkinson had to use her diamond-bladed knife to snap open the seal inside the nitrogen tank. It was fiddly work, and when the device snapped, a small shard of glass flew out and hit Atkinson’s face. “Hit me on the cheek, cut my cheek, blood running down my face, and I’m like, Oh shit,” she says. Luckily, she had her safety goggles on. And the embryos survived, she adds.
The two embryos that were transferred to Lindsey Pierce.
Atkinson has a folder in her office with notes she’s collected on various devices over the years. She flicks through it over a video call and points to the notes she made about the glass vial. “Might explode; wear face shield and eye protection,” she reads. A few pages later, she points to another embryo-storage device. “You have to thaw this one in your fingers,” she tells me. “I don’t like it.”
The record-breaking embryos had been slow-frozen and stored in a plastic vial, says Atkinson. Thawing them was a cumbersome process. But all three embryos survived it.
The Pierces had to travel from their home in Ohio to the clinic in Tennessee five times over a two-week period. “It was like a five-hour drive,” says Lindsey. One of the three embryos stopped growing. The other two were transferred to Lindsey’s uterus on November 14, she says. And one developed into a fetus.
Now that the baby has arrived, Archerd is keen to meet him. “The first thing that I noticed when Lindsey sent me his pictures is how much he looks like my daughter when she was a baby,” she says. “I pulled out my baby book and compared them side by side, and there is no doubt that they are siblings.”
She doesn’t yet have plans to meet the baby, but doing so would be “a dream come true,” she says. “I wish that they didn’t live so far away from me … He is perfect!”
“We didn’t go into it thinking we would break any records,” says Lindsey. “We just wanted to have a baby.”
Brad Feld is a veteran tech entrepreneur, early-stage investor, and co-founder of Techstars, a venture fund and startup accelerator. He’s also a prolific author. I asked him how writing helps shape his ideas and what prompted his latest book, “Give First,” which focuses on the value of mentorship.
Brad Feld: The idea has been rattling around in my head for over a decade. Back in 2012, when I was writing “Startup Communities,” I realized one of the secrets to the success of Boulder, Colorado, was a philosophy I called “give before you get.” It’s a simple idea: be willing to help someone without a clear expectation of what’s in it for you.
It wasn’t altruism; it was a more effective, long-term way to build a healthy system. Around the same time, this ethos was becoming deeply baked into Techstars, which we described as a “mentor-driven accelerator.”
Then, in 2014, my friends at Techstars, led by David Cohen and Gregg Cochran, started using the hashtag #GiveFirst on Twitter. It was cleaner, stickier, and captured the essence of the idea. That’s when I knew it deserved its own book.
Gazis: Can you give us examples of how you’ve benefited from a give-first mindset?
Brad Feld
Feld: My go-to example is the origin story of Techstars itself. I used to hold “random days,” where anyone could book a 15-minute meeting with me. It was an attempt to be open and accessible without destroying my calendar.
That’s how I met David Cohen. In 2006, he came in with a brochure for a mentorship and investment program for startups.
I loved the idea. Ten minutes into our 15-minute slot, I’d already committed to invest, and I stepped out to call my friend Jared Polis, then an entrepreneur and now the governor of Colorado, who agreed to join us on the spot. That unplanned gift of time and capital turned into Techstars, which has since funded over 4,000 companies. There was no way to predict that return.
Another is the evolution of Pledge 1%. The idea started when Ryan Martens of Rally Software, a web development platform, and I co-founded the Entrepreneurs Foundation of Colorado in 2007, based on the Salesforce 1% model, wherein the founders committed 1% of the company’s equity, technology, and employees’ time to a better world.
Pledge 1% has generated nearly $3 billion for communities globally and spawned a powerful network of founders helping each other. The financial return was for the community, but the network and relationship returns have been immeasurable.
Gazis: What’s your goal for “Give First,” the book?
Feld: I hope readers break out of a transactional mindset. We live in a “what’s in it for me?” world. “Give First” is a different philosophy. It’s not about being a martyr or working for free. It’s about putting energy into a relationship or a system without defining the parameters of the return up front. You still expect to get something back, but you don’t know when, from whom, or in what form.
The result is a positive-sum, long-term game. The ensuing knowledge, trust, and opportunities are often far greater than anything you could have engineered with a quid-pro-quo approach. My goal is for people to see that it’s a powerful and sustainable way to build a career, a company, and a community.
Gazis: Why do you write books? Are they relevant in our digital world?
Feld: Absolutely. In an age of infinite distraction, a book is an anchor. It’s a technology for focused, deep thinking that a tweet, a post, or a podcast can’t replicate. Long-form writing forces both the writer and the reader to slow down and grapple with nuance. A book is a durable artifact. In a world of fleeting digital content, a well-argued, 250-page narrative is a powerful signal that an idea is worth spending time with.
Writing debugs my own thinking. I have ideas and stories swirling around from decades of investing and mentoring. The process of putting them into a coherent narrative forces me to clarify what I believe. It’s how I find the signal in the noise.
The second reason is scale. I can only mentor so many founders one-on-one. A book allows me to share the lessons — and the scar tissue — with anyone, anywhere. I explored the give-first philosophy on my blog and in practice at Techstars for 15 years. Putting it all in a book makes the framework accessible to anyone.
Gazis: You’ve written about mentorship in business. Do you have a mentor?
Feld: My most important mentor, in business and life, was Len Fassler. I dedicated “Give First” to him. He taught me how to behave in business relationships and how to show up for people, especially when things are hard. I’ll never forget being at his house in 2001, completely crushed by the dot-com bust. He put his hands on my shoulders and said, “Suit up. They can’t kill you, and they can’t eat you. We’ll get through it.” That’s a story, not a spreadsheet. It’s guided me ever since.
As for writing, Adam Grant’s book “Give and Take” provided a framework for the ideas I had explored intuitively for years. And Dov Seidman’s book “How” emphasized that our manner of doing things matters more than what they are. Both write with a clarity and moral conviction that I aspire to.
Gazis: What books do you read?
Feld: I’m a voracious reader. My wife Amy and I take a week off the grid every quarter, and I usually get through a book a day. That sustained immersion is where I do some of my best thinking and pattern recognition.
My reading is all over the place, and I track every book on Goodreads. My infinite pile of books has a lot of fiction, biography, history, philosophy, and some business, especially by friends. I love discovering how different systems work, whether it’s a company, a brain, or a fictional universe. The variety is essential and feeds my curiosity, which is the fuel for everything I do, including writing.
OpenAI has launched a new feature in ChatGPT called Study Mode, offering a step-by-step learning experience designed to guide users through complex topics.
While aimed at students, Study Mode reflects a broader trend in how people use AI tools for information and adapt their search habits.
As more people start using conversational AI tools to seek information, Study Mode could represent the next step of AI-assisted discovery.
A Shift Toward Guided Learning
Activate Study Mode by selecting “Study and learn” from the tools in ChatGPT and asking a question.
Screenshot from: openai.com/index/chatgpt-study-mode/, July 2025.
Instead of giving direct answers, this feature promotes deeper engagement by asking questions, providing hints, and tailoring explanations to meet user needs.
Study Mode runs on custom instructions developed with input from teachers and learning experts. The feature incorporates research-based strategies, including:
Encouraging people to take part actively
Helping manage how much information people can handle
Supporting self-awareness and a desire to learn
Giving helpful and practical feedback
Robbie Torney, Senior Director of AI Programs at Common Sense Media, explains:
“Instead of doing the work for them, study mode encourages students to think critically about their learning. Features like these are a positive step toward effective AI use for learning. Even in the AI era, the best learning still happens when students are excited about and actively engaging with the lesson material.”
How It Works
Study Mode adjusts responses based on a user’s skill level and context from prior chats.
Key features include:
Interactive Prompts: Socratic questioning and self-reflection prompts promote critical thinking.
Scaffolded Responses: Content is broken into manageable segments to maintain clarity.
Knowledge Checks: Quizzes and open-ended questions help reinforce understanding.
Toggle Functionality: Users can turn Study Mode on or off as needed during a conversation.
Early testers describe it as an on-demand tutor, useful for unpacking dense material or revisiting difficult subjects.
Looking Ahead
Study Mode is now available to logged-in users across Free, Plus, Pro, and Team plans, with ChatGPT Edu support expected in the coming weeks.
OpenAI plans to integrate Study Mode behavior directly into its models after gathering feedback. Future updates may include visual aids, goal tracking, and more personalized support.
Google is expanding AI Mode in Search with new tools that include PDF uploads, persistent planning documents, and real-time video assistance.
The updates begin rolling out today, with the AI Mode button now appearing on the Google homepage for desktop users.
Image Uploads Arrive On Desktop, With PDFs Coming Soon
Desktop users can now upload images directly into search queries, a feature previously available only on mobile.
Support for PDFs is coming in the weeks ahead, allowing you to ask questions about uploaded files and receive AI-generated responses based on both document content and relevant web results.
For example, a student could upload lecture slides and use AI Mode to get help understanding the material. Responses include suggested links for deeper exploration.
Image Credit: Google
Google plans to support additional file types and integrate with Google Drive “in the months ahead.”
Canvas: A Tool For Multi-Session Planning
A new AI Mode feature called Canvas can help you stay organized across multiple search sessions.
When you ask AI Mode for help with planning or creating something, you’ll see an option to “Create Canvas.” This opens a dynamic side panel that saves and updates as queries evolve.
Use cases include building study guides, travel itineraries, or task checklists.
Image Credit: Google
Canvas is launching for desktop users in the U.S. enrolled in the AI Mode Labs experiment.
Real-Time Assistance With Search Live
Search Live with video input also launches this week on mobile, allowing you to use AI Mode while pointing your phone camera at real-world objects or scenes.
The feature builds on Project Astra and is available through Google Lens. Start by tapping the ‘Live’ icon in the Google app, then engage in back-and-forth conversations with AI Mode using live video as visual context.
Image Credit: Google
Chrome Adds Contextual AI Answers
Lens is getting expanded desktop functionality within Chrome. Soon, you’ll see an “Ask Google about this page” option in the address bar.
When selected, it opens a panel where you can highlight parts of a page, like a diagram or snippet of text, and receive an AI Overview.
This update also allows follow-up questions via AI Mode from within the Lens experience, either through a button labeled “Dive deeper” or by selecting AI Mode directly.
Looking Ahead
These updates reflect Google’s vision of search as a multi-modal, interactive experience rather than a one-off text query.
While most of these tools are limited to U.S.-based Labs users for now, they point to a future where AI Mode becomes central to how searchers explore, learn, and plan.
Rollout timelines vary by feature, so keep a close eye on how these capabilities add to the search experience and consider how to adapt your content strategies accordingly.
The first half of 2025 brought major shakeups in SEO, AI, and organic growth – and it’s time for a reality check.
Traffic is down, revenue is … complicated, and large language models (LLMs) are no longer fringe.
Publishers are panicking, and SEO teams are reevaluating how they measure success.
And it’s not just the tech shifting; it’s the economy around it. The DOJ’s antitrust case against Google could reshape the playing field before Q4 even begins.
In today’s Memo, I’m unpacking the state of organic growth at the midpoint of 2025:
How AI Overviews and AI Mode are eating clicks, and what that means for TOFU, MOFU, and BOFU content.
Why publishers are suing Google and preparing for zero traffic.
What’s really happening with tech layoffs and job transformation.
How we measure LLM visibility today, and where that’s headed.
What to expect next in organic growth, search, and monetization.
Plus, premium subscribers receive my scorecard for evaluating whether your team is adapting effectively to the AI landscape.
Let’s take stock of where we are, and what comes next.
Image Credit: Kevin Indig
Boost your skills with Growth Memo’s weekly expert insights. Subscribe for free!
AI Is Cutting Flesh
AI Overviews (AIOs) looked “interesting” to marketers in 2024 and look “devastating” in 2025.
From my own observations, the traffic loss ranges from 15% to 45%.
Bottom-line metrics across the industry range from “traffic down, revenue up” to “traffic down, revenue down.”
In February, I wrote in The Impact of AI Overviews that mostly the top of the funnel (TOFU) queries were impacted:
Every study I looked at confirmed that the majority of AI Overviews show up for informational-intent keywords like questions.
Shortly after, in March 2025, Google nullified that theory by dialing up the number of AIOs way beyond the top of the funnel.
Ever since, U.S. companies have experienced a strong (negative) impact, and I’m hearing the phrase “SEO is dead” more often from leaders.
Between 13% and 19% of keywords show AI Overviews, according to Semrush and seoClarity, but I assume the actual number is much higher because searchers use much longer prompts (prompts that most tools don’t track). [1, 2]
I expect organic traffic to keep dropping as the year moves forward.
In the AIO Usability study I published in May, only a small fraction of clicks still came through to websites.
It wouldn’t surprise me if 70% of the organic traffic that sites earned in 2024 is gone by 2026, leaving just 30% behind.
Scary? Yes. But traffic is just a means to an end.
The same study also shows that 80% of searchers still lean on organic results to complete their search journeys.
So, I still feel optimistic about the value of organic search in the long term.
There are two questions top of mind for me at the moment:
If AIOs really only impact the top of the funnel, then why are revenue numbers down?
At which point is the decline going to level off?
In my view, either:
AIOs are really mostly TOFU queries. In that case, TOFU content always had more impact on the bottom line than we were able to prove, and we can expect the traffic decline to level off.
Or AIOs impact far more than TOFU, hitting MOFU and BOFU queries as well (which is what I think), and we’re in for a long decline in traffic. If true, I expect revenue attributed to organic search to decline at a lower rate, or not at all for certain companies, since purchase intent doesn’t just go away. Revenue results would then relate more to our ability to influence purchase intent.
With one exception.
Publishers Are Struggling
The whole internet is trying to figure out whether the value of showing up in LLMs (ChatGPT, Gemini, AI Mode, AI Overviews, etc.) is worth more than the loss in traffic.
But without a doubt, publishers and affiliates are the group that gets hit the hardest due to their reliance on ad impressions and link clicks.
No one needs traffic as much as publishers.
Image Credit: Kevin Indig
The consequence? Leading publishers and news sites will conduct layoffs and assume that Google traffic will go to zero at some point.
At a companywide meeting earlier this year, Nicholas Thompson, chief executive of the Atlantic, said the publication should assume traffic from Google would drop toward zero and the company needed to evolve its business model. [3]
Publishers in the EU have banded together and filed an antitrust complaint against Google for its launch and the impact of AI Overviews with the European Commission. [4]
Publishers using Google Search do not have the option to opt out from their material being ingested for Google’s AI large language model training and/or from being crawled for summaries, without losing their ability to appear in Google’s general search results page.
I caught up with Chris Dicker, who leads one of the co-signatories in the DMA complaint against Google, the Independent Publishers Alliance:
Kevin: What’s your role in the lawsuit against Google?
Chris: The Independent Publishers Alliance is one of the co-signatories on the complaint. I am helping lead this from the Alliance side.
Kevin: What would be an outcome, i.e., an action by Google, that would be satisfactory?
Chris: We are only asking for what we deem to be fair, which is for a sustainable ecosystem.
Whether that is payment for use of content or for Google to start to substantially reduce the zero-click searches, which have gotten significantly worse since the launch of AIOs.
Kevin: Can LLMs (ChatGPT & Co) provide some remedy against the traffic drop from Google?
Chris: Not for publishers at the moment, no. They don’t have the scale or the desire to send traffic anywhere else. The current CTRs we are seeing, and that are being reported from publishers, are tiny.
OpenAI’s scrape-to-human-visit ratio is 179:1, compared with Perplexity’s 369:1 and Anthropic’s 8,692:1 (stats from Tollbit’s State of Bots report, Q1 2025).
For perspective, Bing’s is 11:1. I know there are reports that the traffic from LLMs is “better quality,” but not on the metrics that would help publishers or content creators.
It is very much the opposite: Bounce rate is higher; pages per session and per visit are also both considerably down for AI search traffic compared to organic search.
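To put those ratios in perspective, here is what they imply in human visits per million bot scrapes. A quick back-of-the-envelope using the numbers Chris cites:

```python
# Human visits implied per one million bot scrapes, from the
# scrape-to-visit ratios quoted above (Tollbit, Q1 2025).
ratios = {"OpenAI": 179, "Perplexity": 369, "Anthropic": 8692, "Bing": 11}

def visits_per_million_scrapes(ratio: int) -> float:
    """A ratio of N:1 means one human visit for every N scrapes."""
    return 1_000_000 / ratio

# Sort from most to least generous with traffic.
for source, ratio in sorted(ratios.items(), key=lambda kv: kv[1]):
    print(f"{source:>10}: ~{visits_per_million_scrapes(ratio):>9,.0f} visits per 1M scrapes")
```

Bing returns roughly 91,000 visits per million scrapes; Anthropic returns barely over a hundred. That gap is the publishers' complaint in one number.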
Kevin: What are the consequences of Google’s AI Overviews on independent publishers so far? Can you quantify the impact?
Chris: It’s significant, and something that has been exacerbated since April this year. There are sites seeing traffic drops of up to 70% since April.
Publishers have no choice but to cut costs and, unfortunately, that also means job losses.
In the last year, we have had numerous members who, unfortunately, haven’t been able to weather the storm and have ceased publishing altogether, and these are respected sites that were well established over the last 10 years, if not longer.
Kevin: Do you know of publishers that are able to dampen the negative impact from AI Overviews in some ways? If so, what are they doing?
Chris: Nearly every publisher I speak to is actively diversifying away from Google.
It feels inevitable that we’ll see a mass blocking of Googlebot at some point, something that would have been inconceivable just 12 months ago.
If your business model still relies on search traffic, whether from traditional search or AI-powered results, it’s time to rethink – and fast.
More publishers are now focusing on direct audience relationships through newsletters, forums, podcasts, and similar channels.
Platforms like Substack offer an interesting model, though I’m not convinced their approach fully suits publishers just yet.
Beyond monetizing websites and content, many publishers are also developing in-house creative, social, or AI agencies. After all, these businesses have spent years engaging and inspiring audiences.
Helping advertisers tap into that expertise feels like a natural next step.
Besides the fact that the open web and critical societal instances are fading away, from a purely practical standpoint, there are also fewer publishers to amplify content for other businesses.
And yet, I believe we haven’t seen the full extent to which Google Search will change from sending traffic to answering questions directly.
AI Mode Is Sitting On The Bench, But It Seems Ready
At a recent event I attended, a Google representative mentioned that Sundar Pichai sees AI Mode as the default search experience within the next two to three years, with searchers able to switch back to classic results if they want to, assuming users like AI Mode. And that seems to be the case: According to a (small) survey by Oppenheimer & Co., 82% of searchers find AI Mode more helpful than Google Search, while 75% find it more helpful than ChatGPT (I wonder why). [5]
Nothing shows fear more than copying a challenger’s user interface and abandoning the cash machine that worked for 20 years.
AI Mode is basically ChatGPT with a Google logo. Google follows the Meta playbook, which fenced in Snapchat’s and TikTok’s growth by copying their core features.
And most alarmingly for search marketers, AI Mode eats clicks for breakfast.
Research by iPullrank found that “4.5% of AI Mode Sessions result in a click.”[6]
A click. As in one!
But Google cannot afford to lose the investor narrative.
I personally believe that AI Mode won’t roll out as the default before Google has figured out the monetization model. And I predict that searchers will see far fewer ads, but much better ones, displayed at better moments.
Due to the conversational interface and longer prompts, Google should not only have more context about what users really want, but should also be able to better estimate the best time to show an ad during the conversation.
As a result, I expect CPCs will skyrocket, but CPAs will become more efficient.
AEO/GEO/LLMO: Too Many Buzzwords But Not Enough Differentiation
Between AI Mode, AI Overviews, and ChatGPT stands this important question:
How much can we influence answers, and how different is that job from what we’ve done in SEO over the last two decades?
1. Longer prompts: The average prompt is 23 words long compared to 4.2 for classic Google Search. [7]
The rich detail users provide about their intent meets a supply of content that was tuned for short-head keywords, exposing a gap on the other side of the marketplace.
As a result, I see hyper-specialized content that’s fine-tuned for specific personas (see How to Optimize for Topics) in our present and future.
2. SEO winners are not AI winners: If SEO was enough and there was nothing else we needed to do “for AI,” then why aren’t the sites that are most visible in Search the same ones that are visible in LLMs?
In Is GEO/AEO the same as SEO?, I found that the lists differ greatly in most verticals. Only highly consolidated spaces with a few winners, like CRM software, have identical winners across both modalities.
3. New intent: Generative: Semrush and Profound came to the conclusion that ~30-70% of intent on LLMs is “generative,” meaning users want to accomplish tasks right then and there. [8]
What’s often missed is that while performing an action, e.g., generating an image, the intent can quickly flip to informational or transactional, e.g., learning more about the topic of the image or buying an icon license.
Since experiences are conversational and more continuous, we need to update our model of intent. It doesn’t happen in isolation (think: one session), but several intents can occur during the same session (informational → generative → transactional → informational → etc.).
My opinion: It’s too soon to coin a term.
Will we switch from Answer Engine Optimization to Agentic Engine Optimization when we enter the Agentic AI age? AI has evolved at a rocket pace over the last 2.5 years, and I don’t expect it will slow down soon.
LLMs Are No Longer Fringe
In 2025, LLMs reached the mainstream. We’re not talking about a fringe platform anymore: ChatGPT supposedly receives 2.5 billion prompts a day. With Google seeing over 5 trillion searches per year, you could say ChatGPT has reached about 17.8% of Google’s volume.
Keep in mind that many ChatGPT prompts are not searches, which weakens the comparison (until Google rolls AI Mode out broadly). [9]
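To make the comparison concrete, here is the arithmetic behind that percentage. A rough sketch; the exact result shifts with the daily and yearly figures you plug in:

```python
# Back-of-the-envelope: ChatGPT prompt volume vs. Google search volume,
# using the figures quoted above (~2.5B prompts/day, ~5T searches/year).
chatgpt_prompts_per_day = 2.5e9
google_searches_per_year = 5e12

google_searches_per_day = google_searches_per_year / 365  # ~13.7B/day
share = chatgpt_prompts_per_day / google_searches_per_day

print(f"ChatGPT handles roughly {share:.0%} of Google's daily search volume")
```

With these inputs the share lands around 18%, in the same ballpark as the 17.8% figure above.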
Image Credit: Kevin Indig
Important to note is that LLMs rely on different citation sources to varying degrees. [10]
Profound saw in 30 million citations that ChatGPT, AIOs, and Perplexity rely on different citation sources:
ChatGPT cites Wikipedia almost 50% of the time, followed by citing Reddit at 11.3% and Forbes at 6.8%.
AI Overviews cite Reddit 21% of the time, followed by 18.8% for YouTube, 14.3% for Quora, and 13% for LinkedIn.
Perplexity cites Reddit almost 50% of the time, YouTube at 13.9% of the time, and Gartner at 7%.
We know that investing time and resources into non-Google platforms is critical to building trust and visibility across all platforms.
But now we know that the mix of platform investment depends on where you want to build visibility.
Reddit seems to provide universal impact, which makes sense given their licensing deals with OpenAI and Google, but YouTube, Quora, and review platforms don’t show the same potential for gaining citations on all LLMs.
Image Credit: Kevin Indig
Time also matters. AirOps found that 95% of pages cited in ChatGPT are less than 10 months old. [11]
A big reason for this is the training data cutoff for LLMs. New models are still trained on large corpora of data (remember the Google Dance?).
Anything newer than the time of training needs to come from the web. As a result, keeping content fresh and continuously iterating seems like a path to AI visibility to me. Even adding the current year to the URL (and meta-title) seems like a good idea. [12]
A study by Apple, which I covered in the Growth Intelligence Brief, raises a question we might all have at the tip of our tongue: Are LLMs overhyped? [13]
The answer: It depends … on the complexity of the task:
Simple problems: Models often find correct solutions early but wastefully continue exploring incorrect ones (“overthinking”).
Moderate complexity: Models explore many incorrect solutions before finding correct ones.
High complexity: Models fail to generate any correct solutions.
LLMs are smart but still struggle with complex tasks. Good news for tech workers … right?
And here’s another thing: With the increase of LLM use and adoption, how will we measure success for our optimization efforts?
I ran a survey of Growth Memo readers in June, and it’s clear our industry hasn’t really nailed how to measure the LLM visibility of our brands.
Out of those who responded, about 30% are using traditional SEO tools to measure LLM visibility, 26% are using Google Analytics 4 traffic signals, and a whopping 21% aren’t measuring yet and need help determining how.
Image Credit: Kevin Indig
And the biggest surprise is this: Overwhelmingly, we don’t trust our LLM visibility measurements.
Close to 80% of survey respondents don’t believe the way they are measuring LLM visibility is accurate.
Image Credit: Kevin Indig
A big topic in the whole LLM conversation is, of course, whether AI replaces white collar workers or not.
I’m including this discussion in my halftime report because I’m seeing a growing number of in-house experts who are afraid to be replaced.
Amazon’s CEO, Andy Jassy, wrote a public memo saying the company will need fewer people because of AI:
“As we roll out more Generative AI and agents, it should change the way our work is done. We will need fewer people doing some of the jobs that are being done today, and more people doing other types of jobs. It’s hard to know exactly where this nets out over time, but in the next few years, we expect that this will reduce our total corporate workforce as we get efficiency gains from using AI extensively across the company.” [14]
Amazon cut more than 27,000 jobs between 2022 and 2023, yet its headcount at the end of 2024 was higher than at any prior year-end except 2021, and then only by a small margin. [15]
Other tech companies tell a similar story:
Salesforce’s CEO, Marc Benioff, says that 30-50% of the work at Salesforce is done by AI. [16] Salesforce eliminated ~1,000 roles this year.
Klarna’s CEO first announced that AI is doing the work of 700 customer service agents and fired about 2,000 employees, but then backtracked and rehired humans. [17]
Microsoft cut 15,000 jobs in 2025. CEO Satya Nadella said AI writes ~30% of new code in some projects.
Meta laid off 3,600 employees in 2025, with Mark Zuckerberg saying AI could be ready to be a mid-level engineer this year.
But is AI really replacing white collar workers, or is it used for good PR?
The layoff tracker layoffs.fyi shows that neither the number of companies conducting layoffs nor the number of employees laid off has grown since the pandemic.
Image Credit: Kevin Indig
A jobs report by CompTIA shows that while tech employment is slightly down between June 2023 and June 2025…[18]
Image Credit: Kevin Indig
…the number of job openings with AI skills far outpaces the number of listings for all roles.
Image Credit: Kevin Indig
In other words, “AI layoffs” look more like a PR play, or a justification for job cuts that were happening anyway.
But upskilling with AI is critical.
Google Lawsuit Rushes Toward A Final Decision On Labor Day
The landmark lawsuit against Google for being an online search monopoly concludes by Labor Day (September 1). The DoJ asks for:
A mandatory divestiture of Chrome within a specified timeframe.
A five-year prohibition on Google owning any browser.
Termination of exclusive default agreements.
Extensive data sharing requirements.
The right to seek Android divestiture if behavioral remedies prove insufficient.
Google, on the other hand, agrees to end exclusive agreements, so we know Google and Apple will divorce, but opposes a Chrome divestiture and data sharing mandates.
The remedy ruling could have significant implications for the AI race, and for where marketers should place their money.
For example, a Chrome divestiture could significantly set Google back, as OpenAI and Perplexity launch their own browsers. It would also mean a material loss in user behavior data and agentic AI capabilities.
Losing the exclusive agreement with Apple could also mean more users set browsers other than Chrome as their default, if those browsers offer a strong enough benefit.
However, I personally think the most realistic outcome is a forced end to exclusive agreements and would be shocked to see a Chrome divestiture.
For context:
The Department of Justice has achieved two landmark antitrust victories against Google in 2024-2025, with federal judges ruling the tech giant operates illegal monopolies in both online search and digital advertising technology.
Both cases have now advanced to remedy phases where courts will determine whether to break up parts of Google’s business, representing the most aggressive government intervention in Big Tech since the Microsoft case 25 years ago.
Outlook For H2
The second half of 2025 will likely be defined by adaptation rather than resistance.
Companies that succeed will be those that foster trust beyond Google, build direct audience relationships, and upskill teams in AI.
Here’s what I expect for the second half of the year:
Accelerating Traffic Decline
Organic traffic losses will likely intensify as Google expands AI Overviews.
Publishers should prepare for further 20-30% traffic declines.
The “new normal” of 30% of historical traffic by 2026 could arrive sooner than expected.
AI Mode Launch
Google will likely roll out AI Mode more broadly, but cautiously.
Expect a heavy focus on monetization testing before wide release.
Watch for new ad formats optimized for conversational search.
Publisher Adaptation
More publishers will actively block Googlebot.
Increased focus on direct revenue streams (newsletters, memberships).
Potential consolidation as smaller publishers struggle to survive.
Measurement Evolution
New tools specifically for measuring LLM visibility will emerge.
Industry will start standardizing on key metrics for AI performance.
Greater emphasis on revenue vs. traffic as success metrics.
Market Restructuring
DoJ ruling could reshape the search landscape.
Expect new search entrants to gain traction.
Browser wars may reignite with AI-native options.
Featured Image: Paulo Bobita/Search Engine Journal
This week’s Ask An SEO question comes from an ecommerce business that followed best practices, but still lost traffic after migrating to a new platform.
“We recently migrated our ecommerce store to a new platform, and despite following all the recommended SEO practices, our organic traffic dropped by 30%.
What recovery strategies should we prioritize, and how long should we expect before seeing improvements?”
This is a common frustration many ecommerce businesses face after a platform migration.
But why does it happen, and more importantly, how can you recover lost traffic? Let’s dive into the likely causes of this issue and explore the most effective strategies to get your organic traffic back on track.
Why Organic Traffic Can Drop Post-Migration
Understanding why this happens is key to finding a solution. Without pinpointing the root cause, any recovery efforts can feel like a shot in the dark – and that’s the last thing you want.
Tracking Issues
After a migration, it’s surprisingly common for something small to go wrong with your analytics setup. Maybe the Google Analytics 4 tag wasn’t added correctly. Maybe your Google Search Console property wasn’t verified.
Even a tiny mistake – like a misconfigured setting or a missing bit of code – can make it look like traffic has fallen off a cliff, when really it’s just not being tracked properly.
The upside? These problems are usually quick to spot and easy to fix. It’s always a good first step before diving into deeper SEO troubleshooting because the issue might not be your traffic at all, just your data.
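One quick sanity check before deeper troubleshooting is scanning the rendered HTML of key pages for a GA4 measurement ID. A minimal sketch; the regex is a loose pattern match and the sample ID is made up:

```python
import re

def find_ga4_ids(html: str) -> list[str]:
    """Return any GA4 measurement IDs (G-XXXXXXXXXX) found in page source."""
    return sorted(set(re.findall(r"G-[A-Z0-9]{8,12}", html)))

# Hypothetical snippet from a migrated homepage; the ID is illustrative.
page = '<script async src="https://www.googletagmanager.com/gtag/js?id=G-ABC123XYZ0"></script>'
print(find_ga4_ids(page))             # the tag survived the migration
print(find_ga4_ids("<html></html>"))  # an empty list means the tag is missing
```

If the ID is missing, or differs from the one configured in your GA4 property, the “traffic drop” may be nothing more than untracked traffic.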
Technical Issues
If your tracking is working as it should and the traffic drop is real, the next step is to check for technical SEO problems on your new site – and this almost always starts with redirects.
During a migration, especially if URLs have changed, redirects are crucial. One missing or incorrect 301 redirect can break the connection between your old and new pages, making Google think important content has disappeared.
That can quickly tank rankings and traffic. Make sure all old URLs point to the right new ones, that you’re using proper 301 (not 302) redirects, and that there are no long redirect chains slowing things down.
Other common technical pitfalls? Broken or removed internal links, staging URLs accidentally left in canonical tags, or no-index rules carried over from development.
Any of these can stop Google from crawling or indexing your site properly, and if that happens, your content won’t show up in search at all.
In some cases, ranking drops can be caused by changes to page content itself. Even small changes like missing H1s, altered metadata, or content now rendered in JavaScript can have a big impact.
So, you will need to double-check the content on your pages to identify if anything has changed.
But, don’t forget that Google will need time to reindex and trust your new setup, especially if you didn’t submit an updated sitemap or if backlinks still point to old URLs.
Although you may see some big changes in your SEO performance initially, monitor it for a little while to see if things settle back down on their own.
Steps To Recover Your Organic Traffic
Crawl Your Site: Look For Redirect Problems And Broken Links
The first thing you should do is crawl your site. Tools like Screaming Frog or Sitebulb are perfect for this.
Crawling your site helps you identify technical issues such as broken redirects, incorrect or missing 301 redirects, and redirect chains.
During a migration, URL changes are common, and improper redirects can create huge SEO setbacks.
If you have old URLs pointing to pages that no longer exist or haven’t been properly redirected, Google might struggle to index your site correctly, impacting traffic and rankings.
Another key issue to spot during crawling is orphaned pages. These are pages that exist on your site but have no internal links pointing to them.
Without internal links, Google may have a harder time finding and indexing these pages, which can hurt your rankings.
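One quick way to surface orphans is to diff the URLs you know about (from the sitemap or a full crawl export) against the targets of your internal links. A minimal sketch with made-up URLs:

```python
def find_orphans(all_urls, internal_links):
    """Pages that exist on the site but receive no internal links.

    all_urls: every URL known from the sitemap or a full crawl.
    internal_links: (source_url, target_url) pairs found while crawling.
    """
    linked = {target for _, target in internal_links}
    return sorted(set(all_urls) - linked - {"/"})  # "/" is the crawl root

# Hypothetical post-migration URL set: /old-promo lost its links.
pages = ["/", "/shoes", "/shoes/red", "/old-promo"]
links = [("/", "/shoes"), ("/shoes", "/shoes/red")]
print(find_orphans(pages, links))  # ['/old-promo']
```

Crawlers like Screaming Frog can export both lists for you; the set difference is the fix list.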
Fix Redirect Problems Immediately
Once you’ve identified any issues with redirects during the crawl, fixing them should be your priority.
Redirects are crucial for preserving SEO value during a migration. Check that all old URLs are properly redirected to their new counterparts using 301 redirects, if your URLs have changed.
Ensure there are no redirect chains, as these can slow down page load times and confuse search engines.
Even if you think you’ve set up redirects, it’s worth doing a detailed check. Missing or incorrect redirects are one of the top causes of traffic loss after a migration.
Remember, each redirect is a connection that ensures the SEO equity of your old pages gets passed on to your new ones.
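If your crawler exports the redirect path for each old URL, a small script can flag the usual suspects automatically. A minimal sketch; the hop data and URLs are illustrative:

```python
def audit_redirect(hops):
    """Flag common redirect problems given (status_code, url) hops.

    `hops` is the path a crawler followed, ending at the final response.
    """
    redirect_hops = hops[:-1]  # the last entry is the destination page
    issues = []
    if any(code == 302 for code, _ in redirect_hops):
        issues.append("temporary 302 instead of permanent 301")
    if len(redirect_hops) > 1:
        issues.append(f"redirect chain of {len(redirect_hops)} hops")
    if hops[-1][0] == 404:
        issues.append("ends at a 404")
    return issues or ["ok"]

# Hypothetical path: an old URL bounces through a 302 before landing.
chain = [(301, "/old-shoes"), (302, "/tmp-shoes"), (200, "/shoes")]
print(audit_redirect(chain))

clean = [(301, "/old-bags"), (200, "/bags")]
print(audit_redirect(clean))  # ['ok']
```

Run this over every old URL and you get a prioritized punch list: 404 endings first, then 302s, then chains.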
Tackle Potential On-Page Issues
If you’ve ruled out any major technical errors, focus on the content itself.
Compare your post-migration content with the version you had before the migration. Did anything change that might have negatively affected your rankings?
Ensure that all your pages are optimized for the target keywords, including title tags, meta descriptions, header tags (especially H1s), and body content.
It’s also worth revisiting your product pages to ensure they meet Google’s standards for quality content. This might involve adding more detailed product descriptions, improving product images, or enhancing user-generated content such as reviews.
Update Your XML Sitemap And Google Search Console
Once your on-page content is reviewed and technical issues addressed, the next step is to update your XML sitemap to reflect the new URLs, if applicable.
Submit the updated sitemap to Google Search Console so Google can easily find and crawl your pages. This also helps Google understand that you’ve made changes to your site’s structure and allows it to index the new pages more quickly.
Don’t forget to monitor Google Search Console closely. Regularly check for crawl errors and use the URL Inspection Tool to request indexing for important pages that may not have been crawled yet.
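If your platform doesn’t regenerate the sitemap for you, a minimal one is straightforward to build from your new URL list using the standard sitemaps.org format. A sketch with placeholder URLs:

```python
import xml.etree.ElementTree as ET

def build_sitemap(urls):
    """Serialize a minimal XML sitemap for the given absolute URLs."""
    urlset = ET.Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
    for url in urls:
        # Each URL gets a <url><loc>...</loc></url> entry.
        ET.SubElement(ET.SubElement(urlset, "url"), "loc").text = url
    return ET.tostring(urlset, encoding="unicode")

# Hypothetical post-migration URLs.
sitemap = build_sitemap(["https://example.com/", "https://example.com/shoes"])
print(sitemap)
```

Save the output as sitemap.xml at the site root, then submit it in Search Console under Indexing > Sitemaps.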
How Long Does SEO Recovery Take?
Recovery isn’t an instant process, unfortunately. Typically, sites begin to see improvements within four to 12 weeks, but several factors can influence the recovery timeline.
If your migration involved significant changes, like a new domain or a complete overhaul of your site structure, Google may treat your site as if it were brand new.
In this case, it can take longer for Google to rebuild trust and restore organic visibility. Sites with many pages may also experience slower recovery times, as Google has to crawl and reindex more content.
The content on your site can also affect recovery time. If important pages were altered or lost valuable content during the migration, it might take longer for Google to recognize the changes and rank your pages again.
A smooth migration doesn’t start on launch day; it starts way before. SEO needs to slot into your QA and development process from the beginning.
That means making sure things like redirects, content structure, and crawlability are all working before you go live, ideally in a proper staging environment.
Issues can still happen when you go live, though, so remember to crawl your old site before launch. That way, you can run side-by-side audits of your old and new sites and catch issues early.
It’s also a smart idea to have a rollback plan just in case. That means having backups and knowing what to do if something goes wrong.
Final Thoughts
Recovering from an SEO drop after a migration can be frustrating, but unfortunately, it’s often part of the process.
By focusing on the right technical checks, reviewing your on-page content, and giving search engines time to recrawl and reindex your site, you can get things back on track.
Keep a close eye on your data, be patient, and use this as an opportunity to strengthen your site’s overall SEO health.
More Resources:
Featured Image: Paulo Bobita/Search Engine Journal