Google On Balancing Needs Of Users And The Web Ecosystem

At the recent Search Central Live Deep Dive 2025, Kenichi Suzuki asked Google’s Gary Illyes how Google measures the quality and user satisfaction of traffic from AI Overviews. Illyes’ response, published by Suzuki on LinkedIn, covered multiple points.

Kenichi asked for specific data, and Gary’s answer offered an overview of how Google gathers external data to form an internal view of how satisfied users are with AI Overviews. He said that this data informs public statements by Google, including those made by CEO Sundar Pichai.

Illyes began his answer by saying that he couldn’t share specifics about the user satisfaction data, but he went on to offer a general overview.

User Satisfaction Surveys

The first data point that Illyes mentioned was user satisfaction surveys to understand how people feel about AI Overviews. Kenichi wrote that Illyes said:

“The public statements made by company leaders, such as Sundar Pichai, are validated by this internal data before being made public.”

Observed User Behavior

The second user satisfaction data point that Illyes mentioned was inferring user preference from the broader market. Kenichi wrote:

“Gary suggested that one can infer user preference by looking at the broader market. He pointed out that the rapidly growing user base for other AI tools (like ChatGPT and Copilot) likely consists of the same demographic that enjoys and finds value in AI Overviews.”

Motivated By User-Focus

By this, Illyes meant that putting the user first is Google’s motivation for introducing a new feature. He specifically said that causing disruption is not Google’s motivation for AI search features.

Acknowledged The Web Ecosystem

The last point he made was that Google is still figuring out how to balance its user-focused approach with the need to maintain a healthy web ecosystem.

Kenichi wrote that Illyes said:

“He finished by acknowledging that they are still figuring out how to balance this user-focused approach with the need to continue supporting the wider web ecosystem.”

Balancing The Needs Of The Web Ecosystem

At the dawn of modern SEO, Google did something extraordinary: they reached out to web publishers through the most popular SEO forum at the time, WebmasterWorld. Gary Illyes himself, before he joined Google, was a WebmasterWorld member. This outreach by Google was the initiative of one Googler, Matt Cutts. Other Googlers provided interviews, but Matt Cutts, under the WebmasterWorld nickname of GoogleGuy, held two-way conversations with the search and publisher community.

This is no longer the case at Google, which is largely back to one-way communication accompanied by intermittent social media outreach.

The SEO community may share in the blame for this situation, as some SEOs post abusive responses on social media. Fortunately, those people are in the minority, but that behavior nonetheless puts a chill on the few opportunities provided to have a constructive dialogue.

It’s encouraging to hear Illyes mention the web ecosystem. It would be even more encouraging to hear Googlers, including the CEO, focus on how they intend to balance the needs of users with those of the creators who publish content, because many feel that Google’s current direction is not sustainable for publishers.

Featured Image by Shutterstock/1000 Words

5 Predictions for 2025 Holiday Shopping

Could it be that Americans are heading into the holiday shopping season with confidence?

From faster delivery and cross-border buying to small business growth and AI-powered shopping tools, the coming Christmas season promises to be both bold and efficient — or at least that’s what I predict.

Near Instant Gratification

Fast, free delivery has become so common that consumers will pick up or receive at least 35% of orders placed in November and December within 24 hours.

I foresee a couple of factors driving speedy deliveries.

First, Amazon’s infrastructure prioritizes rapid delivery. In urban areas, Amazon delivers approximately 60% of Prime orders the next day. Rural delivery lowers the average, but Fulfillment by Amazon shipments will provide nearly instant shopping gratification.

Second, buy online, pick up in-store purchasing has grown rapidly and could soon represent 10% of U.S. ecommerce sales, according to Capital One Shopping.

Canadian-American Relations

Canadian shoppers are among the most active cross-border consumers worldwide. In a given year, about half of folks north of the border shop with a U.S. ecommerce business.

Photo caption: Holiday shopping has become a digital ritual — convenient and quiet.

Despite tariff disputes, I believe these shopping habits are both resilient and beneficial. Canadian buyers are accustomed to shopping at U.S. stores online owing to value and variety. And the nations have been friends for too long to experience lasting trade disruptions.

With this in mind, expect at least 55% of Canadian shoppers to make at least one holiday purchase from U.S. ecommerce stores in 2025.

Small Business Growth

I expect small, independent online retailers will grow by approximately 10% in 2025, outperforming overall ecommerce growth and reaching roughly $15.5 billion in U.S. holiday revenue.

In comparison, Shopify merchants alone generated about $11.5 billion during the 2024 holiday peak sale period. Etsy sellers added about $2 billion.

The growth should come from small brands that sell craft or U.S.-made products.

AI Shopping

During the peak gift-giving season, at least 50% of North American shoppers will use artificial intelligence for shopping. Consumers will chat, search, seek recommendations, and even make purchases with the help of AI tools.

Last year, reportedly, fewer than 15% of U.S. shoppers consulted AI for holiday gift giving, but much has changed in a year. AI is present in nearly every tool, including Google.

Hence, AI product discovery will likely be the top ecommerce traffic source in 2025.

Consumer Confidence

I was pessimistic last year about U.S. holiday ecommerce growth, and it showed in my failed predictions, listed below. If I am going to err this year, it will be on the side of being too optimistic.

The U.S. stock market has performed well of late. For example, the S&P 500 and the Nasdaq Composite index recently hit record highs. The driver for this boom may be trade optimism and solid corporate earnings.

I suspect this enthusiasm will carry over into holiday gift-giving in 2025. The key factor will be whether shoppers believe they can afford to spend.

Last Year’s Predictions

Since 2013 I have predicted ecommerce trends and sales for the coming holiday season. In 2024, I was incorrect in four of my five predictions, making last year’s forecasting my worst yet. Here are the embarrassing specifics.

Mobile commerce will represent 54% of holiday ecommerce sales: correct. Adobe reported that U.S. holiday sales on mobile devices reached $131.5 billion, accounting for 54.4% of the overall online total.

Ecommerce holiday sales grow 5% year-over-year: wrong. I was too pessimistic last year. I wrote that early holiday predictions, including one suggesting 23% growth in 2024, were “too optimistic, given the contentious U.S. election, inflation, and other economic woes.” Most sources put the actual growth at 8.7%.

Email volume grows 25% during the 2024 holiday season: wrong. This one was more difficult to measure, but nonetheless, I likely missed the mark. Global email volume grew about 4.3% year-over-year during the fourth quarter, according to multiple sources.

40% of Gen Zs use social commerce this holiday season: wrong. Most estimates place the actual number at 32% for Gen Zs (ages 13 to 28), while an estimated 12% of all U.S. consumers shopped social in 2024.

BNPL accounts for 9% of online holiday sales: wrong. About 7.7% of U.S. holiday purchases in November and December 2024 were buy-now, pay-later, representing $18.2 billion, per Adobe.

What role should oil and gas companies play in climate tech?

This week, I have a new story out about Quaise, a geothermal startup that’s trying to commercialize new drilling technology. Using a device called a gyrotron, the company wants to drill deeper, cheaper, in an effort to unlock geothermal power anywhere on the planet. (For all the details, check it out here.) 

For the story, I visited Quaise’s headquarters in Houston. I also took a trip across town to Nabors Industries, Quaise’s investor and tech partner and one of the biggest drilling companies in the world. 

Standing on top of a drilling rig in the backyard of Nabors’s headquarters, I couldn’t stop thinking about the role oil and gas companies are playing in the energy transition. This industry has resources and energy expertise—but also a vested interest in fossil fuels. Can it really be part of addressing climate change?

The relationship between Quaise and Nabors is one that we see increasingly often in climate tech—a startup partnering up with an established company in a similar field. (Another one that comes to mind is in the cement industry, where Sublime Systems has seen a lot of support from legacy players including Holcim, one of the biggest cement companies in the world.) 

Quaise got an early investment from Nabors in 2021, to the tune of $12 million. Nabors now also serves as a technical partner for the startup. 

“We are agnostic to what hole we’re drilling,” says Cameron Maresh, a project engineer on the energy transition team at Nabors Industries. The company is working on other investments and projects in the geothermal industry, Maresh says, and the work with Quaise is the culmination of a yearslong collaboration: “We’re just truly excited to see what Quaise can do.”

From the outside, this sort of partnership makes a lot of sense for Quaise. It gets resources and expertise. Meanwhile, Nabors is getting involved with an innovative company that could represent a new direction for geothermal. And maybe more to the point, if fossil fuels are to be phased out, this deal gives the company a stake in next-generation energy production.

There is so much potential for oil and gas companies to play a productive role in addressing climate change. One report from the International Energy Agency examined the role these legacy players could take:  “Energy transitions can happen without the engagement of the oil and gas industry, but the journey to net zero will be more costly and difficult to navigate if they are not on board,” the authors wrote. 

In the agency’s blueprint for what a net-zero emissions energy system could look like in 2050, about 30% of energy could come from sources where the oil and gas industry’s knowledge and resources are useful. That includes hydrogen, liquid biofuels, biomethane, carbon capture, and geothermal. 

But so far, the industry has hardly lived up to its potential as a positive force for the climate. Also in that report, the IEA pointed out that oil and gas producers made up only about 1% of global investment in climate tech in 2022. Investment has ticked up a bit since then, but still, it’s tough to argue that the industry is committed. 

And now that climate tech is falling out of fashion with the government in the US, I’d venture to guess that we’re going to see oil and gas companies increasingly pulling back on their investments and promises. 

BP recently backtracked on previous commitments to cut oil and gas production and invest in clean energy. And last year the company announced that it had written off $1.1 billion in offshore wind investments in 2023 and wanted to sell other wind assets. Shell closed down all its hydrogen fueling stations for vehicles in California last year. (This might not be all that big a loss, since EVs are beating hydrogen by a huge margin in the US, but it’s still worth noting.) 

So oil and gas companies are investing what amounts to pennies and often backtrack when the political winds change direction. And, let’s not forget, fossil-fuel companies have a long history of behaving badly. 

In perhaps the most notorious example, scientists at Exxon modeled climate change in the 1970s, and their forecasts turned out to be quite accurate. Rather than publish that research, the company downplayed how climate change might affect the planet. (For what it’s worth, company representatives have argued that this was less of a coverup and more of an internal discussion that wasn’t fit to be shared outside the company.) 

While fossil fuels are still part of our near-term future, oil and gas companies, and particularly producers, would need to make drastic changes to align with climate goals—changes that wouldn’t be in their financial interest. Few seem inclined to really take the turn needed. 

As the IEA report puts it:  “In practice, no one committed to change should wait for someone else to move first.”

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

The deadly saga of the controversial gene therapy Elevidys

It has been a grim few months for the Duchenne muscular dystrophy (DMD) community. There had been some excitement when, a couple of years ago, a gene therapy for the disorder was approved by the US Food and Drug Administration for the first time. That drug, Elevidys, has now been implicated in the deaths of two teenage boys.

The drug’s approval was always controversial—there was a lack of evidence that it actually worked, for starters. But the agency that once rubber-stamped the drug has now turned on its manufacturer, Sarepta Therapeutics. In a remarkable chain of events, the FDA asked the company to stop shipping the drug on July 18. Sarepta refused to comply.

In the days since, the company has acquiesced. But its reputation has already been hit. And the events have dealt a devastating blow to people desperate for treatments that might help them, their children, or other family members with DMD.

DMD is a rare genetic disorder that causes muscles to degenerate over time. It’s caused by a mutation in a gene that codes for a protein called dystrophin. That protein is essential for muscles—without it, muscles weaken and waste away. The disease mostly affects boys, and symptoms usually start in early childhood.

At first, affected children usually start to find it hard to jump or climb stairs. But as the disease progresses, other movements become difficult too. Eventually, the condition might affect the heart and lungs. The life expectancy of a person with DMD has recently improved, but it is still only around 30 or 40 years. There is no cure. It’s a devastating diagnosis.

Elevidys was designed to replace missing dystrophin with a shortened, engineered version of the protein. In June 2023, the FDA approved the therapy for eligible four- and five-year-olds. It came with a $3.2 million price tag.

The approval was celebrated by people affected by DMD, says Debra Miller, founder of CureDuchenne, an organization that funds research into the condition and offers support to those affected by it. “We’ve not had much in the way of meaningful therapies,” she says. “The excitement was great.”

But the approval was controversial. It came under an “accelerated approval” program that essentially lowers the bar of evidence for drugs designed to treat “serious or life-threatening diseases where there is an unmet medical need.”

Elevidys was approved because it appeared to increase levels of the engineered protein in patients’ muscles. But it had not been shown to improve patient outcomes: It had failed a randomized clinical trial.

The FDA approval was granted on the condition that Sarepta complete another clinical trial. The topline results of that trial were described in October 2023 and were published in detail a year later. Again, the drug failed to meet its “primary endpoint”—in other words, it didn’t work as well as hoped.

In June 2024, the FDA expanded the approval of Elevidys. It granted traditional approval for the drug to treat people with DMD who are over the age of four and can walk independently, and another accelerated approval for those who can’t.

Some experts were appalled at the FDA’s decision—even some within the FDA disagreed with it. But things weren’t so simple for people living with DMD. I spoke to some parents of such children a couple of years ago. They pointed out that drug approvals can help bring interest and investment to DMD research. And, above all, they were desperate for any drug that might help their children. They were desperate for hope.

Unfortunately, the treatment does not appear to be delivering on that hope. There have always been questions over whether it works. But now there are serious questions over how safe it is. 

In March 2025, a 16-year-old boy died after being treated with Elevidys. He had developed acute liver failure (ALF) after having the treatment, Sarepta said in a statement. On June 15, the company announced a second death—a 15-year-old who also developed ALF following Elevidys treatment. The company said it would pause shipments of the drug, but only for patients who are not able to walk.

The following day, Sarepta held an online presentation in which CEO Doug Ingram said that the company was exploring ways to make the treatment safer, perhaps by treating recipients with another drug that dampens their immune systems. But that same day, the company announced that it was laying off 500 employees—36% of its workforce. Sarepta did not respond to a request for comment.

On June 24, the FDA announced that it was investigating the risks of serious outcomes “including hospitalization and death” associated with Elevidys, and “evaluating the need for further regulatory action.”

There was more tragic news on July 18, when there were reports that a third patient had died following a Sarepta treatment. This patient, a 51-year-old, hadn’t been taking Elevidys but was enrolled in a clinical trial for a different Sarepta gene therapy designed to treat limb-girdle muscular dystrophy. The same day, the FDA asked Sarepta to voluntarily pause all shipments of Elevidys. Sarepta refused to do so.

The refusal was surprising, says Michael Kelly, chief scientific officer at CureDuchenne: “It was an unusual step to take.”

After significant media coverage, including reporting that the FDA was “deeply troubled” by the decision and would use its “full regulatory authority,” Sarepta backed down a few days later. On July 21, the company announced its decision to “voluntarily and temporarily” pause all shipments of Elevidys in the US.

Sarepta says it will now work with the FDA to address safety and labeling concerns. But in the meantime, the saga has left the DMD community grappling with “a mix of disappointment and concern,” says Kelly. Many are worried about the risks of taking the treatment. Others are devastated that they are no longer able to access it.

Miller says she knows of families who have been working with their insurance providers to get authorization for the drug. “It’s like the rug has been pulled out from under them,” she says. Many families have no other treatment options. “And we know what happens when you do nothing with Duchenne,” she says. Others, particularly those with teenage children with DMD, are deciding against trying the drug, she adds.

The decision over whether to take Elevidys was already a personal one based on several factors, Kelly says. People with DMD and their families deserve clear and transparent information about the treatment in order to make that decision.

The FDA’s decision to approve Elevidys was made on limited data, says Kelly. But as things stand today, over 900 people have been treated with Elevidys. “That gives the FDA… an opportunity to look at real data and make informed decisions,” he says.

“Families facing Duchenne do not have time to waste,” Kelly says. “They must navigate a landscape where hope is tempered by the realities of medical complexity.”

A version of this article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

How nonprofits and academia are stepping up to salvage US climate programs

Nonprofits are striving to preserve a US effort to modernize greenhouse-gas measurements, amid growing fears that the Trump administration’s dismantling of federal programs will obscure the nation’s contributions to climate change.

The Data Foundation, a Washington, DC, nonprofit that advocates for open data, is fundraising for an initiative that will coordinate efforts among nonprofits, technical experts, and companies to improve the accuracy and accessibility of climate emissions information. It will build on an effort to improve the collection of emissions data that former president Joe Biden launched in 2023—and which President Trump nullified on his first day in office. 

The initiative will help prioritize responses to changes in federal greenhouse-gas monitoring and measurement programs, but the Data Foundation stresses that it will primarily serve a “long-standing need for coordination” of such efforts outside of government agencies.

The new greenhouse-gas coalition is one of a growing number of nonprofit and academic groups that have spun up or shifted focus to keep essential climate monitoring and research efforts going amid the Trump administration’s assault on environmental funding, staffing, and regulations. Those include efforts to ensure that US scientists can continue to contribute to the UN’s major climate report and publish assessments of the rising domestic risks of climate change. Otherwise, the loss of these programs will make it increasingly difficult for communities to understand how more frequent or severe wildfires, droughts, heat waves, and floods will harm them—and how dire the dangers could become. 

Few believe that nonprofits or private industry can come close to filling the funding holes that the Trump administration is digging. But observers say it’s essential to try to sustain efforts to understand the risks of climate change that the federal government has historically overseen, even if the attempts are merely stopgap measures. 

If we give up these sources of emissions data, “we’re flying blind,” says Rachel Cleetus, senior policy director with the climate and energy program at the Union of Concerned Scientists. “We’re deliberately taking away the very information that would help us understand the problem and how to address it best.”

Improving emissions estimates

The Environmental Protection Agency, the National Oceanic and Atmospheric Administration, the US Forest Service, and other agencies have long collected information about greenhouse gases in a variety of ways. These include self-reporting by industry; shipboard, balloon, and aircraft readings of gas concentrations in the atmosphere; satellite measurements of the carbon dioxide and methane released by wildfires; and on-the-ground measurements of trees. The EPA, in turn, collects and publishes the data from these disparate sources as the Inventory of US Greenhouse Gas Emissions and Sinks.

But that report comes out on a two-year lag, and studies show that some of the estimates it relies on could be way off—particularly the self-reported ones.

A recent analysis using satellites to measure methane pollution from four large landfills found they produce, on average, six times more emissions than the facilities had reported to the EPA. Likewise, a 2018 study in Science found that the actual methane leaks from oil and gas infrastructure were about 60% higher than the self-reported estimates in the agency’s inventory.

The Biden administration’s initiative—the National Strategy to Advance an Integrated US Greenhouse Gas Measurement, Monitoring, and Information System—aimed to adopt state-of-the-art tools and methods to improve the accuracy of these estimates, including satellites and other monitoring technologies that can replace or check self-reported information.

The administration specifically sought to achieve these improvements through partnerships between government, industry, and nonprofits. The initiative called for the data collected across groups to be published to an online portal in formats that would be accessible to policymakers and the public.

Moving toward a system that produces more current and reliable data is essential for understanding the rising risks of climate change and tracking whether industries are abiding by government regulations and voluntary climate commitments, says Ben Poulter, a former NASA scientist who coordinated the Biden administration effort as a deputy director in the Office of Science and Technology Policy.

“Once you have this operational system, you can provide near-real-time information that can help drive climate action,” Poulter says. He is now a senior scientist at Spark Climate Solutions, a nonprofit focused on accelerating emerging methods of combating climate change, and he is advising the Data Foundation’s Climate Data Collaborative, which is overseeing the new greenhouse-gas initiative. 

Slashed staffing and funding  

But the momentum behind the federal strategy deflated when Trump returned to office. On his first day, he signed an executive order that effectively halted it. The White House has since slashed staffing across the agencies at the heart of the effort, sought to shut down specific programs that generate emissions data, and raised uncertainties about the fate of numerous other program components. 

In April, the administration missed a deadline to share the updated greenhouse-gas inventory with the United Nations, for the first time in three decades, as E&E News reported. It eventually did release the report in May, but only after the Environmental Defense Fund filed a Freedom of Information Act request.

There are also indications that the collection of emissions data might be in jeopardy. In March, the EPA said it would “reconsider” the Greenhouse Gas Reporting Program, which requires thousands of power plants, refineries, and other industrial facilities to report emissions each year.

In addition, the tax and spending bill that Trump signed into law earlier this month rescinds provisions in Biden’s Inflation Reduction Act that provided incentives or funding for corporate greenhouse-gas reporting and methane monitoring. 

Meanwhile, the White House has also proposed slashing funding for the National Oceanic and Atmospheric Administration and shuttering a number of its labs. Those include the facility that supports the Mauna Loa Observatory in Hawaii, the world’s longest-running carbon dioxide measuring program, as well as the Global Monitoring Laboratory, which operates a global network of collection flasks that capture air samples used to measure concentrations of nitrous oxide, chlorofluorocarbons, and other greenhouse gases.

Under the latest appropriations negotiations, Congress seems set to spare NOAA and other agencies the full cuts pushed by the Trump administration, but that may or may not protect various climate programs within them. As observers have noted, the loss of experts throughout the federal government, coupled with the priorities set by Trump-appointed leaders of those agencies, could still prevent crucial emissions data from being collected, analyzed, and published.

“That’s a huge concern,” says David Hayes, a professor at the Stanford Doerr School of Sustainability, who previously worked on the effort to upgrade the nation’s emissions measurement and monitoring as special assistant to President Biden for climate policy. It’s not clear “whether they’re going to continue and whether the data availability will drop off.”

‘A natural disaster’

Amid all these cutbacks and uncertainties, those still hoping to make progress toward an improved system for measuring greenhouse gases have had to adjust their expectations: It’s now at least as important to simply preserve or replace existing federal programs as it is to move toward more modern tools and methods.

But Ryan Alexander, executive director of the Data Foundation’s Climate Data Collaborative, is optimistic that there will be opportunities to do both. 

She says the new greenhouse-gas coalition will strive to identify the highest-priority needs and help other nonprofits or companies accelerate the development of new tools or methods. It will also aim to ensure that these organizations avoid replicating one another’s efforts and deliver data with high scientific standards, in open and interoperable formats. 

The Data Foundation declines to say what other nonprofits will be members of the coalition or how much money it hopes to raise, but it plans to make a formal announcement in the coming weeks. 

Nonprofits and companies are already playing a larger role in monitoring emissions, including organizations like Carbon Mapper, which operates satellites and aircraft that detect and measure methane emissions from particular facilities. The EDF also launched a satellite last year, known as MethaneSAT, that could spot large and small sources of emissions—though it lost power earlier this month and probably cannot be recovered. 

Alexander notes that shifting from self-reported figures to observational technology like satellites could not just replace but perhaps also improve on the EPA reporting program that the Trump administration has moved to shut down.

Given the “dramatic changes” brought about by this administration, “the future will not be the past,” she says. “This is like a natural disaster. We can’t think about rebuilding in the way that things have been in the past. We have to look ahead and say, ‘What is needed? What can people afford?’”

Organizations can also use this moment to test and develop emerging technologies that could improve greenhouse-gas measurements, including novel sensors or artificial intelligence tools, Hayes says. 

“We are at a time when we have these new tools, new technologies for measurement, measuring, and monitoring,” he says. “To some extent it’s a new era anyway, so it’s a great time to do some pilot testing here and to demonstrate how we can create new data sets in the climate area.”

Saving scientific contributions

It’s not just the collection of emissions data that nonprofits and academic groups are hoping to save. Notably, the American Geophysical Union and its partners have taken on two additional climate responsibilities that traditionally fell to the federal government.

The US State Department’s Office of Global Change historically coordinated the nation’s contributions to the UN Intergovernmental Panel on Climate Change’s major reports on climate risks, soliciting and nominating US scientists to help write, oversee, or edit sections of the assessments. The US Global Change Research Program, an interagency group that ran much of the process, also covered the cost of trips to a series of in-person meetings with international collaborators. 

But the US government seems to have relinquished any involvement as the IPCC kicks off the process for the Seventh Assessment Report. In late February, the administration blocked federal scientists including NASA’s Katherine Calvin, who was previously selected as a cochair for one of the working groups, from attending an early planning meeting in China. (Calvin was the agency’s chief scientist at the time but was no longer serving in that role as of April, according to NASA’s website.)

The agency didn’t respond to inquiries from interested scientists after the UN panel issued a call for nominations in March, and it failed to present a list of nominations by the deadline in April, scientists involved in the process say. The Trump administration also canceled funding for the Global Change Research Program and, earlier this month, fired the last remaining staffers working at the Office of Global Change.

In response, 10 universities came together in March to form the US Academic Alliance for the IPCC, in partnership with the AGU, to request and evaluate applications from US researchers. The universities—which include Yale, Princeton, and the University of California, San Diego—together nominated nearly 300 scientists, some of whom the IPCC has since officially selected. The AGU is now conducting a fundraising campaign to help pay for travel expenses. 

Pamela McElwee, a professor at Rutgers who helped establish the academic coalition, says it’s crucial for US scientists to continue participating in the IPCC process.

“It is our flagship global assessment report on the state of climate, and it plays a really important role in influencing country policies,” she says. “To not be part of it makes it much more difficult for US scientists to be at the cutting edge and advance the things we need to do.” 

The AGU also stepped in two months later, after the White House dismissed hundreds of researchers working on the National Climate Assessment, a congressionally mandated report analyzing the rising dangers of climate change across the country. The AGU and American Meteorological Society together announced plans to publish a “special collection” to sustain the momentum of that effort.

“It’s incumbent on us to ensure our communities, our neighbors, our children are all protected and prepared for the mounting risks of climate change,” said Brandon Jones, president of the AGU, in an earlier statement.

The AGU declined to discuss the status of the project.

Stopgap solution

The sheer number of programs the White House is going after will require organizations to make hard choices about what they attempt to save and how they go about it. Moreover, relying entirely on nonprofits and companies to take over these federal tasks is not viable over the long term. 

Given the costs of these federal programs, it could prove prohibitive to even keep a minimum viable version of some essential monitoring systems and research programs up and running. Dispersing across various organizations the responsibility of calculating the nation’s emissions sources and sinks also creates concerns about the scientific standards applied and the accessibility of that data, Cleetus says. Plus, moving away from the records that NOAA, NASA, and other agencies have collected for decades would break the continuity of that data, undermining the ability to detect or project trends.

More basically, publishing national emissions data should be a federal responsibility, particularly for the government of the world’s second-largest climate polluter, Cleetus adds. Failing to calculate and share its contributions to climate change sidesteps the nation’s global responsibilities and sends a terrible signal to other countries. 

Poulter stresses that nonprofits and the private sector can do only so much, for so long, to keep these systems up and running.

“We don’t want to give the impression that this greenhouse-gas coalition, if it gets off the ground, is a long-term solution,” he says. “But we can’t afford to have gaps in these data sets, so somebody needs to step in and help sustain those measurements.”

Why A Site Deindexed By Google For Programmatic SEO Bounced Back

A company founder shared their experience with programmatic SEO, which they credited for the site’s initial success until Google deindexed its pages, calling it a big mistake they won’t repeat. The post, shared on LinkedIn, received scores of supportive comments.

The website didn’t receive a manual action; Google deindexed the web pages due to poor content quality.

Programmatic SEO (pSEO)

Programmatic SEO (aka pSEO) is a phrase that encompasses a wide range of tactics with automation at their core. Some of it can be very useful, like automating sitewide meta descriptions, titles, and alt text for images.

pSEO is also the practice of using AI automation to scale content creation sitewide, which is what the person did. They created fifty thousand pages targeting long-tail phrases, which are phrases that are not commonly queried. The site initially received hundreds of clicks and millions of impressions, but the success was not long-lived.

According to the post by Miquel Palet (LinkedIn Profile):

“Google flagged our domain. Pages started getting deindexed. Traffic plummeted overnight.

We learned the hard way that shortcuts don’t scale sustainably.

It was a huge mistake, but also a great lesson.

And it’s one of the reasons we rebranded to Tailride.”

Thin AI Content Was The Culprit

A follow-up post explained that they believe the AI-generated content backfired because it was thin content, which makes sense. Thin content, regardless of how it was authored, can be problematic.

One of the posts by Palet explained:

“We’re not sure, but probably not because AI. It was thin content and probably duplicated.”

Rasmus Sørensen (LinkedIn profile), an experienced digital marketer, shared his opinion that he’s seen some marketers pushing shady practices under the banner of pSEO:

“Thanks for sharing and putting some real live experiences forward. Programmatic SEO had been touted as the next best thing in SEO. It’s not and I’ve seen soo much garbage published the last few months and agencies claiming that their pSEO is the silver bullet.
It very rarely is.”

Joe Youngblood (LinkedIn profile) shared that SEO trends can be abused and implied that it is a viable strategy if done correctly:

“I would always do something like pSEO under the supervision of a seasoned SEO consultant. This tale happens all too frequently with an SEO trend…”

What They Did To Fix The Site

The company founder shared that they rebranded the website to a new domain, redirected the old domain to the new one, and focused the site on higher-quality content that’s relevant to users.

They explained:

“Less pages + more quality”

A site: search for their domain shows that Google is now indexing their content, indicating that they are back on track.

Takeaways

Programmatic SEO can be useful if approached with an understanding of where the line is between good quality and “not-quality” content.

Featured Image by Shutterstock/Cast Of Thousands

Why Is SureRank WordPress SEO Plugin So Popular?

A new SEO plugin called SureRank, by Brainstorm Force, makers of the popular Astra theme, is rapidly growing in popularity. In beta for a few months, it was announced in July and has amassed over twenty thousand installations. That’s a pretty good start for an SEO plugin that has only been out of beta for a few weeks.

One possible reason that SureRank is quickly becoming popular is that it’s created by a trusted brand, much loved for its Astra WordPress theme.

SureRank By Brainstorm Force

SureRank is the creation of the publishers of many highly popular plugins and themes installed on millions of websites, such as the Astra theme, Ultimate Addons for Elementor, Spectra Gutenberg Blocks – Website Builder for the Block Editor, and Starter Templates – AI-Powered Templates for Elementor & Gutenberg, to name a few.

Why Another SEO Plugin?

The goal of SureRank is to provide an easy-to-use SEO solution that includes only the features every site needs, avoiding feature bloat. It positions itself as an SEO assistant that guides the user with an intuitive user interface.

What Does SureRank Do?

SureRank has an onboarding process that walks a user through the initial optimizations and setup. It then performs an analysis and offers suggestions for site-level improvements.

It currently enables users to handle the basics like:

  • Edit titles and meta descriptions
  • Write custom social media titles, descriptions, and featured images
  • Tweak home page and archive page metadata
  • Meta robot directives, canonicals, and sitemaps
  • Schema structured data
  • Site and page level SEO analysis
  • Automatic image alt text generation
  • Google Search Console integration
  • WooCommerce integration

SureRank also provides a built-in tool for migrating settings from other popular SEO plugins like Rank Math, Yoast, and AIOSEO.

Check out the SureRank SEO plugin at the official WordPress.org repository:

SureRank – SEO Assistant with Meta Tags, Social Preview, XML Sitemap, and Schema

Featured Image by Shutterstock/Roman Samborskyi

Facebook ‘Megaphone’ Powers D2C Watch Brand

Nate Lagos is vice president of marketing for Original Grain, a direct-to-consumer watch maker. He relies on Facebook advertising, but not for immediate customer acquisition.

“Platforms such as Facebook are megaphones, not salespeople,” he says.

In our recent conversation, Nate shared his marketing origins, advertising tactics, influencer management, and more.

Our entire audio conversation is embedded below. The transcript is edited for length and clarity.

Eric Bandholz: Give us a quick rundown of who you are and what you do.

Nate Lagos: I’m the vice president of marketing at Original Grain, a watch company that blends wood and steel to create timepieces that guys want to wear. I’ve been here four years, leading growth through product innovation and creative marketing campaigns. Before that, I served as CMO for a couple of smaller ecommerce brands.

The last four years at OG have been exciting, fast-paced, and at times stressful — but extremely rewarding.

My marketing journey started in college. I fell in love with the subject after my first class, but quickly realized school wouldn’t teach me how to thrive in the real world. I had one great professor, but most classes fell short. I began freelancing during my sophomore year, running organic and paid social campaigns for local businesses, and built from there.

I host a twice-weekly podcast called “Tactical & Practical.” Each episode is 10-12 minutes and delves into a single tactic we’re using or a challenge I’m facing. The goal is to create the kind of honest, tactical content I wish I had at my first CMO job at age 24, when I had no idea what I was doing.

Bandholz: How do you approach media buying and ad strategy?

Lagos: I see advertising as a way to amplify great brands, not as a tool to acquire customers directly. Platforms such as Facebook are megaphones, not salespeople. I pour budget into that megaphone because impressions have long-term value, even if they don’t immediately convert.

Nearly all of our ad budget goes to Facebook, primarily for conversion campaigns. Our average order value for new customers is $360. Their buying decisions often take months. So, we don’t obsess over daily customer acquisition costs — we focus on consistent awareness and brand affinity that pays off during key moments, such as the holidays.

Our performance metric is straightforward: If we spend $10,000 promoting a watch and earn $40,000 from it, the ads are effective, regardless of Facebook’s internal metrics. If we only make $11,000, we cut spend, test new creative, or shift messaging.

We typically advertise our top five watches, not our entire catalog. We structure our campaigns by collection, and we measure success both at the individual product level and the collection-level return-on-ad-spend. Meta accounts for 95% of our spend. The rest goes to Google, YouTube, and influencers, which we’d like to grow, though they’re harder to scale and produce content for.

Bandholz: What’s your strategy for changing ad creative on Meta?

Lagos: I’m still figuring that out. Historically, we didn’t launch a large number of ads — typically around 10–15 per week — even as we grew by over 100% last year from an eight-figure base. This year, we’ve ramped that up to 30–40 ads weekly. It’s not because we need more volume to find winners, but because Facebook won’t allocate spend unless we launch more.

The platform tends to push our top-performing ads, which is fine until those ads plateau. Previously, we could introduce new creative into the same ad set, and Facebook would distribute spend. That’s no longer happening. By increasing volume, we’re now seeing new ads spend faster and find winners more quickly.

Our full-time photographer is also our creative inspiration, handling graphic design and brand direction. We hired an operations lead earlier this year. He focuses on Klaviyo and Postscript scheduling and helps out with social and influencer campaigns. So there are three of us on the team.

Most of our messaging angles come from copy I test directly on our site. Once we see what converts there, we repurpose that language into ads.

Bandholz: Thirty pieces of content weekly takes work.

Lagos: Approximately one-third of our content consists of iterations of past winners — duplicate headlines, graphics, and photography styles. If a creative is performing, we replicate it across our top five watches and underperformers we want to push.

For new content, Chris (our creative lead) and I brainstorm weekly using a shared Canva board. I lean toward old-school inspiration — vintage Rolex and cigarette ads — while he pulls modern ecommerce and consumer-focused examples. We compare notes on what we like and dislike, and adapt our messaging and offers to those styles.

We’re intentional with testing. If we’re trying a new visual format, we’ll pair it with a proven offer, headline, and watch. If it flops, we know it’s the visual that didn’t land, not the copy or the product. It helps us stay efficient and avoid confusion when something doesn’t work.

Bandholz: What makes your top product so successful?

Lagos: We launched our top-selling watch two years ago. It’s an automatic skeleton-dial watch, so you see all the inner mechanics. It’s black-plated stainless steel with charred whiskey barrel wood, and that combo crushes. Since then, we’ve launched other watches using similar elements, and many have worked. Our founders do an incredible job designing them.

I’ve learned it’s not the marketing that determines success. We launched this watch with the same email, ad, and strategy as others. So when one sells out and the other doesn’t, no one blames marketing — it’s all about product-market fit.

Keeping this watch in stock has been the real challenge. We launched 400 units in November 2023, and they sold out quickly. We thought it was holiday timing, but it continued to sell — 500 more, then thousands for Father’s Day, and then a massive run in Q4 2024. Eventually, I raised prices and pulled back ads to slow sales.

Bandholz: You mentioned influencers. What’s your strategy?

Lagos: We’re lucky because we’re our own target audience — 35 to 50-year-old guys who drink whiskey and love outdoorsy, rugged stuff. So we’re already fans of the people we end up working with. We also survey our customers about their music and sports preferences to guide our influencer selection.

Our outreach is mostly manual. We send cold direct messages, and I occasionally reach out to agents on LinkedIn. Having big-name partners such as Jack Daniel’s and Taylor Guitars gives us instant credibility. Influencers take us seriously when they see who we work with.

We don’t do affiliate or revenue share. It doesn’t align with our long purchase cycles. Instead, we pay a flat fee for a set number of posts or YouTube inclusions. Instagram collaborations let us repurpose posts as ads. They aren’t high converters but deliver great impression and click costs.

We use codes and links to track YouTube performance and calculate revenue per thousand impressions. Some audiences, such as whiskey content creators, bring $80 RPMs, while lifestyle comedians bring $20. As long as we pay below those amounts, the channel works. We’ve also had success with truck, outdoors, and even music creators, although music has been hit or miss.

Bandholz: Where can people buy your watches and reach out?

Lagos: OriginalGrain.com. I’m on X and LinkedIn. My podcast is Tactical & Practical.

Google Confirms CSS Class Names Don’t Influence SEO

In a recent episode of Google’s Search Off the Record podcast, Martin Splitt and John Mueller clarified how CSS affects SEO.

While some aspects of CSS have no bearing on SEO, others can directly influence how search engines interpret and rank content.

Here’s what matters and what doesn’t.

Class Names Don’t Matter For Rankings

One of the clearest takeaways from the episode is that CSS class names have no impact on Google Search.

Splitt stated:

“I don’t think it does. I don’t think we care because the CSS class names are just that. They’re just assigning a specific somewhat identifiable bit of stylesheet rules to elements and that’s it. That’s all. You could name them all “blurb.” It would not make a difference from an SEO perspective.”

Class names, they explained, are used only for applying visual styling. They’re not considered part of the page’s content. So they’re ignored by Googlebot and other HTML parsers when extracting meaningful information.

Even if you’re feeding HTML into a language model or a basic crawler, class names won’t factor in unless your system is explicitly designed to read those attributes.

Why Content In Pseudo Elements Is A Problem

While class names are harmless, the team warned about placing meaningful content in CSS pseudo elements like :before and :after.

Splitt stated:

“The idea again—the original idea—is to separate presentation from content. So content is in the HTML, and how it is presented is in the CSS. So with before and after, if you add decorative elements like a little triangle or a little dot or a little light bulb or like a little unicorn—whatever—I think that is fine because it’s decorative. It doesn’t have meaning in the sense of the content. Without it, it would still be fine.”

Adding visual flourishes is acceptable, but inserting headlines, paragraphs, or any user-facing content into pseudo elements breaks a core principle of web development: content belongs in the HTML, and presentation belongs in the CSS.

That content becomes invisible to search engines, screen readers, and any other tools that rely on parsing the HTML directly.

Mueller shared a real-world example of how this can go wrong:

“There was once an escalation from the indexing team that said we should contact the site and tell them to stop using before and after… They were using the before pseudo class to add a number sign to everything that they considered hashtags. And our indexing system was like, it would be so nice if we could recognize these hashtags on the page because maybe they’re useful for something.”

Because the hashtag symbols were added via CSS, they were never seen by Google’s systems.

Splitt tested it live during the recording and confirmed:

“It’s not in the DOM… so it doesn’t get picked up by rendering.”
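A quick way to see what Splitt means is to compare what the DOM actually contains with what CSS generates. The sketch below is a hypothetical browser-console check, not code from the podcast; the “.hashtag” class and the ::before rule that adds the “#” are assumptions for illustration.

```typescript
// Minimal sketch: assumes a stylesheet rule like `.hashtag::before { content: "#"; }`.
// The ".hashtag" selector is hypothetical; point it at whatever element you want to check.
const tag = document.querySelector<HTMLElement>(".hashtag");

if (tag) {
  // Text that is actually in the DOM: CSS-generated content never appears here,
  // which is roughly what an HTML parser or crawler reads.
  console.log(tag.textContent); // e.g. "seo" (no "#")

  // The generated "#" exists only as a computed style on the pseudo element.
  console.log(getComputedStyle(tag, "::before").content); // e.g. '"#"'
}
```

If the text matters to search engines, it should show up in the first check, not just the second.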

Oversized CSS Can Hurt Performance

The episode also touched on performance issues related to bloated stylesheets.

According to data from the HTTP Archive’s 2022 Web Almanac, the median size of a CSS file had grown to around 68 KB for mobile and 72 KB for desktop.

Mueller stated:

“The Web Almanac says every year we see CSS grow in size, and in 2022 the median stylesheet size was 68 kilobytes or 72 kilobytes. … They also mentioned the largest one that they found was 78 megabytes. … These are text files.”

That kind of bloat can negatively impact Core Web Vitals and overall user experience, which are two areas that do influence rankings. Frameworks and prebuilt libraries are often the cause.

While developers can mitigate this with minification and unused rule pruning, not everyone does. This makes CSS optimization a worthwhile item on your technical SEO checklist.

Keep CSS Crawlable

Despite CSS’s limited role in ranking, Google still recommends making CSS files crawlable.

Mueller joked:

“Google’s guidelines say you should make your CSS files crawlable. So there must be some kind of magic in there, right?”

The real reason is more technical than magical. Googlebot uses CSS files to render pages the way users would see them.

Blocking CSS can affect how your pages are interpreted, especially for layout, mobile-friendliness, or elements like hidden content.

Practical Tips For SEO Pros

Here’s what this episode means for your SEO practices:

  • Stop optimizing class names: Keywords in CSS classes won’t help your rankings.
  • Check pseudo elements: Any real content, like text meant to be read, should live in HTML, not in :before or :after.
  • Audit stylesheet size: Large CSS files can hurt page speed and Core Web Vitals. Trim what you can.
  • Ensure CSS is crawlable: Blocking stylesheets may disrupt rendering and impact how Google understands your page.

The team also emphasized the importance of using proper HTML tags for meaningful images:

“If the image is part of the content and you’re like, ‘Look at this house that I just bought,’ then you want an img, an image tag or a picture tag that actually has the actual image as part of the DOM because you want us to see like, ah, so this page has this image that is not just decoration.”

Use CSS for styling and HTML for meaning. This separation helps both users and search engines.

Listen to the full podcast episode below:

5 Ways To Prove The Real Value Of SEO In The AI Era

As SEO evolves with AI optimization, generative engine optimization, and answer engine optimization, brands and marketers must rethink their SEO strategies to stay competitive.

Instead of focusing solely on traditional SEO strategies and tactics, you need to be visible in AI-powered search and answer engines.

Showing the value of SEO in this new world means showcasing how optimized, structured, and intent-driven content can maximize visibility across generative platforms.

It can also enhance user trust and drive qualified engagement in a world where AI chatbots and platforms interpret a user’s intent, retrieve relevant information, and generate clear and concise answers.

In today’s competitive AI-powered search results, it can be difficult to maximize your visibility.

With SEO becoming more challenging and search results constantly changing to incorporate AI answers, what metrics do you need to track, and how can you show the value of SEO?

Let’s explore.

Proving The Value Of SEO

Proving SEO value depends on your client or prospective client’s goals and what will move the needle for them to get visibility in the search engine results pages (SERPs) and in AI chatbots and platforms.

This could include local search, app store optimization, content marketing, technical optimization, AI Overviews, etc.

That said, you must show performance improvements and drive revenue to secure more funding and make your client successful.

In my experience, here are some of the best metrics to track and measure to prove the SEO value in an AI world:

1. Monitor AI Results

With AI Overviews and generative AI changing SEO, it is important to track visibility as we move from ranking to relevance.

AI Overviews are not expected to go anywhere. During I/O 2025, Google announced that AI Overviews were expanding to over 200 countries and more than 40 languages.

AI Mode is now available to all users in the United States without the need to opt in via Search Labs.

To track AI Overviews:

Identify Which Queries Trigger AI Overviews

You can use tools like ZipTie.dev or Semrush to track which of your top-performing queries show AI Overviews and whether your site is included in those summaries.

Screenshot from Semrush, June 2025

Track AI Overview Queries

Once you have a list of queries for which your site does or doesn’t appear in an AI Overview, you should track those queries using keyword tracking tools and compare your traffic pre- and post-AI rollouts.

Strategize To Optimize Your Content For AI Overviews

Segment your traffic based on content type, as many informational queries are experiencing a decline in traffic due to users obtaining answers directly from AI Overviews.

This will help you identify which areas are most impacted and plan your strategy to optimize queries that have the potential to show AI Overviews.

Consider server-side analytics solutions (e.g., Writesonic’s AI Traffic Analytics) to track AI crawler visits, see which pages are accessed, and monitor trends over time.
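As a rough illustration of the server-side approach, the sketch below scans an ordinary access log for hits from AI crawlers by user-agent substring. It is not any vendor’s product; the file name and bot names are assumptions, so verify them against each crawler’s current documentation.

```typescript
// Hedged sketch: count AI-crawler hits in an access log by user-agent substring.
// "access.log" and the bot names are assumptions; check each vendor's docs for current strings.
import { readFileSync } from "node:fs";

const AI_BOTS = ["GPTBot", "PerplexityBot", "ClaudeBot", "Google-Extended"];

const counts: Record<string, number> = {};
for (const line of readFileSync("access.log", "utf8").split("\n")) {
  const bot = AI_BOTS.find((name) => line.includes(name));
  if (bot) counts[bot] = (counts[bot] ?? 0) + 1;
}

console.log(counts); // e.g. { GPTBot: 120, PerplexityBot: 34 }
```

Run something like this on a schedule and the counts become the trend line you want to monitor over time.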

2. Track AI Brand Mentions

Since AI platforms process information differently than traditional search engines, getting mentioned in ChatGPT, Perplexity, Claude, or Google’s AI Mode for relevant queries is a must.

AI platforms like ChatGPT and Google’s AI Overview generate answers from a mix of training data and some real-time retrieval, depending on the platform and setup.

In my experience, brands that are frequently mentioned across various platforms, including PR, blogs, social media, news coverage, YouTube, forums (such as Reddit and Quora), and authoritative sites, tend to be mentioned by AI.

To track AI mentions, several tools like Brand24, Brand Radar from Ahrefs, and Mention.com use AI to monitor online conversations across various platforms, leveraging large datasets to provide insights into your brand’s perception and those of your competitors.

It’s imperative that you find out if your brand is mentioned, what people are saying about your brand (both positive and negative), what queries are used to describe it, and which websites mention your brand.

Screenshot from Brand Radar, Ahrefs, June 2025

3. Track AI Citations/References

Checking to see if your website is cited by large language models (LLMs) can help brands and marketers understand how their content is being used by AI and assess their brand’s authority and visibility.

Ahrefs now offers a free tool that tracks when your website is cited in the answers generated by AI-powered search tools like Google AIO, ChatGPT, and Perplexity. AI citations count how often a domain was linked in AI results.

Pages show how many unique URLs from this domain were linked.

Screenshot from Ahrefs, June 2025

This is one of my favorite audit tools for checking whether a brand we’re reviewing has any citations.

If Ahrefs adds trend analysis to track whether you’re gaining more citations in Google AIO, ChatGPT, and other platforms over time, it would be a valuable way to assess whether your strategies are working.

4. Tracking Branded Searches

It’s extremely important to track your branded searches in this new SEO AI era. AI-powered search results are personalized, and LLMs like Gemini and ChatGPT, to name a few, heavily consider user intent and context.

Having strong brand signals could improve entity recognition, which can improve your visibility for related queries.

Tracking how AI-generated answers (e.g., featured snippets or AI Overviews) treat your brand helps you optimize for entity-driven SEO.

In the AI SEO era, where search engines prioritize context, trust, and relevance, tracking branded searches could inform you to refine strategies that help defend your SERP presence and maximize conversions.

Here are some tips to help enhance branded visibility:

  • Create unique, authoritative, factual, and conversational content because AI models prioritize reliable and accurate information. Focus on content that demonstrates expertise and includes verifiable data.
  • Structure content for AI readability by using clear headings (H1, H2, H3), bullet lists, numbered lists, and data tables. Also, create concise paragraphs that directly answer questions.
  • Leverage schema markup like Organization, Product, Service, FAQPage, and Review to provide structured data that AI models can easily understand and reference (see the sketch after this list).
  • Build brand authority and expertise by getting consistent citations, mentions on authoritative third-party sites, and positive reviews, to contribute to AI’s perception of your brand’s credibility.
  • Optimize conversational queries by creating content that directly answers “who, what, why, and how” in your niche.
  • Be active on platforms like Reddit and Quora, where AI models often pull information. SEO becomes “Search Engine Everywhere.”
  • Regularly review your AI visibility data, identify gaps, and adjust your content and SEO strategies based on insights.
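To make the schema markup point concrete, here is a minimal sketch of an Organization JSON-LD block injected into the page head. The brand name and URLs are placeholders, and injecting via JavaScript is just one option (server-side rendering works too); consult Google’s structured data documentation for the properties that matter for your site.

```typescript
// Minimal sketch: inject Organization JSON-LD into the page. All values are placeholders.
const organizationSchema = {
  "@context": "https://schema.org",
  "@type": "Organization",
  name: "Example Brand",
  url: "https://www.example.com",
  logo: "https://www.example.com/logo.png",
  sameAs: ["https://www.linkedin.com/company/example-brand"],
};

const script = document.createElement("script");
script.type = "application/ld+json";
script.textContent = JSON.stringify(organizationSchema);
document.head.appendChild(script);
```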

5. Tracking AI Mode Metrics

AI Traffic In GSC

Google has recently provided some data in GSC for tracking AI Mode, and marketers can track clicks, impressions, and positions.

According to Google:

AI Mode groups the user’s question into subtopics and searches for each one simultaneously, and users can go deeper.

If a user asks a follow-up question within AI Mode, they are essentially performing a new query. All impression, position, and click data in the new response are counted as coming from this new user query.

AI Traffic In GA4

While Google Analytics 4 doesn’t explicitly label AI traffic, you can look for patterns. Create custom reports with “Session source/medium” and apply regex filters for known AI domains (e.g., .*ChatGPT.*|.*perplexity.*|.*openai.*|.*bard.*).
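Before building the report, you can sanity-check the pattern against a few sample “Session source / medium” strings. The sketch below is hypothetical; the sample strings are illustrations, not real GA4 data.

```typescript
// Hedged sketch: the same kind of regex used in the GA4 filter, tested locally.
const aiReferrer = /chatgpt|perplexity|openai|bard/i;

const sampleSources = [
  "chatgpt.com / referral",
  "google / organic",
  "perplexity.ai / referral",
];

console.log(sampleSources.filter((source) => aiReferrer.test(source)));
// → ["chatgpt.com / referral", "perplexity.ai / referral"]
```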

For specific content you hope AI will cite, create unique URLs with UTM parameters (e.g., utm_source=chatgpt, utm_medium=ai). This can help attribute some traffic directly.

If you can get more conversions from AI Overviews, as Ahrefs did when it found that AI search visitors converted at a rate 23 times higher than traditional organic search traffic despite representing only 0.5% of total website visits, then you will have discovered a conversion goldmine that makes AI optimization not just worthwhile, but essential for staying competitive.

Final Thoughts

The SEO landscape has shifted from optimizing search engines and traditional search to optimizing for AI-powered chatbots and solutions, such as ChatGPT, Perplexity, Claude, Google’s AI Overviews, and potentially OpenAI’s web browser “in the coming weeks,” according to Reuters.

Google may face increased pressure and potentially lose market share if OpenAI launches an AI-powered web browser that challenges Google Chrome, changing how users access web content.

OpenAI has 500 million weekly active users of ChatGPT and could disrupt a key source of rival Google’s ad revenue.

SEO is no longer about ranking on the first page of Google.

It’s about being relevant and visible across multiple AI platforms, getting mentioned in generative responses, and demonstrating value through AI-focused metrics outside of the traditional metrics like rankings and traffic.

Brands and marketers that prove the SEO value in this new era can deliver immediate, measurable value while building momentum for larger investments in the future.

Featured Image: Roman Samborskyi/Shutterstock