Cybersecurity’s global alarm system is breaking down

Every day, billions of people trust digital systems to run everything from communication to commerce to critical infrastructure. But the global early warning system that alerts security teams to dangerous software flaws is showing critical gaps in coverage—and most users have no idea their digital lives are likely becoming more vulnerable.

Over the past 18 months, two pillars of global cybersecurity have flirted with collapse. In February 2024, the US-backed National Vulnerability Database (NVD)—relied on globally for its free analysis of security threats—abruptly stopped publishing new entries, citing a cryptic “change in interagency support.” Then, in April of this year, the Common Vulnerabilities and Exposures (CVE) program, the fundamental numbering system for tracking software flaws, seemed at similar risk: A leaked letter warned of an imminent contract expiration.

Cybersecurity practitioners have since flooded Discord channels and LinkedIn feeds with emergency posts and memes of “NVD” and “CVE” engraved on tombstones. Unpatched vulnerabilities are the second most common way cyberattackers break in, and they have led to fatal hospital outages and critical infrastructure failures. In a social media post, Jen Easterly, a US cybersecurity expert, said: “Losing [CVE] would be like tearing out the card catalog from every library at once—leaving defenders to sort through chaos while attackers take full advantage.” If CVEs identify each vulnerability like a book in a card catalog, NVD entries provide the detailed review with context around severity, scope, and exploitability.

In the end, the Cybersecurity and Infrastructure Security Agency (CISA) extended funding for CVE another year, attributing the incident to a “contract administration issue.” But the NVD’s story has proved more complicated. Its parent organization, the National Institute of Standards and Technology (NIST), reportedly saw its budget cut roughly 12% in 2024, right around the time that CISA pulled its $3.7 million in annual funding for the NVD. Shortly after, as the backlog grew, CISA launched its own “Vulnrichment” program to help address the analysis gap, while promoting a more distributed approach that allows multiple authorized partners to publish enriched data. 

“CISA continuously assesses how to most effectively allocate limited resources to help organizations reduce the risk of newly disclosed vulnerabilities,” says Sandy Radesky, the agency’s associate director for vulnerability management. Rather than just filling the gap, she emphasizes, Vulnrichment was established to provide unique additional information, like recommended actions for specific stakeholders, and to “reduce dependency of the federal government’s role to be the sole provider of vulnerability enrichment.”

Meanwhile, NIST has scrambled to hire contractors to help clear the backlog. Despite a return to pre-crisis processing levels, a boom in vulnerabilities newly disclosed to the NVD has outpaced these efforts. Currently, over 25,000 vulnerabilities await processing—nearly 10 times the previous high in 2017, according to data from the software company Anchore. Before that, the NVD largely kept pace with CVE publications, maintaining a minimal backlog.

“Things have been disruptive, and we’ve been going through times of change across the board,” Matthew Scholl, then chief of the computer security division in NIST’s Information Technology Laboratory, said at an industry event in April. “Leadership has assured me and everyone that NVD is and will continue to be a mission priority for NIST, both in resourcing and capabilities.” Scholl left NIST in May after 20 years at the agency, and NIST declined to comment on the backlog. 

The situation has now prompted multiple government actions, with the Department of Commerce launching an audit of the NVD in May and House Democrats calling for a broader probe of both programs in June. But the damage to trust is already transforming geopolitics and supply chains as security teams prepare for a new era of cyber risk. “It’s left a bad taste, and people are realizing they can’t rely on this,” says Rose Gupta, who builds and runs enterprise vulnerability management programs. “Even if they get everything together tomorrow with a bigger budget, I don’t know that this won’t happen again. So I have to make sure I have other controls in place.”

As these public resources falter, organizations and governments are confronting a critical weakness in our digital infrastructure: Essential global cybersecurity services depend on a complex web of US agency interests and government funding that can be cut or redirected at any time.

Security haves and have-nots

What began as a trickle of software vulnerabilities in the early internet era has become an unstoppable avalanche, and the free databases that have tracked them for decades have struggled to keep up. In early July, the CVE database crossed 300,000 cataloged vulnerabilities. Numbers jump unpredictably each year, sometimes by 10% or much more. Even before its latest crisis, the NVD was notorious for delayed publication of new vulnerability analyses, often trailing private security software and vendor advisories by weeks or months.

Gupta has watched organizations increasingly adopt commercial vulnerability management (VM) software that includes its own threat intelligence services. “We’ve definitely become over-reliant on our VM tools,” she says, describing security teams’ growing dependence on vendors like Qualys, Rapid7, and Tenable to supplement or replace unreliable public databases. These platforms combine their own research with various data sources to create proprietary risk scores that help teams prioritize fixes. But not all organizations can afford to fill the NVD’s gap with premium security tools. “Smaller companies and startups, already at a disadvantage, are going to be more at risk,” she explains. 

Komal Rawat, a security engineer in New Delhi whose mid-stage cloud startup has a limited budget, describes the impact in stark terms: “If NVD goes, there will be a crisis in the market. Other databases are not that popular, and to the extent they are adopted, they are not free. If you don’t have recent data, you’re exposed to attackers who do.”

The growing backlog means new devices could be more likely to have vulnerability blind spots—whether that’s a Ring doorbell at home or an office building’s “smart” access control system. The biggest risk may be “one-off” security flaws that fly under the radar. “There are thousands of vulnerabilities that will not affect the majority of enterprises,” says Gupta. “Those are the ones that we’re not getting analysis on, which would leave us at risk.”

NIST acknowledges it has limited visibility into which organizations are most affected by the backlog. “We don’t track which industries use which products and therefore cannot measure impact to specific industries,” a spokesperson says. Instead, the team prioritizes vulnerabilities on the basis of CISA’s known exploits list and those included in vendor advisories like Microsoft Patch Tuesday.

The biggest vulnerability

Brian Martin has watched this system evolve—and deteriorate—from the inside. A former CVE board member and an original project leader behind the Open Source Vulnerability Database, he has built a combative reputation over the decades as a leading historian and practitioner. Martin says his current project, VulnDB (part of Flashpoint Security), outperforms the official databases he once helped oversee. “Our team processes more vulnerabilities, at a much faster turnaround, and we do it for a fraction of the cost,” he says, referring to the tens of millions in government contracts that support the current system. 

When we spoke in May, Martin said his database contains more than 112,000 vulnerabilities with no CVE identifiers—security flaws that exist in the wild but remain invisible to organizations that rely solely on public channels. “If you gave me the money to triple my team, that non-CVE number would be in the 500,000 range,” he said.

In the US, official vulnerability management duties are split between a web of contractors, agencies, and nonprofit centers like the Mitre Corporation. Critics like Martin say that creates potential for redundancy, confusion, and inefficiency, with layers of middle management and relatively few actual vulnerability experts. Others defend the value of this fragmentation. “These programs build on or complement each other to create a more comprehensive, supportive, and diverse community,” CISA said in a statement. “That increases the resilience and usefulness of the entire ecosystem.”

As American leadership wavers, other nations are stepping up. China now operates multiple vulnerability databases, some surprisingly robust but tainted by the possibility that they are subject to state control. In May, the European Union accelerated the launch of its own database, as well as a decentralized “Global CVE” architecture. Following social media and cloud services, vulnerability intelligence has become another front in the contest for technological independence. 

That leaves security professionals to navigate multiple potentially conflicting sources of data. “It’s going to be a mess, but I would rather have too much information than none at all,” says Gupta, describing how her team monitors multiple databases despite the added complexity. 

Resetting software liability

As defenders adapt to the fragmenting landscape, the tech industry faces another reckoning: Why don’t software vendors carry more responsibility for protecting their customers from security issues? Major vendors routinely disclose—but don’t necessarily patch—thousands of new vulnerabilities each year. A single exposure could crash critical systems or increase the risks of fraud and data misuse. 

For decades, the industry has hidden behind legal shields. “Shrink-wrap licenses” once forced consumers to broadly waive their right to hold software vendors liable for defects. Today’s end-user license agreements (EULAs), often delivered in pop-up browser windows, have evolved into incomprehensibly long documents. Last November, a lab project called “EULAS of Despair” used the length of War and Peace (587,287 words) to measure these sprawling contracts. The worst offender? Twitter, at 15.83 novels’ worth of fine print.

“This is a legal fiction that we’ve created around this whole ecosystem, and it’s just not sustainable,” says Andrea Matwyshyn, a US special advisor and technology law professor at Penn State University, where she directs the Policy Innovation Lab of Tomorrow. “Some people point to the fact that software can contain a mix of products and services, creating more complex facts. But just like in engineering or financial litigation, even the most messy scenarios can be resolved with the assistance of experts.”

This liability shield is finally beginning to crack. In July 2024, a faulty security update in CrowdStrike’s popular endpoint detection software crashed millions of Windows computers worldwide and caused outages at everything from airlines to hospitals to 911 systems. The incident led to billions in estimated damages, and the city of Portland, Oregon, even declared a “state of emergency.” Now, affected companies like Delta Air Lines have hired high-priced attorneys to pursue major damages—a signal that the floodgates to litigation are opening.

Despite the soaring number of vulnerabilities, many fall into long-established categories, such as SQL injections that interfere with database queries and buffer overflows that let attackers execute code remotely. Matwyshyn advocates for a mandatory “software bill of materials,” or SBOM—an ingredients list that would let organizations understand what components and potential vulnerabilities exist throughout their software supply chains. One recent report found 30% of data breaches stemmed from the vulnerabilities of third-party software vendors or cloud service providers.

She adds: “When you can’t tell the difference between the companies that are cutting corners and a company that has really invested in doing right by their customers, that results in a market where everyone loses.”

CISA leadership shares this sentiment, with a spokesperson emphasizing its “secure-by-design principles,” such as “making essential security features available without additional cost, eliminating classes of vulnerabilities, and building products in a way that reduces the cybersecurity burden on customers.”

Avoiding a digital ‘dark age’

It will likely come as no surprise that practitioners are looking to AI to help fill the gap, while at the same time preparing for a coming swarm of cyberattacks by AI agents. Security researchers have used an OpenAI model to discover new “zero-day” vulnerabilities. And both the NVD and CVE teams are developing “AI-powered tools” to help streamline data collection, identification, and processing. NIST says that “up to 65% of our analysis time has been spent generating CPEs”—product information codes that pinpoint affected software. If AI can automate even part of this tedious process, it could dramatically speed up the analysis pipeline.

But Martin cautions against optimism around AI, noting that the technology remains unproven and often riddled with inaccuracies—which, in security, can be fatal. “Rather than AI or ML [machine learning], there are ways to strategically automate bits of the processing of that vulnerability data while ensuring 99.5% accuracy,” he says. 

AI also fails to address more fundamental challenges in governance. The CVE Foundation, launched in April 2025 by breakaway board members, proposes a globally funded nonprofit model similar to that of the internet’s addressing system, which transitioned from US government control to international governance. Other security leaders are pushing to revitalize open-source alternatives like Google’s OSV Project or the NVD++ (maintained by VulnCheck), which are accessible to the public but currently have limited resources.

As these various reform efforts gain momentum, the world is waking up to the fact that vulnerability intelligence—like disease surveillance or aviation safety—requires sustained cooperation and public investment. Without it, a patchwork of paid databases will be all that remains, threatening to leave all but the richest organizations and nations permanently exposed.

Matthew King is a technology and environmental journalist based in New York. He previously worked for cybersecurity firm Tenable.

The first babies have been born following “simplified” IVF in a mobile lab

This week I’m sending congratulations to two sets of parents in South Africa. Babies Milayah and Rossouw arrived a few weeks ago. All babies are special, but these two set a new precedent. They’re the first to be born following “simplified” IVF performed in a mobile lab.

This new mobile lab is essentially a trailer crammed with everything an embryologist needs to perform IVF on a shoestring. It was designed to deliver reproductive treatments to people who live in rural parts of low-income countries, where IVF can be prohibitively expensive or even nonexistent. And it seems to work!

While IVF is increasingly commonplace in wealthy countries—around 12% of all births in Spain result from such procedures—it remains expensive and isn’t always covered by insurance or national health providers. And it’s even less accessible in low-income countries—especially for people who live in rural areas.

People often assume that countries with high birth rates don’t need access to fertility treatments, says Gerhard Boshoff, an embryologist at the University of Pretoria in South Africa. Sub-Saharan African countries like Niger, Angola, and Benin all have birth rates above 40 per 1,000 people, which is over four times the rates in Italy and Japan, for example.

But that doesn’t mean people in Sub-Saharan Africa don’t need IVF. Globally, around one in six adults experience infertility at some point in their lives, according to the World Health Organization. Research by the organization suggests that infertility rates are similar in high-income and low-income countries. As the WHO’s director general Tedros Adhanom Ghebreyesus puts it: “Infertility does not discriminate.”

For many people in rural areas of low-income countries, IVF clinics simply don’t exist. South Africa is considered a “reproductive hub” of the African continent, but even in that country there are fewer than 30 clinics for a population of over 60 million. A recent study found there were no such clinics in Angola or Malawi.  

Willem Ombelet, a retired gynecologist, first noticed these disparities back in the 1980s, while he was working at an IVF lab in Pretoria. “I witnessed that infertility was [more prevalent] in the black population than the white population—but they couldn’t access IVF because of apartheid,” he says. The experience spurred him to find ways to make IVF accessible for everyone. In the 1990s, he launched The Walking Egg—a science and art project with that goal.

In 2008, Ombelet met Jonathan Van Blerkom, a reproductive biologist and embryologist who had already been experimenting with a simplified version of IVF. Typically, embryos are cultured in an incubator that provides a sterile mix of gases. Van Blerkom’s approach was to preload tubes with the required gases and seal them with a rubber stopper. “We don’t need a fancy lab,” says Ombelet.

Milayah was born on June 18.
COURTESY OF THE WALKING EGG

Eggs and sperm can be injected into the tubes through the stoppers, and the resulting embryos can be grown inside. All you really need is a good microscope and a way to keep the tube warm, says Ombelet. Once the embryos are around five days old, they can be transferred to a person’s uterus or frozen. “The cost is one tenth or one twentieth of a normal lab,” says Ombelet.

Ombelet, Van Blerkom, and their colleagues found that this approach appeared to work as well as regular IVF. The team ran their first pilot trial at a clinic in Belgium in 2012. The first babies conceived with the simplified IVF process were born later that year.

More recently, Boshoff wondered if the team could take the show on the road. Making IVF simpler and cheaper is one thing, but getting it to people who don’t have access to IVF care is another. What if the team could pack the simplified IVF lab into a trailer and drive it around rural South Africa?

“We just needed to figure out how to have everything in a very confined space,” says Boshoff. As part of the Walking Egg project, he and his colleagues found a way to organize the lab equipment and squeeze in air filters. He then designed a “fold-out system” that allowed the team to create a second room when the trailer was parked. This provides some privacy for people who are having embryos transferred, he says.

People who want to use the mobile IVF lab will first have to undergo treatment at a local medical facility, where they will take drugs that stimulate their ovaries to release eggs, and then have those eggs collected. The rest of the process can be done in the mobile lab, says Boshoff, who presented his work at the European Society of Human Reproduction and Embryology’s annual meeting in Paris earlier this month.

The first trial started last year. The team partnered with one of the few existing fertility clinics in rural South Africa, which put them in touch with 10 willing volunteers. Five of the 10 women got pregnant following their simplified IVF in the mobile lab. One miscarried, but four pregnancies continued. On June 18, baby Milayah arrived. Two days later, another mother welcomed baby Rossouw. The other babies could come any day now.

“We’ve proven that a very cheap and easy [IVF] method can be used even in a mobile unit and have comparable results to regular IVF,” says Ombelet, who says his team is planning similar trials in Egypt and Indonesia. “The next step is to roll it out all over the world.”

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

The Download: cybersecurity’s shaky alert system, and mobile IVF

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Cybersecurity’s global alarm system is breaking down

Every day, billions of people trust digital systems to run everything from communication to commerce to critical infrastructure. But the global early warning system that alerts security teams to dangerous software flaws is showing critical gaps in coverage—and most users have no idea their digital lives are likely becoming more vulnerable.

Over the past eighteen months, two pillars of global cybersecurity have been shaken by funding issues: the US-backed National Vulnerability Database (NVD)—relied on globally for its free analysis of security threats—and the Common Vulnerabilities and Exposures (CVE) program, the numbering system for tracking software flaws. 

Although the situation for both has stabilized, organizations and governments are confronting a critical weakness in our digital infrastructure: Essential global cybersecurity services depend on a complex web of US agency interests and government funding that can be cut or redirected at any time. Read the full story.

—Matthew King

The first babies have been born following “simplified” IVF in a mobile lab

This week I’m sending congratulations to two sets of new parents in South Africa. Babies Milayah and Rossouw arrived a few weeks ago. All babies are special, but these two set a new precedent. They’re the first to be born following “simplified” IVF performed in a mobile lab.

This new mobile lab is essentially a trailer crammed with everything an embryologist needs to perform IVF on a shoestring. It was designed to deliver reproductive treatments to people who live in rural parts of low-income countries, where IVF can be prohibitively expensive or even nonexistent. And best of all: it seems to work! Read our story about why it’s such an exciting development. 

—Jessica Hamzelou 

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, sign up here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Trump is seeking huge cuts to basic scientific research
If he gets his way, federal science funding will be slashed by a third for the next fiscal year. (NYT $)
+ The foundations of America’s prosperity are being dismantled. (MIT Technology Review)
+ Senators are getting ready to push back against proposed NASA cuts. (Bloomberg $)

2 Conspiracy theorists are starting to turn on Trump
He whipped them all up over the supposed existence of Epstein’s client list, and now they’re mad nothing’s being released. (The Atlantic $)

3 AI actually slows experienced software developers down
They end up wasting lots of time checking and correcting AI models’ output. (Reuters $)

4 The Pentagon is becoming the largest shareholder in a rare earth minerals company
It shows just how much competition is hotting up to secure a steady supply of these materials. (Quartz $)
+ The race to produce rare earth elements. (MIT Technology Review)

5 Solar power is starting to truly transform the world’s energy system 
Globally, roughly a third more power was generated from the sun this spring than last. (New Yorker $)

6 Cops’ favorite AI tool auto-deletes evidence of AI being used 
A pretty breathtaking attempt to avoid any sort of audit, transparency or accountability. (Ars Technica)
+ How a new type of AI is helping police skirt facial recognition bans. (MIT Technology Review)

7 Why Chinese EV brands are being forced to go global
Competition at home is becoming so intense that many have no choice but to seek profits elsewhere. (Rest of World)
+ China’s EV giants are betting big on humanoid robots. (MIT Technology Review)

8 Which Big Tech execs are closest to the White House? 
Check out this scorecard showing how they’re all doing trying to stay in Trump’s good graces. (WSJ $)

9 Elon Musk says Grok is coming to Tesla vehicles
Yes, that’s the same Grok that keeps being racist. Shareholders must be delighted. (Insider $)
+ X is basically becoming a strip mine for AI training data. (Axios)

10 Trump Mobile is charging people’s credit cards without explanation
But I’m sure it’s all perfectly explicable and above board, right? Right?! (404 Media)

Quote of the day

“It has been nonstop pandemonium.”

—Augustus Doricko, who founded a cloud seeding startup two years ago, tells the Washington Post he’s received a deluge of fury online from conspiracy theorists who blame him for the catastrophic Texas floods.

One more thing


What’s next for AI in 2025

For the last couple of years we’ve had a go at predicting what’s coming next in AI. It’s a fool’s game given how fast this industry moves, but we tried anyway back in January. As we sail past this year’s halfway mark, it’s a good time to ask: how well did we do? Check out our predictions, and see for yourself!

—James O’Donnell, Will Douglas Heaven & Melissa Heikkilä

This piece is part of MIT Technology Review’s What’s Next series, looking across industries, trends, and technologies to give you a first look at the future. You can read the rest of them here.

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Let’s have more pop culture references in journal article titles, please.
+ Here’s some inspiration for things to cook this month (or, if it’s hot, just assemble).
+ There’s something so relaxing about gazing at these (award-winning!) landscape photos.
+ If you like birds, you’ll enjoy this artist’s work.

How to Survive a Million-Dollar Loss

This year I’ve sprinkled occasional “Ecommerce Conversations” episodes with real-life master classes from Beardbrand, my company. To date I’ve addressed hiring, branding, profit-building, priority-setting, and exiting.

For this installment, I’ll share Beardbrand’s experience of losing nearly $1 million across 2023 and 2024. I’ll recap how we managed to survive our worst years in business while remaining 100% bootstrapped.

It got bad. Our cash levels dropped to where they were in year one, 2014. We were hemorrhaging money.

But we’re still here — still building and still learning. We made it through without outside funding.

Here’s what the future holds for Beardbrand. The full audio of the episode is embedded below. The transcript is condensed and edited for clarity.

Ghosted

A big portion of our loss came from Target. The company had been a seven-figure account for us for years, and we thought the relationship was solid. Every year, we pitched Target our plans. Historically, the staff there provided us with clear feedback — what worked, what didn’t, and where there was room for growth.

In 2023, Target had a sustainability initiative. We revamped our packaging, switching from glass and plastic to aluminum. It’s lighter, more recyclable, and aligns with eco-conscious goals. At the same time, we moved our beard oil from 1 oz. bottles to larger packaging to occupy more shelf space and stand out.

We committed early, produced inventory, and delivered Target’s purchase orders on time. Then silence. Nothing. After years of working with us, the staff ghosted us. No feedback, no responses. Worse, they dropped us and left us with nearly $200,000 of unpaid product.

We erred by giving Target exclusivity, which meant we weren’t selling on Amazon or Walmart. That killed our ability to move leftover inventory quickly when they dropped us. By the time we finally got on Amazon, the products had already aged out. We destroyed a large quantity that had expired.

Reserves

We’ve always run Beardbrand conservatively. That means keeping a decent amount of capital in reserve — not because we’re paranoid, but because you never know when a black swan event might hit. Having that runway lets you make clear, intentional decisions rather than panicking. It gives you time to explore solutions, test channels, and get a better night’s sleep.

Thankfully, during our stronger years, we built up a solid cushion. And that cushion is what kept us afloat during the downturn. We essentially burned through all of it. But we never dipped below zero, which meant we didn’t have to take out high-interest loans, open lines of credit, or bring in outside investors.

We did have conversations just in case. I even considered withdrawing money from my personal savings. But that’s a hard decision when things aren’t going well. When you’re in the middle of the storm, it doesn’t feel like a temporary dip — it feels like a freefall. You start wondering: Is this the bottom, or is there more pain ahead?

Writing another personal check to the business, especially after years of building wealth from it, was not something I wanted to do. And neither did my partners. We were determined to find a way forward that didn’t involve doubling down with personal capital or giving up control.

Pileup

In addition to losing Target, we experienced a series of setbacks. First, the state of Texas audited us. We cooperated fully, waited for the final numbers, and instead got slapped with a tax lien. That lien triggered Brex, our corporate credit card provider, to freeze our account, despite our perfect payment history. Thankfully, American Express stood by us and kept things moving.

Then came an ADA lawsuit, a leaked 100% off coupon code, and a $20,000 air conditioner repair at our barbershop. We also faced regulatory changes that forced us to reformulate key fragrances.

We had internal missteps, such as losing a key growth team member and coasting when we should’ve pushed harder. We focused on profitability, but the business slowly declined.

We simplified our product line to meet a manufacturer’s needs, which, in hindsight, proved to be a mistake. The lesson? Partner with vendors who value your business. You don’t want to be too small to matter, or too big to be managed. That relationship needs to be just right.

We also lowered prices to drive volume, but it backfired. Loyal customers just paid less, and those who thought we were expensive still did. Meanwhile, larger packaging reduced purchase frequency, and killing off beloved fragrances hurt loyalty. Top-line revenue got cut in half.

Furthermore, when your business shrinks, fixed costs such as office leases and payroll can become overwhelming. Our $10,000 per month lease that once felt small became a big deal.

Rebuilding

The good news? Beardbrand is alive. We’ve weathered the storm and slowly started turning things around. It hasn’t been a dramatic rebound — it’s been steady, slow progress. We have focused on improving operations, addressing inventory issues, resolving stock-outs, tightening pricing, and enhancing product quality.

We now have the right fulfillment provider, manufacturing partners, and systems in place. Instead of existential crises, we’re dealing with everyday stuff — shipping issues, ad performance, and the occasional bad product batch. That’s a massive shift. It’s not glamorous, but it’s no longer a matter of survival.

We cut costs aggressively — even eliminating $15 per month software. We reestablished healthy margins. Our customer service, returns, and product quality all depend on having room to breathe financially.

The Target fallout is behind us, the tax lien is resolved, and the ADA plaintiff dropped the bogus lawsuit. My business partner stepped out of day-to-day operations, and some team members transitioned to part-time roles, which helped improve our cash flow. We’ve managed all of this without layoffs. My team is the same one that helped us grow, and they’re still incredibly talented and dedicated.

I’ve also cut my own salary and lived off personal savings to keep things afloat. But I’m optimistic. With the business stabilizing, we can rebuild our savings and start exploring new growth opportunities again.

Momentum

Survival mode means focusing on making it through the day. Some entrepreneurs try to grow their way out of problems. For us, it started with stabilizing operations. We can finally think long-term again.

We’ve begun reinvesting in growth, supporting our paid media and Meta efforts, and expanding our creative team to produce more content and ads. More creative output means more chances to connect with customers and fuel a rebound.

We’re also rethinking channels beyond direct-to-consumer. Target was a strong retail partner for years. Retail as a channel still holds potential — perhaps it’s independent salons, boutique pharmacies, and grocery stores. The goal is to diversify. Beardbrand.com will always be our home base, but we’re a business that sells to people, not just an ecommerce brand.

It’s exciting to think ahead instead of looking back. We’re aiming for 7% profitability this year — that’s breakeven in my book. It provides us with a buffer for unpredictable events, such as lawsuits, audits, and air conditioning failures. The real goal is 17% profit — that’s when we can fund growth, hire employees, and breathe easier. Anything beyond that is the sweet spot where the stress and sacrifice start to feel worth it.

I’m excited again — for the team, for the future, and what we’re building.

Google’s Advice On Hiring An SEO And Red Flags To Watch For via @sejournal, @martinibuster

Google’s Search Off The Record podcast discussed when a business should hire an SEO consultant and what success metrics should look like. They also talked about a red flag to watch for when considering a search marketer.

Hire An SEO When It Becomes Time Consuming

Martin Splitt started the conversation off by asking at what point a business should hire an SEO:

“…I know people are hiring agencies and SEO experts. When is the point where you think an expert or an agency should come in? What’s the bits and pieces that are not as easy to do while I do my business that I should have an expert for?”

John replied that there is no single criterion or line to cross at which point a business should hire a consultant. He did, however, point out that there comes a certain point where doing SEO is time-consuming and takes a business person away from the tasks that are directly related to running their business. That’s the point at which hiring an SEO consultant makes sense.

He said:

“Yeah, I don’t know if there’s a one-size-fits-all answer there because it’s a bit like asking, when should I get help for marketing, especially for a small business.

You do everything yourself. At some point, you’re like, ‘Oh, I really hate bookkeeping. I’m going to hire a bookkeeper.’ At that point where you’re like, ‘Well, I don’t appreciate doing all of this work or I don’t have time for it, but I know it has to be done.’ That’s probably the point where you say, ‘Well, okay, I will hire someone for this.’”

SEO Should Have Measurable Results?

The next factor they discussed is the measurability of results. Over more than twenty-five years of working in SEO, one of the ways I’ve seen low-quality SEOs consistently measure their results is by the number of queries a client site is ranking for: they charge a monthly retainer and generate a report of all the queries the site has ranked for in the previous months, including nonsense queries.

A common metric SEOs use to gauge success is ranking positions and traffic. Those metrics are a little better, and most SEOs agree that they make sense as solid metrics.

But those metrics don’t capture the true success of SEO because those ranking positions could be for low-quality search queries that don’t result in the kind of traffic that converts to leads, sales, affiliate earnings or ad clicks.

Arguably, the most important metric any business should use to gauge the effect of what was done for SEO is how much more revenue is being generated. Keyword rankings and traffic are important metrics to measure, but the most important metric is ultimately the business goal.

Google’s John Mueller appears to agree, as he cites revenue and the business result as key measures of whether the SEO is working.

He explained:

“I think, for in SEO, it kind of makes sense when you realize there’s concrete value in working on SEO for your website, where there’s some business result that comes out of it where you can actually measurably say, ‘When I started doing SEO for my website, I made so much more money’ or whatever it is that goal is that you care about, and ‘I’m happy to invest a portion of that into hiring someone to do SEO.’

That’s one way I would look at it, where if you can measure in one way or another the effects of the SEO work, then it’s easier to say, ‘Well, I will invest this much into having someone else do that for me.’”

There is a bit of a problem with measuring the effects of SEO. The effects on sales or leads from organic SEO cannot always be directly attributed. People who are obsessed with data-driven decisions will be disappointed because it’s not always possible to directly attribute a lead from an organic search. For one thing, Google hides referral data from the search results. Unlike PPC, where you can track a lead from an ad click to the sale, you can’t do that with organic search.

So if you’re using increased sales or leads as a metric, you’ll have to be able to at least separate attributable paid search from earnings, then guesstimate the rest. Not everything can be data-driven.

Hire Someone With Experience

Another thing Mueller and Splitt recommended was to hire someone who has actual experience with SEO. There are many qualifying factors that can be added, including experience monetizing their own websites, ability to interpret HTML code (which is helpful for identifying technical reasons for ranking problems), endorsements and testimonials. A red flag, in my opinion, is hiring someone from a cold call.

John Mueller observed:

“Someone else, ideally, would be someone who has more experience doing SEO. Because, as a small business owner, you have like 500 hats to wear, and you probably can figure out a little bit about each of these things, but understanding all of the details, that’s sometimes challenging.”

Martin agreed:

“Okay. So there’s no one-size-fits-all answer for this one, but you have to find that spot for yourself whenever it makes sense. All right okay. Fair.”

Red Flag About Some SEOs

Up to this point, both Mueller and Splitt avoided cautioning about red flags to watch for when hiring an SEO. Here, they segued into the topic of what to avoid, advising caution about search marketers who guarantee results.

The reason to avoid these kinds of search marketers is that search rankings depend on a wide range of factors that are not under an SEO’s control. The most an SEO can do is align a site to best practices and promote the site. After that, there are external factors, such as competitors, that cannot be influenced. Most importantly, Google is a black box system: you can see what goes in, you can observe what comes out (the search results), but what happens in between is hidden. All search ranking factors, like external signals of trustworthiness, have an unclear influence on the search results.

Here’s what Mueller said:

“One of the things I would watch out for is, if an SEO makes any promises with regards to ranking or traffic from Search, that’s usually a red flag, because a lot of things around SEO you can’t promise ahead of time. And, if someone says, ‘I’m an expert. I promise you will rank first for these five words,’ they can’t do that. They can’t manually go into Google’s systems and tweak the dials and change the rankings.”

Listen to Google’s Search Off The Record podcast here:


Google Clarifies Structured Data Rules For Returns & Loyalty Programs via @sejournal, @MattGSouthern

Google has updated its structured data documentation to clarify how merchants should implement markup for return policies and loyalty programs.

The updates aim to reduce confusion and ensure compatibility with Google Search features.

Key Changes In Return Policy Markup

The updated documentation clarifies that only a limited subset of return policy data is supported at the product level.

Google now explicitly states that comprehensive return policies must be defined using the MerchantReturnPolicy type under the Organization markup. This ensures a consistent policy across the full catalog.

In contrast, product-level return policies, defined under Offer, should be used only for exceptions, and they support fewer properties.

Google explains in its return policy documentation:

“Product-level return policies support only a subset of the properties available for merchant-level return policies.”

Loyalty Program Markup Must Be Separate

For loyalty programs, Google now emphasizes that the MemberProgram structured data must be defined under the Organization markup, either on a separate page or in Merchant Center.

While loyalty benefits like member pricing and points can still be referenced at the product level via UnitPriceSpecification, the program structure itself must be maintained separately.

Google notes in the loyalty program documentation:

“To specify the loyalty benefits… separately add UnitPriceSpecification markup under your Offer structured data markup.”

What’s Not Supported

Google’s documentation now states that shipping discounts and extended return windows offered as loyalty perks aren’t supported in structured data.

While merchants may still offer these benefits, they won’t be eligible for enhanced display in Google Search results.

This is particularly relevant for businesses that advertise such benefits prominently within loyalty programs.

Why It Matters

The changes don’t introduce new capabilities, but they clarify implementation rules that have been inconsistently followed or interpreted.

Merchants relying on offer-level markup for return policies or embedding loyalty programs directly in product offers may need to restructure their data.

Here are some next steps to consider:

  • Audit existing markup to ensure return policies and loyalty programs are defined at the correct levels.
  • Use product-level return policies only when needed, such as for exceptions.
  • Separate loyalty program structure from loyalty benefits, using MemberProgram under Organization and validForMemberTier under Offer, as sketched below.
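
To make the placement rules concrete, here is a minimal sketch of both patterns, written as Python dictionaries and serialized to the JSON-LD you would embed on the relevant pages. The merchant name, program tier, prices, and return windows are all hypothetical; this illustrates the documented structure rather than reproducing an official Google example.

```python
# Hypothetical illustration: the full return policy and the loyalty program
# live under Organization; the Offer carries only exceptions and member pricing.
import json

organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Store",  # hypothetical merchant
    # Catalog-wide return policy at the merchant (Organization) level.
    "hasMerchantReturnPolicy": {
        "@type": "MerchantReturnPolicy",
        "returnPolicyCategory": "https://schema.org/MerchantReturnFiniteReturnWindow",
        "merchantReturnDays": 30,
        "returnMethod": "https://schema.org/ReturnByMail",
        "returnFees": "https://schema.org/FreeReturn",
    },
    # The loyalty program structure is defined here, not on the Offer.
    "hasMemberProgram": {
        "@type": "MemberProgram",
        "name": "Example Rewards",  # hypothetical program
        "hasTiers": [{"@type": "MemberProgramTier", "name": "gold"}],
    },
}

offer = {
    "@context": "https://schema.org",
    "@type": "Offer",
    "price": 49.99,
    "priceCurrency": "USD",
    # Product-level return policy: exceptions only, with fewer properties.
    "hasMerchantReturnPolicy": {
        "@type": "MerchantReturnPolicy",
        "returnPolicyCategory": "https://schema.org/MerchantReturnFiniteReturnWindow",
        "merchantReturnDays": 14,
    },
    # Member pricing is referenced on the Offer via UnitPriceSpecification.
    "priceSpecification": {
        "@type": "UnitPriceSpecification",
        "price": 39.99,
        "priceCurrency": "USD",
        "validForMemberTier": {"@type": "MemberProgramTier", "name": "gold"},
    },
}

print(json.dumps(organization, indent=2))
print(json.dumps(offer, indent=2))
```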

Staying compliant with these updated guidelines ensures eligibility for structured data features in Google Search and Shopping.



How To Calculate Your ROAS & Ways To Use It via @sejournal, @coreydmorris

Return on ad spend (ROAS) is a common metric or key performance indicator for paid search campaigns. PPC managers and digital marketing executives have been using it for a long time.

In fact, it isn’t even novel to just digital marketing.

While calculating and connecting the dots with attribution for full end-to-end digital marketing is ideal, using ROAS within PPC and SEM specifically can be powerful as a quality metric that scales.

ROAS is a pretty straightforward equation to calculate on the surface.

Return on ad spend = total revenue generated by ads, divided by the cost of ad spend
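
As a quick sanity check, here is that equation in a few lines of Python (the dollar figures below are invented for illustration):

```python
def roas(ad_revenue: float, ad_spend: float) -> float:
    """Return on ad spend: revenue generated by ads divided by ad cost."""
    if ad_spend <= 0:
        raise ValueError("ad spend must be positive")
    return ad_revenue / ad_spend

# Hypothetical figures: $12,000 of ad-driven revenue on $3,000 of spend
# yields a ROAS of 4.0, often written as 4:1 or 400%.
print(roas(12_000, 3_000))  # 4.0
```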

However, no metric, KPI, or outcome is as easy to configure and measure nowadays as it seems, given the volume of changes in Google Ads, reporting software, and measurement platforms alone.

Beyond that, there’s no one-size-fits-all benchmark or result you’re looking for. A “good” ROAS is different for every business, and what defines good or successful is up to the business to determine.

Whether you’re confident calculating ROAS, need help with knowing how to use it, or fall somewhere in between, I encourage you to dive into the ways to use it in your own PPC efforts.

1. Setting Expectations

PPC is a great channel for getting quick results and to impact a business.

However, even with the best research on the front end, it can often lead to missed expectations.

PPC expectations can vary wildly and be subjective. ROAS provides the opportunity to set a benchmark for what success looks like.

An effective PPC manager can pull different levers to drive more traffic, spend more budget, or try to find a sweet spot in between.

By establishing a ROAS goal tied to profitability, the PPC team can use that metric as a key input to their decisions and overall performance.

And, profitability needs to factor in the cost of software, people, and things that go beyond just the cost of an ad or media budget – but that’s for another article.

2. Budgeting

ROAS can serve as a great tool in factoring budget decisions.

Like setting expectations, ROAS can serve as a benchmark, helping teams go beyond just looking at bid, budget, click, and conversion ceilings. It is a quality metric.

Use ROAS to determine where the law of diminishing returns applies and ensure it is included in projections. When looking at real past performance, it can be used to help determine ideal budgets and ranges that are acceptable.

In most cases, I have found clients are okay with not capping the budget and looking at the ROAS number solely to determine how much to spend.

If the spend can be increased and still exceed the target ROAS, then keep spending all day, every day, as we know we’re in profitable territory, assuming we’re not creating inventory, fulfillment, sales capacity, or other operational issues.

I love this type of thinking and decision-making, as it is linked to ROI versus budget or a mindset that marketing and ad dollars are an “expense.”

3. Bid Decisions

Getting more granular, bid decisions can also be made based on ROAS.

The ROAS can be calculated at a detailed level and not just at a high level for aggregate or total spend.

When we break down our campaigns into categories like campaign, ad group, ad type, topic, etc., we can get more granular control and insight.

For example, if we’re running Google Shopping ads, which appear on Google Shopping search results pages, we can treat those as a distinct advertising format. This allows us to measure their performance separately and calculate the ROAS they generate.

Going even deeper, we can drill down to the individual product level to see what ROAS different products produce.

By knowing the ROAS at different levels, we can advise on and optimize our bid strategies, gain more control over what is driving the overall ROAS, and positively impact the whole.

The ability to roll up performance or drill down to the product detail level allows for measuring toward broader business goals while also providing an opportunity to test and get things dialed in over time when launching and optimizing new campaigns and ads within an account.
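
As a rough sketch of what that granularity can look like, the snippet below aggregates hypothetical exported ad rows and computes ROAS per campaign and per product. Every field name and figure is made up for illustration.

```python
from collections import defaultdict

# Hypothetical rows exported from an ad platform or reporting tool.
rows = [
    {"campaign": "shopping", "product": "product-a", "spend": 500.0, "revenue": 2100.0},
    {"campaign": "shopping", "product": "product-b", "spend": 300.0, "revenue": 450.0},
    {"campaign": "search",   "product": "product-a", "spend": 700.0, "revenue": 2800.0},
]

def roas_by(rows, key):
    """Aggregate spend and revenue by the given key, then compute ROAS."""
    spend, revenue = defaultdict(float), defaultdict(float)
    for row in rows:
        spend[row[key]] += row["spend"]
        revenue[row[key]] += row["revenue"]
    return {k: round(revenue[k] / spend[k], 2) for k in spend}

print(roas_by(rows, "campaign"))  # {'shopping': 3.19, 'search': 4.0}
print(roas_by(rows, "product"))   # {'product-a': 4.08, 'product-b': 1.5}
```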

4. Ecommerce

One of the first types of businesses that comes to mind when thinking about ROAS and its use is ecommerce.

With a lot of the great tools and integrations available, many shopping cart platforms automatically feed revenue data back into Google Ads and Google Analytics.

By using these metrics, we can quickly arrive at our ROAS by taking total revenue divided by total spend.

Note that getting ROAS is likely the easiest part. Determining what an acceptable ROAS is takes more time and work.

That part includes determining profit margins for products, calculating overhead, and working backward from the full ROI picture to what the ROAS needs to be.
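
To illustrate that backing-out step: at a given gross margin, the break-even ROAS on product costs alone is simply 1 divided by the margin. A small sketch, with invented numbers:

```python
def breakeven_roas(gross_margin: float) -> float:
    """Minimum ROAS at which ad revenue just covers product costs."""
    if not 0 < gross_margin <= 1:
        raise ValueError("gross margin should be a fraction between 0 and 1")
    return 1 / gross_margin

# A store with a 40% gross margin needs at least 2.5x ROAS to break even
# on product costs alone, before overhead, software, and people.
print(breakeven_roas(0.40))  # 2.5
```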

5. Lead Generation

Lead generation is a trickier type of business goal for calculating ROAS, as the return itself can be harder to back out and measure.

However, in most cases, lead generation businesses pay closer attention to the ROI side of things and know their sales cycles and overhead.

This makes arriving at ROAS goals easier, while ROAS itself might take more time to calculate based on the length of time from conversion to final sale, if that’s how ROAS is truly calculated.

When you want to look at ROAS as a meaningful metric for lead generation, you need to have a solid definition of what a lead is.

By default, if a conversion action in Google Ads (or other platforms) is what you use to calculate this metric, you might end up off-track from what your sales team or broader effort cares about.

ROAS matters, but if the “lead” isn’t right or something you can track, you can run into trouble with the definitions of “return,” “leads,” and your overall attribution.

In most cases, the deepest you can track and attribute a lead to a sale and actual revenue is best. If you can’t get that deep, ask questions and probe. The dots should be connected from impression to customer/client.

6. Awareness & Other Campaigns

ROAS can be measured in other business goals and applications as well.

Whether it is awareness generation, page views, or other secondary goals, it can still apply.

It might take more work to define the return for awareness campaigns, though, and it would need measurement through attribution modeling. But it can still be achieved with the right work to back out the sales metric.

As a note, in B2B lead gen, attribution windows can be long, and offline conversion tracking is needed for accuracy.

An example of ROAS for an awareness campaign can look very different from one for ecommerce or lead generation.

If your goal is to create awareness for a topic, brand, or other subject matter, then you’re not as focused on direct sales or leads. You may want to cast as wide a net as possible for your target or potential audience (even if that’s the broader general public).

In that sense, you have to find a key metric to tie ROI to. This is the most open-ended challenge: determining what ROI means for your organization. What does awareness contribute directly to ROI? How do you define it, measure it, and attribute it?

7. Beyond ROAS

While ROAS is a great benchmark and quality guide for paid media, it isn’t the end of the story. In some cases, it is just the start.

When a business knows its customer retention, recency, frequency, monetary value (RFM), and lifetime value metrics, we can take ROAS even further.

Tying ROAS to other metrics beyond the sale can lead to incredible insights for use outside of media spend management.

Getting More From ROAS

Again, I know that ROAS might seem like a basic metric and be something reported on by default in so many dashboards and reports.

In some cases it may be simple to calculate, but using it well as a metric takes more work.

Getting the foundation right means knowing what a good target ROAS is, how it scales, and whether the “return” you’re getting is actually profitable. That’s what makes ROAS a meaningful benchmark and goal-focused KPI in your set of digital marketing metrics, one that ultimately maps to your business outcomes.


Mid-year SEO checkup: What’s working, what’s not? 

Midway through the year is a good time to see how your SEO is holding up. Search habits shift, rankings change, and AI is reshaping how people find information. A mid-year SEO checkup isn’t about starting over. It’s a check-in to spot what’s working, what’s not, and what to adjust going forward.


Traffic and rankings: What’s changed since January? 

Start your mid-year SEO review by checking how your site is performing, not just on the surface level, but deeper down. Look beyond overall traffic and into individual pages and search queries. What’s still working? What’s losing visibility? The goal is to spot slow shifts early, before they turn into bigger problems. 

Organic traffic trends 

Start with a traffic check in GA4. Compare your organic numbers from January to now, then narrow in on which landing pages have gained or lost ground. After that, use Search Console to see how impressions and clicks line up with the shifts. Look across different devices and locations, as you might notice mobile traffic dropping while desktop stays level. 

As you review, think about what’s changed. Are certain types of content sliding? Is the homepage steady while deeper articles get less visibility? Has something in the layout or search results changed how people interact with your site? These patterns will help you figure out where to adjust. 

Keyword movement and SERP features 

GA4 won’t show you how keywords are doing. For that, use Search Console, or Semrush if you want a more detailed view. Either gives you a clearer picture of how your top queries are performing and whether their positions are trending up or down. Focus on terms sitting somewhere between positions five and fifteen. These are close to the edge and can shift either way with the smallest change.
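
If you prefer to pull this list programmatically, here is a minimal sketch using the Search Console API via google-api-python-client. It assumes you have already set up OAuth credentials for a verified property; the dates and row limit are placeholders.

```python
from googleapiclient.discovery import build

def striking_distance_queries(credentials, site_url: str):
    """Return query rows whose average position sits between 5 and 15."""
    service = build("searchconsole", "v1", credentials=credentials)
    response = service.searchanalytics().query(
        siteUrl=site_url,
        body={
            "startDate": "2025-01-01",   # placeholder date range
            "endDate": "2025-06-30",
            "dimensions": ["query"],
            "rowLimit": 1000,
        },
    ).execute()
    # Each row carries keys (the query), clicks, impressions, ctr,
    # and average position for the date range.
    return [
        row for row in response.get("rows", [])
        if 5 <= row["position"] <= 15
    ]
```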

Keep an eye out for new queries your site is now appearing for. Also, check if your content is showing up in features like video carousels, People Also Ask, or AI Overviews. These placements affect clicks, even if rankings stay flat. 

If CTR is dropping, it might be because the answer’s already visible in the search result. That’s common with broad questions or terms that Google can answer directly with a snippet or summary. Some of these shifts started with recent algorithm updates. If you saw a change around that time, that might explain it. 

Being on page one isn’t always enough now. What matters more is how your page shows up and whether it stands out next to everything else. 

Where’s the gap? 

Ranking alone doesn’t mean a page is performing well. Some are still showing up in search but aren’t pulling their weight anymore. Take a look at your top pages from Q1 and compare them to what’s performing now. If something dropped, check for changes. Did the URL structure shift? Was the copy updated? Did anything break during a migration or redesign? 

Segmenting traffic helps spot patterns during your mid-year SEO checkup. Blog content might be holding steady while product pages quietly slip. Or maybe a location page that once performed well is now buried. Sorting traffic this way makes it easier to see where things are improving and where they’ve gone quiet. 

And don’t ignore branded versus non-branded search. If branded terms are down, it may reflect lower awareness. If non-branded terms fell off, that usually points to stronger competition or a shift in search demand. Either way, those are signs to act on, not ignore. 

What to do next in your mid-year SEO review 

As you review performance, note content that’s lost traffic and look at how it aligns with current keyword trends. Some pages may need updates, while others might be better merged or repurposed. If certain pages are still ranking but getting few clicks, flag those, too, as there may be issues with title tags, metadata, or how the content is framed.  

Also, look for signs of new search interest or shifts in consumer behavior that are driving unexpected traffic. Those insights can help guide your Q3 and Q4 planning. A detailed mid-year SEO checkup now helps prevent bigger issues later. Small drops or mismatches in intent can add up over time, especially if you miss the early signs. Use your data to make informed decisions, not just to complete a report. 

Audit and refresh your content 

Not all content holds its value over time. Some pages stop performing due to outdated content, and others never performed well to begin with. A mid-year SEO audit helps you figure out what’s worth updating, combining, or removing altogether. 

Focus first on content that’s lost traffic or rankings. Use Google Search Console to spot declines in impressions and clicks, then compare that with GA4 engagement metrics. If a page ranks but no longer drives real value, or doesn’t match what users are looking for, it likely needs attention. 

Google wants people-first content. So if your site relies on thin tutorials, vaguely rewritten definitions, or pages written more for search engines than real users, those pages may be dragging down your overall SEO performance. 

When refreshing content, lead with clarity. Remove fluff, update stats, and make sure your answer matches the search intent. Don’t just rewrite; make the page genuinely better. In some cases, the fix might be cutting it entirely. If a page hasn’t contributed value or activity recently, rethink why it’s there.

Diversify and focus on video 

Search results are more visual than they used to be. Video clips now show up in carousels, featured snippets, and AI responses. If your site is still relying on just blog posts, you’re missing opportunities to be seen. 

Short videos, especially how-tos, demos, and explainers, can increase visibility on Google, YouTube, and Discover. They also help with engagement, keeping visitors on your site longer. 

Start by turning high-performing articles into videos. Post them to YouTube, embed them on your site, and add basic schema markup. Just a few clear, well-structured videos can increase your presence in search results and help reach users who don’t want to read through long text. 
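
The exact markup depends on your platform, but a basic VideoObject snippet might look like the one below; all URLs, titles, and dates are placeholders.

```html
<!-- Minimal VideoObject markup; every value here is a placeholder -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "VideoObject",
  "name": "How to do a mid-year SEO checkup",
  "description": "A short walkthrough of the key steps in a mid-year SEO review.",
  "thumbnailUrl": "https://example.com/images/seo-checkup-thumb.jpg",
  "uploadDate": "2025-07-01",
  "contentUrl": "https://example.com/videos/seo-checkup.mp4",
  "embedUrl": "https://www.youtube.com/embed/VIDEO_ID"
}
</script>
```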

Video doesn’t need to be expensive or overly produced. What matters is that it’s useful, focused, and easy to watch. Your mid-year SEO checkup is a good moment to decide whether your video strategy needs work.

Adapting to AI and zero-click searches 

More users are getting answers directly on Google, without clicking anything. With AI Overviews becoming more common across search results, especially for question-based queries, your content needs to work even when there’s no obvious incentive to visit your page. 

That means clear structure, clean markup, and highly readable content that makes it easy for Google to understand the core answer quickly. Place key information high on the page and use a strong title, meta description, and subheadings. Organize your content with scannable sections so it’s more likely to appear in featured results. 

Don’t ignore FAQ or how-to formats, as these can still help Google identify your page’s purpose. Structured data reinforces clarity for both traditional search and AI-generated summaries. 
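
For instance, a minimal FAQPage snippet could look like this; the question and answer text are placeholders.

```html
<!-- Minimal FAQPage markup; question and answer text are placeholders -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "What is a mid-year SEO checkup?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "A structured review of rankings, traffic, content, and technical health halfway through the year."
      }
    }
  ]
}
</script>
```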

Zero-click doesn’t mean zero opportunity. Content that’s referenced in AI answers or shown in SERP features can strengthen brand visibility, build trust, and bring users who recognize your brand back via other channels later.

What AI Mode means for search visibility 

In addition to AI Overviews, Google is adding a feature called AI Mode. This is a new search experience built for more complex, multi-part queries. It pulls information from several sources and delivers a conversational response with helpful links. 

Instead of listing links, AI Mode breaks down the query, runs multiple related searches, and returns one detailed answer. There’s less space for traditional rankings, but a chance for useful, well-structured content to be included. If your impressions are rising but clicks aren’t, your content may already appear in these summaries. 

While AI Mode is still rolling out, it shows where search is likely headed. And it’s not just Google: tools like ChatGPT (Search) and Perplexity show that AI-powered discovery is already expanding. As this grows, you may need to rethink how your content gets discovered. Learn how to optimize for LLMs using Yoast SEO’s tools.

Refresh your keyword strategy 

Midway through the year is a good time to check if your keyword strategy still aligns with how people are searching. Start with Search Console and any SEO tools you use, and look for shifts in rankings, drops in CTR, or signs that user intent has changed. Some keywords may still rank but deliver less value, while others may be gaining traction. 

Take another look at the SERPs. Are AI Overviews, snippets, or video results pushing your links down? If your content no longer fits the query, it may need a rewrite or a new format. 

Also consider what’s surfaced since Q1. Seasonal queries, comparison searches, and longer questions might now be worth targeting. Even if they bring less volume, they often convert better. Use what you find to adjust your focus for the second half of the year.

Technical SEO cleanup

Great content alone isn’t enough if your site’s technical side is holding it back. A mid-year SEO checkup is a good time to inspect the foundation. See how your site loads, how it’s crawled, and whether pages are being properly indexed. 

Start with speed. Use Google’s Core Web Vitals tools to review page load performance. Fix common issues like oversized images, unnecessary scripts, or layout shifts that hurt usability. These things don’t just impact rankings; they also affect how users experience your site, especially on mobile. 
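
As a quick illustration, two of the most common fixes are giving images explicit dimensions so the browser can reserve space (which prevents layout shift) and lazy-loading offscreen images. File names and sizes below are placeholders.

```html
<!-- Explicit width/height lets the browser reserve space, avoiding layout
     shift; fetchpriority="high" hints that the hero image should load first. -->
<img src="hero.webp" width="1200" height="630" alt="Product overview" fetchpriority="high">

<!-- Offscreen images can be deferred until the user scrolls near them. -->
<img src="diagram.webp" width="800" height="450" alt="Workflow diagram" loading="lazy">
```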

Look at crawlability. Search Console can show you which pages aren’t being indexed, where crawl issues are popping up, or if valid content is being skipped. If strong content still isn’t performing, this could be why. 
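
If Search Console reports valid pages being skipped, it’s also worth double-checking that robots.txt isn’t blocking more than intended. A typical minimal file, with placeholder paths, looks something like this:

```
# Placeholder paths: adjust for your own site
User-agent: *
Disallow: /internal-search/

Sitemap: https://example.com/sitemap_index.xml
```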

In your mid-year SEO checkup, you should also review your internal linking. Important pages should be easy to reach. If key articles or landing pages are buried several clicks deep or orphaned entirely, Google’s crawlers (and readers) may never find them.

Finally, check out your structured data. Schema still gives your content a better chance of being understood by search engines. 

A light technical review every few months helps keep things healthy. You don’t need to fix everything at once, but leaving small issues unresolved can turn into long-term performance headaches.

Monitor competitors and trends 

Search isn’t static, and neither are your competitors. Even if your strategy hasn’t changed much since Q1, theirs might have. A mid-year SEO checkup is a smart time to see who’s gaining ground, what kind of content is outperforming yours, and what shifts are happening in your space as a whole.

Start by checking who’s around you in the search results, especially for your highest-value keywords. Are the same domains showing up? Has a competitor overtaken you with fresher content, a better format, or a new angle? Sometimes it’s less about Google’s algorithm and more about someone else simply doing it better. 

Use ranking and backlink tools to identify newer content that’s climbing. What’s different? Is it shorter, clearer, or more visual? Has it earned links or been widely shared? These observations can shape not just what you publish next, but how you structure and present it. 

Whether you’re playing offense or holding a stable position, awareness is part of strategy. Without reviewing what others are doing, you don’t have a clear view of what winning looks like right now, or how quickly that picture is changing.

Set clear goals for the rest of the year 

After reviewing performance, updating content, tightening technical issues, and refreshing keywords, the next step in your mid-year SEO checkup is setting focused goals for the rest of the year. 

Keep them specific. A goal like “get more traffic” is too vague to drive clear action. Use what you’ve learned, whether that’s from rankings, audit results, or crawl reports, to define outcomes that are tied to your time, resources, and business needs. 

Look for low-effort wins and long-term improvements. Fix pages that rank but don’t get clicks. Update content that dropped after an algorithm change. Strengthen internal links to help strong posts on the edge of page one move up. These small changes can improve results with less time than starting from scratch. 

If AI features are reducing your traffic on top queries, consider focusing more on visibility than clicks. That might mean leaning into content formats that stand out in summaries, like FAQs or short-form video. 

You can also set process goals: publish more consistently (maybe using workflow improvements from Yoast SEO’s Google Docs add-on), clean up old content, reduce crawl waste, or make reporting easier. These are just as important as traffic-focused targets, and they’re often easier to maintain over time. 

Your goals don’t need to be dramatic. Often, refining what already exists brings more gains than chasing something new. Revisit your targets regularly and track your progress without overthinking it. Most importantly, stay flexible heading into Q4, when search activity and competition both tend to spike.

Do your mid-year SEO checkup

Search has changed a lot since January, and it’s not slowing down. A mid-year SEO strategy review gives you the chance to course-correct, refocus your efforts, and keep momentum going into the back half of the year. 

You don’t need to overhaul everything. Just fix what’s broken, improve what matters, and make better decisions with what you know now. Stay consistent, track what shifts, and keep building. 

Google Explains How Long It Takes For SEO To Work

Google’s Martin Splitt and John Mueller discussed how long it takes for SEO to have an effect. Mueller explained that there are different levels of optimization and that some have a more immediate effect than other, more complex changes.

Visible Changes From SEO

Some SEOs like to make blanket statements that SEO is all about links. Others boast that their SEO work can have a dramatic effect in relatively little time. It turns out that those kinds of statements really depend on the actual work that was done.

Google’s John Mueller said that a site starting out from virtually zero optimization to some basic optimization may see near immediate ranking changes in Google.

John Mueller started this part of the conversation:

“I guess another question that I sometimes hear with regards to hiring an SEO is, how long does it take for them to make visible changes?”

Martin Splitt responded:

“Yeah. How long does it take? I’m pretty sure it’s not instant. If you say it takes like a week or a couple of weeks to pick things up, is that the reasonable time horizon or is it longer?”

John answered with the well-worn “it depends” line, which is kind of overdone. But in this case it really does depend on multiple factors related to the scale of the work being done, which in turn influences how long it will take for Google to index the changes and then recalculate rankings. He said if it’s something simple, it won’t take Google much time. But if it’s a lot of changes, it may take significantly longer.

John’s explanation:

“I think, to speak in SEO lingo, it depends. Some changes are easy to pick up quickly, like simple text changes on a page. They just have to be recrawled and reprocessed and that happens fairly quickly.

But, if you make bigger, more strategic changes on a website, then sometimes that just takes a long time.”

Next Stage Of SEO: Monitor Progress

Mueller then says that a good SEO should monitor how the changes they made are affecting rankings. This can be a little tricky because some changes will cause an immediate ranking boost that lasts for a few days and then drops. In my experience, an unshakeable top ranking is generally possible when there’s strong word of mouth and other external signals that tell Google the content is trustworthy and high quality.

Here’s what John Mueller said:

“I think that’s something where a good SEO should be able to help monitor the progress along there. So it shouldn’t be that they go off and make changes and say, ‘Okay, now you have to keep paying me for the next year until we wait what happens.’ They should be able to tell you what is happening, what the progress is, give you some input on the different things that they’re doing regularly. But it is something that is more of a longer term thing.”

Mueller doesn’t go into details about what the hypothetical SEO is “doing regularly,” but in my opinion it’s always helpful to be doing basic promotion: telling people the content is out there, measuring how they respond to it, getting feedback, and then making changes or improvements based on that feedback.

For content sites, a great way to get immediate user feedback is to enable a moderated comment section in which only approved comments show up. I have received a lot of positive reader feedback in the comments on some of my content sites. It’s also useful to make it easy for users to contact the publisher from any page of the site, whether it’s an ecommerce site or an informational blog. User feedback is absolute gold.

Mueller continued his answer:

“I think if you have a website that has never done anything with SEO, probably you’ll see a nice big jump in the beginning as you ramp up and do whatever the best practices are. At some point, it’ll kind of be slow and regular more from there on.”

Martin Splitt observed that this stage of waiting and monitoring requires patience, and Mueller agreed, saying:

“I think being patient is good. But you also need someone like an SEO as a partner to give you updates along the way and say, ‘Okay, we did all of these things,’ and they can list them out and tell you exactly what they did. ‘These things are going to take a while, and I can show you when Google crawls, we can follow along to see like what is happening there. Based on that, we can give you some idea of when to expect changes.’”

Takeaways:

SEO Timelines Vary By Scale Of Change

  • Simple on-page edits may result in quick ranking changes.
  • Larger structural or strategic SEO efforts take significantly longer to be reflected in Google rankings.

SEO Results Are Not Instant

  • Indexing and ranking recalculations take time, even for smaller changes.

Monitoring And Feedback Are Necessary

  • Good SEOs track progress and explain what is happening over time.
  • Ongoing feedback from users can help guide further optimization.

Transparency And Communication

  • Effective SEOs regularly report on their actions and expected timeframes for results.

Google’s John Mueller explained that the time it takes for search optimizations to show results depends on the complexity of the changes made: simple updates are processed faster, while large-scale changes require more time. He emphasized that good SEO isn’t just about making changes; it also involves tracking how those changes affect rankings, communicating progress clearly, and working continuously.

I suggested that user response to content is an important form of feedback because it helps site owners understand what is resonating well with users and where the site is falling short. User feedback, in my opinion, should be a part of the SEO process because Google tracks user behavior signals that indicate a site is trustworthy and relevant to users.

Listen to Search Off The Record Episode 95

This tool strips away anti-AI protections from digital art

A new technique called LightShed will make it harder for artists to use existing protective tools to stop their work from being ingested for AI training. It’s the next step in a cat-and-mouse game—across technology, law, and culture—that has been going on between artists and AI proponents for years. 

Generative AI models that create images need to be trained on a wide variety of visual material, and data sets that are used for this training allegedly include copyrighted art without permission. This has worried artists, who are concerned that the models will learn their style, mimic their work, and put them out of a job.

These artists got some potential defenses in 2023, when researchers created tools like Glaze and Nightshade to protect artwork by “poisoning” it against AI training (Shawn Shan was even named MIT Technology Review’s Innovator of the Year last year for his work on these). LightShed, however, claims to be able to subvert these tools and others like them, making it easy for the artwork to be used for training once again.

To be clear, the researchers behind LightShed aren’t trying to steal artists’ work. They just don’t want people to get a false sense of security. “You will not be sure if companies have methods to delete these poisons but will never tell you,” says Hanna Foerster, a PhD student at the University of Cambridge and the lead author of a paper on the work. And if they do, it may be too late to fix the problem.

AI models work, in part, by implicitly creating boundaries between what they perceive as different categories of images. Glaze and Nightshade change enough pixels to push a given piece of art over this boundary without affecting the image’s quality, causing the model to see it as something it’s not. These almost imperceptible changes are called perturbations, and they mess up the AI model’s ability to understand the artwork.

Glaze makes models misunderstand style (e.g., interpreting a photorealistic painting as a cartoon). Nightshade instead makes the model see the subject incorrectly (e.g., interpreting a cat in a drawing as a dog). Glaze is used to defend an artist’s individual style, whereas Nightshade is used to attack AI models that crawl the internet for art.
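
For readers curious about the mechanics, the sketch below illustrates the general idea of an adversarial perturbation using the classic FGSM method; it is not Glaze’s or Nightshade’s actual algorithm, and the model, image tensor, and target class are all stand-ins.

```python
# Illustrative FGSM-style targeted perturbation (Goodfellow et al., 2015).
# NOT Glaze's or Nightshade's algorithm; `model` is any image classifier.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, target_class, eps=4 / 255):
    """Nudge `image` (float tensor in [0, 1], shape (1, 3, H, W)) toward
    `target_class` with a small, bounded pixel change."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), torch.tensor([target_class]))
    loss.backward()
    # Step against the gradient to make the model favor the target class,
    # then clamp so the change stays visually subtle.
    perturbed = image - eps * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()
```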

Foerster worked with a team of researchers from the Technical University of Darmstadt and the University of Texas at San Antonio to develop LightShed, which learns how to see where tools like Glaze and Nightshade splash this sort of digital poison onto art so that it can effectively clean it off. The group will present its findings at the Usenix Security Symposium, a leading global cybersecurity conference, in August. 

The researchers trained LightShed by feeding it pieces of art with and without Nightshade, Glaze, and other similar programs applied. Foerster describes the process as teaching LightShed to reconstruct “just the poison on poisoned images.” Identifying a cutoff for how much poison will actually confuse an AI makes it easier to “wash” just the poison off. 

LightShed is incredibly effective at this. While other researchers have found simple ways to subvert poisoning, LightShed appears to be more adaptable. It can even apply what it’s learned from one anti-AI tool—say, Nightshade—to others like Mist or MetaCloak without ever seeing them ahead of time. While it has some trouble performing against small doses of poison, those are less likely to kill the AI models’ abilities to understand the underlying art, making it a win-win for the AI—or a lose-lose for the artists using these tools.

Around 7.5 million people, many of them artists with small and medium-size followings and fewer resources, have downloaded Glaze to protect their art. Those using tools like Glaze see it as an important technical line of defense, especially when the state of regulation around AI training and copyright is still up in the air. The LightShed authors see their work as a warning that tools like Glaze are not permanent solutions. “It might need a few more rounds of trying to come up with better ideas for protection,” says Foerster.

The creators of Glaze and Nightshade seem to agree with that sentiment: The website for Nightshade warned the tool wasn’t future-proof before work on LightShed ever began. And Shan, who led research on both tools, still believes defenses like his have meaning even if there are ways around them. 

“It’s a deterrent,” says Shan—a way to warn AI companies that artists are serious about their concerns. The goal, as he puts it, is to put up as many roadblocks as possible so that AI companies find it easier to just work with artists. He believes that “most artists kind of understand this is a temporary solution,” but that creating those obstacles against the unwanted use of their work is still valuable.

Foerster hopes to use what she learned through LightShed to build new defenses for artists, including clever watermarks that somehow persist with the artwork even after it’s gone through an AI model. While she doesn’t believe this will protect a work against AI forever, she thinks this could help tip the scales back in the artist’s favor once again.