How To Break Through An Affiliate Site Plateau & Find New Growth – Ask An SEO via @sejournal, @rollerblader

This week’s Ask an SEO question is:

“I’ve been running an affiliate site for 2 years but hit a plateau. What advanced data analysis techniques can help me identify new growth opportunities that I might be missing?”

This is one of my favorite questions that come up at conferences and in the affiliate marketing programs we manage. Most of the time, the affiliate submits their site or niche, and I can give direct examples and opportunities. But for this, we want to keep everything anonymous, so I’ll share the processes and ideas so you and anyone else reading can implement them, no matter what industry or type of content you produce.

Breaking The Plateaus

There are a few plateaus affiliates face more than others, including:

  • Traffic stagnation.
  • Running out of new products and services to recommend.
  • Revenue flatlines.
  • Running out of topics to talk about.

These are the most common with this question, so I’ll focus on them. If anyone reading this has hit a different one and is looking for ways to overcome it, send the question through my author bio page here. If I’ve worked through it, I’ll do my best to answer it in an upcoming column.

Traffic Stagnation

If you have a website and traffic has stagnated because you dominate for all the main queries and topics, look outside your own writing and knowledge base for help. Instead of hiring writers to help with more content based on what exists within your platform, try funneling new visitors in from other platforms (websites, podcasts, apps, etc.) or bringing people in to create unique content for you by featuring them and asking them to promote it.

To find new topics, ideas, and questions people have, consider adding a forum or community; it can bring new traffic and ideas to your website. Some search engines like Google tend to reward this authentic user-generated content, but it does come with a decent amount of manual labor for monitoring and quality control. The benefit here is you build a community that creates content for you.

Pro-tip: Add a prompt on your main website pages, such as “Question not answered? Click here to ask the community,” that links to the forum, or route it to an answer box where you collect questions and turn them into new guides. This is similar to how Search Engine Journal has a “Submit Questions” section for me and other “Ask an” columnists.
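As a rough sketch (the markup, class names, and URLs below are hypothetical placeholders, not a prescribed implementation), such a prompt can be as simple as a link to the forum plus a small form that sends unanswered questions to an inbox you review:

```html
<!-- Hypothetical "ask the community" prompt for the bottom of a guide.
     /forum/ask and /submit-question are placeholder URLs. -->
<aside class="ask-prompt">
  <p>Question not answered?
    <a href="/forum/ask">Ask the community</a></p>
  <form action="/submit-question" method="post">
    <label for="q">Or send it to us and we may turn it into a new guide:</label>
    <input id="q" name="question" type="text" required>
    <button type="submit">Submit</button>
  </form>
</aside>
```

Either destination works: the forum link feeds the public UGC, while the form variant gives you a private queue of questions to mine for new guides.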

The UGC can begin showing up in Google as well as LLMs like ChatGPT, Perplexity, and Claude, and you can begin attracting new traffic and a new user base. This can all be monetized. But maybe you don’t want the hassle and risk of a UGC platform; there are more options.

Take your top guides and articles and begin turning them into videos. A long-form video can help with YouTube and bring in traffic; it can also be uploaded to Skool if you create a course. Skool and other platforms let you charge a fee for access, and each chapter in the video can become a long-form video or a short that works for YouTube, TikTok, and likely Instagram. With the exception of the shorts, affiliate links can be used on all of these platforms. The benefit of videos is that platforms like YouTube can provide steady streams of traffic, whereas a post on Instagram or TikTok typically only drives traffic for a couple of days to a week.

Now begin adding text versions to social media platforms in ways that fit each one. LinkedIn allows long-form posts and encourages users to ask questions and answer polls, and you can link to your website. Bluesky and X are short-form but allow quick and easy links to your website or pages, although the traffic comes in short bursts. Pinterest is short-form but image-heavy, and a well-crafted pin that gets attention can drive consistent traffic for a year and sometimes longer.

Some partners decide they want to start podcasting. Every topic on your website can become a theme or session, or be combined into a really strong series that becomes a course you can monetize. Find other people with complementary knowledge and/or who have audiences and invite them to participate. You’ll help grow each other’s traffic while sharing expertise. Sometimes your guest may spur new content ideas for you, too.

New Products And Services And Fixing Revenue Flatlines

When you run out of products and services to promote, or you’re hitting the highest AOVs available, revenue begins to flatline. While you cannot control what happens in the merchant or lead-gen website’s funnel, you can control how you make money. This is an affiliate post, so I won’t talk about driving higher EPCs and CPMs or increasing pageviews for ad revenue; instead, I’ll focus on using affiliate links and offers.

Here’s where to begin looking.

Survey Your Audience Or Use Your Analytics For Demographics

Having your audience’s demographics, including age, urban/rural/suburban location, likes and interests, and anything else, can make you a ton of money. If it turns out the majority have dogs and are urban, but you run a cooking site, add pet-friendly recipes, or toys that give dogs exercise and stimulation when they cannot be outside regularly to burn energy.

If the same demographics are local-based, like a group for parents in New England, create snow day resources where you review family-friendly tabletop games for snow days, lists of local restaurants across the area that offer kids eat free or family deals, and affordable snowbird family vacations.

If your audience has a large portion in rural areas, think about the ingredients that are hard to come by there due to smaller grocery stores, then share online resources to access them. This is low-hanging fruit I often see missed: recipe sites focus on the tools and products, but they can also monetize ingredients.

Learn What Else They’re Into

Once you know who your audience is and where they skew demographically, survey them to find their interests. If you can’t get them to take surveys, even with incentives like gift cards or prizes via a drawing (assuming it’s legal where you and your audience live), look up free research documents and use your marketing skills to find hobbies, stores, and associations that have similar audiences.

Maybe your audience is 50-year-old suburbanites who love bird watching. You’ve already maxed out sportswear and hiking equipment, same with books on birds and binoculars. Maybe it turns out they’re also into photography, so you can sell cameras, photo storage solutions, ways to print and sell their photos, editing software, and guides to using the camera and setting up different types of shots.

It could also turn out they love to travel. Create guides of where to go that are friendly for people 50-60 years old, including the types of birds that they could see in each spot along the route, and what to pack based on the season, as weather can change. You can now use affiliate links for hotels and airfare, travel supplies, camera bags for different climates, and ebooks or physical books with trail maps, travel guides, and bird watching books to check off the ones they see.

You do need to watch out for adding too much content outside the core topic of your channel, so you don’t accidentally de-categorize your platforms for SEO or alienate your core reader base. When you go off topic too often, you chase away current and new subscribers while also confusing algorithms. This is easy to resolve with technical SEO by using meta robots tags or robots.txt, and by keeping an editorial calendar, but that is a different topic.
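As a minimal illustration of the robots controls mentioned here (a sketch only; exact behavior varies by search engine), a robots meta tag in a page’s head asks engines not to index that off-topic page while still following its links:

```html
<!-- In the <head> of an off-topic page: keep it out of the index
     but let crawlers follow its links. -->
<meta name="robots" content="noindex, follow">
```

By contrast, a robots.txt `Disallow` rule blocks crawling of a whole section but does not reliably remove already-indexed URLs, so `noindex` is generally the safer choice for individual off-topic pages.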

Now you have new products and services to promote and new merchants to work with, which leads to more affiliate sales and increased revenue. Package them as shopping guides, comparison grids, listicles, etc.

New Topics To Talk About

Above, I mentioned podcast guests, UGC, and a few other ways to spark new topic ideas when you run out of things to talk about. Here are a few more ways I break writer’s block for my own programs, for clients, and for the affiliates in our programs.

  • AlsoAsked.com: You plug in a topic like “running shoes,” and it spits out a ton of potential questions about them. From there, I go to Google or an LLM and type it in, then I look to see what shows up. To go a step further, I may ask, “What are similar questions to this one?” or “What are complementary but different questions to this one?” as a second query to see what I may be missing.
  • Rank trackers: Take a URL for a blog or forum and plug it into a rank tracking tool. It’ll provide a list of keywords, questions, and phrases it shows up for.
  • Comments: Read the comments on YouTube videos for channels that are directly related to your business. These are things people want to know about and can be a way to get new traffic while breaking writer’s block.
  • AI and LLMs: Ask AI for a list of ideas that are related to but not yet covered on your platform, and then have it double-check its suggestions. Not everything it recommends will be relevant, but it could spark ideas for you.

There are almost always solutions to preventing stagnation for affiliates, no matter if it is traffic, revenue, topics, or products and services to promote. You may need to expand your offerings to other types of products and services that match the same demographics or look to other platforms and competitors for content inspiration. I hope this helps, and thank you for asking.

Featured Image: Paulo Bobita/Search Engine Journal

Three new tasks, better navigation, and a bug fix in the Yoast SEO Task List 

We launched the Yoast SEO Task List in December to give you a clear, actionable to-do list for your site’s SEO. In this update, we’ve added three new tasks, improved how you navigate to fixes, and resolved a bug that was showing tasks in the wrong language.

A quick recap: what does the Task List do? 

The Task List scans your site and surfaces specific content that needs attention, ranked by priority with an estimated time to fix. Instead of guessing what to work on next, you click a task and Yoast takes you directly to the right place to make the improvement. Think of it as a personal SEO assistant that knows your site. 

What’s new in this update 

New task: improve your meta descriptions 

Meta descriptions are the short snippets that appear under your page title in Google search results. They don’t directly affect rankings; however, they have a significant impact on whether someone clicks your link. The Task List will now flag recent posts where the meta description is missing or could be stronger, and point you to where you can fix it. Premium users can use the AI Generate button to write one in seconds.

New task: delete your sample page 

Every new WordPress site comes with a default “Sample Page” that most people never delete. It adds no value and can create unnecessary noise for search engines. The Task List will now remind you to remove it if it’s still there. It’s a two-minute job that’s easy to overlook. 

New task: set social sharing images  

Available with Yoast SEO Premium, Yoast WooCommerce SEO, and Yoast SEO AI+

When someone shares your content on Facebook or X, the image that appears alongside it can make a real difference to whether people click. The Task List will now remind you to set a custom social sharing image for your posts and pages, so your content looks its best every time it gets shared. 

Go directly to the right place in the editor 

Previously, clicking a task would open the post editor and leave you to find the right section yourself. Now, Yoast takes you to the exact part of the editor you need: the SEO tab, the readability panel, or the meta description field. Less scrolling, faster fixing. 

Bug fix: tasks now appear in your language 

We fixed a bug where task descriptions were showing up in the site’s language rather than the logged-in user’s language. If you manage a multilingual site, or your personal language settings differ from your site’s default, tasks will now display correctly for you. 

Also in this release 

  • We’ve added a new Yoast tab to the WordPress Plugins screen that groups all your installed Yoast plugins in one place. This requires WordPress 7.0+. 
  • We fixed a bug where alt text changes made via the inline image editor in How-to and FAQ blocks weren’t saving correctly to the frontend. Thanks to @param-chandarana for the report. 

What’s coming next 

We’re continuing to expand the Task List with improvements that surface high-impact changes specific to your content. Users of paid plans will see additional tasks in upcoming releases.

Update to Yoast SEO 27.4 to get these improvements automatically, or download the latest version from the WordPress plugin directory. 

Job titles of the future: Wildlife first responder

Grizzly bears have made such a comeback across eastern Montana that in 2017, the state hired its first-ever prairie-based grizzly manager: wildlife biologist Wesley Sarmento. 

For some seven years, Sarmento worked to keep both the bears, which are still listed as threatened under the Endangered Species Act, and the humans, who are sprawling into once-wild spaces, out of trouble. Based in the small city of Conrad, population 2,553, he acted sort of like a first responder, trying to defuse potentially dangerous situations. He even got caught in some himself—which is why, before he left the role to pursue a PhD, he turned to drones to get the job done. 

The bear necessities

Sarmento was studying mountain goats in Glacier National Park when he first started working with bears. To better understand how goats responded to the apex predator, he dressed up in a bear costume once a week for over three years. 

When he later started as grizzly manager, he often drove long distances to push bears away from farms. Bears are drawn to spilled or leaking grains, and an open silo quickly turns into a buffet. Sarmento would typically arrive armed with a shotgun, cracker shells, and bear spray, but after he narrowly escaped getting mauled one day, he knew he had to pivot.

“In that moment,” he says, “I was like, I am gonna get myself killed.”

A bird’s-eye view

Sarmento first turned to two Airedale dogs, a breed known for deterring bears on farms, but the dogs were easily sidetracked. Meanwhile, drones were slowly becoming more common tools for biologists in a range of activities, including counting birds and mapping habitats.

He first took one into the field in 2022, when a grizzly mom and two cubs were found rummaging around in a silo outside of town. The drone’s infrared sensors helped him quickly find their location, and he used the aircraft’s sound to drive them away from the property. (Researchers suspect bears instinctively dislike the whir of blades because it sounds like a swarm of bees.) “The whole thing was so clean and controlled,” he says. “And I did it all from the safety of my truck.”

Since then, the flying machine that Sarmento bought for $4,000—a fairly simple model with a thermal camera and 30 minutes of battery life—has shown its potential for detecting grizzlies in perilous terrain he’d otherwise have to approach on foot, like dense brush or hard-to-reach river bottoms.

A new technological foundation

Now studying wildlife ecology at the University of Montana, Sarmento is hoping to design a drone campus police can use to deter black bears from school grounds. In the future, he hopes, AI image recognition might be broadly integrated into his wildlife management work—maybe even helping drones identify bears and autonomously divert them from high-traffic areas.

All this helps keep bears from learning behaviors that lead to conflict with people—which typically ends badly for the bear and is occasionally fatal for humans.

“The out-of-the-box technology doesn’t exist yet, but the hope is to keep exploring applications,” he says. “Drones are the next frontier.” 

Emily Senkosky is a writer with a master’s degree in environmental science journalism from the University of Montana.

You have no choice in reading this article—maybe

Uri Maoz loved doing his human research, back when he was getting his PhD. He was studying a very specific topic in computational neuroscience: how the brain instructs our arms to move and how our gray matter in turn perceives that motion. 

Then his professor asked him to deliver an undergrad lecture. Maoz assumed his boss was going to tell him exactly what to do, or at least throw some PowerPoint slides his way. But no. Maoz had free rein to teach anything, as long as it was relevant to the students. “I could have gone to human brain augmentation,” he says. “Cyborgs or whatever.”

Yet that admittedly fun and borderline sci-fi topic wasn’t what popped, unbidden, into his mind. His idea, he recalls with excitement: “What neuroscience has to say about the question of free will!” 

How—or whether—humans make decisions (like, say, about what to discuss in an undergrad lecture) had been on his mind since he’d read an article in his early twenties suggesting that … maybe they didn’t. This question might naturally beget others: Had he even had a choice about whether to read that article in the first place? How would he ever know if he was responsible for making decisions in his life or if he just had the illusion of control?

“After that, there was no turning back,” says Maoz, now a professor at Chapman University, in California. He finished his PhD work in human movement, but afterward he scooted further up the neural chain to find out how desires and beliefs turn into actions—from raising an arm to choosing someone to ask out to dinner on a Friday night.

Today, Maoz is a central figure in the attempt to (sort of, maybe) answer how that neural chain functions. His research has since overturned and reinterpreted canonical neuroscience studies and united the straight-scientific and philosophical sides of the free-will question. More than anything, though, he’s succeeded in uncovering new wrinkles in the debate.

Machines and magic tricks

The concept of free will seems straightforward, but it doesn’t have a universally accepted definition. One intuitive notion is that it’s the ability to make our own decisions and take our own actions on purpose—that we control our lives. But physicists might ask if the universe is deterministic, following a preordained path, and if human choices can still happen in such a universe. 

That’s a question for them, Maoz says. What neuroscientists can do is figure out what’s going on in the brain when people make decisions. “And that’s what we’re trying to do: to understand how our wishes, desires, beliefs, turn into actions,” he says.

By the time Maoz had finished his PhD, in 2008, neuroscientific research into the question had been going on for decades. One foundational study from the 1960s showed that a hand movement—something a person seemingly decides to do—was preceded by the appearance in the brain of an electrical signal called the “readiness potential.” 

Building on that result, in the 1980s a neuroscientist named Benjamin Libet did the experiment that had first piqued Maoz’s interest in the topic—one that many, until recently, interpreted as a death knell for the concept of free will.

“He just had people sit there, and whenever they feel like it, they would go like this,” says Maoz, wiggling his wrist. Libet would then ask where a rotating dot was on a screen when they first had the urge to flick. He found that the readiness potential appeared not only before they moved their hand but before they reported having the urge to move—or, in Libet’s interpretation, before they knew they were going to move. 

Studies since have confirmed the observation and shown that the readiness potential appears a second or two—and maybe, fMRI implies, up to 10 seconds—before participants report making a conscious decision. “It suggests we are essentially passengers in a self-driving car,” says Maoz. “The unconscious biological machine does all the steering, but our conscious mind sits in the driver’s seat and takes the credit.” 

Maoz initially approached his own research with variations on Libet’s experiments. He worked with epilepsy patients who already had electrodes in their brains, for clinical purposes, and was able to predict which hand they would raise before they raised it. 

Still, some of the Libet-inspired studies people were doing nagged at him. “All these results were about completely arbitrary decisions. Raise your hand whenever you feel like it,” he says. “Why? No reason.” A decision like that is quite different from, say, choosing to break up with your partner. Try telling someone they weren’t in the driver’s seat for that.

The field wasn’t looking at meaningful decisions, he says—the ones that actually set the course of lives. 

Maoz began pulling in philosophers to help guide his approach. They would challenge him to confront the semantic differences between things like intention, desire, and urge. Neuroscientists have tended to lump those concepts together, but philosophers tease them apart: Desire is a want that doesn’t necessarily progress toward an action; urge carries implications of immediacy and compulsion; and intention involves committing to a plan. (Maoz has come to focus specifically on intention—including, recently, the potential intentions of AI.)

In 2017, he organized his first in a series of free-will conferences, drawing many autonomy-interested philosophers. “Thank you so much for coming,” he recalls saying at the opening of the meeting. “As if you had a choice.” One day, the crew took an excursion out on a lake. As the group munched on shrimp, someone joked that they hoped the boat didn’t sink, because everybody in the field would die. 

The comment didn’t make Maoz feel existential dread. Instead, he figured that if the whole field was already there, why not lasso them all into writing a research grant? “He just thinks what should be the next step and just has a very good ability to just make it happen,” says Liad Mudrik, a neuroscientist at Tel Aviv University and a frequent collaborator.

That ability is special among scientists, says Chapman colleague Aaron Schurger, with whom Maoz co-directs the Laboratory for Understanding Consciousness, Intentions, and Decision-Making (LUCID, appropriately). “I really think that Uri is kind of at the nexus of this field right now because he’s really, really good at bringing people together around these big ideas,” he says.

Donations and interruptions

Maoz has recently been making progress on one of the big ideas that have consistently occupied his working hours: how trivial and significant decisions play out differently in the brain. In collaborations with Mudrik, he’s parsed the neural difference between picking and choosing—their terms for arbitrary decisions and those that change your life and tug on your emotions. 

Readiness potential? Their measurements didn’t clock it ahead of choices. In 2019, Maoz and a crew published a paper measuring the electrical activity in people’s brains as they pressed a key to choose one of two nonprofits to donate $1,000 to—for real, with actual dollars. Then the researchers compared that activity with what they saw when the same group pressed a key at random to donate $500 each to two nonprofits. The team saw the readiness potential in the arbitrary decision, but not for the $1,000 question. 

Libet’s result, they concluded, doesn’t apply to the important stuff, which means readiness potential might not actually be a sign that your brain is making a choice before you’re aware of it. “If Libet would have chosen to focus on deliberate decisions, then maybe the entire debate about neuroscience proving free will to be an illusion would have been spared from us,” Mudrik says. 

Maoz’s research has spurred others to reinterpret Libet’s work. It’s “enriched my thought process a great deal,” says Bianca Ivanof, a psychologist whose dissertation scrutinized Libet’s methods. They turn out to identify readiness potential at different times depending on how the rotating-dot setup is designed, complicating the ability to compare and interpret results.

Maoz has also continued to gather data on the subject. Last year, for example, he used an EEG to measure electrical signals in people’s brains as they got ready to press a keyboard space bar. At random moments, he interrupted their preparations with an audible tone and asked them about their intentions. He saw no connection between the readiness potential and whether or not they were planning to tap the key—evidence that the potential doesn’t represent the buildup of either conscious or unconscious plans. The team did see a signal, though, in a different part of the brain when people said they were preparing to move.

So … that’s free will? Sadly, Maoz would be compelled to say Well, not exactly. An electrical impulse in our brains can shed only so much light on whether we truly are the architects of our own fates. And maybe the confusing data from neurons is actually the point. “I don’t think it is a yes-or-no question,” Maoz says. Maybe our less meaningful choices aren’t mindfully made but big ones are; maybe we have the conscious power to change an intended action, but only if our brains are in a particular state. 

Neuroscientists likely can’t figure out, on their own, if free will exists. But they can, Maoz says, parse how semantically distinct decision-making forces—desires, urges, intentions, wishes, beliefs—manifest in our brains and become actions. “That is something that we are making progress on,” he says, “and I think that that’s going to help us understand what we do control.” And perhaps also help us make peace with what we do not. 

Sarah Scoles is a freelance science journalist and author based in southern Colorado.

The Download: how humans make decisions, and Moderna’s “vaccine” word games

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

You have no choice in reading this article—maybe

How do humans make decisions? The question has been on Uri Maoz’s mind since he read an article in his early twenties suggesting that… maybe they didn’t.  
 
Had he even had a choice about whether to read that article in the first place? How would he ever know if he was truly responsible for making any decisions? “After that, there was no turning back,” says Maoz, now a professor of computational neuroscience at Chapman University. 
 
Today, Maoz is a central figure in efforts to understand how desires and beliefs turn into actions. He’s also uncovered new wrinkles in the debate. Read the full story on his discoveries.

—Sarah Scoles

This article is from the next issue of our print magazine, packed with stories all about nature. Subscribe now to read the full thing when it lands on Wednesday, April 22.

What’s in a name? Moderna’s “vaccine” vs. “therapy” dilemma 

Moderna, the covid-19 shot maker, is using its mRNA technology to destroy tumors through a very, very promising technique known as a cancer vacc— 

“It’s not a vaccine,” a spokesperson for Merck said before the V-word could be uttered. “It’s an individualized neoantigen therapy.” 

Oh, but it is a vaccine, and it looks like a possible breakthrough. But it’s been rebranded to avoid vaccine fearmongering—and not everyone is happy about the word game. Read the full story. 

—Antonio Regalado

This article is from The Checkup, our weekly newsletter covering the latest in biotech. Sign up to receive it in your inbox every Thursday. 

The must reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Sam Altman’s home has been attacked twice in two days 
A driver reportedly fired a gun at his property on Sunday. (SF Standard)
+ A Molotov cocktail was thrown at his home on Friday. (NBC News)
+ The suspect wrote essays warning AI would end humanity. (SF Chronicle)
+ The attacks expose growing divides in opinion on AI. (Axios)

2 AI weapons are ushering in a new kind of arms race 
Countries are racing to deploy AI in military systems. (NYT $) 
+ The Pentagon wants AI firms to train on classified data. (MIT Technology Review)
+ Where OpenAI’s technology could show up in Iran. (MIT Technology Review)

3 Artemis II was a success 
Astronauts did an array of experiments that will be crucial to the future of both the program itself and deep-space missions. (Guardian)
+ But next steps for the Artemis missions are uncertain. (Ars Technica)

4 OpenAI and Elon Musk are heading toward a massive courtroom clash
The company has accused Musk of a “legal ambush.” (Engadget)
+ He’s lost a streak of cases ahead of the showdown. (FT $)

5 AI job fears in China are fueling a viral “ability harvester” project 
It claims to turn human skills into AI tools. (SCMP)
+ Hustlers are cashing in on China’s OpenClaw AI craze. (MIT Technology Review)

6 Governments are hiding information about the Iran war online 
Through restrictions on internet access and satellite imagery. (NPR)  

7 Apple is testing four smart glasses that could rival Meta Ray-Bans 
They’re part of a broader wearables strategy. (Bloomberg $) 

8 Meta is building an AI version of Mark Zuckerberg to interact with staff
It’s being trained on his mannerisms, voice, and statements. (FT $) 

9 Anthropic is asking Christian leaders for guidance 
It’s seeking advice on building moral machines. (WP $)
+ AI agents have spread their own religions. (MIT Technology Review)

10 A dancer with MND is performing again through an avatar 
Her brainwaves powered the digital dancer. (BBC)

Quote of the day

“Earth was this lifeboat hanging in the universe.”

—Artemis II astronaut Christina Koch describes her view of Earth from space, the Guardian reports.

One more thing

RAVEN JIANG

How AI and Wikipedia have sent vulnerable languages into a doom spiral

When Kenneth Wehr started managing the Greenlandic-language version of Wikipedia, he discovered that almost every article had been written by people who didn’t speak the language.  

A growing number of them had been copy-pasted into Wikipedia from machine translators—and were riddled with elementary mistakes. This is beginning to cause a wicked problem. 

AI systems, from Google Translate to ChatGPT, learn new languages by scraping text from Wikipedia. This could push the most vulnerable languages on Earth toward the precipice. 

Read the full story on what happens when AI gets trained on junk pages.

—Jacob Judah 

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.)

+ Hungary’s next health minister can throw some serious shapes.  
+ Here’s a welcome route to an AI-free Google search.
+ Movievia eschews endless scrolling to find the right film for your needs.
+ A photography trick has turned a giant glacier into a tiny, living diorama.

Want to understand the current state of AI? Check out these charts.

  • The US-China AI race is closer than you think: Chinese models from DeepSeek and Alibaba now trail American ones by razor-thin margins. Meanwhile, the US has more data centers and capital, while China leads in research publications and robotics.
  • AI benchmarks are badly broken: One popular math benchmark has a 42% error rate, and models can game tests by training on the answers. Strong test scores increasingly fail to predict how AI actually performs in the real world.
  • Jobs and anxiety are both rising: Software developer employment for workers aged 22–25 has dropped nearly 20% since 2022, with AI likely a factor. Globally, 59% of people think AI will do more good than harm—but 52% say it still makes them nervous.
  • Regulation is losing the race: The EU banned predictive policing AI, and US states passed a record 150 AI-related bills, but experts say lawmakers don’t yet understand the technology well enough to govern it effectively.

If you’re following AI news, you’re probably getting whiplash. AI is a gold rush. AI is a bubble. AI is taking your job. AI can’t even read a clock. The 2026 AI Index from Stanford University’s Institute for Human-Centered Artificial Intelligence, AI’s annual report card, comes out today and cuts through some of that noise. 

Despite predictions that AI development may hit a wall, the report says that the top models just keep getting better. People are adopting AI faster than they picked up the personal computer or the internet. AI companies are generating revenue faster than companies in any previous technology boom, but they’re also spending hundreds of billions of dollars on data centers and chips. The benchmarks designed to measure AI, the policies meant to govern it, and the job market are struggling to keep up. AI is sprinting, and the rest of us are trying to find our shoes.

All that speed comes at a cost. AI data centers around the world can now draw 29.6 gigawatts of power, enough to run the entire state of New York at peak demand. Annual water use from running OpenAI’s GPT-4o alone may exceed the drinking water needs of 12 million people. At the same time, the supply chain for chips is alarmingly fragile. The US hosts most of the world’s AI data centers, and one company in Taiwan, TSMC, fabricates almost every leading AI chip. 

The data reveals a technology evolving faster than we can manage. Here’s a look at some of the key points from this year’s report. 

The US and China are nearly tied

In a long, heated race with immense geopolitical stakes, the US and China are almost neck and neck on AI model performance, according to Arena, a community-driven ranking platform that allows users to compare the outputs of large language models on identical prompts. In early 2023, OpenAI had a lead with ChatGPT, but this gap narrowed in 2024 as Google and Anthropic released their own models. In February 2025, R1, an AI model built by the Chinese lab DeepSeek, briefly matched the top US model, ChatGPT. As of March 2026, Anthropic leads, trailed closely by xAI, Google, and OpenAI. Chinese models from DeepSeek and Alibaba lag only modestly. With the best AI models separated in the rankings by razor-thin margins, they’re now competing on cost, reliability, and real-world usefulness.

Chart of the performance of top models on the Arena by select providers, showing Arena scores from May 2023 to January 2026, with all models trending upward. The scores are tightly packed: US-based Anthropic, xAI, Google, and OpenAI lead Alibaba, DeepSeek, and Mistral (in that order), while Meta trails the pack.

The index notes that the US and China have different AI advantages. While the US has more powerful AI models, more capital, and an estimated 5,427 data centers (more than 10 times as many as any other country), China leads in AI research publications, patents, and robotics. 

As competition intensifies, companies like OpenAI, Anthropic, and Google no longer disclose their training code, parameter counts, or data-set sizes. “We don’t know a lot of things about predicting model behaviors,” says Yolanda Gil, a computer scientist at the University of Southern California who coauthored the report. This lack of transparency makes it difficult for independent researchers to study how to make AI models safer, she says.

AI models are advancing super fast

Despite predictions that development will plateau, AI models keep getting better and better. By some measures, they now meet or exceed the performance of human experts on tests that aim to measure PhD-level science, math, and language understanding. SWE-bench Verified, a software engineering benchmark for AI models, saw top scores jump from around 60% in 2024 to almost 100% in 2025. In 2025, an AI system produced a weather forecast on its own.  

“I am stunned that this technology continues to improve, and it’s just not plateauing in any way,” says Gil.

Line chart of select AI Index technical performance benchmarks vs. human performance, showing that skills such as image classification, English language understanding, multitask language understanding, visual reasoning, medium-level reading comprehension, and multimodal understanding and reasoning surpassed the human baseline at or before 2025, with autonomous software engineering, mathematical reasoning, and agent multimodal computer use trending toward the human baseline by 2026.

However, AI still struggles in plenty of other areas. Because the models learn by processing enormous amounts of text and images rather than by experiencing the physical world, AI exhibits “jagged intelligence.” Robots are still in their early days and succeed in only 12% of household tasks. Self-driving cars are farther along: Waymos are now roaming across five US cities, and Baidu’s Apollo Go vehicles are shuttling riders around in China. AI is also expanding into professional domains like law and finance, but no model dominates the field yet. 

But the way we test AI is broken

These reports of progress should be taken with a grain of salt. The benchmarks designed to track AI progress are struggling to keep up as models quickly blow past their ceilings, the Stanford report says. Some are poorly constructed—a popular benchmark that tests a model’s math abilities has a 42% error rate. Others can be gamed: when models are trained on benchmark test data, for example, they can learn to score well without getting smarter. 

Because AI is rarely used the same way it’s tested, strong benchmark performance doesn’t always translate to real-world performance. And for complex, interactive technologies such as AI agents and robots, benchmarks barely exist yet. 

AI companies are also sharing less about how their models are trained, and independent testing sometimes tells a different story from what they report. “A lot of companies are not releasing how their models do in certain benchmarks, particularly the responsible-AI benchmarks,” says Gil. “The absence of how your model is doing on a benchmark maybe says something.” 

AI is starting to affect jobs

Within three years of going mainstream, AI is now used by more than half of people around the world, a rate of adoption faster than the personal computer or the internet. An estimated 88% of organizations now use AI, and four in five university students use it. 

It’s early days for deployment, and AI’s impact on jobs is hard to measure. Still, some studies suggest AI is beginning to affect young workers in certain professions. According to a 2025 study by economists at Stanford, employment for software developers aged 22 to 25 has fallen nearly 20% since 2022. The decline might not be pinned on AI alone, as broader macroeconomic conditions could be to blame, but AI appears to be playing a part.

Two line charts showing normalized headcount trends by age group from 2021 through 2025. On the left, for software developers, the early-career (ages 22-25) cohort drops rapidly after a peak in September 2022, while other age groups keep rising, albeit less steeply. On the right, customer support agents show a similar trend, although the decline for the early-career group is less steep than for software developers.

Employers say that hiring may continue to tighten. According to a 2025 survey conducted by McKinsey & Company, a third of organizations expect AI to shrink their workforce in the coming year, particularly in service and supply chain operations and software engineering. AI is boosting productivity by 14% in customer service and 26% in software development, according to research cited by the index, but such gains are not seen in tasks requiring more judgment. Overall, it’s still too early to understand the bigger economic impact of AI. 

People have complicated feelings about AI 

Around the world, people feel both optimistic and anxious about AI: 59% of people think that it will provide more benefits than drawbacks, while 52% say that it makes them nervous, according to an Ipsos survey cited in the index. 

Notably, experts and the public see the future of AI very differently, according to a Pew survey. The biggest gap is around the future of work: While 73% of experts think that AI will have a positive impact on how people do their jobs, only 23% of the American public thinks so. Experts are also more optimistic than the public about AI’s impact on education and medical care, but they agree that AI will hurt elections and personal relationships.

Bar chart of US perceptions of AI’s societal impact, contrasting US adults with AI experts. The percentage of AI experts saying AI will have a positive impact in the next 20 years is two to three times higher than that of US adults. Experts are most optimistic about medical care, with 84% predicting a positive outcome (versus 44% of US adults). The greatest gap is on jobs, with experts polling at 73% and US adults at 23%. Both groups have similarly low expectations for AI in elections (11% of experts, 9% of adults).

Among all countries surveyed, Americans trust their government least to regulate AI appropriately, according to another Ipsos survey. More Americans worry federal AI regulation won’t go far enough than worry it will go too far. 

Governments are struggling to regulate AI

Governments around the world are struggling to regulate AI, but there were some minor successes last year. The EU AI Act’s first prohibitions, which ban the use of AI in predictive policing and emotion recognition, took effect. Japan, South Korea, and Italy also passed national AI laws. Meanwhile, the US federal government moved toward deregulation, with President Trump issuing an executive order seeking to handcuff states from regulating AI. 

Despite this federal action, state legislatures in the US passed a record 150 AI-related bills. California enacted landmark legislation, including SB 53, which mandates safety disclosures and whistleblower protections for developers of AI models. New York passed the RAISE Act, requiring AI companies to publish safety protocols and report critical safety incidents.

Line chart showing the number of AI-related bills passed into law by all US states from 2016 to 2025, rising sharply from 2023 and peaking at 150 bills in 2025.

But for all the legislative activity, Gil says, regulation is running behind the technology because we don’t really understand how it works. “Governments are cautious to regulate AI because … we don’t understand many things very well,” she says. “We don’t have a good handle on those systems.”

Why opinion on AI is so divided

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

In an industry that doesn’t stand still, Stanford’s AI Index, an annual roundup of key results and trends, is a chance to take a breath. (It’s a marathon, not a sprint, after all.)

This year’s report, which dropped today, is full of striking stats. A lot of the value comes from having numbers to back up gut feelings you might already have, such as the sense that the US is gunning harder for AI than everyone else: It hosts 5,427 data centers (and counting). That’s more than 10 times as many as any other country.  

There’s also a reminder that the hardware supply chain the AI industry relies on has some major choke points. Here’s perhaps the most remarkable fact: “A single company, TSMC, fabricates almost every leading AI chip, making the global AI hardware supply chain dependent on one foundry in Taiwan.” One foundry! That’s just wild.

But the main takeaway I have from the 2026 AI Index is that the state of AI right now is shot through with inconsistencies. As my colleague Michelle Kim put it today in her piece about the report: “If you’re following AI news, you’re probably getting whiplash. AI is a gold rush. AI is a bubble. AI is taking your job. AI can’t even read a clock.” (The Stanford report notes that Google DeepMind’s top reasoning model, Gemini Deep Think, scored a gold medal in the International Math Olympiad but is unable to read analog clocks half the time.)

Michelle does a great job covering the report’s highlights. But I wanted to dwell on a question that I can’t shake. Why is it so hard to know exactly what’s going on in AI right now?  

The widest gap seems to be between experts and non-experts. “AI experts and the general public view the technology’s trajectory very differently,” the authors of the AI Index write. “Assessing AI’s impact on jobs, 73% of U.S. experts are positive, compared with only 23% of the public, a 50 percentage point gap. Similar divides emerge with respect to the economy and medical care.”

That’s a huge gap. What’s going on? What do experts know that the public doesn’t? (“Experts” here means US-based researchers who took part in AI conferences in 2023 and 2024.)

I suspect part of what’s going on is that experts and non-experts base their views on very different experiences. “The degree to which you are awed by AI is perfectly correlated with how much you use AI to code,” a software developer posted on X the other day. Maybe that’s tongue-in-cheek, but there’s definitely something to it.

The latest models from the top labs are now better than ever at producing code. Because technical tasks like coding have right or wrong results, it is easier to train models to do them, compared with tasks that are more open-ended. What’s more, models that can code are proving to be profitable, so model makers are throwing resources at improving them.

This means that people who use those tools for coding or other technical work are experiencing this technology at its best. Outside of those use cases, you get more of a mixed bag. LLMs still make dumb mistakes. This phenomenon has become known as the “jagged frontier”: Models are very good at doing some things and less good at others.

The influential AI researcher Andrej Karpathy also had some thoughts. “Judging by my [timeline] there is a growing gap in understanding of AI capability,” he wrote in reply to that X post. He noted that power users (read: people who use LLMs for coding, math, or research) not only keep up to date with the latest models but will often pay $200 a month for the best versions. “The recent improvements in these domains as of this year have been nothing short of staggering,” he continued.

Because LLMs are still improving fast, someone who pays to use Claude Code will in effect be using a different technology from someone who tried using the free version of Claude to plan a wedding six months ago. Those two groups are speaking past each other.

Where does that leave us? I think there are two realities. Yes, AI is far better than a lot of people realize. And yes, it is still pretty bad at a lot of stuff that a lot of people care about (and it may stay that way). Anyone making bets about the future on either side should bear that in mind.

Google Patent Signals New Search Layer

Google has obtained a U.S. patent for a system that generates AI landing pages personalized to each user.

The patent, “AI-generated content page tailored to a specific user,” makes 20 claims suggesting that Google may want to build custom landing pages for specific search queries.

How It Could Work

The system outlined in the patent starts with evaluation. Google analyzes a query, the user’s context, and a set of candidate landing pages — likely the pages it would have ranked otherwise.

The system grades pages on several points. Low grades might result from missing product details, thin content, weak navigation, or poor engagement signals. The system could then generate new versions of those pages tailored to individual users.

Two searchers who enter identical queries for running shoes, for example, might see different landing pages: one shows product comparisons, while the other provides a direct path to purchase.

The AI-generated pages are not static. The patent describes feedback loops that measure user behavior, such as clicks, time on page, and conversions. Those signals go back into the system, refining future versions.

The result is a dynamic experience. Google could generate many pages and send each searcher to a unique, customized version. Shopping-related queries could conceivably land on a page with purchase options.

A likely path for dynamic pages is through AI Overviews, which already summarize information. A next step could expand those summaries into interactive experiences and, perhaps, new web pages.


Google increasingly provides on-page answers to search queries, separating businesses from would-be customers.

Trend

The patent — US12536233B1, issued by the U.S. Patent and Trademark Office on January 27, 2026 — has drawn significant attention.

For example, Greg Zakowicz, an ecommerce and marketing consultant, described the concept as “a new layer in the economics of search.”

That idea of a new layer points to the growing tension between website owners and the various platforms that index and ingest their pages.

Yet there has long been something of a give-and-take between search and content. Each party — platform and page owner — needed the other. But over the years, an evolving search industry has separated would-be customers from businesses.

  • Discovery. Early on, Google returned blue links that sent users to websites for answers and transactions.
  • Monetization. Advertising added a commercial layer, placing sponsored (paid) links alongside organic.
  • Answers. Google introduced its Knowledge Graph in 2012 and began surfacing facts directly from its own entity database.
  • Evaluation. Rich results used structured data to display reviews, product details, and recipes, helping searchers with decisions.
  • Extraction. In 2014, Google rolled out featured snippets that extracted answers from websites, providing information without a click.
  • Interaction. Vertical search experiences, such as Shopping, Flights, and Hotels, introduced full interfaces for comparison and decision-making.
  • Synthesis. More recently, AI Overviews ingest content from external pages into a single response, guiding decisions in a more conversational format.
  • Experience. The patent described here suggests a next step wherein AI-generated pages get the clicks.

Each new layer changes the “economics of search,” as Zakowicz puts it.

Ecommerce Impact

Patents do not guarantee outcomes. Google may never introduce intermediary landing pages. But the concept aligns with a natural progression in search.

To a degree, each new layer lessens the influence of website owners, including ecommerce merchants, over layout, messaging, and product presentation. The experience becomes algorithmically assembled.

That shift places a premium on relationships that merchants control.

Owned audiences, such as email and SMS subscribers, are direct connections that search interfaces or AI layers do not mediate.

A shopper who arrives via a newsletter or a marketing message has chosen the brand, not an algorithmically assembled page. As more discovery happens within platforms, those direct channels become a form of insulation.

Conversely, data becomes important for search visibility. If systems as described in the patent rely on structured inputs, then product feeds, Schema.org markup, and clean attribute data may determine how and whether items appear in generated experiences. In effect, the merchant’s role shifts from designing pages to supplying quality inputs. The opportunity to garner clicks remains.
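Since the patent suggests generated experiences would be assembled from structured inputs, a merchant’s Schema.org markup becomes a key feed. As a rough sketch (the product details and URLs here are hypothetical, not from the patent), such markup might be produced like this:

```typescript
// Minimal sketch of the kind of structured input a merchant might supply.
// The product details below are hypothetical.
const product = {
  "@context": "https://schema.org",
  "@type": "Product",
  name: "Trail Running Shoe", // hypothetical product
  sku: "TRS-001",
  offers: {
    "@type": "Offer",
    price: "89.99",
    priceCurrency: "USD",
    availability: "https://schema.org/InStock",
  },
};

// Serialized as a JSON-LD script tag for embedding in a product page.
const jsonLd =
  `<script type="application/ld+json">${JSON.stringify(product, null, 2)}</script>`;
```

The cleaner and more complete these attributes are, the less an algorithmically assembled page has to guess about the product.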

Thus the combined challenges of generating direct traffic and encouraging search discovery have familiar solutions: (i) own the customer relationship whenever possible, and (ii) optimize content so bots, programs, and algorithms can read it.

New Google Spam Policy Targets Back Button Hijacking via @sejournal, @MattGSouthern

Google added a new section to its spam policies designating “back button hijacking” as an explicit violation under the malicious practices category. Enforcement begins on June 15, giving websites two months to make changes.

Google published a blog post explaining the policy. It also updated the spam policies documentation to list back-button hijacking alongside malware and unwanted software as a malicious practice.

What Is Back Button Hijacking

Back button hijacking occurs when a site interferes with browser navigation and prevents users from returning to the previous page. Google’s blog post describes several ways this can happen.

Users might be sent to pages they never visited. They might see unsolicited recommendations or ads. Or they might be unable to navigate back at all.

Google wrote in the blog post:

“When a user clicks the ‘back’ button in the browser, they have a clear expectation: they want to return to the previous page. Back button hijacking breaks this fundamental expectation.”

Why Google Is Acting Now

Google said it’s seen an increase in this behavior across the web. The blog post noted that Google has previously warned against inserting deceptive pages into browser history, referencing a 2013 post on the topic, and said the behavior “has always been against” Google Search Essentials.

Google wrote:

“People report feeling manipulated and eventually less willing to visit unfamiliar sites.”

What Enforcement Looks Like

Sites involved in back button hijacking risk manual spam penalties or automated demotions, both of which can lower their visibility in Google Search results.

Google is giving a two-month grace period before enforcement starts on June 15. This follows a similar pattern to the March 2024 spam policy expansion, which also gave sites two months to comply with the new site reputation abuse policy.

Third-Party Code As A Source

Google’s blog post acknowledges that some back-button hijacking may not originate from the site owner’s code.

Google wrote:

“Some instances of back button hijacking may originate from the site’s included libraries or advertising platform.”

Google’s wording indicates sites can be affected even if issues come from third-party libraries or ad platforms, placing responsibility on websites to review what runs on their pages.

How This Fits Into Google’s Spam Policy Framework

The addition falls under Google’s category of malicious practices. That section discusses behaviors causing a gap between user expectations and experiences, including malware distribution and unwanted software installation. Google expanded the existing spam policy category instead of creating a new one.

The March 2026 spam update completed its rollout less than three weeks ago. That update enforced existing policies without adding new ones. Today’s announcement adds new policy language ahead of the June 15 enforcement date.

Why This Matters

Sites using advertising scripts, content recommendation widgets, or third-party engagement tools should audit those integrations before June 15. Any script that manipulates browser history or prevents normal back-button navigation is now a potential spam violation.

The two-month window is the compliance period. After June 15, Google can take manual or automated action.

Sites that receive a manual action can submit a reconsideration request through Search Console after fixing the issue.

Looking Ahead

Google hasn’t indicated whether enforcement will come through a dedicated spam update or through ongoing SpamBrain and manual review.

The Dangerous Seduction Of Click-Chasing

It works, until it doesn’t.

The Chase

Imagine you’re a news publisher. Your journalism is good, you write original stories, and your website is relatively popular within your editorial niche.

Revenue is earned primarily via advertising. Google search is your biggest source of visitors.

Management demands growth, and elevates traffic to the throne of all key performance indicators. Engagement, loyalty, subscriptions – these are now secondary objectives. Getting the click, that is the driving purpose.

You look at your channels to determine where growth is most likely to come from. Search seems the most viable channel. So, you make SEO a key focus area.

As part of your SEO efforts, you come across specific tactics that cause your stories to generate more clicks. These tactics are very effective. Applying them to your stories results in significantly more traffic than before.

You’ve caught the scent. The chase for clicks is on.

These tactics demand that your stories focus on clicks above all. Within the context of these SEO-first tactics, every story is a traffic opportunity.

At first, you manage to apply these tactics within the framework of your existing journalism. Your stories are still good and unique, and you apply SEO as best you can to ensure each gets the best chance of generating traffic. It works, and your traffic grows.

But the pressures of management demand more. More growth. More revenue. More ad impressions. More traffic.

The newsroom submits. Stories are commissioned only if they have sufficient traffic potential. Journalists learn to just write stories that generate clicks. Headlines are crafted to maximize click-through rates, not to inform readers. You write multiple stories about the exact same news, each with a slightly different angle. Articles bury the lede.

Everything is subject to the chase.

Your scope expands. You don’t just write stories within your established specialism – you branch out. Different topics. New sections. Product reviews and recommendations. Listicles.

Everything is fair game, as long as it generates clicks.

And it works. Oh boy, does it work.

Image Credit: Barry Adams

The flywheel gathers momentum. You learn exactly what people click on, how to craft the perfect headline, select the ideal image, find the precise angle that will make people stop scrolling and tap on your article.

Traffic keeps growing.

But, somehow, you don’t feel entirely at ease. Because you know that, when you look at your content objectively, something has been lost. Your site used to be about journalism, about informing readers, improving knowledge and awareness, and enabling policies and decisions. It used to be good.

Now, none of that really matters anymore. Your site is about clicks. Everything else is secondary.

But management is happy. Revenue is up. Profits surge. So it’s alright, isn’t it?

Isn’t it?

The first Google core update that hurts
Image Credit: Barry Adams

Google rolls out a core algorithm update. You lose 20% of your search traffic overnight. It’s a shot across the bow. A warning. But you ignore it. You focus on the chase even more. Tighter content focus. More variations of the same stories. Better SEO.

Traffic stabilizes. No more growth, but you’re chugging along nicely. You maybe change a few things, try to get back onto a growth curve. Nothing works, but you’re not losing either. Things look stable. You can live with this.

Then the next Google core update hits. You lose 50% of your current search traffic. It’s code red in the newsroom. All hands on deck.

How do we recover? How do we get this traffic back? It’s our traffic, Google owes us!

You do what you’ve gotten very good at. You SEO the hell out of your site. Everything is optimized and maximized. Your technical SEO goes from “that will do” to a state of such perfection it could make a web nerd cry. Your content output becomes even more focused on areas with the biggest traffic potential.

In the chase for revenue, you try alternative monetization. Affiliate content. Gambling promos. Advertorials. More listicles. More product recommendations. More of everything.

Then the next update arrives. You lose again.

And the next one.

And the next one.

You lose, almost every single time.

Every Google core update causes further decline
Image Credit: Barry Adams

It worked. Until it didn’t.

And now your site is on Google’s shitlist. Your relentless focus on growth at the expense of quality has accumulated so many negative signals that Google will not allow you to return to your previous heights.

You know none of what you try will work. Those traffic graphs won’t go back up. Every Google core update causes a new surge of existential dread: How much will we lose this time?

And yet, you still chase. You’ve long since lost the scent. But the chase still rules. Because you know that, to stop the chase, something needs to change. Something big and profound. And making that change will be painful. Extremely painful.

But do you have a choice?

Hindsight

I wish this scenario was unique, a singular publisher making the mistake of focusing on traffic at the expense of quality. But it’s a tragically common theme, played out in digital newsrooms hundreds of times over the last 10 years.

In every instance, at some point, the seductive appeal of traffic began to outweigh the journalistic principles of the organization. Compromises were made so growth could be achieved.

And because these compromises had the intended result – at first – there was nothing to deter the publisher from traveling further down this path.

Well, nothing besides Google shouting at every opportunity that you should focus on quality, not clicks.

Besides every SEO professional that has ever dealt with a bad algorithm update saying you should focus on quality, not clicks.

Besides your best journalists abandoning ship in favor of a quality-focused outlet or their own Substack.

Besides your own loyal readers abandoning your site because you stopped focusing on quality and went after clicks.

The writing has been on the wall, in huge capital letters, for the better part of a decade. Arguably, since 2018, when Google began rolling out algorithm updates to penalize low-effort content. If you’d been paying attention, none of this would have been a surprise.

Hey, maybe you did see it coming. But you weren’t able to make the required changes, because the clicks were still there. You were never going to deliberately abandon growth for some vague promise of sustainable traffic and audience loyalty.

If only you’d known that, once the Google hammer came down, the damage would be permanent. Maybe you wouldn’t have started the chase in the first place.

If only you’d known.

Recovery

When a site is so heavily affected by consecutive Google core updates, is there any hope of recovery? Can a website climb its way back to those vaunted traffic heights?

We need to be realistic and accept that those halcyon days of near-limitless traffic growth are not coming back. The ecosystem has changed. Growth is harder to achieve, and online news is working under a lower ceiling than ever before.

But recovery is possible, to an extent. You will never achieve the same traffic peaks as in your prime days, but you can claw back a significant chunk. Providing you are willing to do what it takes.

The recipe is simple, on paper: Everything you do should be in service of the reader.

Every story needs to be crafted to deliver maximum value for your readers. Every design element on your site needs to be optimized for the best user experience. Every headline must be informative first and foremost. Every article must deliver on its headline’s promise in spades. Every piece of content should serve to inform, educate, and delight your audience.

In short, your entire output should revolve around audience loyalty.

Not growth. Not traffic.

Loyalty.

Build a news platform so good that your readers don’t ever think about going anywhere else.

Of course, you still need traffic, but this must be a secondary concern. Start with your audience, and then apply layers on top of your stories to aid their traffic potential.

Your output should be focused on original journalism – not rehashing the same stories that others are reporting. If all you do is take someone else’s story and write different angles on it, you’re not doing journalism.

Provide breaking news, expert commentary, detailed analysis, and a deep focus on your editorial specialties.

And accept that your audience isn’t a singular entity, but consumes news on multiple platforms and in multiple formats. Video, podcasts, newsletters, social media, you name it. Fire on all channels, as best you can.

Sounds simple. But very few publishers I’ve spoken with have the internal fortitude for such drastic cultural changes in their online newsroom. Most of the publishers I consult with that were affected by core updates just want a list of quick wins: easy fixes they can implement to get their traffic back.

They want busy-work. They’re not interested in meaningful change. Because meaningful change is hard, and painful.

But also absolutely necessary.

That’s it for another edition. As always, thanks for reading and subscribing, and I’ll see you at the next one!

This post was originally published on SEO For Google News.


Featured Image: Roman Samborskyi/Shutterstock