These board games want you to beat climate change

It’s game night, and I’m crossing my fingers, hoping for a hurricane. 

I roll the die and it clatters across the board, tumbling to a stop to reveal a tiny icon of a tree stump. Bad news: I just triggered deforestation in the Amazon. That seals it. I failed to stop climate change—at least this board-game representation of it.

The urgent need to address climate change might seem like unlikely fodder for a fun evening. But a growing number of games are attempting to take on the topic, including a version of the bestseller Catan released this summer.

As a climate reporter, I was curious about whether games could, even abstractly, represent the challenge of the climate crisis. Perhaps more crucially, could they possibly be any fun? 

My investigation started with Daybreak, a board game released in late 2023 by a team that includes the creator of Pandemic (infectious disease—another famously light topic for a game). Daybreak is a cooperative game where players work together to cut emissions and survive disasters. The group either wins or loses as a whole.

When I opened the box, it was immediately clear that this wouldn’t be for the faint of heart. There are hundreds of tiny cardboard and wooden pieces, three different card decks, and a surprisingly thick rule book. Setting it up, learning the rules, and playing for the first time took over two hours.

Daybreak, a cooperative board game about stopping climate change.
COURTESY OF CMYK

Daybreak is full of details, and I was struck by how many of them it gets right. Not only are there cards representing everything from walkable cities to methane removal, but each features a QR code players can use to learn more.

On each turn, players deploy technologies or enact policies to cut climate pollution. Just as in real life, emissions have negative effects. Winning requires slashing emissions to net zero (the point where whatever’s emitted can be soaked up by forests, oceans, or direct air capture). But there are multiple ways for the whole group to lose, including letting the global average temperature increase by 2 °C or simply running out of turns.

 In an embarrassing turn of events for someone who spends most of her waking hours thinking about climate change, nearly every round of Daybreak I played ended in failure. Adding insult to injury, I’m not entirely sure that I was having fun. Sure, the abstract puzzle was engaging and challenging, and after a loss, I’d be checking the clock, seeing if there was time to play again. But once all the pieces were back in the box, I went to bed obsessing about heat waves and fossil-fuel disinformation. The game was perhaps representing climate change a little bit too well.

I wondered if a new edition of a classic would fare better. Catan, formerly Settlers of Catan, and its related games have sold over 45 million copies worldwide since the original’s release in 1995. The game’s object is to build roads and settlements, setting up a civilization. 

In late 2023, Catan Studios announced that it would be releasing a version of its game called New Energies, focused on climate change. The new edition, out this summer, preserves the same central premise as the original. But this time, players will also construct power plants, generating energy with either fossil fuels or renewables. Fossil fuels are cheaper and allow for quicker expansion, but they lead to pollution, which can harm players’ societies and even end the game early.

Before I got my hands on the game, I spoke with one of its creators, Benjamin Teuber, who developed the game with his late father, Klaus Teuber, the mastermind behind the original Catan.

To Teuber, climate change is a more natural fit for a game than one might expect. “We believe that a good game is always around a dilemma,” he told me. The key is to simplify the problem sufficiently, a challenge that took the team dozens of iterations while developing New Energies. But he also thinks there’s a need to be at least somewhat encouraging. “While we have a severe topic, or maybe even especially because we have a severe topic, you can’t scare off the people by making them just have a shitty evening,” Teuber says.

In New Energies, the first to gain 10 points wins, regardless of how polluting that player’s individual energy supply is. But if players collectively build too many fossil-fuel plants and pollution gets too high, the game ends early, in which case whoever has done the most work to clean up their own energy supply is named the winner.

That’s what happened the first time I tested out the game. While I had been lagging in points, I ended up taking the win, because I had built more renewable power plants than my competitors.

This relatively rosy ending had me conflicted. On one hand, I was delighted, even if it felt like a consolation prize. 

But I found myself fretting over the messages that New Energies will send to players. A simple game that crowns a winner may be more playable, but it doesn’t represent how complicated the climate crisis is, or how urgently we need to address it. 

I’m glad climate change has a spot on my game shelf, and I hope these and other games find their audiences and get people thinking about the issues. But I’ll understand the impulse to reach for other options when game night rolls around, because I can’t help but dwell on the fact that in the real world, we won’t get to reset the pieces and try again.

Biotech companies are trying to make milk without cows

The outbreak of avian influenza on US dairy farms has started to make milk seem a lot less wholesome. Milk that’s raw, or unpasteurized, can actually infect mice that drink it, and a few dairy workers have already caught the bug. 

The FDA says that commercial milk is safe because it is pasteurized, killing the germs. Even so, it’s enough to make a person ponder a life beyond milk—say, taking your coffee black or maybe drinking oat milk.

But for those of us who can’t do without the real thing, it turns out some genetic engineers are working on ways to keep the milk and get rid of the cows instead. They’re doing it by engineering yeasts and plants with bovine genes so they make the key proteins responsible for milk’s color, satisfying taste, and nutritional punch.

The proteins they’re copying are casein, a floppy polymer that’s the most abundant protein in milk and is what makes pizza cheese stretch, and whey, a nutritious combo of essential amino acids that’s often used in energy powders.

It’s part of a larger trend of replacing animals with ingredients grown in labs, steel vessels, or plant crops. Think of the Impossible burger, the veggie patty made mouthwatering with the addition of heme, a component of blood that’s produced in the roots of genetically modified soybeans.

One of the milk innovators is Remilk, an Israeli startup founded in 2019, which has engineered yeast so it will produce beta-lactoglobulin (the main component of whey). Company cofounder Ori Cohavi says a single biotech factory of bubbling yeast vats feeding on sugar could in theory “replace 50,000 to 100,000 cows.” 

Remilk has been making trial batches and is testing ways to formulate the protein with plant oils and sugar to make spreadable cheese, ice cream, and milk drinks. So yes, we’re talking “processed” food—one partner is a local Coca-Cola bottler, and advising the company are former executives of Nestlé, Danone, and PepsiCo.

But regular milk isn’t exactly so natural either. At milking time, animals stand inside elaborate robots, and it looks for all the world as if they’re being abducted by aliens. “The notion of a cow standing in some nice green scenery is very far from how we get our milk,” says Cohavi. And there are environmental effects: cattle burp methane, a potent greenhouse gas, and a lactating cow needs to drink around 40 gallons of water a day.

“There are hundreds of millions of dairy cows on the planet producing greenhouse waste, using a lot of water and land,” says Cohavi. “It can’t be the best way to produce food.”  

For biotech ventures trying to displace milk, the big challenge will be keeping their own costs of production low enough to compete with cows. Dairies get government protections and subsidies, and they don’t only make milk. Dairy cows are eventually turned into gelatin, McDonald’s burgers, and the leather seats of your Range Rover. Not much goes to waste.

At Alpine Bio, a biotech company in San Francisco (also known as Nobell Foods), researchers have engineered soybeans to produce casein. While not yet cleared for sale, the beans are already being grown on USDA-sanctioned test plots in the Midwest, says Alpine’s CEO, Magi Richani.

Richani chose soybeans because they’re already a major commodity and the cheapest source of protein around. “We are working with farmers who are already growing soybeans for animal feed,” she says. “And we are saying, ‘Hey, you can grow this to feed humans.’ If you want to compete with a commodity system, you have to have a commodity crop.”

Alpine intends to crush the beans, extract the protein, and—much like Remilk—sell the ingredient to larger food companies.

Everyone agrees that cow’s milk will be difficult to displace. It holds a special place in the human psyche, and we owe civilization itself, in part, to domesticated animals. In fact, they’ve  left their mark in our genes, with many of us carrying DNA mutations that make cow’s milk easier to digest.  

But that’s why it might be time for the next technological step, says Richani. “We raise 60 billion animals for food every year, and that is insane. We took it too far, and we need options,” she says. “We need options that are better for the environment, that overcome the use of antibiotics, and that overcome the disease risk.”

It’s not clear yet whether the bird flu outbreak on dairy farms is a big danger to humans. But making milk without cows would definitely cut the risk that an animal virus will cause a new pandemic. As Richani says: “Soybeans don’t transmit diseases to humans.”


Now read the rest of The Checkup

Read more from MIT Technology Review’s archive

Hungry for more from the frontiers of fromage? In the Build issue of our print magazine, Andrew Rosenblum tasted a yummy brie made only from plants. Harder to swallow was the claim by developer Climax Foods that its cheese was designed using artificial intelligence.

The idea of using yeast to create food ingredients, chemicals, and even fuel via fermentation is one of the dreams of synthetic biology. But it’s not easy. In 2021, we raised questions about high-flying startup Ginkgo Bioworks. This week its stock hit an all-time low of $0.49 per share as the company struggles to make … well, anything.

This spring, I traveled to Florida to watch attempts to create life in a totally new way: using a synthetic embryo made in a lab. The action involved cattle at the animal science department of the University of Florida, Gainesville.


From around the web

How many human bird flu cases are there? No one knows, because there’s barely any testing. Scientists warn we’re flying blind as US dairy farms struggle with an outbreak. (NBC)  

Moderna, one of the companies behind the covid-19 shots, is seeing early success with a cancer vaccine. It uses the same basic technology: gene messages packed into nanoparticles. (Nature)

It’s the covid-19 theory that won’t go away. This week the New York Times published an op-ed arguing that the virus was the result of a lab accident. We previously profiled the author, Alina Chan, who is a scientist with the Broad Institute. (NYTimes)

Sales of potent weight loss drugs, like Ozempic, are booming. But it’s not just humans who are overweight. Now the pet care industry is dreaming of treating chubby cats and dogs, too. (Bloomberg)

This London non-profit is now one of the biggest backers of geoengineering research

A London-based nonprofit is poised to become one of the world’s largest financial backers of solar geoengineering research. And it’s just one of a growing number of foundations eager to support scientists exploring whether the world could ease climate change by reflecting away more sunlight.

Quadrature Climate Foundation, established in 2019 and funded through the proceeds of the investment fund Quadrature Capital, plans to provide $40 million for work in this field over the next three years, Greg De Temmerman, the organization’s chief science officer, told MIT Technology Review.

That’s a big number for this subject—double what all foundations and wealthy individuals provided from 2008 through 2018 and roughly on par with what the US government has offered to date. 

“We think we can have a very strong impact in accelerating research, making sure it’s happening, and trying to unlock some public money at some point,” De Temmerman says.

Other nonprofits are set to provide tens of millions of dollars’ worth of additional grants to solar geoengineering research or related government advocacy work in the coming months and years. The uptick in funding will offer scientists in the controversial field far more support than they’ve enjoyed in the past and allow them to pursue a wider array of lab work, modeling, and potentially even outdoor experiments that could improve our understanding of the benefits and risks of such interventions. 

“It just feels like a new world, really different from last year,” says David Keith, a prominent geoengineering researcher and founding faculty director of the Climate Systems Engineering Initiative at the University of Chicago.

Other nonprofits that have recently disclosed funding for solar geoengineering research or government advocacy, or announced plans to provide it, include the Simons Foundation, the Environmental Defense Fund, and the Bernard and Anne Spitzer Charitable Trust. 

In addition, Meta’s former chief technology officer, Mike Schroepfer, told MIT Technology Review he is spinning out a new nonprofit, Outlier Projects. He says it will provide funding to solar geoengineering research as well as to work on ocean-based carbon removal and efforts to stabilize rapidly melting glaciers.

Outlier has already issued grants for the first category to the Environmental Defense Fund, Keith’s program at the University of Chicago, and two groups working to support research and engagement on the subject in the poorer, hotter parts of the world: the Degrees Initiative and the Alliance for Just Deliberation on Solar Geoengineering.

Researchers say that the rising dangers of climate change, the lack of progress on cutting emissions, and the relatively small amount of government research funding to date are fueling the growing support for the field.

“A lot of people are recognizing the obvious,” says Douglas MacMartin, a senior research associate in mechanical and aerospace engineering at Cornell, who focuses on geoengineering. “We’re not in a good position with regard to mitigation—and we haven’t spent enough money on research to be able to support good, wise decisions on solar geoengineering.”

Scientists are exploring a variety of potential methods of reflecting away more sunlight, including injecting certain particles into the stratosphere to mimic the cooling effect of volcanic eruptions, spraying salt toward marine clouds to make them brighter, or sprinkling fine dust-like material into the sky to break up heat-trapping cirrus clouds.

Critics contend that neither nonprofits nor scientists should support studying any of these methods, arguing that raising the possibility of such interventions eases pressure to cut emissions and creates a “slippery slope” toward deploying the technology. Even some who support more research fear that funding it through private sources, particularly from wealthy individuals who made their fortunes in tech and finance, may allow studies to move forward without appropriate oversight and taint public perceptions of the field.

The sense that we’re “putting the climate system in the care of people who have disrupted the media and information ecosystems, or disrupted finance, in the past” could undermine public trust in a scientific realm that many already find unsettling, says Holly Buck, an assistant professor at the University at Buffalo and author of After Geoengineering.

‘Unlocking solutions’

One of Quadrature’s first solar geoengineering grants went to the University of Washington’s Marine Cloud Brightening Program. In early April, that research group made headlines for beginning, and then being forced to halt, small-scale outdoor experiments on a decommissioned aircraft carrier sitting off the coast of Alameda, California. The effort entailed spraying a mist of small sea salt particles into the air. 

Quadrature was also one of the donors to a $20.5 million fund for the Washington, DC, nonprofit SilverLining, which was announced in early May. The group pools and distributes grants to solar geoengineering researchers around the world and has pushed for greater government support and funding for the field. The new fund will support that policy advocacy work as well as efforts to “promote equitable participation by all countries,” Kelly Wanser, executive director of SilverLining, said in an email.

She added that it’s crucial to accelerate solar geoengineering research because of the rising dangers of climate change, including the risk of passing “catastrophic tipping points.”

“Current climate projections may even underestimate risks, particularly to vulnerable populations, highlighting the urgent need to improve risk prediction and expand response strategies,” she wrote.

Quadrature has also issued grants for related work to Colorado State University, the University of Exeter, and the Geoengineering Model Intercomparison Project, an effort to run the same set of modeling experiments across an array of climate models. 

The foundation intends to direct its solar geoengineering funding to advance efforts in two main areas: academic research that could improve understanding of various approaches, and work to develop global oversight structures “to enable decision-making on [solar radiation modification] that is transparent, equitable, and science based.”

“We want to empower people to actually make informed decisions at some point,” De Temmerman says, stressing the particular importance of ensuring that people in the Global South are actively involved in such determinations. 

He says that Quadrature is not advocating for specific outcomes, taking no position on whether or not to ultimately use such tools. It also won’t support for-profit startups. 

In an emailed response to questions, he stressed that the funding for solar geoengineering is a tiny part of the foundation’s overall mission, representing just 5% of its $930 million portfolio. The lion’s share has gone to accelerate efforts to cut greenhouse-gas pollution, remove it from the atmosphere, and help vulnerable communities “respond and adapt to climate change to minimize harm.”

Billionaires Greg Skinner and Suneil Setiya founded both the Quadrature investment fund and the foundation. The nonprofit’s stated mission is unlocking solutions to the climate crisis, which it describes as “the most urgent challenge of our time.” But the group, which has 26 employees, has faced recent criticism for its benefactors’ stakes in oil and gas companies. Last summer, the Guardian reported that Quadrature Capital held tens of millions of dollars in investments in dozens of fossil-fuel companies, including ConocoPhillips and Cheniere Energy.

In response to a question about the potential for privately funded foundations to steer research findings in self-interested ways, or to create the perception that the results might be so influenced, De Temmerman stated: “We are completely transparent in our funding, ensuring it is used solely for public benefit and not for private gain.”

More foundations, more funds 

To be sure, a number of wealthy individuals and foundations have been providing funds for years to solar geoengineering research or policy work, or groups that collect funds to do so.

A 2021 paper highlighted contributions from a number of wealthy individuals, with a high concentration from the tech sector, including Microsoft cofounder Bill Gates, Facebook cofounder Dustin Moskovitz, Facebook alum and venture capitalist Matt Cohler, former Google executive (and extreme skydiver) Alan Eustace, and tech and climate solutions investors Chris and Crystal Sacca. It noted a number of nonprofits providing grants to the field as well, including the Hewlett Foundation, the Alfred P. Sloan Foundation, and the Blue Marble Fund.

But despite the backing of those high-net-worth individuals, the dollar figures have been low. From 2008 through 2018, total private funding only reached about $20 million, while government funding just topped $30 million. 

The spending pace is now picking up, though, as new players move in.

The Simons Foundation previously announced it would provide $50 million to solar geoengineering research over a five-year period. The New York–based nonprofit invited researchers to apply for grants of up to $500,000, adding that it “strongly” encouraged scientists in the Global South to do so. 

The organization is mostly supporting modeling and lab studies. It said it would not fund social science work or field experiments that would release particles into the environment. Proposals for such experiments have sparked heavy public criticism in the past.

Simons recently announced a handful of initial awards to researchers at Harvard, Princeton, ETH Zurich, the Indian Institute of Tropical Meteorology, the US National Center for Atmospheric Research, and elsewhere.

“For global warming, we will need as many tools in the toolbox as possible,” says David Spergel, president of the Simons Foundation. 

“This was an area where there was a lot of basic science to do, and a lot of things we didn’t understand,” he adds. “So we wanted to fund the basic science.”

In January, the Environmental Defense Fund hosted a meeting at its San Francisco headquarters to discuss the guardrails that should guide research on solar geoengineering, as first reported by Politico. EDF had already provided some support to the Solar Radiation Management Governance Initiative, a partnership with the Royal Society and other groups set up to “ensure that any geoengineering research that goes ahead—inside or outside the laboratory—is conducted in a manner that is responsible, transparent, and environmentally sound.” (It later evolved into the Degrees Initiative.)

But EDF has now moved beyond that work and is “in the planning stages of starting a research and policy initiative on [solar radiation modification],” said Lisa Dilling, associate chief scientist at the environmental nonprofit, in an email. That program will include regranting, which means raising funds from other groups or individuals and distributing them to selected recipients, and advocating for more public funding, she said.

Outlier also provided a grant to a new nonprofit, Reflective. This organization is developing a road map to prioritize research needs and pooling philanthropic funding to accelerate work in the most urgent areas, says its founder, Dakota Gruener. 

Gruener was previously the executive director of ID2020, a nonprofit alliance that develops digital identification systems. Cornell’s MacMartin is a scientific advisor to the new nonprofit and will serve as the chair of the scientific advisory board.

Government funding is also slowly increasing. 

The US government started a solar geoengineering research program in 2019, funded through the National Oceanic and Atmospheric Administration, that currently provides about $11 million a year.

In February, the UK’s Natural Environment Research Council announced a £10.5 million, five-year research program. In addition, the UK’s Advanced Research and Invention Agency has said it’s exploring and soliciting input for a research program in climate and weather engineering.

Funding has not been allocated as yet, but the agency’s programs typically provide around £50 million.

‘When, not if’

More funding is generally welcome news for researchers who hope to learn more about the potential of solar geoengineering. Many argue that it’s crucial to study the subject because the technology may offer ways to reduce death and suffering, and prevent the loss of species and the collapse of ecosystems. Some also stress it’s crucial to learn what impact these interventions might have and how these tools could be appropriately regulated, because nations may be tempted to implement them unilaterally in the face of extreme climate crises.

It’s likely a question of “when, not if,” and we should “act and research accordingly,” says Gernot Wagner, a climate economist at Columbia Business School, who was previously the executive director of Harvard’s Solar Geoengineering Research Program. “In many ways the time has come to take solar geoengineering much more seriously.”

In 2021, a National Academies report recommended that the US government create a solar geoengineering research program, equipped with $100 million to $200 million in funding over five years.

But there are differences between coordinated government-funded research programs, which have established oversight bodies to consider the merit, ethics, and appropriate transparency of proposed research, and a number of nonprofits with different missions providing funding to the teams they choose. 

To the degree that they create oversight processes that don’t meet the same standards, it could affect the type of science that’s done, the level of public notice provided, and the pressures that researchers feel to deliver certain results, says Duncan McLaren, a climate intervention fellow at the University of California, Los Angeles.

“You’re not going to be too keen on producing something that seems contrary to what you thought the grant maker was looking for,” he says, adding later: “Poorly governed research could easily give overly optimistic answers about what [solar geoengineering] could do, and what its side effects may or may not be.”

Whatever the motivations of individual donors, Buck fears that the concentration of money coming from high tech and finance could also create optics issues, undermining faith in research and researchers and possibly slowing progress in the field.

“A lot of this is going to backfire because it’s going to appear to people as Silicon Valley tech charging in and breaking things,” she says. 

Cloud controversy

Some of the concerns about privately funded work in this area are already being tested.

By most accounts, the Alameda experiment in marine cloud brightening that Quadrature backed was an innocuous basic-science project, which would not have actually altered clouds. But the team stirred up controversy by moving ahead without wide public notice.

City officials quickly halted the experiments, and earlier this month the city council voted unanimously to shut the project down.

Alameda mayor Marilyn Ezzy Ashcraft has complained that city staffers received only vague notice about the project up front. They were then inundated with calls from residents who had heard about it in the media and were concerned about the health implications, she said, according to CBS News.

In response to a question about the criticism, SilverLining’s Wanser said in an email: “We worked with the lease-holder, the USS Hornet, on the process for notifying the city of Alameda. The city staff then engaged experts to independently evaluate the health and environmental safety of the … studies, who found that they did not pose any environmental or health risks to the community.”

Wanser, who is a principal of the Marine Cloud Brightening Program, stressed they’ve also received offers of support from local residents and businesses.

“We think that the availability of data and information on the nature of the studies, and its evaluation by local officials, was valuable in helping people consider it in an informed way for themselves,” she added.

Some observers were also concerned that the research team said it selected its own six-member board to review the proposed project. That differs from a common practice with publicly funded scientific experiments, which often include a double-blind review process, in which neither the researchers nor the reviewers know each other’s names. The concern with breaking from that approach is that scientists could select outside researchers who they believe are likely to greenlight their proposals, and the reviewers may feel pressure to provide more favorable feedback than they might offer anonymously.

Wanser stressed that the team picked “distinguished researchers in the specialized field.”

“There are different approaches for different programs, and in this case, the levels of expertise and transparency were important features,” she added. “They have not received any criticism of the design of the studies themselves, which speaks to their robustness and their value.”

‘Transparent and responsible’

Solar geoengineering researchers often say that they too would prefer public funding, all things being equal. But they stress that those sources are still limited and it’s important to move the field forward in the meantime, so long as there are appropriate standards in place.

“As long as there’s clear transparency about funding sources, [and] there’s no direct influence on the research by the donors, I don’t precisely see what the problem is,” MacMartin says. 

Several nonprofits emerging or moving into this space said that they are working to create responsible oversight structures and rules.

Gruener says that Reflective won’t accept anonymous donations or contributions from people whose wealth comes mostly from fossil fuels. She adds that all donors will be disclosed, that they won’t have any say over the scientific direction of the organization or its chosen research teams, and that they can’t sit on the organization’s board. 

“We think transparency is the only way to build trust, and we’re trying to ensure that our governance structure, our processes, and the outcomes of our research are all public, understandable, and readily available,” she says.

In a statement, Outlier said it’s also in favor of more publicly supported work: “It’s essential for governments to become the leading funders and coordinators of research in these areas.” It added that it’s supporting groups working to accelerate “government leadership” on the subject, including through its grant to EDF. 

Quadrature’s De Temmerman stresses the importance of public research programs as well, noting that the nonprofit hopes to catalyze much more such funding through its support for government advocacy work. 

“We are here to push at the beginning and then at some point just let some other forms of capital actually come,” he says.

Google Answers Question About Toxic Link Sabotage via @sejournal, @martinibuster

Google’s Gary Illyes answered a question about how to notify Google that someone is poisoning your backlink profile with “toxic links,” a problem that people have been talking about for at least fifteen years.

Question About Alerting Google To Toxic Links

Gary narrated the question:

“Someone’s asking, how to alert Google of sabotage via toxic links?”

And this is Gary’s answer:

I know what I would do: I’d ignore those links.

Generally Google is really, REALLY good at ignoring links that are irrelevant to the site they’re pointing at. If you feel like it, you can always disavow those “toxic” links, or file a spam report.

Disavow Links If You Feel Like It

Gary linked to Google’s explainer about disavowing links, which explains that the disavow tool is for site owners to tell Google about links they are responsible for in some way, such as paid links or some other link scheme.

This is what it advises:

“If you have a manual action against your site for unnatural links to your site, or if you think you’re about to get such a manual action (because of paid links or other link schemes that violate our quality guidelines), you should try to remove the links from the other site to your site. If you can’t remove those links yourself, or get them removed, then you should disavow the URLs of the questionable pages or domains that link to your website.”

Google suggests that a link disavow is only necessary when two conditions are met:

  1. “You have a considerable number of spammy, artificial, or low-quality links pointing to your site,
    AND
  2. The links have caused a manual action, or likely will cause a manual action, on your site.”

Both of the above conditions must be met for a disavow to be appropriate.
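
For reference, the disavow file itself is just a plain text file uploaded through Search Console, with one URL or domain per line: a domain: prefix covers an entire domain, and lines beginning with # are comments. A minimal sketch, with hypothetical addresses:

  # individual spammy pages to disavow
  http://spam.example.com/paid-links/page.html
  # entire spammy domains to disavow
  domain:link-farm.example.net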

Origin Of The Phrase Toxic Links

As Google became better at penalizing sites for low-quality links and paid links, some in the highly competitive gambling industry started creating low-quality links to sabotage their competitors. The practice was called negative SEO.

The phrase “toxic link” was essentially unheard of until after the Penguin link updates of 2012, which required penalized sites to remove all the paid and low-quality links they had created and then disavow the rest. An industry grew up around disavowing links, and it was that industry that coined the phrase “toxic links” for its marketing.

Confirmation That Google Is Able To Ignore Links

I have shared this anecdote before, and I’ll share it here again. Someone I knew contacted me and said that their site lost rankings from negative SEO links. I took a look, and their site had a ton of really nasty-looking links. So out of curiosity (and because I knew that the site was this person’s main income), I emailed someone at Google’s Mountain View headquarters about it. That person checked it and replied that the site didn’t lose rankings because of the links. It lost rankings because of a content issue related to a Panda update.

That was around 2012, and it showed me how good Google was at ignoring links. If Google was that good at ignoring really bad links back then, it’s probably even better at it now, twelve years later, with the SpamBrain AI.

Listen to the question and answer at the 8:22 minute mark:

Featured Image by Shutterstock/New Africa

Natural Dog CEO Is an Acquisition Entrepreneur

Bill D’Alessandro has been an ecommerce owner for 14 years. Natural Dog Company, an omnichannel seller of canine health products, is his eighth brand. “My niche is acquisition entrepreneurship,” he told me. “I’ll buy a small brand, grow it, improve it, and eventually sell it.”

Along the way, he’s learned lessons in focus, industry selection, and product pricing.

He and I recently discussed those experiences and more. The entire audio of our conversation is embedded below. The transcript is edited for length and clarity.

Eric Bandholz: Tell our listeners what you’re doing.

Bill D’Alessandro: I am the CEO of Natural Dog Company. We sell dog supplements, fish oils, and topicals on Amazon, our website, and in about 6,000 retail stores. I’ve been in ecommerce for 14 years. This is my eighth brand. My niche is acquisition entrepreneurship. I’ll buy a small brand, grow it, improve it, and eventually sell it. I’ve done that seven times now, and Natural Dog Company is what I’m working on now.

At the peak, I owned eight brands at once. We had 62 people in the company, which was not enough. Owners with one brand frequently have the idea to buy another. You might have all the employees, the third-party fulfillment provider, and the infrastructure. It seems pretty easy. But it underestimates how it fractures your focus. You do two, and then you do three, and then you do eight, and before you know it, you’re surface level on everything, and you can’t go deep.

In 2024, ecommerce is hard. It’s data and keyword-intensive. Ranking on Amazon is tough. There’s a lot of competition. Dividing time across multiple brands is how you get smoked. One plus one does not equal two. It equals one and a half. It took me years to realize that.

Running a single business is hard enough. Something goes catastrophically wrong at least once a year, and you have to fix it. If you own eight businesses, something goes catastrophically wrong every six weeks. There’s constant firefighting and reacting if you’re trying to be CEO of all the businesses.

You need to install highly competent, highly compensated management. You can’t be CEO of eight. You need CEOs for each of them. They will make $150, $200 grand a year or more. The business has to be big enough to accommodate that overhead.

Bandholz: How do you pick the right industry?

D’Alessandro:  Bigger businesses are easier but require bigger markets. And that was what I realized. We had eight brands — seven were collectively 25% of revenue, and one was 75%.

It was the 80-20 Pareto principle in real life. These other brands sold, like, natural sunscreen and athletic detergent. I didn’t see the potential. But a ton of people are getting dogs. That market is growing. So I said, “If I’m gonna spend my time, my one precious life here, I want to focus where I have the most headroom to grow.”

There are other components beyond the industry. We had a business with an average order value of $14. That’s harder to make work. By the time you ship it and pay Amazon fees, there’s not a lot of room left. But a price point of $100, $200, or $800, that’s a lot easier. To me, the perfect price point is $70 to $170. It’s low enough to convince somebody to buy quickly but high enough to cover shipping and customer acquisition costs.

Bandholz: You’re omnichannel now with digital and in-person sales.

D’Alessandro: A couple of years ago it was clear ecommerce was getting harder. In-person retail was attracting more interest. It’s different than getting on Amazon, where you hustle for a week, set up the listing, and you’re done.

A retail store or chain might have a line review once a year, perhaps in October for on-shelf placement in April. If you wait until October, you’ve missed the review for an entire year. And don’t expect approval on the first pitch.

Big retailers such as Walmart want proof it will work. They only have a few feet of shelf space for a product line — each inch of shelf space could be worth millions of dollars a year in sales. The best way to convince them is to show results from other retailers. We started in the most accessible places: independent mom-and-pop pet stores.

We scraped Google Maps and started calling pet stores. We said, “We’re a natural dog food company. We’d love to send you some samples.”

We built our entire funnel that way. We called, sent samples, and followed up. We got better over several years, eventually selling in thousands of independent locations. It was a grind. Once we were in 2,000 or so, we started pulling data. We learned about average monthly sales, unit sales, etcetera. Then we approached small chains.

Smaller chains don’t typically have as rigid review cycles. We went ad hoc with those guys. After that, we approached big regionals, those with 300 or 400 locations, using data from the smaller outlets. Only then did we approach national chains.

We climbed the ladder. Our product works, and it’s selling through. That’s how we did it.

Bandholz: Where can people learn more from you?

D’Alessandro: Our site is NaturalDog.com. I host a twice-weekly podcast called Acquisitions Anonymous. It’s about buying and selling businesses. My own website is Billda.com, and my X is @BillDA.

Google On Traffic Diversity As A Ranking Factor via @sejournal, @martinibuster

Google’s SearchLiaison tweeted encouragement to diversify traffic sources, being clear about the reason he was recommending it. Days later, someone followed up to ask if traffic diversity is a ranking factor, prompting SearchLiaison to reiterate that it is not.

What Was Said

The question of whether diversity of traffic is a ranking factor arose from a previous tweet in a discussion about whether a site owner should be focusing on off-site promotion.

Here’s the question from the original discussion that was tweeted:

“Can you please tell me if I’m doing right by focusing on my site and content – writing new articles to be found through search – or if I should be focusing on some off-site effort related to building a readership? It’s frustrating to see traffic go down the more effort I put in.”

SearchLiaison split the question into component parts and answered each one. When it came to the part about off-site promotion, SearchLiaison (who is Danny Sullivan), shared from his decades of experience as a journalist and publisher covering technology and search marketing.

I’m going to break down his answer so that it’s clearer what he meant.

This is the part from the tweet that talks about off-site activities:

“As to the off-site effort question, I think from what I know from before I worked at Google Search, as well as my time being part of the search ranking team, is that one of the ways to be successful with Google Search is to think beyond it.”

What he is saying here is simple: don’t limit your thinking about what to do with your site to how to make it appeal to Google.

He next explains that sites that rank tend to be sites that are created to appeal to people.

SearchLiaison continued:

“Great sites with content that people like receive traffic in many ways. People go to them directly. They come via email referrals. They arrive via links from other sites. They get social media mentions.”

What he’s saying there is that you’ll know you’re appealing to people if they’re discussing and sharing your site on social media and if other sites are citing it with links.

Other ways to know that a site is doing well are when people engage in the comments section, send emails asking follow-up questions, or send emails of thanks and share anecdotes of their success or satisfaction with a product or advice.

Consider this: fast-fashion site Shein at one point didn’t rank for its chosen keyword phrases (I know because I checked out of curiosity). But it was virally popular at the time, making huge amounts of sales by gamifying site interaction and engagement, which propelled it to become a global brand. A similar strategy propelled Zappos when it pioneered no-questions-asked returns and cheerful customer service.

SearchLiaison continued:

“It just means you’re likely building a normal site in the sense that it’s not just intended for Google but instead for people. And that’s what our ranking systems are trying to reward, good content made for people.”

SearchLiaison explicitly said that diversified traffic is not a ranking factor.

He added this caveat to his tweet:

“This doesn’t mean you should get a bunch of social mentions, or a bunch of email mentions because these will somehow magically rank you better in Google (they don’t, from how I know things).”

Despite The Caveat…

A journalist tweeted this:

“Earlier this week, @searchliaison told people to diversify their traffic. Naturally, people started questioning whether that meant diversity of traffic was a ranking factor.

So, I asked @iPullRank what he thought.”

SearchLiaison of course answered that he explicitly said it’s not a ranking factor and linked to his original tweet that I quoted above.

He tweeted:

“I mean that’s not exactly what I myself said, but rather than repeat all that I’ll just add the link to what I did say:”

The journalist responded:

“I would say this is calling for publishers to diversify their traffic since you’re saying the great sites do it. It’s the right advice to give.”

And SearchLiaison answered:

“It’s the part of “does it matter for rankings” that I was making clear wasn’t what I myself said. Yes, I think that’s a generally good thing, but it’s not the only thing or the magic thing.”

Not Everything Is About Ranking Factors

There is a longstanding practice by some SEOs to parse everything that Google publishes for clues to how Google’s algorithm works. This happened with the Search Quality Raters guidelines. Google is unintentionally complicit because it’s their policy to (in general) not confirm whether or not something is a ranking factor.

This habit of searching for “ranking factors” leads to misinformation. It takes more acuity to read research papers and patents to gain a general understanding of how information retrieval works, and it’s more work to truly understand something than to skim a PDF for ranking factors.

The worst approach to understanding search is to invent hypotheses about how Google works and then pore through a document to confirm those guesses (and falling into the confirmation bias trap).

In the end, it may be more helpful to back off of exclusively optimizing for Google and focus at least as much on optimizing for people (which includes optimizing for traffic). I know it works because I’ve been doing it for years.

Featured Image by Shutterstock/Asier Romero

What You Need To Generate Leads With Content via @sejournal, @duchessjenm

This is an excerpt from the B2B Lead Generation ebook, which draws on SEJ’s internal expertise in delivering leads across multiple media types.

What, exactly, do you need to create a sustainable and scalable lead generation strategy with content?

It starts with an exceptional piece of content that the leads want – your “lead magnet” – but it doesn’t end there. Modern content marketing requires resources.

Without a content marketing plan and the ability to execute it, you’ll quickly exhaust your audience pool, and the leads will dry up. The good news is you don’t have to do all of this internally, but you need to assess the best use of your resources.

Let’s start with a map of all the pieces required.

Assets & Bandwidth

The four major components of successful lead generation with content are:

  1. Understanding your available market audience and captive audience size.
  2. Consistently creating high-quality, hyper-relevant inbound content and the research behind it to reach existing and new audiences.
  3. Consistently maintaining the high volume of lead-generating content required by your audience and the individual people within it.
  4. Consistently testing and improving your content.

Market & Audience Research

Research goes into every step of content creation. First, to create a “lead magnet,” you need to be super dialed in on your audience’s specific challenges and immediate needs that you can solve.

You need to understand what a model of success looks like for them and provide a resource that gets them at least part of the way toward that success.

In B2B, that doesn’t just go for your audience. You also need to understand the needs and problems your audience’s own audience has.

It’s a bit of a mind-bender. You must think backward and then forward at the same time. Before you can understand your audience, you need to understand what their audience is asking of them and get fully immersed in that consumer’s journey to your customer – and how that creates a need that applies to you.

When you provide a solution for your target audience, why is your target audience there? What is their audience asking of them?

Why does their audience need their solution, and why does that create a need for your solution?

You must think about all of those layers to provide the best content for them to solve their problem for their audience.

You have to create a whole experience of total immersion to create a remarkable lead generation strategy.

And you have to do this often. One lead magnet, solving one specific problem, will generate leads for only so long. Content becomes out of date, and the needs of your customers – and their customers – change.

The knowledge you need to create lead magnets isn’t a matter of a one-time research project. It’s the culmination of constant analysis and regular direct touchpoints with audience members.

You also need to know where you are now and where you can reasonably get to in terms of your audience size. Do you have an audience currently? How large is it? Do you have a plan to grow your audience?

While you absolutely can generate leads with direct tactics like ads, to do it with content marketing, you need an audience first.

The first step is knowing your current marketable audience. Then, develop a plan to expand it with your own content marketing efforts and partnerships that expose new audiences to your brand.

And, of course, you need to develop a distribution plan for your lead magnet content to put it in front of your current marketable audience and new audiences who might be interested.

Check out our upcoming webinar to get an exclusive peek into tactics we use when developing our own lead gen campaigns – case study style. 

Creating & Maintaining Exceptional Content

Audience research moves you toward planning content. As a business trying to generate leads, you need supporting content for each step of the process.

First, there’s the organic strategy that comes with building an audience. Here’s where the deep understanding of audiences really starts to matter.

Content that adds value for free creates trust and goodwill. It’s the kind of long-term thinking that allows you to generate leads from your own audiences and also creates leads passively from people growing to recognize and trust your voice.

Then, there’s all the supporting content that lead magnets need to thrive: landing pages, email copy, supporting articles, social media posts, ads, etc. All of these content pieces must also be carefully targeted toward the direct problems your audiences face, as well as the specific words and phrases that drive interest and action.

More than that, you need to understand what channels and platforms audience members with specific problems use. Your supporting content must be optimized for that channel and fulfill the expectations that users of that channel generally have in addition to the problems you address.

Creating Lead Magnets

Now, we come to the lead magnets themselves, which need to be exceptionally helpful.

An underwhelming experience with lead magnet content can turn a lead off. If you fail to uphold your end of the deal – providing a path to a specific definition of success in exchange for personal information – then you’ll struggle to convert leads.

Success could look like:

  • “With this resource, I can perform a difficult task more efficiently or easily.”
  • “With this resource, I learned something new, and I can use this knowledge directly to solve a problem.”
  • “I can use this resource as a reference that will save me time or energy.”
  • “I can use the data in this resource to build or change my approach to a problem.”
  • “This resource changed my perspective and assumptions about a topic I already know something about, and I can take this innovation back to my team to discuss a new approach.”

To build a content resource that meets one or more of these goals, you need deep and expert knowledge of not just the subject matter and your products, but also how to be useful.

You need to know how to teach someone something or persuade someone into considering new perspectives. You need to know what information matters and why.

You need to be a leader in:

  • Knowledge of the subject matter.
  • The craft of content, teaching, and curating impactful information.
  • Empathy for your audience and the ability to approach problems from their point of view.

Then, there are the technical skills that go into data analysis, the design skills that go into laying out a document, visual assets, and much more.

One person might possess all of these skills. More likely, they exist disparately among different people on your team, in which case you need to align them.

Very likely, you’ll need to find external partners to supplement one or more of these skills.

Testing & Optimization

Often, when content isn’t performing as well as a business wants, its answer is to put more money behind distribution, such as more ads.

That’s because it’s somewhat rare for a business to have the resources to keep content updated as frequently as it should be.

But if there’s a problem with the content, that’s what needs to be assessed. More distribution might get more eyes on content, but if the content is outdated or not quite the right answer, this will be a failing strategy.

Continually testing, updating, and producing new content can be a massive resource sink. Not only does every piece of the content puzzle need refinement – from organic intent analysis to CTA testing – but you also need consistent new and updated content to scale a lead generation strategy.

Updating and producing new organic content helps grow your marketable audience. And new lead magnets that solve specific problems create new opportunities to turn readers and subscribers into leads.

The “updating” part of this is critical. Many businesses focus on making new assets but not maintaining old ones. You should apply the insights that new research gives you about your audience to existing content.

But, again, we return to the problem of assets and bandwidth.

Get more tips on how we, here at SEJ, create holistic content campaigns to drive leads on this exclusive webinar.

What You Really Need Is A Content Team

When businesses apply ineffective fixes to boost content marketing, it usually comes down to resource issues, knowledge issues, or both.

Content marketing is the work of a skilled team of specialists.

Many businesses simply don’t have the resources to deploy the knowledge and time required to do it right.

Building content teams involves a mix of internal stakeholders and external partnerships. Even here at SEJ, where inbound traffic is our bread and butter, we use strategic distribution partnerships to expand our marketable audience. You can’t do it all on your own.

The great thing about a specialist distribution partner is they can help you build the knowledge and research you need to create stronger content efforts internally.

Publishers and influencers thrive on acutely understanding and serving the needs of their audiences. They’re a direct line not just to your audiences themselves, but also to:

  • Up-to-date analysis on trends your audience cares about.
  • Insights on the exact language your audience does and doesn’t respond to.
  • The tone and content types that resonate with your audience.
  • Deep understanding of your audience’s problems and anxieties and how they want to be helped.

But there are all kinds of external partners you can work with to fill gaps in your team, from content production to testing and research.

Don’t ignore the insight and knowledge you gain from working with external specialists, whether they’re helping you with distribution or creating the actual content assets.

Take everything you learn back to your team so that when you’re able to expand your resources, you have knowledge to build on.

The toughest thing about content marketing and lead generation is that all of these aspects flow into one another at different points. A sale could happen before someone even becomes a lead.

A lead could spend months in your “lead nurturing” (more later) flow before finally converting. And people can drop out of this process and never think about you again at any point.

Keep testing, perform new audience research, and relentlessly improve your value. That’s when you’ll start delivering exceptional leads to your sales teams through content marketing.

Featured Image: Andrey_Popov/Shutterstock

Google: Should H1 & Title Tags Match? via @sejournal, @martinibuster

Google’s Office Hours podcast answered the important question of whether it matters if the title element and the H1 element match. It’s a good question because Google handles these elements differently from how traditional SEO thinks about them.

How Important Is It For H1 & Title Tags To Match?

The question and answer are short. Google’s Gary Illyes answers the question and then links to documentation about how Google produces “title links” in the search engine results pages (SERPs).

This is the question:

“…is it important for title tags to match the H1 tag?”

Gary answers:

“No, just do whatever makes sense from a user’s perspective.”

That’s a useful answer, but it omits an explanation of why it’s not important for the title tag to match the first heading element.

The Title And H1 Elements

The title element sits in the <head> section with the other metadata and scripts used by search engines and browsers. Its role is to offer a general but concise description of what the web page is about before a potential visitor clicks through from the SERPs. The title must describe the page clearly enough that a searcher can tell whether it covers the topic they’re looking for; if it’s a match, they’ll click through.

So it’s not that the title tag entices a click. Its job is to say: this is what’s on the page.

The heading elements (H1, H2, etc.) are like mini titles: they describe what each section of a web page is about. The exception is the first heading, which is usually an H1 (though it could be an H2; it doesn’t matter to Google).

The first heading offers a concise description of what the web page is about to a visitor who already knows, in a general way, what the page covers. In that sense, the H1 element is a little more specific than the title.

The official W3C HTML documentation explains how the first heading is supposed to be used:

“It is suggested that the text of the first heading be suitable for a reader who is already browsing in related information, in contrast to the title tag which should identify the node in a wider context.”

How Does Google Use H1 and Titles?

Google uses the headings and titles as a source of information about what the web page is about. But it also uses them to create the title link, which is the title that shows in the SERPs. So if the title element is inappropriate because it’s built around a popular keyword phrase the SEO wants to rank for but doesn’t describe what the page is about, Google will check the heading tags and use one of those as the title link.

Twenty years ago, it was practically mandatory to put the keyword phrase you wanted to rank for in the title tag. But ranking factors don’t work like that anymore, because Google now has natural language processing, neural networks, machine learning, and AI that help it understand concepts and topics.

That’s why the title tag and the heading tags are not parking spots for the keywords you want to rank for. They are best used to describe the page in a general (title element) and a bit more specific (H1) way.
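For example (a hypothetical page; the wording is invented purely to show the general-vs-specific split the podcast describes):

<head>
  <!-- Title: general, identifies the page in the wider context of the SERPs -->
  <title>Home Composting Guide - ExampleSite</title>
</head>
<body>
  <!-- First heading: more specific, for a reader already on the page -->
  <h1>How To Start A Compost Bin In A Small Apartment</h1>
</body>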

Google’s Rules For Title Links

Gary Illyes of Google linked to documentation about how Google uses titles and headings to produce title links.

Titles must be descriptive and concise. Yes, use keywords but remember that the title must accurately describe the content.

Google’s guidelines explain:

“Title links are critical to giving users a quick insight into the content of a result and why it’s relevant to their query. It’s often the primary piece of information people use to decide which result to click on, so it’s important to use high-quality title text on your web pages.”

Avoid Boilerplate

Boilerplate is a phrase that’s repeated across the site. It’s usually templated content, like:

(type of law) Lawyers In (insert city name), (insert state name) – Name Of Website

Google’s documentation recommends that a potential site visitor should be able to distinguish between different pages by the title elements.

This is the recommendation:

“Avoid repeated or boilerplate text in <title> elements. It’s important to have distinct text that describes the content of the page in the <title> element for each page on your site.”

Branding In Title Tags

Another helpful tip is about website branding. Google advises that the home page is an appropriate location to provide extra information about the site.

Google provides this example:

ExampleSocialSite, a place for people to meet and mingle

That extra information about the site isn’t appropriate on inner pages: it looks bad when Google ranks more than one page from the website, and it misses the point of what the title tag is supposed to be about.

Google advises:

“…consider including just your site name at the beginning or end of each <title> element, separated from the rest of the text with a delimiter such as a hyphen, colon, or pipe, like this:

ExampleSocialSite: Sign up for a new account.”

Content That Google Uses For Title Links

Google uses the following content for creating title links:

  • “Content in <title> elements
  • Main visual title shown on the page
  • Heading elements, such as <h1> elements
  • Other content that’s large and prominent through the use of style treatments
  • Other text contained in the page
  • Anchor text on the page
  • Text within links that point to the page
  • WebSite structured data”

Takeaways:

  • Google chooses the title element to display as the title link. If it’s not a good match, it may use the first heading as the title link in the SERPs. If that’s not good enough, it will look elsewhere on the page.
  • Use the title to describe what the page is about in a general way.
  • Headings are basically section “titles,” so the first heading (or H1) is an opportunity to describe what the page is about more precisely than the title, encouraging the reader to start reading, shopping, or whatever they came to do.
  • Together, all of the headings on a web page communicate what the entire page is about, like a table of contents.
  • The title element serves a function similar to the title of a non-fiction book.
  • The first heading is more specific than the title about what the page is about.

Listen to the question and answer at the 10:46 mark of the podcast.

Featured Image by Shutterstock/Khosro

What Is Schema Markup & Why Is It Important For SEO? via @sejournal, @ChuckPrice518

Schema.org is a collection of vocabularies (or schemas) used to apply structured data markup to web pages and content. Correctly applying schema can improve SEO outcomes through rich snippets.

Structured data markup is translated by platforms such as Google and Microsoft to provide enhanced rich results (or rich snippets) in search engine results pages or emails. For example, you can markup your ecommerce product pages with variants schema to help Google understand product variations.

Schema.org is an independent project that has helped establish structured data consistency across the internet. It began collaborating with search engines such as Google, Yahoo, Bing, and Yandex back in 2011.

The Schema vocabulary can be applied to pages through encodings such as RDFa, Microdata, and JSON-LD. JSON-LD schema is preferred by Google as it is the easiest to apply and maintain.

Does Schema Markup Improve Your Search Rankings?

Schema is not a ranking factor.

However, your webpage becomes eligible for rich snippets in SERPs only when you use schema markup. This can enhance your search visibility and increase CTR on your webpage from search results.

Schema can also be used to build a knowledge graph of entities and topics. Using semantic markup in this way aligns your website with how AI algorithms categorize entities, assisting search engines in understanding your website and content.

The information provided by structured data can provide context to an otherwise ambiguous webpage. It can also help you clarify entities with multiple potential meanings.

According to Schema.org:

“Most webmasters are familiar with HTML tags on their pages. Usually, HTML tags tell the browser how to display the information included in the tag. For example, <h1>Avatar</h1> tells the browser to display the text string “Avatar” in a heading 1 format.

However, the HTML tag doesn’t give any information about what that text string means—“Avatar” could refer to the hugely successful 3D movie, or it could refer to a type of profile picture—and this can make it more difficult for search engines to intelligently display relevant content to a user.”

This is why search engines need additional information to help them figure out what the webpage is about.

You can even link your entities directly to sites like Wikipedia or Google’s knowledge graph to build explicit connections. Using Schema this way can have positive SEO results, according to Martha van Berkel, CEO of Schema App:

“At Schema App, we’ve tested how entity linking can impact SEO. We found that disambiguating entities like places resulted in pages performing better on [near me] and other location-based search queries.

Our experiments also showed that entity linking can help pages show up for more relevant non-branded search queries, increasing click-through rates to the pages.

Here’s an example of entity linking. If your page talks about “Paris”, it can be confusing to search engines because there are several cities in the world named Paris.

If you are talking about the city of Paris in Ontario, Canada, you can use the sameAs property to link the Paris entity on your site to the known Paris, Ontario entity on Wikipedia, Wikidata, and Google’s Knowledge Graph.”
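In JSON-LD, that disambiguation comes down to a sameAs property. Here is a minimal sketch for a page about Paris, Ontario (the address block is illustrative; the corresponding Wikidata and Knowledge Graph URLs could be added to a sameAs array in the same way):

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Place",
  "name": "Paris",
  "address": {
    "@type": "PostalAddress",
    "addressLocality": "Paris",
    "addressRegion": "ON",
    "addressCountry": "CA"
  },
  "sameAs": "https://en.wikipedia.org/wiki/Paris,_Ontario"
}
</script>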

By helping search engines understand content, you are assisting them in saving resources (especially important when you have a large website with millions of pages) and increasing the chances for your content to be interpreted properly and ranked well. While this may not be a ranking factor directly, Schema helps your SEO efforts by giving search engines the best chance of interpreting your content correctly, giving users the best chance of discovering it.

What Is Schema Markup Used For?

Some of the most popular uses of schema are the object types supported by Google and other search engines for rich results.

You may have an object type that has a schema.org definition but is not supported by search engines.

In such cases, it is still advisable to implement them: search engines may start supporting them in the future, and you will benefit from already having the implementation in place.

Types Of Schema Encoding: JSON-LD, Microdata, & RDFa

There are three primary formats for encoding schema markup:

  • JSON-LD.
  • Microdata.
  • RDFa.

Google recommends JSON-LD as the preferred format for structured data. Microdata is still supported, but JSON-LD is the better choice wherever you can use it.

In certain circumstances, it isn’t possible to implement JSON-LD schema due to technical infrastructure limitations (such as old content management systems). In these cases, the only option is to mark up the HTML via Microdata or RDFa.

You can now mix JSON-LD and Microdata formats by matching the @id attribute of JSON-LD schema with the itemid attribute of Microdata schema. This approach helps reduce the HTML size of your pages.

For example, in a FAQ section with extensive text, you can use Microdata for the content and JSON-LD for the structured data without duplicating the text, thus avoiding an increase in page size. We will dive deeper into this below when discussing each format in detail.

1. JSON-LD Schema Format

JSON-LD encodes data using JSON, making it easy to integrate structured data into web pages. JSON-LD allows connecting different schema types using a graph with @ids, improving data integration and reducing redundancy.

Let’s look at an example. Let’s say that you own a store that sells high-quality routers. If you were to look at the source code of your homepage, you would likely see something like this:

<body>
  <h1>TechHaven</h1>
  <p>The best routers you’ll find online!</p>
  <p>Address:<br>
     459 Humpback Road<br>
     Rialto, Ca</p>
  <p>Tel: 909 574 3903</p>
  <p><a href="#">Click here to view our best routers!</a></p>
  <p>We’re open:<br>
     Mon-Sat 8am - 10:30pm<br>
     Sun: 2pm - 8pm</p>
</body>

Once you dive into the code, you’ll want to find the portion of your webpage that discusses what your business offers. In this example, that data can be found between the two <body> tags.

The following JSON-LD formatted text will mark up the information within that HTML fragment; you may want to include it in your webpage’s <head> section.
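Here is a minimal sketch of that markup, reconstructed from the business details above (the hours are expressed with openingHoursSpecification; the property choices are one reasonable mapping, not the only one):

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Store",
  "name": "TechHaven",
  "description": "The best routers you’ll find online!",
  "telephone": "909-574-3903",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "459 Humpback Road",
    "addressLocality": "Rialto",
    "addressRegion": "CA"
  },
  "openingHoursSpecification": [
    {
      "@type": "OpeningHoursSpecification",
      "dayOfWeek": ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday"],
      "opens": "08:00",
      "closes": "22:30"
    },
    {
      "@type": "OpeningHoursSpecification",
      "dayOfWeek": "Sunday",
      "opens": "14:00",
      "closes": "20:00"
    }
  ]
}
</script>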


This snippet of code defines your business as a store via the attribute "@type": "Store".

Then, it details its location, contact information, hours of operation from Monday to Saturday, and different operational hours for Sunday.

By structuring your webpage data this way, you provide critical information directly to search engines, which can improve how they index and display your site in search results. Just like adding tags in the initial HTML, inserting this JSON-LD script tells search engines specific aspects of your business.

Let’s review another example: WebPage schema connected with Organization and Author schemas via @id. JSON-LD is the format Google and other search engines recommend because it’s extremely flexible, and this is a great example of why.
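Sketched below, with hypothetical URLs and @id values and a property set trimmed to what the discussion that follows refers to:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "WebSite",
      "@id": "https://www.example.com/#website",
      "url": "https://www.example.com/",
      "name": "Example Company",
      "publisher": { "@id": "https://www.example.com/#organization" }
    },
    {
      "@type": "Organization",
      "@id": "https://www.example.com/#organization",
      "name": "Example Company",
      "alternateName": "Example Co.",
      "foundingDate": "2000-01-01",
      "numberOfEmployees": 200,
      "slogan": "Innovation at its best",
      "telephone": "+1-800-555-1212"
    },
    {
      "@type": "WebPage",
      "@id": "https://www.example.com/about/#webpage",
      "url": "https://www.example.com/about/",
      "name": "About Us",
      "isPartOf": { "@id": "https://www.example.com/#website" }
    },
    {
      "@type": "Person",
      "@id": "https://www.example.com/#john-doe",
      "name": "John Doe"
    },
    {
      "@type": "NewsArticle",
      "@id": "https://www.example.com/about/#news",
      "headline": "Example News Headline",
      "isPartOf": { "@id": "https://www.example.com/about/#webpage" },
      "mainEntityOfPage": { "@id": "https://www.example.com/about/#webpage" },
      "author": { "@id": "https://www.example.com/#john-doe" }
    }
  ]
}
</script>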


In the example:

  • Website links to the organization as the publisher with @id.
  • The organization is described with detailed properties.
  • WebPage links to the WebSite with isPartOf.
  • NewsArticle links to the WebPage with isPartOf, and back to the WebPage with mainEntityOfPage, and includes the author property via @id.

You can see how graph nodes are linked to each other using the "@id" attribute. This way, we inform Google that it is a webpage published by the publisher described in the schema.

The use of hashes (#) for IDs is optional; you just need to ensure that different schema types don’t end up with the same ID by accident. Adding custom hashes (#) can be helpful, as it provides an extra layer of insurance that IDs will not be repeated.

You may wonder why we use "@id" to connect graph nodes. Can’t we just drop Organization, Author, and WebPage schemas separately onto the same page and trust that the connections are intuitive?

The issue is that Google and other search engines cannot reliably interpret these connections unless explicitly linked using @id.

Adding additional schema types to the graph is as easy as snapping Lego bricks together. Say we want to add an image to the schema:

{
   "@type": "ImageObject",
   "@id": "https://www.example.com/#post-image",
   "url": "https://www.example.com/example.png",
   "contentUrl": "https://www.example.com/example.png",
   "width": 2160,
   "height": 1215,
   "thumbnail": [
     {
        "@type": "ImageObject",
        "url": "https://example.com/4x3/photo.jpg",
        "width": 1620,
        "height": 1215
      },
      {
        "@type": "ImageObject",
        "url": "https://example.com/16x9/photo.jpg",
        "width": 1440,
        "height": 810
      },
      {
        "@type": "ImageObject",
        "url": "https://example.com/1x1/photo.jpg",
        "width": 1000,
        "height": 1000
      }
    ]
}

As you already know from the NewsArticle schema, you add the ImageObject to the schema graph as another node and reference it from the NewsArticle via its @id.

As you do that, it will have this structure:
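(A minimal sketch; the other graph nodes and the thumbnail array are trimmed for brevity, but carry over unchanged.)

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "NewsArticle",
      "@id": "https://www.example.com/about/#news",
      "headline": "Example News Headline",
      "image": { "@id": "https://www.example.com/#post-image" }
    },
    {
      "@type": "ImageObject",
      "@id": "https://www.example.com/#post-image",
      "url": "https://www.example.com/example.png",
      "contentUrl": "https://www.example.com/example.png",
      "width": 2160,
      "height": 1215
    }
  ]
}
</script>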


Quite easy, isn’t it? Now that you understand the main principle, you can build your own schema based on the content you have on your website.

And since we live in the age of AI, you may also want to use ChatGPT or other chatbots to help you build any schema you want.

2. Microdata Schema Format

Microdata is a set of tags that aims to make annotating HTML elements with machine-readable tags much easier.

However, the one downside to using Microdata is that you have to mark every individual item within the body of your webpage. As you can imagine, this can quickly get messy.

Take a look at this sample HTML code, which corresponds to the above JSON schema with NewsArticle:

<h2>Our Company</h2>
<p>Example Company, also known as Example Co., is a leading innovator in the tech industry.</p>
<p>Founded in 2000, we have grown to a team of 200 dedicated employees.</p>
<p>Our slogan is: "Innovation at its best".</p>
<p>Contact us at +1-800-555-1212 for customer service.</p>

<h2>Our Founder</h2>
<p>Our founder, Jane Smith, is a pioneer in the tech industry.</p>
<p>Connect with Jane on <a href="https://twitter.com/janesmith">Twitter</a> and <a href="https://www.linkedin.com/in/janesmith">LinkedIn</a>.</p>

<h2>About Us</h2>
<p>This is the About Us page for Example Company.</p>

<h2>Example News Headline</h2>
<p>This is an example news article.</p>
<p>This is the full content of the example news article. It provides detailed information about the news event or topic covered in the article.</p>
<p>Author: John Doe. Connect with John on <a href="https://twitter.com/johndoe">Twitter</a> and <a href="https://www.linkedin.com/in/johndoe">LinkedIn</a>.</p>
<img src="https://www.example.com/example.png" alt="Example image"/>

If we convert the above JSON-LD schema into Microdata format, it will look like this:

<div itemscope itemtype="https://schema.org/Organization" itemid="https://www.example.com/#organization">
  <h2>Our Company</h2>
  <p><span itemprop="name">Example Company</span>, also known as <span itemprop="alternateName">Example Co.</span>, is a leading innovator in the tech industry.</p>
  <p>Founded in <time itemprop="foundingDate" datetime="2000-01-01">2000-01-01</time>, we have grown to a team of <span itemprop="numberOfEmployees">200</span> dedicated employees.</p>
  <p>Our slogan is: <span itemprop="slogan">Innovation at its best</span>.</p>
  <p>Contact us at <span itemprop="telephone">+1-800-555-1212</span> for Customer Service.</p>
  <img itemprop="logo" src="https://www.example.com/logo.png" alt="Example Company Logo"/>
  <p>Connect with us on:
     <a itemprop="sameAs" href="https://www.facebook.com/examplecompany">Facebook</a>,
     <a itemprop="sameAs" href="https://twitter.com/examplecompany">Twitter</a>,
     <a itemprop="sameAs" href="https://www.linkedin.com/company/examplecompany">LinkedIn</a></p>
  <div itemprop="founder" itemscope itemtype="https://schema.org/Person">
    <h2>Our Founder</h2>
    <p>Our founder, <span itemprop="name">Jane Smith</span>, is a pioneer in the tech industry.</p>
    <p>Connect with Jane on <a itemprop="sameAs" href="https://twitter.com/janesmith">Twitter</a> and <a itemprop="sameAs" href="https://www.linkedin.com/in/janesmith">LinkedIn</a>.</p>
  </div>
</div>

<div itemscope itemtype="https://schema.org/WebPage" itemid="https://www.example.com/about/#webpage">
  <h2>About Us</h2>
  <p itemprop="description">This is the About Us page for Example Company.</p>
  <div itemprop="mainEntity" itemscope itemtype="https://schema.org/NewsArticle" itemid="https://www.example.com/about/#news">
    <h2 itemprop="headline">Example News Headline</h2>
    <p itemprop="description">This is an example news article.</p>
    <p itemprop="articleBody">This is the full content of the example news article. It provides detailed information about the news event or topic covered in the article.</p>
    <p>Author: <span itemprop="author" itemscope itemtype="https://schema.org/Person"><span itemprop="name">John Doe</span></span></p>
    <img itemprop="image" src="https://www.example.com/example.png" alt="Example image"/>
  </div>
</div>

This example shows how complicated Microdata becomes compared to JSON-LD, since the markup is spread throughout the HTML. Let’s look at what’s in the markup.

You can see <div> tags like:

<div itemscope>

By adding this tag, we’re stating that the HTML code contained between the opening and closing <div> blocks identifies a specific item.

Next, we have to identify what that item is by using the ‘itemtype’ attribute to identify the type of item (Person).

An item type comes in the form of a URL (such as https://schema.org/Person). If, say, you have a product instead, you would use https://schema.org/Product.

To make things easier, you can browse the list of item types on Schema.org and view extensions to identify the specific entity you’re looking for. Keep in mind that the set of types Google supports for rich results is not all-encompassing, so there is a possibility that you won’t find the item type for your specific niche.

It may look complicated, but Schema.org provides examples of how to use the different item types so you can see what the code is supposed to do.

Don’t worry; you won’t be left out in the cold trying to figure this out on your own!

If you’re still feeling a little intimidated by the code, Google’s Structured Data Markup Helper makes it super easy to tag your webpages.

To use this amazing tool, just select your item type, paste in the URL of the target page or the content you want to target, and then highlight the different elements so that you can tag them.

3. RDFa Schema Format

RDFa is an acronym for Resource Description Framework in Attributes. Essentially, RDFa is an extension to HTML5 designed to aid users in marking up structured data.

RDFa isn’t much different from Microdata. RDFa tags incorporate the preexisting HTML code in the body of your webpage. For familiarity, we’ll look at the same code above.

The HTML for the same JSON-LD news article will look like:

<div vocab="https://schema.org/" typeof="WebSite" resource="https://www.example.com/#website">
  <h2>Our Company</h2>
  <div property="publisher" typeof="Organization" resource="https://www.example.com/#organization">
    <p><span property="name">Example Company</span>, also known as <span property="alternateName">Example Co.</span>, is a leading innovator in the tech industry.</p>
    <p>Founded in <time property="foundingDate" datetime="2000-01-01">2000-01-01</time>, we have grown to a team of <span property="numberOfEmployees">200</span> dedicated employees.</p>
    <p>Our slogan is: <span property="slogan">Innovation at its best</span>.</p>
    <p>Contact us at <span property="telephone">+1-800-555-1212</span> for Customer Service.</p>
    <p><span property="url">https://www.example.com</span> <img property="logo" src="https://www.example.com/logo.png" alt="Example Company Logo"/></p>
    <p>Connect with us on:
       <a property="sameAs" href="https://www.facebook.com/examplecompany">Facebook</a>,
       <a property="sameAs" href="https://twitter.com/examplecompany">Twitter</a>,
       <a property="sameAs" href="https://www.linkedin.com/company/examplecompany">LinkedIn</a></p>
    <div property="founder" typeof="Person">
      <h2>Our Founder</h2>
      <p>Our founder, <span property="name">Jane Smith</span>, is a pioneer in the tech industry.</p>
      <p>Connect with Jane on <a property="sameAs" href="https://twitter.com/janesmith">Twitter</a> and <a property="sameAs" href="https://www.linkedin.com/in/janesmith">LinkedIn</a>.</p>
    </div>
  </div>
</div>

<div vocab="https://schema.org/" typeof="WebPage" resource="https://www.example.com/about/#webpage">
  <h2>About Us</h2>
  <p property="description">This is the About Us page for Example Company.</p>
  <p><span property="url">https://www.example.com/about</span></p>
  <div property="mainEntity" typeof="NewsArticle">
    <h2 property="headline">Example News Headline</h2>
    <p property="description">This is an example news article.</p>
    <p property="articleBody">This is the full content of the example news article. It provides detailed information about the news event or topic covered in the article.</p>
    <p>Author: <span property="author" typeof="Person"><span property="name">John Doe</span> <a property="url" href="https://www.example.com/profile/john-doe">Profile</a> <a property="sameAs" href="https://twitter.com/johndoe">Twitter</a> <a property="sameAs" href="https://www.linkedin.com/in/johndoe">LinkedIn</a></span></p>
    <img property="image" src="https://www.example.com/example.png" alt="Example image"/>
  </div>
</div>

Unlike Microdata, which uses a URL to identify types, RDFa uses one or more words to classify types.

<div vocab="https://schema.org/" typeof="WebPage">

If you wish to identify a property further, use the ‘typeof’ attribute.

Let’s compare JSON-LD, Microdata, and RDFa side by side. The @type attribute of JSON-LD is equivalent to the itemtype attribute in Microdata and the typeof attribute in RDFa. Likewise, a JSON-LD propertyName is the equivalent of the itemprop and property attributes.

Attribute Name   JSON-LD        Microdata                RDFa
Type             @type          itemtype                 typeof
ID               @id            itemid                   resource
Property         propertyName   itemprop                 property
Name             name           itemprop="name"          property="name"
Description      description    itemprop="description"   property="description"

For further explanation, you can visit Schema.org to check lists and view examples. You can find which kinds of elements are defined as properties and which are defined as types.

To help, every page on Schema.org provides examples of how to apply tags properly. Of course, you can also fall back on Google’s Structured Data Testing Tool.

4. Mixing Different Formats Of Structured Data With JSON-LD

If you use JSON-LD schema but certain parts of pages aren’t compatible with it, you can mix schema formats by linking them via @id.

For example, if you have live blogging on the website and a JSON-LD schema, including all live blogging items in the JSON schema would mean having the same content twice on the page, which may increase HTML size and affect First Contentful Paint and Largest Contentful Paint page speed metrics.

You can solve this either by generating JSON-LD dynamically with JavaScript when the page loads or by marking up HTML tags of live blogging via the Microdata format, then linking to your JSON-LD schema in the head section via “@id“.

Here is an example of how to do it.

Say we have this HTML with Microdata markup, carrying itemid="https://www.example.com/live-blog-page/#live-blog":

<div itemscope itemtype="https://schema.org/LiveBlogPosting" itemid="https://www.example.com/live-blog-page/#live-blog">
  <h2 itemprop="headline">Live Blog Headline</h2>
  <p itemprop="description">Explore the biggest announcements from DevDay</p>
  <div itemprop="liveBlogUpdate" itemscope itemtype="https://schema.org/BlogPosting">
    <p itemprop="articleBody">OpenAI is taking the first step in gradual deployment of GPTs – tailored ChatGPT for a specific purpose – for safety purposes.</p>
  </div>
  <div itemprop="liveBlogUpdate" itemscope itemtype="https://schema.org/BlogPosting">
    <p itemprop="articleBody">ChatGPT now uses GPT-4 turbo with current knowledge.</p>
  </div>
  <div itemprop="liveBlogUpdate" itemscope itemtype="https://schema.org/BlogPosting">
    <p itemprop="articleBody">It also knows which tool to choose for a task with GPT-4 All Tools.</p>
  </div>
  <div itemprop="liveBlogUpdate" itemscope itemtype="https://schema.org/BlogPosting">
    <p itemprop="articleBody">Microsoft CEO Satya Nadella joined Altman to announce deeper partnership with OpenAI to help developers bring more AI advancements.</p>
  </div>
</div>

We can link to it from the sample JSON-LD example we had like this:
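A minimal sketch of that link (the WebPage wrapper and its @id values are assumptions; the key part is that mainEntity points at the Microdata itemid above):

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "WebPage",
  "@id": "https://www.example.com/live-blog-page/#webpage",
  "url": "https://www.example.com/live-blog-page/",
  "mainEntity": { "@id": "https://www.example.com/live-blog-page/#live-blog" }
}
</script>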


If you copy and paste the HTML and JSON examples into the schema validator tool, you will see that they validate properly.

The schema validator does validate the above example.

The SEO Impact Of Structured Data

This article explored the different schema encoding types and the nuances of implementing structured data.

Schema is much easier to apply than it seems, and it’s a best practice you must incorporate into your webpages. While you won’t receive a direct boost in your SEO rankings for implementing Schema, it can:

  • Make your pages eligible to appear in rich results.
  • Ensure your pages get seen by the right users more often.
  • Avoid confusion and ambiguity.

The work may seem tedious. But given time and effort, properly implemented Schema markup is good for your website and can lead to better user journeys thanks to the accuracy of the information you supply to search engines.


Image Credits

Featured Image: Paulo Bobita
Screenshot taken by author


Ecommerce in Brazil: Growth Despite Hurdles

Retail ecommerce in Brazil more than doubled to 185 billion reais ($34.5 billion) in 2023 from 70 billion reais in 2018, while the average order value rose over the same period from 435 reais to 470, according to the Brazilian Electronic Commerce Association.

By comparison, U.S. retail ecommerce sales in 2023 were $1.14 trillion, per eMarketer.

In Brazil, perfumery and cosmetics had the most online orders in 2023, followed by home and decor, health and food, and beverages.

Electronics in 2023 represented 31% of total ecommerce revenue, according to ECBD, a Brazil-based analysis firm, followed by fashion at 27%, hobby and leisure at 14%, and furniture and homeware at 11%.

Mercado Livre holds a dominant ecommerce position in Latin America. It was Brazil’s most trafficked retail website in March, with over 216 million visits, followed by Amazon, Shopee, OLX, and AliExpress. All are marketplaces. Amazon.com in the U.S., by comparison, received 3.15 billion visits in March.

In Q1 2024, about 16% of total retail sales in Brazil came from digital channels — apps, sites, email. That’s comparable to the U.S. for the same period. In China, ecommerce in Q1 was 23% of total retail sales.

International Sellers

A 2024 study commissioned by Alibaba showed that cross-border ecommerce represented a mere 0.5% of total retail sales in Brazil, likely due to the difficulty of doing business there.

Despite consumer demand for phones, brand-name clothing, and baby gear, among other goods, it’s expensive and difficult to get things into the country.

“Doing business in Brazil requires in-depth knowledge of the local environment, including the high direct and indirect costs of doing business,” according to the U.S. International Trade Administration. Lawmakers have for years attempted reforms, but businesses continue to face complex tax schemes, restrictive labor laws, and vexing import barriers.

Those hurdles have collectively restricted access to international goods, prompting many Brazilians to shop abroad.

Last year, Brazilian lawmakers created a tax exemption for online purchases of $50 or less from international sellers, but pushback from domestic merchants may result in its revocation and replacement with a 20% fee. Purchases above $50 are already subject to a 60% tax.

Logistics are another barrier to ecommerce in Brazil, the world’s fifth-largest country, much of it rainforest, where infrastructure is inadequate. There aren’t enough roads, maintenance is poor, and ports have limited capacity. Cargo theft is a problem.

Inflation has compounded those hurdles, reaching a five-year high of 12% in April 2022.

Payments

Despite the challenges, the country has excelled in modernizing payments. In 2020 the Brazilian Central Bank introduced Pix, a real-time payments system requiring only an email address, phone number, or local ID — no bank account.

By 2023 Pix represented 41% of all retail transactions — online and in-store — followed by credit cards at 15% and debit cards at 13%. Buy-now pay-later services are also popular.

Brazil is the largest economy in Latin America, representing 57% of the region’s ecommerce sales, with projected growth of about 14% annually through 2026, according to Payments and Commerce Market Intelligence, a global research firm.

The growth was bolstered by the pandemic, forcing Brazilians who didn’t fully trust the web to go online anyway. But Brazil remains among the most unequal countries, with the bottom 40% of families earning less in 2021 than in 2016, per the World Bank. Fewer jobs, persistent inflation, and a drop in government support could limit ecommerce growth, at least for the medium term.