This startup claims it can stop lightning and prevent catastrophic wildfires

On June 1, 2023, as a sweltering heat wave baked Quebec, thousands of lightning strikes flashed across the province, setting off more than 120 wildfires.

The blazes ripped through parched forests and withered grasslands, burned for weeks, and compounded what was rapidly turning into Canada’s worst fire year on record. In the end, nearly 7,000 fires scorched tens of millions of acres across the country, generated nearly 500 million tons of carbon emissions, and forced hundreds of thousands of people to flee their homes.

Lightning sparked almost 60% of the wildfires—and those blazes accounted for 93% of the total area burned.

Now a Vancouver-based weather modification startup, Skyward Wildfire, says it can prevent such catastrophic fires in the future—by stopping the lightning strikes that ignite them. It just raised millions of dollars in a funding round that it plans to use to accelerate its product development and expand its operations.

Until last week, the company, which highlights the role lightning played in the 2023 infernos, stated on its website that it had demonstrated technology capable of preventing “up to 100% of lightning strikes.”

It was an eye-catching claim that went well beyond the confidence level of researchers who have studied the potential for humans to suppress lightning—and the company took it down following inquiries from MIT Technology Review.

“While the statement reflected an observed result under specific conditions, it was not intended to suggest uniform outcomes and has been removed,” Nicholas Harterre, who oversees government partnerships at Skyward, said in an email. “In complex atmospheric systems, consistent 100% outcomes are not realistic, as the experts you spoke to rightly pointed out.” 

The company now states it demonstrated that it “can prevent the majority of cloud-to-ground lightning strikes in targeted storm cells.” So far, Skyward hasn’t publicly revealed how it does so, and in response to our questions Harterre said only that the materials are “inert and selected in accordance with regulatory standards.” 

But online documents suggest the company is relying on an approach that US government agencies began evaluating in the early 1960s: seeding clouds with metallic chaff, or narrow fiberglass strands coated with aluminum. 

The military uses the material to disrupt radar signals; fighter jets, for example, deploy it during dogfights to throw off guided missile systems. Field trials conducted decades ago by US agencies suggest it could help reduce lightning strikes, at least to some degree and under certain conditions.

If Skyward could employ it reliably on significant scales, it might offer a powerful tool for countering rising fire risks as climate change drives up temperatures, dries out forests, and likely increases the frequency of lightning strikes.

“Preventing lightning on high-risk days saves lives, billions in wildfire costs, and is one of the highest-leverage and most immediate climate solutions available,” Sam Goldman, Skyward’s founder and chief executive, said in a statement posted on LinkedIn last year.

But researchers and environmental observers say there are plenty of remaining uncertainties, including how well the seeding may work under varying weather and climate conditions, how much material would need to be released, how frequently it would have to be done, and what sorts of secondary environmental impacts might result from lightning suppression on commercial scales.

Some observers are also concerned that the company appears to have moved ahead with weather modification field trials in parts of Canada without providing wide public notice or openly discussing what materials it’s putting into the clouds.

Given the escalating fire dangers, it’s “reasonable” to evaluate the potential for new technologies to mitigate them, says Keith Brooks, programs director at Environmental Defence, a Canadian advocacy organization.

“But we should be doing so cautiously and really transparently, with a robust scientific methodology that’s open to scrutiny,” he says.

Seeding the clouds

Skyward’s website offers few technical details, but the company says it worked with Canadian wildfire agencies in 2024 and 2025 to demonstrate its technology. The company also says it has developed AI tools to predict lightning strikes that could set off fires.

Skyward announced last month that it raised 7.9 million Canadian dollars ($5.7 million US) in an extension of a seed round initially closed early last year. Investors included Climate Innovation Capital, Active Impact Investments, and Diagram Ventures.

“Our first season demonstrated that prevention is possible at scale,” Goldman said in a statement. “This funding allows us to expand into new regions and support partners who need reliable, operational tools to reduce wildfire risk before emergencies begin.”

The company doesn’t use the term “cloud seeding” on its site or in its recent announcements. But a press release highlighting its selection as a finalist last year in a conservation group’s Fire Grand Challenge states that it suppresses lightning “by cloud seeding with safe, non-toxic materials to neutralize storm charges,” as The Narwhal previously reported.

In addition, Unorthodox Philanthropy, a foundation that provided a grant to support Skyward’s efforts “to test and deploy” the technology, offered more detail in an awardee write-up about Goldman.

It states: “The Skyward team … settled on an inert substance consisting of aluminum covered glass fibers, which is regularly used in military operations to intercept and confuse enemy radar and can also discharge clouds.”

Additional details were disclosed in a document marked “Proprietary and Confidential,” which the World Bank nonetheless released within a package of materials from companies developing means of addressing fire risks.

Skyward’s diagrams show planes dropping particles into clouds to prevent cloud-to-ground lightning strikes in “high risk areas.” The company also notes in the document that it uses artificial intelligence for a number of purposes, including forecasting lightning storms, prioritizing treatments, targeting storm cells, and optimizing flight paths.  

Harterre stressed that the company would deploy the technology judiciously and reserve it for storm events with elevated wildfire risk, adding that such storms account for less than 0.1% of lightning activity in a given area.

“Our objective is to reduce the probability of ignition on the limited number of extreme-risk days when fires threaten lives, critical infrastructure, and ecosystems, and when suppression costs and impacts can escalate rapidly,” he said.

The document posted by the World Bank states that Skyward partnered with Alberta Wildfire in August of 2024 to “prove suppression by plane and drone,” and that its process produced a “60-100% reduction” in lightning compared with “control cells” (which likely means storm cells that weren’t seeded). 

The document added that the company would be carrying out additional field trials in the summer of 2025 with the wildfire agencies in British Columbia and Alberta to “provide landscape level solutions with more advanced aircraft, sensors and forecasting.”

“BC Wildfire Service is aware that Skyward is developing technology that aims to reduce instances of lightning in targeted situations,” the British Columbia agency acknowledged in a statement provided to MIT Technology Review. “Last year, preliminary trials were conducted by Skyward to gain a better understand [sic] of the technology and its applicability in B.C. Should a project/technology like this move forward in B.C., we would engage with the project team in an effort to learn and ensure we’re using every tool available to us to respond to wildfire in B.C.”

The BC agency declined to make anyone available for an interview and didn’t respond to questions about what materials were used, where the tests were carried out, or whether it provided public disclosures or required the company to. Alberta Wildfire didn’t respond to similar questions from MIT Technology Review.

Rising lightning risks

Clouds are just water in various forms—vapor, droplets, and ice crystals, condensed enough to form the floating Rorschach tests we see in the sky. Within them, snowflakes and tiny ice pellets known as graupel rub together, causing atoms to trade electrons. This process creates highly reactive ions with negative and positive charges. 

Updrafts separate the light snowflakes from the graupel, building up larger differences in the charges across the electrical field until … crack! An electrostatic discharge occurs in the form of a lightning strike.
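The charge-up-then-discharge cycle described above can be sketched as a toy model. All the rates and thresholds below are made-up round numbers for illustration, not measured atmospheric values:

```python
# Toy model of thunderstorm charge separation (illustrative only; the
# rates and thresholds are invented, not physical measurements).
# Updrafts separate charge between snowflakes and graupel; when the
# resulting electric field exceeds a breakdown threshold, a "flash"
# discharges most of the accumulated charge and the cycle restarts.

FIELD_PER_COULOMB = 1e4  # V/m of field per coulomb of separated charge (assumed)
BREAKDOWN_FIELD = 2e5    # field needed to trigger a discharge (assumed)

def simulate(steps, charging_rate=1.0):
    charge = 0.0
    flashes = 0
    for _ in range(steps):
        charge += charging_rate            # updrafts keep separating charge
        if charge * FIELD_PER_COULOMB >= BREAKDOWN_FIELD:
            flashes += 1                   # crack! lightning
            charge *= 0.1                  # discharge resets most of the charge
    return flashes

print(simulate(200))  # a steadily charging storm flashes periodically
```

Under these assumed numbers, a storm cell that charges faster simply flashes more often, which is why storm intensity and flash rate tend to track each other.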

The 2023 fire season wasn’t a particularly big year for lightning strikes in Canada—but then it didn’t have to be. It was so hot and dry that every bolt that struck the surface had a better than usual chance of igniting a fire, says Piyush Jain, a research scientist at the Canadian Forest Service and lead author of a study published in Nature Communications that analyzed the year’s fires.  

A fire burns in Mistissini, Québec, on June 12, 2023.
CPL MARC-ANDRÉ LECLERC/CANADIAN ARMED FORCES

Climate change is, however, likely to produce more lightning strikes, if it hasn’t started to already. Warmer air holds more moisture and adds more convective energy to the atmosphere, which drives the vertical movement of air that forms clouds and stirs up lightning storms. 

“So the conditions are there, and the conditions are likely to increase,” Jain says.

Different models arrive at different lightning forecasts for some regions of the world. But a clearer trend is already emerging in the northernmost latitudes, where the planet is warming fastest. Studies show that lightning-ignited fires have substantially increased in the Arctic boreal region, and predict that they will continue to rise.

This combines with other growing risks like longer fire seasons, warmer temperatures, and drier vegetation, together raising the odds of more severe fires and more greenhouse-gas emissions, says Brendan Rogers, a senior scientist at the Woodwell Climate Research Center who studies the effect of fires on permafrost thaw.

In fact, Canada’s emissions from the 2023 fires were more than four times its emissions from fossil fuels.

Midcentury field trials

Scientists have conducted a variety of experiments exploring the possibility of preventing lightning, but most of that work took place in the latter half of the last century.

Amid the cultural optimism and booming economy of the postwar period, US research agencies and corporations went on a tear of cloud seeding experiments aimed at conquering nature—or at least moderating its dangers. Research teams launched or dropped materials like dry ice and silver iodide into clouds in attempts to boost rainfall, reduce hail, dissipate fog, and redirect hurricanes.

“Cloud seeding activity was so intensive that at its peak in the early 1950s, approximately 10% of the US land area was under some kind of weather modification program,” wrote MIT’s Phillip Stepanian and Earle Williams in a 2024 history of lightning suppression efforts in the Bulletin of the American Meteorological Society. (MIT Technology Review is owned by MIT but is editorially independent.) 

Harry Gisborne, then chief of the division of fire research at the US Forest Service, wondered if the technique could be used to trigger downpours that might extinguish hard-to-reach wildfires on public lands. But when he put the question to Vincent Schaefer of General Electric, who had done pioneering research in cloud seeding, Schaefer thought they could perhaps do one better: prevent the lightning that sparked the fires in the first place.

The conversations kicked off what would become Project Skyfire, a multiagency public-private research program that carried out a series of experiments through the 1950s and 1960s. Research teams seeded clouds over the San Francisco Peaks of Arizona, the Bitterroot Mountains at the edge of Idaho, and the Deerlodge National Forest in Montana, among other places.

After comparing treated and untreated storm clouds, the researchers concluded that seeding decreased cloud-to-ground lightning by more than half. But as MIT’s Stepanian and Williams noted, the sample sizes were small, and questions remained about the statistical significance of the findings.

(Soviet scientists also carried out some field experiments on lightning suppression in the 1950s, as well as some related research that involved using rockets to launch lead iodide into thunderstorms in the 1970s, but it’s difficult to find further details about those programs.)

A near tragedy reignited US government interest in the possibility of lightning suppression in 1969, when lightning struck the Apollo 12 spacecraft twice shortly after launch. The astronauts were able to reset their systems and successfully complete their mission to the moon, but it was a very close call.

In the aftermath, NASA and NOAA teamed up on what became known as Project Thunderbolt, which relied on the metallic chaff normally used in military countermeasures.

Researchers at the US Army Electronics Laboratory had previously proposed the possibility of suppressing lightning by deploying this material, which a handful of defense contractors manufacture. The idea is that chaff acts as a conductor in a forming electrical field, stripping electrons from some oxygen and nitrogen molecules and adding them to others. The mismatched electrons already collecting in cloud water molecules, thanks to all that rubbing between snowflakes and graupel, can then leap over to those newly charged atoms. That, in turn, should reduce the buildup of static electricity that otherwise results in lightning.

“By continuously redistributing—and thereby neutralizing—charges within the storm in a weak electric field, the strong electric fields required to produce lightning would never develop,” Stepanian and Williams wrote.
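The hypothesized chaff effect can be added to the same kind of toy charging model: the chaff-released ions act as a continuous leakage path that neutralizes a fraction of the separated charge each step, so the field may never reach breakdown. Again, every number here is an assumption for illustration, not a physical cloud model:

```python
# Illustrative sketch of the hypothesized chaff mechanism (all values are
# invented). Charge is separated at a fixed rate; "leakage" models ions
# from chaff fibers bleeding off a fraction of that charge each step.

FIELD_PER_COULOMB = 1e4  # V/m per coulomb of separated charge (assumed)
BREAKDOWN_FIELD = 2e5    # field needed to trigger a flash (assumed)

def flashes(steps, charging_rate=1.0, leakage=0.0):
    """Count discharges over `steps`; `leakage` is the fraction of
    separated charge neutralized per step (0 = untreated storm)."""
    charge, count = 0.0, 0
    for _ in range(steps):
        charge += charging_rate
        charge *= (1.0 - leakage)          # chaff ions neutralize charge
        if charge * FIELD_PER_COULOMB >= BREAKDOWN_FIELD:
            count += 1
            charge *= 0.1                  # flash discharges most charge
    return count

print(flashes(200, leakage=0.0))   # untreated storm: repeated flashes
print(flashes(200, leakage=0.1))   # leakage caps the field below breakdown
```

In this sketch, a modest per-step leakage holds the accumulated charge at an equilibrium below the breakdown threshold, so no flashes occur at all; with weaker leakage or faster charging, flashes would merely become less frequent, which is roughly what "reduce but not eliminate" would look like.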

NASA and NOAA carried out a series of experiments seeding clouds with chaff from the early to mid-1970s, over Boulder, Colorado, and later at the Kennedy Space Center. Here, too, the experiments showed “generally promising field results.” But NASA eventually grew concerned about the possibility that chaff could affect radio communications and shuttered the program.

“Lightning suppression research was once again abandoned, and the responsibility for mitigating lightning hazards reverted to weather forecasters,” Stepanian and Williams concluded.

‘Hard to draw conclusions’

So what does all this tell us about our ability to prevent lightning?

“In my opinion, it’s unambiguously true that this technique can be used to reduce lightning strikes in a storm,” says Stepanian, a technical staff member at MIT Lincoln Laboratory’s air traffic control and weather systems group. “With some major caveats.”

For example, it’s not clear how much material you would need to release, how long it would persist, and how the effectiveness might change under different climate and weather conditions.

(Stepanian consulted with Skyward in its early stages, and he declined to discuss the startup.)

His coauthor on the history of lightning suppression seems a tad more skeptical. In email responses, Williams, a research scientist at MIT who studies physical meteorology and atmospheric electricity, said there’s unmistakable evidence that chaff “has an impact on the electrification of thunderstorms.” But he added that its effectiveness in reducing or eliminating lightning activity “remains controversial” and requires further testing. (Williams says he did not consult for Skyward.)

In his own written reviews, he’s highlighted a number of potential shortcomings with earlier research, including unaccounted-for differences in cloud heights between treated and untreated storms. In addition, he’s noted that some studies used detection systems that pick up only cloud-to-ground strikes, not intracloud lightning, which is far more common. 

He also points to the results of a more recent study that he and Stepanian collaborated on with researchers at New Mexico Tech. They relied upon data from weather radars in Tampa and Melbourne, Florida, located on opposite sides of the state, to detect the presence of chaff released over the central part of the state during military training and testing exercises. 

They compared 35 storms during which chaff was clearly detected in clouds with 35 instances when it wasn’t.

According to an abstract of the paper—which hasn’t been peer-reviewed or published but was presented at the American Geophysical Union conference in December—storms that occurred when chaff was present were generally “smaller and shorter-lived.” 

But the number of total flashes—which includes ground strikes as well as lightning within and between clouds and the air—was actually significantly higher in clouds carrying chaff: 62,250 versus 24,492.

“In summary, so far, it is hard to draw any conclusion about lightning suppression using chaff,” the authors wrote.
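One reason such comparisons are hard to interpret is that aggregate totals can be dominated by a handful of very active storms, even when the typical storm in that group is smaller. The per-storm counts below are invented purely to illustrate the statistical point; they are not the study's data:

```python
# Why group totals can mislead: two outlier storms dominate the "chaff"
# total even though the median chaff storm is smaller than the median
# control storm. These counts are hypothetical, for illustration only.
from statistics import median

chaff_storms   = [100, 150, 200, 250, 300, 20_000, 40_000]  # two big outliers
control_storms = [400, 500, 600, 700, 800, 900, 1_000]

print(sum(chaff_storms), sum(control_storms))        # totals favor "chaff"
print(median(chaff_storms), median(control_storms))  # medians say the opposite
```

This is why a higher total flash count in chaff-bearing clouds, on its own, doesn't settle whether typical seeded storms produced more or less lightning.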

Williams says their results and other studies suggest that large chaff concentrations may be needed to suppress lightning. That could be because there’s a strong tendency for the ions released from the chaff fibers to be captured by cloud droplets before they reach the charged particles that would need to be neutralized.

But that may also present a significant deployment challenge, since chaff quickly becomes dilute once it’s released into the midst of turbulent storm clouds, Williams adds. 

Skyward’s Harterre said he couldn’t comment on the results of the Florida study but noted that storms in the state are very different from those that occur in the Canadian provinces where his company operates.

“Our work to date has focused on regions where operational feasibility has been evaluated and wildfire risk is highest,” he wrote.

‘Unintended consequences’

The possibility of releasing more chaff into the air also raises the questions of what else it could do in the atmosphere, and what will happen once it lands. 

The US military has produced a number of studies exploring the environmental and health effects of chaff and found that it disperses widely, breaks down in the environment, and is “generally nontoxic.”

For instance, a Naval Health Research Center report assessing environmental impacts from decades of training exercises near Chesapeake Bay concluded that “current and estimated use of aluminized chaff by American forces worldwide” will not raise total aluminum levels above the Environmental Protection Agency’s established limits. 

But a US Government Accountability Office report in 1998 raised a few other flags, noting that chaff can also affect civilian air traffic control radar and weather forecasts. It also highlighted a “potential but remote chance of collecting in reservoirs and causing chemical changes that may affect water and the species that use it.”

Stepanian says that if lightning suppression efforts require more chaff than the military currently releases, further studies may be needed to properly evaluate the environmental effects. 

Brooks of Environmental Defence Canada says he wants to know more about what materials Skyward is using, where they’re sourced from, what the effort leaves behind in the environment, and what the impacts on animals could be. He is also wary of the possible secondary effects of intervening in storms.

“I just think there’s the potential for unintended consequences if we start to mess with a complex system, like weather,” Brooks says, adding: “It makes me nervous to think there are pilots going on without people knowing about them.”

Harterre said that the company abides by any applicable regulations, and that it conducts its field activities “in coordination with relevant authorities and with appropriate authorization.”

He added that it releases seeding materials at lower volumes and concentrations than those associated with defense use and that deployments “are limited to defined high-wildfire-risk storm conditions.”

Remaining doubts

It’s not clear whether or to what degree Skyward has meaningfully advanced the science of lightning suppression or cleared up the questions that have lingered since the studies from the last century. 

The company hasn’t released data from its field trials, published any papers in peer-reviewed literature, or disclosed how its tests were performed, as far as MIT Technology Review was able to determine. 

Without such information it’s impossible to assess its claims, Williams says. He and two of his New Mexico Tech coauthors—associate professor Adonis Leal and master’s student Jhonys Moura—had all expressed skepticism about the company’s previous claim of “up to 100%” lightning prevention.

Harterre said Skyward intends to release more technical information as its programs mature.

“We look forward to the opportunity to share more detailed information,” he wrote.

In the meantime, Skyward’s investors have high hopes for the company and see “tremendous opportunity” in its potential ability to counteract fire dangers.

“Mitigating the exponentially increasing risk of wildfires can only happen if we shift from reactive suppression to proactive prevention,” Kevin Kimsa, managing partner of Climate Innovation Capital, said in a statement when the company’s recent funding was announced.

Rogers, of the Woodwell Climate Research Center, has spoken with Skyward several times but hasn’t worked with them. He also stressed that it’s crucial to understand potential environmental impacts from lightning suppression and to consult with citizens in affected areas, including Indigenous communities.

But he says he’s “optimistic” about the role that lightning suppression could play, if it works effectively and without major downsides.

That’s because preventing wildfires is far cheaper than putting them out, and it avoids risks to firefighters, ecosystems, infrastructure and local communities.

“If you’re able to go after fires before they’ve even ignited, you remove a lot of that from the equation,” he says.

The Download: The startup that says it can stop lightning, and inside OpenAI’s Pentagon deal

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

This startup claims it can stop lightning and prevent catastrophic wildfires

Startup Skyward Wildfire says it can prevent catastrophic fires by stopping the lightning strikes that ignite them. So far, it hasn’t publicly revealed how it does so, but online documents suggest the company is relying on an approach the US government began evaluating in the early 1960s: seeding clouds with metallic chaff, or narrow fiberglass strands coated with aluminum. 

It just raised millions of dollars to accelerate its product development and expand its operations. But researchers and environmental observers say uncertainties remain, including how well the seeding may work under varying conditions, how much material would need to be released, how frequently it would have to be done, and what sorts of secondary environmental impacts might result. Read the full story. 

—James Temple

OpenAI’s “compromise” with the Pentagon is what Anthropic feared

OpenAI has reached a deal that will allow the US military to use its technologies in classified settings. CEO Sam Altman said the negotiations, which the company began pursuing only after the Pentagon’s public reprimand of Anthropic, were “definitely rushed.”

OpenAI has taken great pains to say that it has not caved to allow the Pentagon to do whatever it wants with its technology. The company published a blog post explaining that its agreement protected against use for autonomous weapons and mass domestic surveillance, and Altman said the company did not simply accept the same terms that Anthropic refused. 

But it’s not yet clear if OpenAI can build in the safety precautions it promises as the military rushes out a politicized AI strategy during strikes on Iran, or if the deal will be seen as good enough by employees who wanted the company to take a harder line. Walking that tightrope will be tricky. Read the full story.

—James O’Donnell

The story is from The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Gulf states are racing against time to intercept Iran’s drone attacks

They could run out of interceptors very soon. (WSJ $)

  • Amazon says it lost three data centers in the strikes. (Business Insider $)
  • There has been a spike in GPS attacks too, affecting nearby shipping. (Wired)
  • Crypto stocks are tumbling in response. (Bloomberg)

2 Apple is considering using Google’s Gemini AI to power Siri

It’s also set to deepen its reliance on Google’s cloud infrastructure. (The Information $)

3 A database shows which topics fall foul of the Trump administration

National parks are being forced to erase any exhibits that display “partisan ideology”. (WP $)

  • The transatlantic battle over free speech is coming. (FT $)
  • What it’s like to be banned in the US for fighting online hate. (MIT Technology Review)

4 Can AI actually enhance jobs, not just destroy them?

Three economists take the optimistic view. (New Yorker)

5 Are “bossware” apps tracking you? 

Tools to watch what workers are doing are getting more and more sophisticated. (NYT)

6 RFK Jr says he is about to unleash 14 banned peptides

By reversing a Biden-era FDA ban on their production. (Gizmodo)

7 Meta is testing an AI shopping research tool

It hopes to rival Gemini and ChatGPT. (Bloomberg)

8 Maybe data centers in space aren’t as crazy as they sound? 

They could be cheaper, with the right tech. (Economist)

9 Why climate change is making turbulence worse

Buckle up, people. (New Yorker)

10 6G is on its way!

And the hype cycle is doing its thing again. (The Verge $)

Quote of the day

“We don’t list markets directly tied to death. When there are markets where potential outcomes involve death, we design the rules to prevent people from profiting from death.”

—Tarek Mansour, CEO and founder of prediction market company Kalshi, tries to justify the $54 million bet on “Ali Khamenei out as Supreme Leader?” on his platform, 404 Media reports.

One More Thing


South Africa’s private surveillance machine is fueling a digital apartheid

Johannesburg is birthing a uniquely South African surveillance model. Over the past decade, the city has become host to a centralized, coordinated, entirely privatized mass surveillance operation. These tools have been enthusiastically adopted by the local security industry, grappling with the pressures of a high-crime environment.

Civil rights activists worry the new surveillance is fueling a digital apartheid and unraveling people’s democratic liberties, but a growing chorus of experts say the stakes are even higher. 

They argue that the impact of artificial intelligence is repeating the patterns of colonial history, and here in South Africa, where colonial legacies abound, the unfettered deployment of AI surveillance offers just one case study in how a technology that promised to bring societies into the future is threatening to send them back to the past. Read the full story.

—Karen Hao and Heidi Swart

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ These influencers are on a mission to save the UK’s pubs. 

+ Here’s what a map of America solely made up of its rivers would look like.

+ The winner of the Underwater Photographer of the Year awards is incredibly cute.

+ Pokémon may have turned 30 years old, but the franchise is more popular than ever.

Payment Friction Wins in Africa

The ideal ecommerce checkout is frictionless and linear: enter one’s address and payment details and then await product delivery.

In Africa, providing digital payment info is a leap of faith. The checkout process is often conversational and skeptical.

Consumers may click “Buy,” but they aren’t reaching for their payment details. They first need proof of the product and company. They may ask via WhatsApp for real-time product photos and delivery timelines. They might demand a voice note to ensure a human is on the other side of the screen. It’s a do-it-yourself verification system.

“Cautious consumers” is McKinsey & Company’s term for Africa and Middle East-based ecommerce shoppers in its 2020 report (PDF).

Conversational Commerce

It is a mistake to view this reliance on WhatsApp as a workaround. For consumers in Africa, a WhatsApp chat is akin to looking a seller in the eye.

Consider the January 2026 partnership in Nigeria between PayPal and Paga, the mobile payment platform. After two decades of restrictions, Nigerians could finally receive international funds from PayPal into their Paga wallets.

The reception, however, was not great. Freelancers flooded Nigerian X with vitriol and skepticism stemming from a long memory of frozen PayPal funds.

This collective memory creates a psychological barrier that the partnership may struggle to overcome.

Trust


Local payment platforms such as Flutterwave and Stripe-owned Paystack have succeeded because they understood consumers’ memories of money restrictions and failed transactions. The infrastructure of both reflects how people actually move capital.

Bank transfers. In Nigeria, merchants need settlement within one day of the transaction to keep their businesses running. For the customer, the transfer is final and verifiable.

M-Pesa. In Kenya, STK Push is a consumer-controlled security protocol enabling money transfers on mobile devices. Africa accounts for roughly 70% of global mobile money payments; ignoring STK Push is costly.

Kiosks. In Egypt, consumers often demand physical confirmation before payment. Fawry’s cash-at-kiosk model allows shoppers to order online but pay at one of thousands of physical kiosks.

Success

Foreign ecommerce merchants cannot buy their way into Africa with tech alone. Success comes from leaning into the friction consumers require.

  • Use social media to consummate transactions. In Africa, an abandoned cart could mean that a shopper is waiting for the merchant on WhatsApp to prove it’s real.
  • Localize the rails. Don’t force a Kenyan to use a Visa card or a Nigerian to rely on an international gateway that might flag the transaction as high risk. Use recognizable payment methods such as instant transfers, mobile payments, and in-person dialogue.
  • Invest in the boring stuff. Don’t invest excessively in technology while ignoring operations. Logistics and customer support are where trust is either cemented or broken.
New: Future-proof your website for the agentic web with Yoast SEO Schema Aggregation

In November 2025, Yoast announced a collaboration with NLWeb, an open web protocol developed by Microsoft designed to simplify building conversational interfaces for the web.

Today, we are proud to introduce the first major result of that work: Yoast SEO Schema Aggregation. This is an opt-in feature that brings your website’s structured data together in a clearer and more consistent way. By choosing to enable it, you can help search engines and intelligent agents better understand and use your content.

If you want to see which schema types are available for your WordPress setup, our schema overview explains what is included across different product plans.

Bridging the gap: from discovery to conversation

Yoast has a history of helping WordPress websites be represented fairly and responsibly in the open web.

2019: Yoast introduced a first-of-its-kind schema graph and API, helping search engines better understand your content as they moved beyond keywords and evolved into discovery engines.

Today: we are taking the next step. As the agentic web becomes more important, we are helping your WordPress site move from being discovered to being understood and engaged with through conversation.

The new Schema Aggregation feature, available in Yoast SEO starting today, establishes a standardized connection between your website’s structured data and the systems that power AI-driven discovery and interaction, including large language models, agents, and conversational assistants such as Copilot. It helps ensure your published content can be understood correctly by AI, which matters as AI becomes part of how people find and use information online.

The NLWeb + Yoast integration is built in collaboration with the NLWeb team, including R.V. Guha, co-founder of Schema.org. Together, we are extending the open web standards you already rely on, so your WordPress website can participate confidently in the emerging agentic web in a responsible, future-ready way.

Benefits of the Schema Aggregation feature

Questions about AI often come down to one thing: who can access your data. This feature is built with a privacy-first approach from the start.

  • Complete: All indexable content included
  • Clean: No duplicate entities, no navigation clutter
  • Connected: Relationships between entities preserved (author → articles)
  • Compliant: Respects existing privacy settings
  • Fast: Sub-100ms cached responses, pagination for large sites

For developers and technical users who want more control, we have developer documentation on schema markup. It explains how to inspect and extend your schema graph. This gives you maximum personalization, while retaining standardization at scale.

“You can’t stop the AI wave, but you can direct it. Our integration with NLWeb puts you back in charge. It allows you to manage server load efficiently and ensures that when AIs do access your content, they get the rich, semantic understanding necessary to represent you correctly.” —Alain Schlesser, Principal Architect, Yoast

What’s new

The next time you open Yoast SEO (updated to 27.1), a short guided walkthrough will introduce the new Schema Aggregation feature and show you how to enable it with a simple toggle.

We have added a new endpoint to Yoast SEO (free), making the Schema Aggregation feature available to all customers who choose to enable it. The endpoint exposes your site’s full structured data graph in a proposed new standard called a schemamap.

That means that instead of an AI system crawling hundreds of pages individually, it can retrieve your site’s schema, including articles, authors, products, and organizational data, in one optimized request.
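The retrieval pattern is easy to picture. Yoast hasn’t published client code or the final endpoint details, so the sketch below is purely hypothetical: it models a paginated schemamap endpoint as a callable that returns one page of entities at a time, with the HTTP layer stubbed out.

```python
# Hypothetical client sketch: collect a site's full schema graph from a
# paginated schemamap endpoint instead of crawling every page for JSON-LD.
# `get_page` stands in for the real HTTP call; the page shape is invented.

def fetch_schemamap(get_page):
    """Gather all entities across pages until no `next` page remains."""
    entities, page = [], 1
    while page is not None:
        data = get_page(page)
        entities.extend(data["@graph"])
        page = data.get("next")
    return entities

# Fake two-page endpoint standing in for network requests.
FAKE_PAGES = {
    1: {"@graph": [{"@id": "#organization"}, {"@id": "#author-jane"}], "next": 2},
    2: {"@graph": [{"@id": "#post-1"}], "next": None},
}

entities = fetch_schemamap(FAKE_PAGES.get)
print(len(entities))  # 3
```

The point of the shape is the single loop: a handful of requests replaces a full-site crawl, which is where the crawl-budget savings come from.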

Before and after: from pages to a connected site

Below is an example of the structured data Yoast already outputs on an individual page. This page level schema helps search engines understand what that specific page is about, including its content type, author, and relationships.

An example of Yoast schema markup at the individual page level, the example shown is yoast.com

With Schema Aggregation enabled, Yoast provides a site-level view. Instead of looking at pages in isolation, your entire website’s structured data is connected and consolidated into a single output called a schemamap. Though dense to look at, it makes it easier for AI systems to understand your content: they can see how your articles, authors, products, and organization relate to each other across the site.

Nothing about your existing schema changes. The same data is reused, simply organized in a way that reflects how your website works as a whole. Here is a schema map example from Yoast.com, displayed with the Yoast SEO Schema Visualizer.

How it works: Standardized, connected, and deduplicated

The Schema Aggregation feature doesn’t just share data; it organizes it for AI consumption:

  • Eliminates data mess: It merges duplicate mentions of authors, products, or articles into a single connected record.
  • Integrates automatically: If you use one of our Schema API partners like The Events Calendar or WP Recipe Maker, those schema types are included in the graph automatically.
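The deduplication step can be pictured with a small sketch. Yoast hasn’t published its aggregation logic, so this is only an illustration of the general idea: entities that share a JSON-LD `@id` across pages are merged into one site-level record, with cross-references preserved. All IDs, names, and fields below are invented.

```python
# Minimal sketch of the deduplication idea: entities sharing a JSON-LD @id
# across pages are merged into one site-level record. The data is invented
# for illustration; this is not Yoast's actual implementation.

def aggregate_schema(pages):
    """Merge per-page entity lists into one deduplicated @graph."""
    merged = {}
    for page_graph in pages:
        for entity in page_graph:
            record = merged.setdefault(entity["@id"], {})
            for key, value in entity.items():
                record.setdefault(key, value)  # first mention wins per field
    return {"@context": "https://schema.org", "@graph": list(merged.values())}

# Two pages that both mention the same author entity.
page_a = [
    {"@id": "#author-jane", "@type": "Person", "name": "Jane Doe"},
    {"@id": "#post-1", "@type": "Article", "author": {"@id": "#author-jane"}},
]
page_b = [
    {"@id": "#author-jane", "@type": "Person", "name": "Jane Doe",
     "url": "https://example.com/jane"},
    {"@id": "#post-2", "@type": "Article", "author": {"@id": "#author-jane"}},
]

site_graph = aggregate_schema([page_a, page_b])
print(len(site_graph["@graph"]))  # 3: one merged author, two articles
```

Note that the author appears once in the output even though two pages mention her, and the articles still point at her via `@id`, which is the “author → articles” relationship the feature preserves.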

Developers can also explore our Schema Integrations page to see how Schema API partners connect to and extend the Yoast SEO Schema Framework (the graph).

Collaborative innovation

When working at scale across tens of millions of websites, careful testing is essential to ensure a safe and reliable launch. This feature was developed with agencies and advanced users in mind, and tested in controlled environments.

We collaborated closely with Syde, our Innovation Partner, to test the new feature across a diverse range of real-world client scenarios, confirming scalability and consistent output quality before deployment.

Syde’s feedback has been instrumental in refining the schema aggregation logic. We look forward to continuing this partnership, working together to help clients remain visible and accurately represented as AI-driven systems evolve.

Be visible, understood, and represented

The rules of discovery are shifting, but your site doesn’t have to be left behind. With NLWeb and Yoast, your website stays at the center of the conversation.

Ready to see it in action? Update to the latest version of Yoast SEO and enable the NLWeb integration in your Yoast SEO settings today. For more information about how to enable Schema Aggregation, visit this help article.

I checked out one of the biggest anti-AI protests yet

Pull the plug! Pull the plug! Stop the slop! Stop the slop! For a few hours this Saturday, February 28, I watched as a couple of hundred anti-AI protesters marched through London’s King’s Cross tech hub, home to the UK headquarters of OpenAI, Meta, and Google DeepMind, chanting slogans and waving signs. The march was organized by two separate activist groups, Pause AI and Pull the Plug, which billed it as the largest protest of its kind yet.

The range of concerns on show covered everything from online slop and abusive images to killer robots and human extinction. One woman wore a large homemade billboard on her head that read “WHO WILL BE WHOSE TOOL?” (with the Os in “TOOL” cut out as eye holes). There were signs that said “Pause before there’s cause” and “EXTINCTION=BAD” and “Demis the Menace” (referring to Demis Hassabis, the CEO of Google DeepMind). Another simply stated: “Stop using AI.”

An older man wearing a sandwich board that read “AI? Over my dead body” told me he was concerned about the negative impact of AI on society: “It’s about the dangers of unemployment,” he said. “The devil finds work for idle hands.”

This is all familiar stuff. Researchers have long called out the harms, both real and hypothetical, caused by generative AI—especially models such as OpenAI’s ChatGPT and Google DeepMind’s Gemini. What’s changed is that those concerns are now being taken up by protest movements that can rally significant crowds of people to take to the streets and shout about them.  

The first time I ran into anti-AI protesters was in May 2023, outside a London lecture hall where Sam Altman was speaking. Two or three people stood heckling an audience of hundreds. In June last year Pause AI, a small but international organization set up in 2023 and funded by private donors, drew a crowd of a few dozen people for a protest outside Google DeepMind’s London office. This felt like a significant escalation.

“We want people to know Pause AI exists,” Joseph Miller, who heads its UK branch and co-organized Saturday’s march, told me on a call the day before the protest: “We’ve been growing very rapidly. In fact, we also appear to be on a somewhat exponential path, matching the progress of AI itself.”

Miller is a PhD student at Oxford University, where he studies mechanistic interpretability, a new field of research that involves trying to understand exactly what goes on inside LLMs when they carry out a task. His work has led him to believe that the technology may forever be beyond our control and that this could have catastrophic consequences.

It doesn’t have to be a rogue superintelligence, he said. You just need someone to put AI in charge of nuclear weapons. “The more silly decisions that humanity makes, the less powerful the AI has to be before things go bad,” he said.

After a week in which the US government tried to force Anthropic to let it use its LLM Claude for any “legal” military purposes, such fears seem a little less far-fetched. Anthropic stood its ground, but OpenAI signed a deal with the DOD instead. (OpenAI declined an invitation to comment on Saturday’s protest.)

For Matilda da Rui, a member of Pause AI and co-organizer of the protest, AI is the last problem that humans will face. She thinks that either the technology will allow us to solve—once and for all—every other problem that we have, or it will wipe us out and there will be nobody left to have problems anymore. “It’s a mystery to me that anyone would really focus on anything else if they actually understood the problem,” she told me.

And yet despite that urgency, the atmosphere at the march was pleasant, even fun. There was no sense of anger and little sense that lives—let alone the survival of our species—were at stake. That could be down to the broad range of interests and demands that protesters brought with them.

A chemistry researcher I met ticked off a litany of complaints, which ranged from the conspiracy-adjacent (that data centers emit infrasound below the threshold of human hearing, inducing paranoia in people who live near them) to the reasonable (that the spread of AI slop online is making it hard to find reliable academic sources). The researcher’s solution was to make it illegal for companies to profit from the technology: “If you couldn’t make money from AI, it wouldn’t be such a problem.”

Most people I spoke to agreed that technology companies probably wouldn’t take any notice of this kind of protest. “I don’t think that the pressure on companies will ever work,” Maxime Fournes, the global head of Pause AI, told me when I bumped into him at the march. “They are optimized to just not care about this problem.”

But Fournes, who worked in the AI industry for 12 years before joining Pause AI, thinks he can make it harder for those companies. “We can slow down the race by creating protection for whistleblowers or showing the public that working in AI is not a sexy job, that actually it’s a terrible job—you can dry up the talent pipeline.”

In general, most protesters hoped to make as many people as possible aware of the issues and to use that publicity to push for government regulation. The organizers had pitched the march as a social event, encouraging anyone curious about the cause to come along.

It seemed to have worked. I met a man who worked in finance who had tagged along with his roommate. I asked why he was there. “Sometimes you don’t have that much to do on a Saturday anyway,” he said. “If you can see the logic of the argument, if it sort of makes sense to you, then it’s like ‘Yeah, sure, I’ll come along.’”

He thought raising concerns around AI was hard for anyone to fully oppose. It’s not like a pro-Palestine protest, he said, where you’d have people who might disagree with the cause. “With this, I feel like it’s very hard for someone to totally oppose what you’re marching for.”

After winding its way through King’s Cross, the march ended in a church hall in Bloomsbury, where tables and chairs had been set up in rows. The protesters wrote their names on stickers, stuck them to their chests, and made awkward introductions to their neighbors. They were here to figure out how to save the world. But I had a train to catch, and I left them to it. 

The Download: protesting AI, and what’s floating in space

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

I checked out one of the biggest anti-AI protests ever

Pull the plug! Pull the plug! Stop the slop! Stop the slop! For a few hours this Saturday, February 28, I watched as a couple hundred anti-AI protesters marched through London’s King’s Cross tech hub, home to the UK headquarters of OpenAI, Meta and Google DeepMind, chanting slogans and waving signs. The march was organized by a coalition of two separate activist groups, Pause AI and Pull the Plug, who billed it as the largest protest of its kind yet.

This is all familiar stuff. Researchers have been calling out the harms, both real and hypothetical, caused by generative AI—especially models such as OpenAI’s ChatGPT and Google DeepMind’s Gemini—for years. What’s changed is that those concerns are now being taken up by protest movements that can rally significant crowds of people to take to the streets and shout about them. Read the full story.

—Will Douglas Heaven

We’re putting more stuff into space than ever. Here’s what’s up there.

Earth’s a medium-size rock with some water on top, enveloped by gases that keep everything that lives here alive. Just at the edge of that envelope begins a thin but dense layer of human-built, high-tech stuff.

People started putting gear up there in 1957, and now it’s a real habit. Telescopes look up and out at the wild universe. Humans live in an orbiting metal bubble. In the last five years, the number of active satellites in space has increased from barely 3,000 to about 14,000—and climbing. And then there’s the garbage. Here’s a closer look at Earth’s ever-thickening shell of human-made matter—the anthroposphere.

—Jonathan O’Callaghan

This story is from the latest print issue of MIT Technology Review magazine. If you haven’t already, subscribe now to receive future issues once they land. 

MIT Technology Review is a 2026 ASME finalist in reporting

The American Society of Magazine Editors has named MIT Technology Review as a finalist for a 2026 National Magazine Award in the reporting category. 

The shortlisted story—“We did the math on AI’s energy footprint. Here’s the story you haven’t heard”—is part of our Power Hungry package on AI’s energy burden. 

In a rigorous investigation, senior AI reporter James O’Donnell and senior climate reporter Casey Crownhart spent six months digging through hundreds of pages of reports, interviewing experts, and crunching the numbers. Read more about what they found out.

What comes after the LLMs?

The AI industry is organized around LLMs: tools, products, and business models. Yet many researchers believe the next breakthroughs may not look like language models at all. Join us for a LinkedIn Live discussion at 12:30 p.m. ET on Tuesday, March 3 to dive into the emerging directions that could define AI’s next era. Register here!

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The Pentagon wanted Anthropic to analyze bulk data collected from Americans 
It proved the sticking point in talks as OpenAI swooped in to ink a new deal. (The Atlantic $)
+ Anthropic has vowed to legally challenge its “security risk” label. (FT $)
+ Here’s a blow-by-blow look at how negotiations fell apart. (NYT $)
+ Downloads of Claude are on the up. (TechCrunch)

2 Iranian apps and websites were hacked in the wake of the US-Israeli strikes
News sites and a religious app were co-opted to display anti-military messages. (Reuters)
+ They urged personnel to abandon the regime and to liberate the country. (WSJ $)
+ Unsurprisingly, X is rife with disinformation about the attacks. (Wired $)
+ The campaign has disrupted online delivery orders across the Middle East. (Bloomberg $)

3 DeepSeek is poised to release a new AI model this week
The multimodal V4 is being released ahead of China’s annual parliamentary meetings. (FT $)

4 The UK is trialing a social media ban for under-16s
Hundreds of teens will test overnight digital curfews and screen time limits. (The Guardian)
+ What it’s like to attend a phone addiction meeting. (Boston Globe $)

5 Celebrities are winning huge sums playing on this major crypto casino’s slots
In fact, their lucky wins appear to spike while they’re livestreaming. (Bloomberg $)

6 America is desperate to steal China’s critical mineral lead
The victor essentially controls global computing, aerospace and defense. (Economist $)
+ This rare earth metal shows us the future of our planet’s resources. (MIT Technology Review)

7 How lasers became the military’s weapon of choice
From Ukraine to the US, soldiers are deploying laser guns. But why? (The Atlantic $)
+ They’re a key part of America’s arsenal in manning the southern border. (New Yorker $)
+ This giant microwave may change the future of war. (MIT Technology Review)

8 How quantum entanglement became big business
It promises unhackable communication—but is it too good to be true? (New Scientist $)
+ Useful quantum computing is inevitable—and increasingly imminent. (MIT Technology Review)

9 The iPod is proving a hit among Gen Z
Even though Apple discontinued the music player four years ago. (NYT $)

10 Chinese parents are joining matchmaking apps in their droves
In a bid to marry off their adult children as soon as humanly possible. (Nikkei Asia)

Quote of the day

“Day to day it just feels untenable…Some managers know this is the case, but executives just keep pointing to some bigger AI picture.”

—An anonymous Amazon employee describes to the Financial Times the stresses of trying to increase productivity amid the company’s commitment to reducing headcount.

One more thing

The iPad was meant to revolutionize accessibility. What happened?

On April 3, 2010, Steve Jobs debuted the iPad. What for most people was basically a more convenient form factor was something far more consequential for non-speakers: a life-changing revolution in access to a portable, powerful communication device for just a few hundred dollars.

But a piece of hardware, however impressively designed and engineered, is only as valuable as what a person can do with it. After the iPad’s release, the flood of new, easy-to-use augmentative and alternative communication apps that users were in desperate need of never came.

Today, there are only around half a dozen apps, each retailing for $200 to $300, that ask users to select from menus of crudely drawn icons to produce text and synthesized speech. It’s a depressingly slow pace of development for such an essential human function. Read the full story.

—Julie Kim

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Neanderthal by name, not by nature—these prehistoric men were surprisingly romantic, thank you very much.
+ If you’re lucky enough to live in Boston, make sure you swing by these beautiful bars.
+ Hmm, this sticky hoisin sausage traybake sounds intriguing.
+ George Takei, you are an absolute maverick.

OpenAI’s “compromise” with the Pentagon is what Anthropic feared

On February 28, OpenAI announced it had reached a deal that will allow the US military to use its technologies in classified settings. CEO Sam Altman said the negotiations, which the company began pursuing only after the Pentagon’s public reprimand of Anthropic, were “definitely rushed.”

In its announcements, OpenAI took great pains to say that it had not caved to allow the Pentagon to do whatever it wanted with its technology. The company published a blog post explaining that its agreement protected against use for autonomous weapons and mass domestic surveillance, and Altman said the company did not simply accept the same terms that Anthropic refused. 

You could read this to say that OpenAI won both the contract and the moral high ground, but reading between the lines and the legalese makes something else clear: Anthropic pursued a moral approach that won it many supporters but failed, while OpenAI pursued a pragmatic and legal approach that is ultimately softer on the Pentagon. 

It’s not yet clear if OpenAI can build in the safety precautions it promises as the military rushes out a politicized AI strategy during strikes on Iran, or if the deal will be seen as good enough by employees who wanted the company to take a harder line. Walking that tightrope will be tricky. (OpenAI did not immediately respond to requests for additional information about its agreement.)

But the devil is also in the details. The reason OpenAI was able to make a deal when Anthropic could not was less about boundaries, Altman said, than about approach. “Anthropic seemed more focused on specific prohibitions in the contract, rather than citing applicable laws, which we felt comfortable with,” he wrote.

OpenAI says one basis for its willingness to work with the Pentagon is simply an assumption that the government won’t break the law. The company, which has shared a limited excerpt of its contract, cites a number of laws and policies related to autonomous weapons and surveillance. They are as specific as a 2023 directive from the Pentagon on autonomous weapons (which does not prohibit them but issues guidelines for their design and testing) and as broad as the Fourth Amendment, which has supported protections for Americans against mass surveillance. 

However, the published excerpt “does not give OpenAI an Anthropic-style, free-standing right to prohibit otherwise-lawful government use,” wrote Jessica Tillipman, associate dean for government procurement law studies at George Washington University’s law school. It simply states that the Pentagon can’t use OpenAI’s tech to break any of those laws and policies as they’re stated today.

The whole reason Anthropic earned so many supporters in its fight—including some of OpenAI’s own employees—is that they don’t believe these rules are good enough to prevent the creation of AI-enabled autonomous weapons or mass surveillance. And an assumption that federal agencies won’t break the law is little assurance to anyone who remembers that the surveillance practices exposed by Edward Snowden had been deemed legal by internal agencies and were ruled unlawful only after drawn-out battles (not to mention the many surveillance tactics allowed under current law that AI could expand). On this front, we’ve essentially ended up back where we started: allowing the Pentagon to use its AI for any lawful use. 

OpenAI could say, as its head of national security partnerships wrote yesterday, that if you believe the government won’t follow the law, then you should also not be confident it would honor the red lines that Anthropic was proposing. But that’s not an argument against setting them. Imperfect enforcement doesn’t make constraints meaningless, and contract terms still shape behavior, oversight, and political consequences.

OpenAI claims a second line of defense. The company says it maintains control over the safety rules governing its models and will not give the military a version of its AI stripped of those safety controls. “We can embed our red lines—no mass surveillance and no directing weapons systems without human involvement—directly into model behavior,” wrote Boaz Barak, an OpenAI employee Altman deputized to speak on the issue, in a post on X.

But the company doesn’t specify how its safety rules for the military differ from its rules for normal users. Enforcement is also never perfect, and it is especially unlikely to be when OpenAI is rolling out these protections in a classified setting for the first time and is expected to do so in just six months.

There’s another question beneath all this: Should it be down to tech companies to prohibit things that are legal but that they find morally objectionable? The government certainly viewed Anthropic’s willingness to play this role as unacceptable. On Friday evening, eight hours before the US launched strikes in Tehran, Defense Secretary Pete Hegseth issued harsh remarks on X. “Anthropic delivered a master class in arrogance and betrayal,” he wrote, and echoed President Trump’s order for the government to cease working with the AI company after Anthropic sought to keep its model Claude from being used for autonomous weapons or mass domestic surveillance. “The Department of War must have full, unrestricted access to Anthropic’s models for every LAWFUL purpose,” Hegseth wrote.

But unless OpenAI’s full contract reveals more, it’s hard not to see the company as sitting on an ideological seesaw, promising that it does have leverage it will proudly use to do what it sees as the right thing while deferring to the law as the main backstop for what the Pentagon can do with its tech.

There are three things to watch here. One is whether this position will be good enough for OpenAI’s most critical employees. With AI companies spending so heavily on talent, it’s possible that some at OpenAI see in Altman’s justification an unforgivable compromise.

Second, there is the scorched-earth campaign that Hegseth has promised to wage against Anthropic. Going far beyond simply canceling the government’s contract with the company, he announced that it would be classified as a supply chain risk, and that “no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.” There is significant debate about whether this death blow is legally possible, and Anthropic has said it will sue if the threat is pursued. OpenAI has also come out against the move.

Lastly, how will the Pentagon swap out Claude—the only AI model it actively uses in classified operations, including some in Venezuela—while it escalates strikes against Iran? Hegseth granted the agency six months to do so, during which the military will phase in OpenAI’s models as well as those from Elon Musk’s xAI.

But Claude was reportedly used in the strikes on Iran hours after the ban was issued, suggesting that a phase-out will be anything but simple. Even if the months-long feud between Anthropic and the Pentagon is over (which I doubt it is), we are now seeing the Pentagon’s AI acceleration plan put pressure on companies to relinquish lines in the sand they had once drawn, with new tensions in the Middle East as the primary testing ground.

If you have information to share about how this is unfolding, reach out to me via Signal (username: jamesodonnell.22).

AI Bots Don’t Need Markdown Pages

Markdown is a lightweight, text-only language easily readable by both humans and machines. One of the newest search visibility tactics is to serve a Markdown version of web pages to generative AI bots. The aim is to reduce the crawl resources bots need to fetch the content, thereby encouraging them to access the page.

I’ve seen isolated tests by search optimizers showing an increase in visits from AI bots after serving Markdown, although none translated into better visibility. A few off-the-shelf tools, such as Cloudflare’s, make implementing Markdown easier.

Serving separate versions of a page to people and bots is not new. Called “cloaking,” the tactic has long been considered spam under Google’s Search Central guidelines.

The AI scenario is different, however, because it’s not an attempt to manipulate algorithms but rather to make it easier for bots to access and read a page.
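Mechanically, the tactic is simple content negotiation on the request’s user agent. Here is a minimal sketch; the bot tokens listed are examples of real AI crawler names, but any production list would need ongoing maintenance, and user-agent strings can be spoofed.

```python
# Sketch of user-agent-based content negotiation for AI crawlers.
# Bot tokens are illustrative; real deployments should keep the list
# current and verify crawlers, since user agents are easily spoofed.

AI_BOT_TOKENS = ("gptbot", "claudebot", "perplexitybot", "ccbot")

def wants_markdown(user_agent: str) -> bool:
    """Return True if the user agent looks like a known AI crawler."""
    ua = user_agent.lower()
    return any(token in ua for token in AI_BOT_TOKENS)

def serve(user_agent: str, html_body: str, markdown_body: str) -> str:
    """Return the Markdown variant to AI bots, HTML to everyone else."""
    return markdown_body if wants_markdown(user_agent) else html_body

print(serve("Mozilla/5.0 (compatible; GPTBot/1.2)", "<h1>Hi</h1>", "# Hi"))
print(serve("Mozilla/5.0 (Windows NT 10.0)", "<h1>Hi</h1>", "# Hi"))
```

This is exactly the dual-version setup the reasons below argue against: the Markdown branch is a second rendering of the page that must now be maintained in parallel with the HTML one.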

Effective?

That doesn’t make the tactic effective, however. Think carefully before implementing it, for the following reasons.

  • Functionality. The Markdown version of a page may not function correctly. Buttons, in particular, could fail.
  • Architecture. Markdown pages can lose essential elements, such as a footer, header, internal links (“related products”), and user-generated reviews via third-party providers. The effect is to remove critical context, which serves as a trust signal for large language models.
  • Abuse. If the Markdown tactic becomes mainstream, sites will inevitably inject unique product data, instructions, or other elements for AI bots only.

Creating unique pages for bots often dilutes essential signals, such as link authority and branding. A much better approach has always been to create sites that are equally friendly to humans and bots.

Moreover, a goal of LLM agents is to interact with the web as humans do. Serving them a different version accomplishes nothing.

Representatives of Google and Bing echoed this sentiment a few weeks ago. John Mueller is Google’s senior search analyst:

LLMs have trained on – read & parsed – normal web pages since the beginning, it seems a given that they have no problems dealing with HTML. Why would they want to see a page that no user sees?

Fabrice Canel is Bing’s principal product manager:

… really want to double crawl load? We’ll crawl anyway to check similarity. Non-user versions (crawlable AJAX and like) are often neglected, broken. Human eyes help fix people- and bot-viewed content.

5 Content Marketing Ideas for April 2026

April 2026 offers ecommerce content marketers many potential topics, from nostalgia to beer.

Content marketing is the practice of creating, publishing, and promoting articles, videos, podcasts, and similar media to attract, engage, and retain customers.

Here are five content marketing ideas your business can use in April 2026.

Apple Turns 50

Photo of one of the first Apple computers in a wooden case

An early Apple computer. Photo: Ed Uthman.

On April 1, 1976, Steve Jobs and Steve Wozniak founded Apple Computer in a garage. Fifty years later, Apple’s influence spans personal computing, mobile, and even culture. It is one of the most recognized brands in the world.

Many news outlets will publish Apple retrospectives, Steve Jobs mini-biographies, and similar pieces this April. It’s an opportunity for content marketers, too.

  • Electronics retailers could publish Apple-related listicles, for example, “10 Best Products Apple Produced — and the 5 Worst.”
  • Lifestyle or apparel brands could lean into nostalgia. A post recalling the first Mac, iPod, or iPhone may resonate with shoppers who grew up alongside those devices.
  • B2B merchants could frame Apple’s history as a case study in product innovation, branding, and ecosystem building.

National Burrito Day

Photo of two burritos wrapped in foil

From humble Mexican origins, the burrito is a staple worldwide.

National Burrito Day falls on the first Thursday in April (the 2nd this year). The pseudo-holiday celebrates the familiar and adaptable Mexican dish.

The burrito (“little donkey”) likely originated in Mexico’s Sonora or Chihuahua regions. Wheat flour tortillas made it easy to wrap beans, meat, or potatoes into a portable meal for laborers.

Those same workers carried the compact and meaty wrap with them to the United States in the early 1900s. By 1930, El Cholo Spanish Café in Los Angeles had added the burrito to its menu, reportedly the first restaurant to do so.

In America, the burrito continued to evolve. In 1956, then 19-year-old Duane R. Roberts invented the first frozen burrito. In 1961, another L.A. restaurant, El Faro, invented the massive, foil-wrapped “mission style” burrito. There is also the “California burrito” stuffed with carne asada, French fries, cheese, guacamole, and sour cream.

For content, the adaptable burrito fits a variety of merchants.

  • Online grocer. “Regional Guide to America’s Favorite Burritos.”
  • Meal subscription brand. “Build-Your-Own Burrito for National Burrito Day.”
  • Workwear retailer. “10 Best Job Site Burritos.”
  • Kitchenware merchant. “How to Warm, Fold, and Wrap a Burrito.”

Google Discover Experiment

Photo of a Google Discover page on a smartphone

Google Discover displays content based on user interests rather than explicit queries.

Google Discover is a relatively new source of site traffic. Unlike traditional search, which responds to explicit queries, Discover pushes content based on users’ interests and behavior.

It favors fresh content with strong visuals, topical authority, and user engagement. Hence, merchants may focus on being a trusted source for their niche.

In April, consider running a series of Discover optimization tests. Marketers can visit their Search Console account and download the list of Discover-referred pages (if any). Then ask an AI model such as ChatGPT or Gemini to analyze the articles. Was there a common topic? A recognizable pattern?
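Before handing the list to an AI model, a quick script can surface the obvious patterns. Here is a minimal sketch that tallies the first path segment of each Discover-referred URL to find the site sections Discover favors; the example URLs are hypothetical stand-ins for a Search Console export.

```python
from collections import Counter
from urllib.parse import urlparse

def top_sections(urls, n=3):
    """Count the first path segment of each URL to surface common content sections."""
    sections = Counter()
    for url in urls:
        path = urlparse(url).path.strip("/")
        if path:
            sections[path.split("/")[0]] += 1
    return sections.most_common(n)

# Hypothetical Discover-referred pages exported from Search Console.
pages = [
    "https://example.com/recipes/breakfast-burritos",
    "https://example.com/recipes/craft-beer-pairings",
    "https://example.com/guides/zipper-repair",
]
print(top_sections(pages))  # → [('recipes', 2), ('guides', 1)]
```

A skew toward one section is a starting hypothesis — e.g., “Discover rewards our recipe content” — that an AI model or a follow-up test can then probe.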

National Beer Day

Photos of various mugs and bottles containing beer

The popularity of beer spans styles, regions, and traditions.

National Beer Day on April 7 commemorates the Cullen-Harrison Act of 1933, which legalized the sale of beer containing 3.2% alcohol by weight and signaled the end of Prohibition.

Since 1933, beer has become one of America’s most popular beverages.

Beyond volume, beer is an expression of identity and taste. From light lagers to high-alcohol craft IPAs, beer culture spans sports, travel, food, and lifestyle. That breadth makes National Beer Day a versatile content hook for ecommerce content marketers.

Here are some examples.

  • Fitness gear merchant. “Low-Alcohol Beers for Active Lifestyles.”
  • Men’s heritage apparel brand. “Gentleman’s Guide to Classic Beers.”
  • Luggage retailer. “Guide to Germany’s Most Iconic Beers.”
  • Outdoor equipment store. “Craft Beer Pairings for Spring Adventures.”

Each approach connects beer culture to the merchant’s audience and product set.

National Zipper Day

Photo of a zipper on an article of clothing

The zipper transformed apparel, luggage, and outdoor gear.

On April 29, 1913, American engineer Gideon Sundback patented his “Hookless Fastener No. 2” — the first modern zipper.

The zipper may seem mundane, but it represents a significant innovation. Developed for clothing, the zipper eventually found applications in luggage, boots, tents, and more. Like many great product innovations, it solved a practical problem with elegant engineering.

Content marketers with relevant products can use National Zipper Day as an opportunity to spotlight craftsmanship and materials.

AI is rewiring how the world’s best Go players think

Tucked into the alleys of Hongik-dong, a hushed residential neighborhood in eastern Seoul, is a faded stone-tiled building stamped “Korea Baduk Association,” the governing body for professional Go. The game is an ancient one, with sacred stature in South Korea.

But inside the building, rooms once filled with the soft clatter of hands dipping into wooden bowls of stones now echo with mouse clicks. Players hunch over their monitors and replay their matches in an AI program. Others huddle around a Go board and debate the best next move, while coaches report how their choices stack up against the AI’s. Some sit in silence, watching AI programs play against each other. 

Ten years ago AlphaGo, Google DeepMind’s AI program, stunned the world by defeating the South Korean Go player Lee Sedol. And in the years since, AI has upended the game. It’s overturned centuries-old principles about the best moves and introduced entirely new ones. Players now train to replicate AI’s moves as closely as they can rather than inventing their own, even when the machine’s thinking remains mysterious to them. Today, it is essentially impossible to compete professionally without using AI. Some say the technology has drained the game of its creativity, while others think there is still room for human invention. Meanwhile, AI is democratizing access to training, and more female players are climbing the ranks as a result. 

For Shin Jin-seo, the top-ranked Go player in the world, AI is an invaluable training partner. Every morning, he sits at his computer and opens a program called KataGo. Nicknamed “Shintelligence” for how closely his moves mimic AI’s, he traces the glowing “blue spot” that represents the program’s suggestion for the best next move, rearranging the stones on the digital grid to try to understand the machine’s thinking. “I constantly think about why AI chose a move,” he says.

When training for a match, Shin spends most of his waking hours poring over KataGo. “It’s almost like an ascetic practice,” he says. According to a study in 2022 by the Korean Baduk League, Shin’s moves match AI’s 37.5% of the time, well above the 28.5% average the study found among all players.
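The match rate cited above is a simple statistic: the fraction of a player’s moves that coincide with the engine’s top suggestion. A minimal sketch of how such a figure could be computed, using hypothetical move lists in standard Go coordinates:

```python
def ai_match_rate(player_moves, ai_moves):
    """Fraction of moves where the player's choice equals the AI's top suggestion."""
    matches = sum(p == a for p, a in zip(player_moves, ai_moves))
    return matches / len(player_moves)

# Hypothetical moves; a real analysis would compare a full game record
# against an engine's move-by-move recommendations.
player = ["Q16", "D4", "C16", "R4", "P17", "C6", "F3", "Q10"]
ai_top = ["Q16", "D4", "D17", "R4", "P17", "C7", "F3", "R10"]
print(ai_match_rate(player, ai_top))  # → 0.625
```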

“My game has changed a lot,” says Shin, “because I have to follow the directions suggested by AI to some extent.” The Korea Baduk Association says it has reached out to Google DeepMind in the hopes of arranging a match between Shin and AlphaGo, to commemorate the 10th anniversary of its victory over Lee. A spokesperson for Google DeepMind said the company could not provide information at this time. But if a new match does happen, Shin, who has trained on more advanced AI programs, is optimistic that he’d win. “AlphaGo still had some flaws then, so I think I could beat it if I target those weaknesses,” he says.

AI rewrites the Go playbook

Go is an abstract strategy board game invented in China more than 2,500 years ago. Two players take turns placing black and white stones on a 19×19 grid, aiming to surround territory and capture their opponent’s stones. It’s a game of striking mathematical complexity. The number of possible board configurations—roughly 10¹⁷⁰—dwarfs the number of atoms in the universe. If chess is a battle, Go is a war. You suffocate your enemy in one corner while fending off an invasion in another.
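That order of magnitude is easy to check with back-of-envelope arithmetic: each of the 19×19 = 361 intersections can be empty, black, or white, so 3³⁶¹ is a loose upper bound on configurations (the count of legal positions, roughly 2×10¹⁷⁰, is somewhat smaller).

```python
from math import log10

# Each of the 361 intersections is empty, black, or white: 3**361 raw
# configurations. Its base-10 logarithm gives the number of digits.
upper_bound_digits = 361 * log10(3)
print(round(upper_bound_digits))  # → 172
```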

To train AI to play Go, a vast trove of human Go moves is fed into a neural network, a computing system that mimics the web of neurons in the human brain. AlphaGo, which was later christened AlphaGo Lee after its victory over Lee Sedol, was trained on 30 million Go moves and refined by playing millions of games against itself. In 2017, its successor, AlphaGo Zero, picked up Go from scratch. Without studying any human games, it learned by playing against itself, with moves based only on the rules of the game. The blank-slate approach proved more powerful, unconstrained by the limits of human knowledge. After three days of training, it beat AlphaGo Lee 100 games to zero.

Google DeepMind retired AlphaGo that same year. But then a wave of open-source models inspired by AlphaGo Zero emerged. Today, KataGo is the program most widely used by professional Go players in South Korea. It’s faster and sharper than AlphaGo. It’s learned to predict not just who will win, but also who owns each point on the board at any given moment. While AlphaGo Zero pieced together its understanding of the board by looking at small sections, KataGo learned to read the whole board, developing better judgment for long-term strategies. Instead of just learning how to win, it learned to maximize its score.

The software has reshaped how people play. For hundreds of years, professional Go players have navigated the game’s astronomical complexity by developing heuristics that replaced brute calculation. Elegant opening strategies imposed abstract order on the empty grid. Invading corners early was a bad bargain. Each generation of Go players added new principles to the canon. 

But “AI has changed everything,” says Park Jeong-sang, a South Korean Go commentator. “Fundamental moves that were once considered common sense aren’t played at all today, and techniques that didn’t exist before have become popular.” 

The starkest shift has been in opening moves. Go starts on a blank grid, and the first 50 moves were canvases for abstract thinking and creativity, where players etched their personalities and philosophies. Lee Sedol fashioned provocative moves that invited chaos. Ke Jie, a Chinese player who was defeated by AlphaGo Master in 2017, dazzled with agile, imaginative moves. Now, players memorize the same strain of efficient, calculated opening moves suggested by AI. The crux of the game has shifted to the middle moves, where raw calculation matters more than creativity.

Training with AI has led to a homogenization of playing styles. Ke Jie has lamented the strain of watching the same opening moves recycled endlessly. “I feel the exact same way as the fans watching. It’s very tiring and painful to watch,” he told a Chinese news outlet in 2021. Fans revel when a player breaks from the script with offbeat moves, but those moments have become rarer. Over a third of moves by the top Go players replicate AI’s recommendations, according to a study in 2023. The first 50 moves of each game are often identical to what AI suggests, many players say. 

“Go has become a mind sport,” says Lee Sedol, who retired three years after his 2016 defeat to AlphaGo. “Before AI, we sought something greater. I learned Go as an art,” he says. “But if you copy your moves from an answer key, that’s no longer art.” 

Playing Go is no longer about charting new frontiers, some players say, but about following the dictates of a superhuman oracle. “I used to inspire fans by advancing the techniques of Go and presenting a new paradigm,” says Lee. “My reason for playing Go has vanished.”

A mysterious mind

The players who have stayed in the game are trying to reinvent their craft. But it can be hard to discern what the new principles are.

Disarmingly slight and formidably calm, Kim Chae-young, one of the top female Go players in the world, grew up learning the game from her father, who was also a professional Go player. But when AI began to reshape the game, she found herself starting over. “I needed time to abandon everything I had learned before,” says Kim, who shared her screen with me as she pointed her cursor to the blue spots suggested by KataGo. “The intuition I had built up over the years turned out to be wrong.”

As she leaned close to her monitor, her blinking screen showed the winning probabilities of each move, with no explanations. Even top players like Kim and Shin don’t understand all of AI’s moves. “It seems like it’s thinking in a higher dimension,” she says. When she tries to learn from AI, she adds, “it’s less about rationally thinking through each move, but more about developing a gut feeling—an intuition.”

Researchers are trying to discover the superhuman knowledge encoded in game-playing AI programs so that humans can learn it too. In 2024, researchers at Google DeepMind extracted new chess concepts from AlphaZero, a generalized version of AlphaGo Zero that can also play chess, and taught them to chess grandmasters using chess puzzles. The Go concepts that players have picked up from AI systems so far are “probably only a small portion of what you could potentially learn,” says Nicholas Tomlin, a computer scientist at Toyota Technological Institute at Chicago, who coauthored a study probing Go concepts encoded in AlphaGo Zero.

But extracting those lessons remains a struggle. “Top-tier players haven’t yet been able to deduce the general principles behind AI moves,” says Nam Chi-hyung, a Go professor at Myongji University. Although they can emulate AI’s moves, they have yet to glean a new paradigm for the game because its reasoning is a black box, she says. Go may be in an epistemic limbo. 

Even if AI is an opaque teacher, it’s a democratic one. It has supercharged training for female Go players, who have long been underdogs of the game. For decades, training meant studying under top male players, and the most competitive matches took place in male circles that were difficult for women to break into, says Nam. “Female players never had access to that experience,” she says. “But now they can study with AI, which has made their training environment much more favorable.” More broadly, AI has narrowed the gap between players by helping everyone perfect their opening moves.

Female players have climbed the ranks over the last few years as a result. In 2022, Choi Jeong, then the top female player in the world, became the first woman to reach the finals of a major international Go tournament. Dubbed “Girl Wrestler” for her fierce, combative style of play, she took on Shin. She lost, but the match broke new ground for women in Go. In 2024, Kim made headlines for winning the Korean Go League’s postseason playoffs. She was the only female player in the tournament. 

Training with AI has given Kim newfound confidence. Analyzing male players’ moves with AI has shattered their veneer of infallibility. “Before, I couldn’t gauge just how strong top male players were—they felt invincible. Now, I know that they make mistakes, and their moves aren’t always brilliant,” she says. “AI broke the psychological barrier.”

Go players find a new identity

Although AI has mastered Go far better than any player, fans continue to prefer watching people play. “A Go game between AI programs is not very fun for fans to watch,” says Park, the Go commentator. Such matches are too complex for fans to follow, too flawless to be thrilling, he says. 

Players can mimic AI’s opening moves, but in the middle game—where the board branches into too many possibilities to memorize—their own judgment takes over. Fans revel in watching players make mistakes and mount comebacks, exuding personality in every stone on the board. Shin’s playing style is combative but marked by machinelike poise. Kim deftly navigates the most chaotic positions on the board.

“In Go, every move is a choice you make, and your opponent responds with a choice of their own,” says Kim Dae-hui, 27, a Go fan and amateur player. “Watching that process unfold is fun.”

With fans like Kim still watching, Shin finds meaning in his game. “I can play a kind of Go that tells a story that only a human can,” he says. 

After his retirement, Lee searched for a new job where he could have an edge as a human. He started making board games, giving speeches, and teaching students at a university. “I’m looking for a new domain that I can enjoy and excel at,” he says.

But lately, he feels more hopeful for the game he left behind. “It’s every Go player’s dream to play a masterpiece game,” he says—a game of technical brilliance, with no mistakes, fought to a razor’s edge between evenly matched players. “It’s like a mirage,” Lee says, chuckling. “Maybe AI can help us play a masterpiece.” 

Shin hopes he can do that. To Shin, AI is a teacher, a companion, and a North Star. “I may be one of the strongest human players, but with AI around, I can’t be so arrogant,” he says. “AI gives me a reason to keep improving.”