Matt Mullenweg Expects WP Engine Dispute Resolution Soon via @sejournal, @martinibuster

Matt Mullenweg downplayed his dispute with WP Engine, saying it’s not as big a deal as people are making it out to be and shared that he believes it will all be over in a few months.

Matt Compares Dispute To Standing Up To A Bully

The podcast host expressed surprise at how harshly Matt went after WP Engine, saying that he had never figured Mullenweg to be the kind of person who would go after someone so hard and that it didn't fit his image of him. Matt responded that he found that funny because he actually is that guy.

The podcast host commented:

“I’ve read a lot about Matt’s work. I don’t know Matt and I’ve listened to him, he doesn’t seem like someone who would ever like insult someone and I was actually surprised that you were going as hard as you were. And I guess your perspective is like, they’re coming after everything I made or they don’t contribute, whatever. But I was actually surprised that you were you you were pissed off and I didn’t think that you would be the type of guy that would come off pissed off…”

Matt smiled as he explained that he feels obliged to stand up for WordPress, like someone standing up to a playground bully.

He explained:

“…so just like a schoolyard bully, you kind of have to stand up for yourself. So it’s kind of funny because you say you don’t think of me as doing this but actually if you look at the history of WordPress there have been maybe four or five times in the history where I had this kind of villain arc … like we had a fight to protect our principles and the sustainability and the future of WordPress.”

Matt Says People Will Forget About WP Engine Dispute

Matt compared the current WP Engine dispute with previous controversies, noting that those were eventually forgotten and predicting that the WP Engine conflict will one day be forgotten too.

Mullenweg continued:

“You know, some of these previous controversies that got mainstream media coverage, you know CNN, I had this Hot Nacho scandal in the first couple years of WordPress or the Thesis fight or the Easter Massacre of themes, like all these things I’m mentioning you probably haven’t heard of.

It used to be like half my Wikipedia page, now it’s not. Today if you go to my Wikipedia page, their PR firm has a whole paragraph about this.

I think in 5 years maybe it’ll be a sentence or not even on there at all.”

Mullenweg Downplays WP Engine Dispute

Matt sought to portray WP Engine as a smaller company than people assume and to suggest that people are making a bigger deal of the dispute than it warrants.

He said:

“And they’re a web host which people think is the largest but actually you know probably the sixth or seventh largest WordPress web host. There’s a lot of bigger ones and they’re a single digit percentage of all the WordPresses in the world. They probably have like 700,000 800,000 or something.

People have made this into a bigger deal than it really is.”

Mullenweg Expects Fight To Be Over In Months

Lastly, Mullenweg said he felt it was his duty to stand up and fight, and that he expected the WP Engine dispute to be behind him within a few months, although he acknowledged that many people are angry.

The claim that the dispute will be over within a few months is startling because it suggests either that something is happening behind the scenes or that he simply expects to prevail. Mullenweg didn't explain what he meant, and the podcast hosts didn't ask him to elaborate.

Mullenweg said,

“So it’s not my first rodeo. Sometimes you have to fight to protect your open source ideals and the community and and your trademark.

By the way, I expect this to resolve in the next few months. Although it’s easy to find like, if you go on Reddit or Twitter, I get a lot of hate.”

At this point Matt explained the conflict from his point of view, painting himself as a victim who was forced onto the attack and narrating a sequence of events that differs from how most people experienced it. He cast WP Engine as the aggressor and characterized his public rebuke of WP Engine at WordCamp as a “presentation.”

Mullenweg explained:

“Some of the people are uncomfortable with you know us having to to fight protect ourselves. You know WP Engine took some, a very aggressive legal action. So it turned out when we thought we were sort of good faith negotiating they were preparing a legal case to attack us because you know 3 days after I give this presentation they launched this huge lawsuit with Quinn Emanuel it’s kind of like the one of the biggest nastiest law firms.”

Where Were The Hard Questions?

One of the podcast hosts solicited questions for Matt Mullenweg from the WordPress communities on Reddit and Twitter. The community responded with many questions, but the hosts largely refrained from asking them; to be fair, the submitted questions were hard-hitting and inherently presupposed things about Mullenweg.


Featured Image by Shutterstock/supercaps

5 Content Marketing Ideas for March 2025

Content such as articles, videos, and podcasts can be the building blocks of modern consumer engagement. Content drives direct traffic, search engine rankings, social media, and AI tools.

In March 2025, ecommerce content marketers could focus on sustainability, compliments, baseball, agricultural heritage, and a spring cleaning challenge.

Go Green

Marketers can use the green of St. Patrick’s Day to focus on sustainability.

While March often brings thoughts of St. Patrick’s Day shamrocks and emerald hues, ecommerce businesses can expand the “green” theme to highlight sustainability practices and eco-friendly products.

A marketer can transform a playful holiday into meaningful content that resonates with conservation-conscious consumers.

Consider publishing blog posts or videos describing your business’s efforts to consume less and conserve more.

  • A beauty supply brand could showcase its transition to plastic-free packaging.
  • A direct-to-consumer outdoor gear company might detail its use of recycled materials.
  • A home goods store could demonstrate how it has reduced shipping waste.
  • A fashion retailer might explain its clothing recycling program.

The content becomes even better with metrics and achievements. Share actual numbers about recycled-material use or reduced packaging. These details help shoppers understand the real impact of their purchasing decisions.

A shop could create content that encourages sustainable practices, such as:

  • A kitchenware store could publish guides about reducing food waste.
  • An electronics retailer might offer tips for extending a device’s life.
  • A garden supply company could create content about water conservation.
  • A home decor business could share upcycling ideas for its products.

World Compliment Day

World Compliment Day is an opportunity to recognize customers, employees, and suppliers.

March 1, 2025, is World Compliment Day, an opportunity to create uplifting and entertaining content.

Unlike many commercial observances focusing on gift-giving, World Compliment Day celebrates the power of sincere appreciation — perfect for authentic engagement.

I see four angles a content marketer could take:

  • Customer appreciation. Short-form videos or blog posts on “what we love about our customers.”
  • Employee appreciation. Profile key personnel and tell the brand’s story from their perspective.
  • Supplier appreciation. Recognize top suppliers in blog posts or podcasts. Mr. Porter, the men’s fashion shop, used to run articles featuring quality inventory vendors.
  • Encourage compliments. Run social campaigns as part of a contest or discount to encourage customers to compliment others.

Spring Training


Spring training is an American baseball tradition, but “training” can apply to everyone.

Major League Baseball spring training in North America starts in February and runs through March 25. Teams head to Arizona, California, and Florida to prepare for the regular season.

Ecommerce businesses can tap into the nostalgic and hopeful spirit of baseball’s preseason. The annual tradition marks more than just the return of America’s pastime – it represents renewal, preparation, and the anticipation of warmer days ahead.

Content marketers could take a few angles with spring training, including a “Spring Training for Everyone” theme. The idea is to apply “spring training” to a shop’s customers and products.

  • Fitness retailers could create “Spring Training for Everyone” workout guides.
  • Kitchen supply shops could have “Spring Training Recipes” focused on nutrition and weight loss.
  • Outdoor furniture sellers could offer “Spring Patio” content.
  • Garden supply stores might publish “Spring Planting Guides.”

National Agriculture Day


National Agriculture Day celebrates farming and food production.

National Agriculture Day falls on March 18, 2025, and celebrates the vital role of food production in our daily lives.

Farm supply retailers have a clear connection, but almost any business can create content aligning its products to agricultural heritage and sustainable food systems.

Here are a few example blog post titles:

  • Kitchen accessories shop: “The Chef’s Guide to Seasonal Produce”
  • DTC workwear brand: “How Farm Life Shaped Modern Fashion”
  • Pet supply retailer: “From Farm to Bowl: Understanding Pet Food Sources”
  • Travel merchant: “Top Farm Tourist Destinations for 2025”

Remember, the goal of content marketing is to entice shoppers to visit your website, engage with your brand, and ultimately become loyal customers. Never hesitate to connect your products to the topic.

Spring Cleaning Challenge


Spring cleaning may take many forms, from cleaning a home to washing a car.

My fifth content marketing idea for March 2025 is a “Spring Cleaning Challenge,” an integrated campaign of multichannel content that drives engagement while naturally showcasing products.

The approach combines education, social proof, and community building:

  • Create a 14- or 30-day cleaning and organization program.
  • Release daily or weekly task videos.
  • Offer downloadable checklists and planning guides.
  • Include before-and-after photos for social sharing.
  • Provide expert tips and sustainable cleaning methods.
  • Offer rewards or discounts for participants.

The idea applies to many businesses since “spring cleaning” could be a house, a vehicle, or a contractor’s power tools.

How measuring vaccine hesitancy could help health professionals tackle it

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

This week, Robert F. Kennedy Jr., President Donald Trump’s pick to lead the US’s health agencies, has been facing questions from senators as part of his confirmation hearing for the role. So far, it’s been a dramatic watch, with plenty of fiery exchanges, screams from audience members, and damaging revelations.

There’s also been a lot of discussion about vaccines. Kennedy has long been a vocal critic of vaccines. He has spread misinformation about the effects of vaccines. He’s petitioned the government to revoke the approval of vaccines. He’s sued pharmaceutical companies that make vaccines.

Kennedy has his supporters. But not everyone who opts not to vaccinate shares his worldview. There are lots of reasons why people don’t vaccinate themselves or their children.

Understanding those reasons will help us tackle an issue considered to be a huge global health problem today. And plenty of researchers are working on tools to do just that.

Jonathan Kantor is one of them. Kantor, who is jointly affiliated with the University of Pennsylvania in Philadelphia and the University of Oxford in the UK, has been developing a scale to measure and assess “vaccine hesitancy.”

That term is what best captures the diverse thoughts and opinions held by people who don’t get vaccinated, says Kantor. “We used to tend more toward [calling] someone … a vaccine refuser or denier,” he says. But while some people under this umbrella will be stridently opposed to vaccines for various reasons, not all of them will be. Some may be unsure or ambivalent. Some might have specific fears, perhaps about side effects or even about needle injections.

Vaccine hesitancy is shared by “a very heterogeneous group,” says Kantor. That group includes “everyone from those who have a little bit of wariness … and want a little bit more information … to those who are strongly opposed and feel that it is their mission in life to spread the gospel regarding the risks of vaccination.”

To begin understanding where individuals sit on this spectrum and why, Kantor and his colleagues scoured published research on vaccine hesitancy. They sent surveys to 50 people, asking them detailed questions about their feelings on vaccines. The researchers were looking for themes: Which issues kept cropping up?

They found that prominent concerns about vaccines tend to fall into three categories: beliefs, pain, and deliberation. Beliefs might be along the lines of “It is unhealthy for children to be vaccinated as much as they are today.” Concerns around pain center more on the immediate consequences of the vaccination, such as fears about the injection. And deliberation refers to the need some people feel to “do their own research.”

Kantor and his colleagues used their findings to develop a 13-question survey, which they trialed in 500 people from the UK and 500 more from the US. They found that responses to the questionnaire could predict whether someone had been vaccinated against covid-19.

Theirs is not the first vaccine hesitancy scale out there—similar questionnaires have been developed by others, often focusing on parents’ feelings about their children’s vaccinations. But Kantor says this is the first to incorporate the theme of deliberation—a concept that seems to have become more popular during the early days of covid-19 vaccination rollouts.

Nicole Vike at the University of Cincinnati and her colleagues are taking a different approach. They say research has suggested that how people feel about risks and rewards seems to influence whether they get vaccinated (although not necessarily in a simple or direct manner).

Vike’s team surveyed over 4,000 people to better understand this link, asking them information about themselves and how they felt about a series of pictures of sports, nature scenes, cute and aggressive animals, and so on. Using machine learning, they built a model that could predict, from these results, whether a person would be likely to get vaccinated against covid-19.

This survey could be easily distributed to thousands of people and is subtle enough that people taking it might not realize it is gathering information about their vaccine choices, Vike and her colleagues wrote in a paper describing their research. And the information collected could help public health centers understand where there is demand for vaccines, and conversely, where outbreaks of vaccine-preventable diseases might be more likely.

Models like these could be helpful in combating vaccine hesitancy, says Ashlesha Kaushik, vice president of the Iowa Chapter of the American Academy of Pediatrics. The information could enable health agencies to deliver tailored information and support to specific communities that share similar concerns, she says.

Kantor, who is a practicing physician, hopes his questionnaire could offer doctors and other health professionals insight into their patients’ concerns and suggest ways to address them. It isn’t always practical for doctors to sit down with their patients for lengthy, in-depth discussions about the merits and shortfalls of vaccines. But if a patient can spend a few minutes filling out a questionnaire before the appointment, the doctor will have a starting point for steering a respectful and fruitful conversation about the subject.

When it comes to vaccine hesitancy, we need all the insight we can get. Vaccines prevent millions of deaths every year. One and a half million children under the age of five die every year from vaccine-preventable diseases, according to the children’s charity UNICEF. In 2019, the World Health Organization included “vaccine hesitancy” on its list of 10 threats to global health.

When vaccination rates drop, we start to see outbreaks of the diseases the vaccines protect against. We’ve seen this a lot recently with measles, which is incredibly infectious. Sixteen measles outbreaks were reported in the US in 2024.

Globally, over 22 million children missed their first dose of the measles vaccine in 2023, and measles cases rose by 20%. Over 107,000 people around the world died from measles that year, according to the US Centers for Disease Control and Prevention. Most of them were children.

Vaccine hesitancy is dangerous. “It’s really creating a threatening environment for these vaccine-preventable diseases to make a comeback,” says Kaushik. 

Kantor agrees: “Anything we can do to help mitigate that, I think, is great.”


Now read the rest of The Checkup

Read more from MIT Technology Review‘s archive

In 2021, my former colleague Tanya Basu wrote a guide to having discussions about vaccines with people who are hesitant. Kindness and nonjudgmentalism will get you far, she wrote.

In December 2020, as covid-19 ran rampant around the world, doctors took to social media platforms like TikTok to allay fears around the vaccine. Sharing their personal experiences was important—but not without risk, A.W. Ohlheiser reported at the time.

Robert F. Kennedy Jr. is currently in the spotlight for his views on vaccines. But he has also spread harmful misinformation about HIV and AIDS, as Anna Merlan reported.

mRNA vaccines have played a vital role in the covid-19 pandemic, and in 2023, the researchers who pioneered the science behind them were awarded a Nobel Prize. Here’s what’s next for mRNA vaccines.

Vaccines are estimated to have averted 154 million deaths in the last 50 years. That number includes 146 million children under the age of five. That’s partly why childhood vaccines are a public health success story.

From around the web

As Robert F. Kennedy Jr.’s Senate hearing continued this week, so did the revelations of his misguided beliefs about health and vaccines. Kennedy, who has called himself “an expert on vaccines,” said in 2021 that “we should not be giving Black people the same vaccine schedule that’s given to whites, because their immune system is better than ours”—a claim that is not supported by evidence. (The Washington Post)

And in past email exchanges with his niece, a primary-care physician at NYC Health + Hospitals in New York City, RFK Jr. made repeated false claims about covid-19 vaccinations and questioned the value of annual flu vaccinations. (STAT)

Towana Looney, who became the third person to receive a gene-edited pig kidney in December, is still healthy and full of energy two months later. The milestone makes Looney the longest-living recipient of a pig organ transplant. “I’m superwoman,” she told the Associated Press. (AP)

The Trump administration’s attempt to freeze trillions of dollars in federal grants, loans, and other financial assistance programs was chaotic. Even a pause in funding for global health programs can be considered a destruction, writes Atul Gawande. (The New Yorker)

How ultraprocessed is the food in your diet? This chart can help rank food items—but won’t tell you all you need to know about how healthy they are. (Scientific American)

The Download: measuring vaccine hesitancy, and the rise of DeepSeek

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

How measuring vaccine hesitancy could help health professionals tackle it

This week, Robert F. Kennedy Jr., President Donald Trump’s pick to lead the US’s health agencies, has been facing questions from senators as part of his confirmation hearing for the role. So far, it’s been a dramatic watch, with plenty of fiery exchanges, screams from audience members, and damaging revelations.

There’s also been a lot of discussion about vaccines. Kennedy has long been a vocal critic of vaccines. He has spread misinformation about the effects of vaccines. He’s petitioned the government to revoke the approval of vaccines. He’s sued pharmaceutical companies that make vaccines.

Kennedy has his supporters. But not everyone who opts not to vaccinate shares his worldview. There are lots of reasons why people don’t vaccinate themselves or their children. Understanding those reasons will help us tackle an issue considered to be a huge global health problem today. And plenty of researchers are working on tools to do just that. Read the full story.

—Jessica Hamzelou

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

What DeepSeek’s breakout success means for AI

The tech world is abuzz over a new open-source reasoning AI model developed by DeepSeek, a Chinese startup. The company claims that this new model, called DeepSeek R1, matches or even surpasses OpenAI’s ChatGPT o1 in performance but operates at a fraction of the cost.

Its success is even more remarkable given the constraints that Chinese AI companies face due to US export controls on cutting-edge chips. DeepSeek’s approach represents a radical change in how AI gets built, and could shift the tech world’s center of gravity.

Join news editor Charlotte Jee, senior AI editor Will Douglas Heaven, and China reporter Caiwei Chen for an exclusive subscriber-only Roundtable conversation on Monday 3 February at 12pm ET discussing what DeepSeek’s breakout success means for AI and the broader tech industry. Register here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Federal workers are being forced to defend their work to Elon Musk’s acolytes
Government tech staff are being pulled into sudden meetings with students. (Wired $)
+ Archivists are rushing to save thousands of datasets being yanked offline. (404 Media)
+ Civil servants aren’t buying Musk’s promises. (Slate $)

2 The US Copyright Office says AI-assisted art can be copyrighted 
But works wholly created by AI can’t be. (AP News)
+ The AI lab waging a guerrilla war over exploitative AI. (MIT Technology Review)

3 OpenAI is partnering with US National Laboratories
Its models will be used for scientific research and nuclear weapons security. (NBC News)
+ It’s the latest move from the firm to curry favor with the US government. (Engadget)
+ OpenAI has upped its lobbying efforts nearly sevenfold. (MIT Technology Review)

4 DeepSeek’s success is inspiring founders in Africa
The startup has proved that frugality can go hand in hand with innovation. (Rest of World)
+ What Africa needs to do to become a major AI player. (MIT Technology Review)

5 China is building a massive wartime command center
The complex appears to be part of preparation for the possibility of nuclear war. (FT $)
+ Pentagon workers used DeepSeek’s chatbot for days before it was blocked. (Bloomberg $)
+ We saw a demo of the new AI system powering Anduril’s vision for war. (MIT Technology Review)

6 There’s a chance this colossal asteroid will hit Earth in 2032
Experts aren’t too worried—yet. (The Guardian)
+ How worried should we be about the end of the world? (New Yorker $)
+ Earth is probably safe from a killer asteroid for 1,000 years. (MIT Technology Review)

7 Things are looking up for Europe’s leading battery maker
Truckmaker Scania is now supporting the troubled Northvolt’s day-to-day operations. (Reuters)
+ Three takeaways about the current state of batteries. (MIT Technology Review)

8 This group of Luddite teens is still resisting technology
But three years after starting their club, the lure of dating apps is strong. (NYT $)

9 Reddit’s bastion of humanity is under threat
AI features are creeping into the forum, much to users’ chagrin. (The Atlantic $)

10 Bid a fond farewell to MiniDiscs and blank Blu-Rays
Sony is finally pulling the plug on some of its recordable media formats. (IEEE Spectrum)

Quote of the day

“We try to be really open and then everything I say leaks. It sucks.”

—Mark Zuckerberg warns that leakers will be fired in a memo that was promptly leaked, the Verge reports.

The big story

This artist is dominating AI-generated art. And he’s not happy about it.

September 2022

Greg Rutkowski is a Polish digital artist who uses classical styles to create dreamy landscapes. His distinctive style has been used in some of the world’s most popular fantasy games, including Dungeons and Dragons and Magic: The Gathering.

Now he’s become a hit in the new world of text-to-image AI generation. His name is one of the most commonly used prompts in the open-source AI art generator Stable Diffusion.

But this and other open-source programs are built by scraping images from the internet, often without permission and proper attribution to artists. And artists like Rutkowski have had enough. Read the full story.

—Melissa Heikkilä

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ It’s an oldie but a goodie: ice dancing gold medalists Tessa Virtue and Scott Moir’s routine to Moulin Rouge is simply spectacular.
+ This week marks 56 years since the Beatles performed their last ever gig on the roof of their Apple headquarters.
+ In other Beatles news, Ringo Starr has never eaten a pizza.
+ The Video Game History Foundation has opened up its incredible archive (thanks Dani!)

How DeepSeek ripped up the AI playbook—and why everyone’s going to follow its lead

Join us on Monday, February 3 as our editors discuss what DeepSeek’s breakout success means for AI and the broader tech industry. Register for this special subscriber-only session today.

When the Chinese firm DeepSeek dropped a large language model called R1 last week, it sent shock waves through the US tech industry. Not only did R1 match the best of the homegrown competition, it was built for a fraction of the cost—and given away for free. 

The US stock market lost $1 trillion, President Trump called it a wake-up call, and the hype was dialed up yet again. “DeepSeek R1 is one of the most amazing and impressive breakthroughs I’ve ever seen—and as open source, a profound gift to the world,” Silicon Valley’s kingpin investor Marc Andreessen posted on X.

But DeepSeek’s innovations are not the only takeaway here. By publishing details about how R1 and a previous model called V3 were built and releasing the models for free, DeepSeek has pulled back the curtain to reveal that reasoning models are a lot easier to build than people thought. The company has closed the gap on the world’s very top labs.

The news kicked competitors everywhere into gear. This week, the Chinese tech giant Alibaba announced a new version of its large language model Qwen and the Allen Institute for AI (AI2), a top US nonprofit lab, announced an update to its large language model Tulu. Both claim that their latest models beat DeepSeek’s equivalent.

Sam Altman, cofounder and CEO of OpenAI, called R1 impressive—for the price—but hit back with a bullish promise: “We will obviously deliver much better models.” OpenAI then pushed out ChatGPT Gov, a version of its chatbot tailored to the security needs of US government agencies, in an apparent nod to concerns that DeepSeek’s app was sending data to China. There’s more to come.

DeepSeek has suddenly become the company to beat. What exactly did it do to rattle the tech world so fully? Is the hype justified? And what can we learn from the buzz about what’s coming next? Here’s what you need to know.  

Training steps

Let’s start by unpacking how large language models are trained. There are two main stages, known as pretraining and post-training. Pretraining is the stage most people talk about. In this process, billions of documents—huge numbers of websites, books, code repositories, and more—are fed into a neural network over and over again until it learns to generate text that looks like its source material, one word at a time. What you end up with is known as a base model.
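To make the objective concrete, here is a toy, pure-Python sketch of next-word prediction: a bigram model that counts word-to-word transitions in a tiny corpus and then generates text one word at a time. Real pretraining uses a neural network over billions of documents; this counting model (and its tiny corpus) is only an illustration of the idea.

```python
from collections import Counter, defaultdict
import random

def train_bigram(corpus: str) -> dict:
    """Count word-to-next-word transitions: a toy stand-in for pretraining."""
    words = corpus.split()
    counts = defaultdict(Counter)
    for cur, nxt in zip(words, words[1:]):
        counts[cur][nxt] += 1
    return counts

def generate(model: dict, start: str, length: int = 5, seed: int = 0) -> str:
    """Sample one word at a time, weighted by the observed transition counts."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        nxt = model.get(out[-1])
        if not nxt:
            break
        words, weights = zip(*nxt.items())
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

corpus = "the model predicts the next word and the model learns the text"
model = train_bigram(corpus)
print(generate(model, "the"))
```

The output resembles the source material because the model has only ever seen those transitions, which is the "complete internet documents" behavior Karpathy describes: a base model, not an assistant.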

Pretraining is where most of the work happens, and it can cost huge amounts of money. But as Andrej Karpathy, a cofounder of OpenAI and former head of AI at Tesla, noted in a talk at Microsoft Build last year: “Base models are not assistants. They just want to complete internet documents.”

Turning a large language model into a useful tool takes a number of extra steps. This is the post-training stage, where the model learns to do specific tasks like answer questions (or answer questions step by step, as with OpenAI’s o3 and DeepSeek’s R1). The way this has been done for the last few years is to take a base model and train it to mimic examples of question-answer pairs provided by armies of human testers. This step is known as supervised fine-tuning. 

OpenAI then pioneered yet another step, in which sample answers from the model are scored—again by human testers—and those scores used to train the model to produce future answers more like those that score well and less like those that don’t. This technique, known as reinforcement learning with human feedback (RLHF), is what makes chatbots like ChatGPT so slick. RLHF is now used across the industry.

But those post-training steps take time. What DeepSeek has shown is that you can get the same results without using people at all—at least most of the time. DeepSeek replaces supervised fine-tuning and RLHF with a reinforcement-learning step that is fully automated. Instead of using human feedback to steer its models, the firm uses feedback scores produced by a computer.
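A minimal sketch of that automated loop, assuming a toy arithmetic task (the reward function, candidate answers, and update rule here are all illustrative stand-ins, not DeepSeek's actual method): the scorer is a program that checks correctness, and the "policy" shifts probability toward whatever the scorer rewards, with no human rater anywhere.

```python
import random

def reward(question, answer):
    """Automated scorer: 1.0 if the arithmetic answer is correct, else 0.0.
    A program, not a person, produces the feedback signal."""
    a, b = question
    return 1.0 if answer == a + b else 0.0

def train(steps=200, seed=0):
    rng = random.Random(seed)
    question = (2, 3)
    # Policy: unnormalized preference weight for each candidate answer.
    weights = {4: 1.0, 5: 1.0, 6: 1.0}
    for _ in range(steps):
        answers, w = zip(*weights.items())
        choice = rng.choices(answers, weights=w)[0]
        # Reinforce whichever answer the automated scorer rewards.
        weights[choice] += reward(question, choice)
    return weights

weights = train()
print(max(weights, key=weights.get))  # the correct answer, 5, accumulates all the reward
```

Only the correct answer ever earns reward, so its weight grows while the others stay flat; the same self-reinforcing dynamic, scaled up enormously, is what lets verifiable tasks like math and code be trained without human feedback.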

“Skipping or cutting down on human feedback—that’s a big thing,” says Itamar Friedman, a former research director at Alibaba and now cofounder and CEO of Qodo, an AI coding startup based in Israel. “You’re almost completely training models without humans needing to do the labor.”

Cheap labor

The downside of this approach is that computers are good at scoring answers to questions about math and code but not very good at scoring answers to open-ended or more subjective questions. That’s why R1 performs especially well on math and code tests. To train its models to answer a wider range of non-math questions or perform creative tasks, DeepSeek still has to ask people to provide the feedback. 

But even that is cheaper in China. “Relative to Western markets, the cost to create high-quality data is lower in China and there is a larger talent pool with university qualifications in math, programming, or engineering fields,” says Si Chen, a vice president at the Australian AI firm Appen and a former head of strategy at both Amazon Web Services China and the Chinese tech giant Tencent. 

DeepSeek used this approach to build a base model, called V3, that rivals OpenAI’s flagship model GPT-4o. The firm released V3 a month ago. Last week’s R1, the new model that matches OpenAI’s o1, was built on top of V3. 

To build R1, DeepSeek took V3 and ran its reinforcement-learning loop over and over again. In 2016 Google DeepMind showed that this kind of automated trial-and-error approach, with no human input, could take a board-game-playing model that made random moves and train it to beat grand masters. DeepSeek does something similar with large language models: Potential answers are treated as possible moves in a game. 

To start with, the model did not produce answers that worked through a question step by step, as DeepSeek wanted. But by scoring the model’s sample answers automatically, the training process nudged it bit by bit toward the desired behavior. 
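The shape of that automated loop can be sketched in a few lines. The snippet below is a toy illustration, not DeepSeek's code: the reward here is a crude string check against a known answer, where the real pipeline uses programmatic verifiers for math and code.

```python
def automated_reward(answer: str, expected: str) -> float:
    # A verifiable task needs no human judge: score 1.0 if the
    # final number in the answer matches the known solution.
    numbers = [t for t in answer.split() if t.isdigit()]
    return 1.0 if numbers and numbers[-1] == expected else 0.0

def training_step(samples: list[str], expected: str) -> list[float]:
    # Score each sampled answer; in real reinforcement learning these
    # scores become gradient updates that make good answers likelier.
    return [automated_reward(s, expected) for s in samples]

samples = [
    "First add 2 and 2 to get 4",   # correct final answer
    "The answer is clearly 5",      # wrong
]
print(training_step(samples, "4"))  # [1.0, 0.0]
```

Because the scoring is fully automatic, the loop can run millions of times with no human in sight, which is exactly the labor the process replaces.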

Eventually, DeepSeek produced a model that performed well on a number of benchmarks. But this model, called R1-Zero, gave answers that were hard to read and were written in a mix of multiple languages. To give it one last tweak, DeepSeek seeded the reinforcement-learning process with a small data set of example responses provided by people. Training R1-Zero on those produced the model that DeepSeek named R1. 

There’s more. To make its use of reinforcement learning as efficient as possible, DeepSeek has also developed a new algorithm called Group Relative Policy Optimization (GRPO). It first used GRPO a year ago, to build a model called DeepSeekMath. 

We’ll skip the details—you just need to know that reinforcement learning involves calculating a score to determine whether a potential move is good or bad. Many existing reinforcement-learning techniques require a whole separate model to make this calculation. In the case of large language models, that means a second model that could be as expensive to build and run as the first. Instead of using a second model to predict a score, GRPO just makes an educated guess. It’s cheap, but still accurate enough to work.  
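The group-relative part of that educated guess can be sketched as follows. This is a simplified illustration based on published descriptions of GRPO, not DeepSeek's implementation: the score for each answer is judged relative to a group of sampled answers to the same prompt, rather than by a separate learned model.

```python
from statistics import mean, stdev

def grpo_advantages(group_rewards: list[float]) -> list[float]:
    # GRPO's key trick: the baseline is the mean reward of a group
    # of sampled answers to the same prompt, normalized by the
    # group's standard deviation. No second value model is needed.
    mu = mean(group_rewards)
    sigma = stdev(group_rewards) or 1.0  # guard against identical scores
    return [(r - mu) / sigma for r in group_rewards]

# Four sampled answers to one prompt, scored automatically:
rewards = [1.0, 0.0, 1.0, 0.0]
print(grpo_advantages(rewards))  # correct answers get positive advantages
```

A real implementation would feed these advantages into a clipped policy-gradient update; the point here is only that the baseline comes from the group itself, which is what makes the method cheap.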

A common approach

DeepSeek’s use of reinforcement learning is the main innovation that the company describes in its R1 paper. But DeepSeek is not the only firm experimenting with this technique. Two weeks before R1 dropped, a team at Microsoft Asia announced a model called rStar-Math, which was trained in a similar way. “It has similarly huge leaps in performance,” says Matt Zeiler, founder and CEO of the AI firm Clarifai.

AI2’s Tulu was also built using efficient reinforcement-learning techniques (but on top of, not instead of, human-led steps like supervised fine-tuning and RLHF). And the US firm Hugging Face is racing to replicate R1 with OpenR1, a clone of DeepSeek’s model that Hugging Face hopes will expose even more of the ingredients in R1’s special sauce.

What’s more, it’s an open secret that top firms like OpenAI, Google DeepMind, and Anthropic may already be using their own versions of DeepSeek’s approach to train their new generation of models. “I’m sure they’re doing almost the exact same thing, but they’ll have their own flavor of it,” says Zeiler. 

But DeepSeek has more than one trick up its sleeve. It trained its base model V3 to do something called multi-token prediction, where the model learns to predict a string of words at once instead of one at a time. This training is cheaper and turns out to boost accuracy as well. “If you think about how you speak, when you’re halfway through a sentence, you know what the rest of the sentence is going to be,” says Zeiler. “These models should be capable of that too.”  
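The difference from ordinary next-token training can be shown with a toy example. The function name and windowing below are illustrative, not DeepSeek's code: the point is only that each position in the text is paired with the next several tokens as its training target, instead of just one.

```python
def multi_token_targets(tokens: list[str], k: int) -> list[tuple[str, list[str]]]:
    # Standard training pairs each position with one next token;
    # multi-token prediction pairs it with the next k tokens.
    pairs = []
    for i in range(len(tokens) - k):
        pairs.append((tokens[i], tokens[i + 1 : i + 1 + k]))
    return pairs

sentence = "the cat sat on the mat".split()
for context, targets in multi_token_targets(sentence, 2):
    print(context, "->", targets)
```

Each training example now carries more signal, which is one intuition for why the approach is cheaper per unit of learning.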

It has also found cheaper ways to create large data sets. To train last year’s model, DeepSeekMath, it took a free data set called Common Crawl—a huge number of documents scraped from the internet—and used an automated process to extract just the documents that included math problems. This was far cheaper than building a new data set of math problems by hand. It was also more effective: Common Crawl includes a lot more math than any other specialist math data set that’s available. 
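A heavily simplified version of that filtering step might look like the sketch below. The real pipeline used a trained classifier rather than a hand-written pattern, so everything here is illustrative: the idea is just to keep only documents that look like they contain math.

```python
import re

# Hypothetical stand-in for a learned math-document classifier:
# match arithmetic expressions, LaTeX commands, or math vocabulary.
MATH_PATTERN = re.compile(
    r"(\d+\s*[-+*/=]\s*\d+)|\\frac|\\sum|\btheorem\b", re.IGNORECASE
)

def looks_like_math(document: str) -> bool:
    return bool(MATH_PATTERN.search(document))

docs = [
    "Prove the theorem that 2 + 2 = 4.",
    "Top ten travel destinations for the summer.",
]
print([d for d in docs if looks_like_math(d)])  # keeps only the first
```

Run over billions of scraped pages, even a crude filter like this yields a math corpus far larger than anything curated by hand, which is the economics the article describes.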

And on the hardware side, DeepSeek has found new ways to juice old chips, allowing it to train top-tier models without coughing up for the latest hardware on the market. Half their innovation comes from straight engineering, says Zeiler: “They definitely have some really, really good GPU engineers on that team.”

Nvidia provides software called CUDA that engineers use to tweak the settings of their chips. But DeepSeek bypassed this code using assembler, a programming language that talks to the hardware itself, to go far beyond what Nvidia offers out of the box. “That’s as hardcore as it gets in optimizing these things,” says Zeiler. “You can do it, but basically it’s so difficult that nobody does.”

DeepSeek’s string of innovations on multiple models is impressive. But it also shows that the firm’s claim to have spent less than $6 million to train V3 is not the whole story. R1 and V3 were built on a stack of existing tech. “Maybe the very last step—the last click of the button—cost them $6 million, but the research that led up to that probably cost 10 times as much, if not more,” says Friedman. And in a blog post that cut through a lot of the hype, Anthropic cofounder and CEO Dario Amodei pointed out that DeepSeek probably has around $1 billion worth of chips, an estimate based on reports that the firm in fact used 50,000 Nvidia H100 GPUs.

A new paradigm

But why now? There are hundreds of startups around the world trying to build the next big thing. Why have we seen a string of reasoning models like OpenAI’s o1 and o3, Google DeepMind’s Gemini 2.0 Flash Thinking, and now R1 appear within weeks of each other? 

The answer is that the base models—GPT-4o, Gemini 2.0, V3—are all now good enough to have reasoning-like behavior coaxed out of them. “What R1 shows is that with a strong enough base model, reinforcement learning is sufficient to elicit reasoning from a language model without any human supervision,” says Lewis Tunstall, a scientist at Hugging Face.

In other words, top US firms may have figured out how to do it but were keeping quiet. “It seems that there’s a clever way of taking your base model, your pretrained model, and turning it into a much more capable reasoning model,” says Zeiler. “And up to this point, the procedure that was required for converting a pretrained model into a reasoning model wasn’t well known. It wasn’t public.”

What’s different about R1 is that DeepSeek published how they did it. “And it turns out that it’s not that expensive a process,” says Zeiler. “The hard part is getting that pretrained model in the first place.” As Karpathy revealed at Microsoft Build last year, pretraining a model represents 99% of the work and most of the cost. 

If building reasoning models is not as hard as people thought, we can expect a proliferation of free models that are far more capable than we’ve yet seen. With the know-how out in the open, Friedman thinks, there will be more collaboration between small companies, blunting the edge that the biggest companies have enjoyed. “I think this could be a monumental moment,” he says. 

OpenAI releases its new o3-mini reasoning model for free

On Thursday, Microsoft announced that it’s rolling OpenAI’s reasoning model o1 out to its Copilot users, and now OpenAI is releasing a new reasoning model, o3-mini, to people who use the free version of ChatGPT. This will mark the first time that the vast majority of people will have access to one of OpenAI’s reasoning models, which were formerly restricted to its paid Pro and Plus bundles.

Reasoning models use a “chain of thought” technique to generate responses, essentially working through a problem presented to the model step by step. Using this method, the model can find mistakes in its process and correct them before giving an answer. This typically results in more thorough and accurate responses, but it also causes the models to pause before answering, sometimes leading to lengthy wait times. OpenAI claims that o3-mini responds 24% faster than o1-mini.

These types of models are most effective at solving complex problems, so if you have any PhD-level math problems you’re cracking away at, you can try them out. Alternatively, if you’ve had issues with getting previous models to respond properly to your most advanced prompts, you may want to try out this new reasoning model on them. To try out o3-mini, simply select “Reason” when you start a new prompt on ChatGPT.

Although reasoning models possess new capabilities, they come at a cost. OpenAI’s o1-mini is 20 times more expensive to run than its equivalent non-reasoning model, GPT-4o mini. The company says its new model, o3-mini, costs 63% less than o1-mini per input token. However, at $1.10 per million input tokens, it is still about seven times more expensive to run than GPT-4o mini.
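A quick back-of-the-envelope check shows how those figures fit together. Only the $1.10 price and the two ratios come from the reporting; the derived prices below are implied by them, not official list prices.

```python
# Back-of-the-envelope check of the quoted pricing figures.
o3_mini_per_m_input = 1.10  # dollars per million input tokens

# "63% less than o1-mini" implies an o1-mini price of:
implied_o1_mini = o3_mini_per_m_input / (1 - 0.63)

# "about seven times more expensive than GPT-4o mini" implies:
implied_gpt4o_mini = o3_mini_per_m_input / 7

print(f"implied o1-mini:     ${implied_o1_mini:.2f} per million input tokens")
print(f"implied GPT-4o mini: ${implied_gpt4o_mini:.3f} per million input tokens")
```

The implied numbers (roughly $2.97 and $0.16) are internally consistent with the article's claims, which is all this sketch is meant to show.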

This new model is coming right after the DeepSeek release that shook the AI world less than two weeks ago. DeepSeek’s new model performs just as well as top OpenAI models, but the Chinese company claims it cost roughly $6 million to train, as opposed to the estimated cost of over $100 million for training OpenAI’s GPT-4. (It’s worth noting that a lot of people are interrogating this claim.) 

Additionally, DeepSeek’s reasoning model costs $0.55 per million input tokens, half the price of o3-mini, so OpenAI still has a way to go to bring down its costs. It’s estimated that reasoning models also have much higher energy costs than other types, given the larger number of computations they require to produce an answer.

This new wave of reasoning models presents new safety challenges as well. OpenAI used a technique called deliberative alignment to train its o-series models, basically having them reference OpenAI’s internal policies at each step of their reasoning to make sure they weren’t ignoring any rules.

But the company has found that o3-mini, like the o1 model, is significantly better than non-reasoning models at jailbreaking and “challenging safety evaluations”—essentially, it’s much harder to control a reasoning model given its advanced capabilities. o3-mini is the first model to score as “medium risk” on model autonomy, a rating given because it’s better than previous models at specific coding tasks—indicating “greater potential for self-improvement and AI research acceleration,” according to OpenAI. That said, the model is still bad at real-world research. If it were better at that, it would be rated as high risk, and OpenAI would restrict the model’s release.

DeepSeek might not be such good news for energy after all

In the week since a Chinese AI model called DeepSeek became a household name, a dizzying number of narratives have gained steam, with varying degrees of accuracy: that the model is collecting your personal data (maybe); that it will upend AI as we know it (too soon to tell—but do read my colleague Will’s story on that!); and perhaps most notably, that DeepSeek’s new, more efficient approach means AI might not need to guzzle the massive amounts of energy that it currently does.

The latter notion is misleading, and new numbers shared with MIT Technology Review help show why. These early figures—based on the performance of one of DeepSeek’s smaller models on a small number of prompts—suggest it could be more energy intensive when generating responses than the equivalent-size model from Meta. The issue might be that the energy it saves in training is offset by its more intensive techniques for answering questions, and by the long answers they produce. 

Add the fact that other tech firms, inspired by DeepSeek’s approach, may now start building their own similar low-cost reasoning models, and the outlook for energy consumption is already looking a lot less rosy.

The life cycle of any AI model has two phases: training and inference. Training is the often months-long process in which the model learns from data. The model is then ready for inference, which happens each time anyone in the world asks it something. Both usually take place in data centers, where they require lots of energy to run chips and cool servers. 

On the training side for its R1 model, DeepSeek’s team improved what’s called a “mixture of experts” technique, in which only a portion of a model’s billions of parameters—the “knobs” a model uses to form better answers—are turned on at a given time during training. More notably, they improved reinforcement learning, where a model’s outputs are scored and then used to make it better. This is often done by human annotators, but the DeepSeek team got good at automating it.
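The "portion of parameters" idea can be sketched in a few lines. This toy layer stands in for the real thing, where both the experts and the router are learned neural networks; here the experts are plain functions and the router scores are hard-coded.

```python
def moe_layer(x: float, experts: list, router_scores: list[float], top_k: int = 2) -> float:
    # Only the top_k highest-scoring experts are evaluated;
    # the rest of the layer's parameters stay switched off.
    ranked = sorted(range(len(experts)), key=lambda i: -router_scores[i])
    active = ranked[:top_k]
    return sum(experts[i](x) for i in active) / top_k

experts = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3, lambda x: x ** 2]
scores = [0.1, 0.7, 0.05, 0.9]  # a learned router would produce these per token
print(moe_layer(3.0, experts, scores))  # experts 3 and 1 run: (9.0 + 6.0) / 2 = 7.5
```

With four experts and top_k of 2, half the "parameters" never execute for this input, which is the source of the savings the article describes.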

The introduction of a way to make training more efficient might suggest that AI companies will use less energy to bring their AI models to a certain standard. That’s not really how it works, though. 

“⁠Because the value of having a more intelligent system is so high,” wrote Anthropic cofounder Dario Amodei on his blog, it “causes companies to spend more, not less, on training models.” If companies get more for their money, they will find it worthwhile to spend more, and therefore use more energy. “The gains in cost efficiency end up entirely devoted to training smarter models, limited only by the company’s financial resources,” he wrote. It’s an example of what’s known as the Jevons paradox.

But that’s been true on the training side as long as the AI race has been going. The energy required for inference is where things get more interesting. 

DeepSeek is designed as a reasoning model, which means it’s meant to perform well on things like logic, pattern-finding, math, and other tasks that typical generative AI models struggle with. Reasoning models do this using something called “chain of thought.” It allows the AI model to break its task into parts and work through them in a logical order before coming to its conclusion. 

You can see this with DeepSeek. Ask whether it’s okay to lie to protect someone’s feelings, and the model first tackles the question with utilitarianism, weighing the immediate good against the potential future harm. It then considers Kantian ethics, which propose that you should act according to maxims that could be universal laws. It considers these and other nuances before sharing its conclusion. (It finds that lying is “generally acceptable in situations where kindness and prevention of harm are paramount, yet nuanced with no universal solution,” if you’re curious.)

Chain-of-thought models tend to perform better on certain benchmarks such as MMLU, which tests both knowledge and problem-solving in 57 subjects. But, as is becoming clear with DeepSeek, they also require significantly more energy to come to their answers. We have some early clues about just how much more.

Scott Chamberlin spent years at Microsoft, and later Intel, building tools to help reveal the environmental costs of certain digital activities. Chamberlin did some initial tests to see how much energy a GPU uses as DeepSeek comes to its answer. The experiment comes with a bunch of caveats: He tested only a medium-size version of DeepSeek’s R1, using only a small number of prompts. It’s also difficult to make comparisons with other reasoning models.

DeepSeek is “really the first reasoning model that is fairly popular that any of us have access to,” he says. OpenAI’s o1 model is its closest competitor, but the company doesn’t make it open for testing. Instead, he tested it against a model from Meta with the same number of parameters: 70 billion.

The prompt asking whether it’s okay to lie generated a 1,000-word response from the DeepSeek model, which took 17,800 joules to generate—about what it takes to stream a 10-minute YouTube video. This was about 41% more energy than Meta’s model used to answer the prompt. Overall, when tested on 40 prompts, DeepSeek was found to have a similar energy efficiency to the Meta model, but DeepSeek tended to generate much longer responses and therefore was found to use 87% more energy.

How does this compare with models that use regular old-fashioned generative AI as opposed to chain-of-thought reasoning? Tests from a team at the University of Michigan in October found that the 70-billion-parameter version of Meta’s Llama 3.1 averaged just 512 joules per response.
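Putting the quoted figures side by side makes the gap concrete. Note that the Meta per-prompt number below is implied by the 41% figure rather than independently measured, and the final ratio compares a single long reasoning answer against an average response, so treat it as rough arithmetic only.

```python
# Rough arithmetic on the energy figures quoted in the reporting.
deepseek_joules = 17_800  # one 1,000-word DeepSeek answer

# "about 41% more energy" than Meta's model on the same prompt implies:
implied_meta_joules = deepseek_joules / 1.41

# University of Michigan figure for Llama 3.1 70B, per average response:
llama_avg_joules = 512
ratio = deepseek_joules / llama_avg_joules

print(f"implied Meta reasoning answer: {implied_meta_joules:,.0f} J")
print(f"long reasoning answer vs. average ordinary response: ~{ratio:.0f}x")
```

Even with all the caveats, a roughly 35-fold gap between one chain-of-thought answer and an average conventional response illustrates why the inference side worries energy researchers.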

Neither DeepSeek nor Meta responded to requests for comment.

Again: uncertainties abound. These are different models, for different purposes, and a scientifically sound study of how much energy DeepSeek uses relative to competitors has not been done. But it’s clear, based on the architecture of the models alone, that chain-of-thought models use lots more energy as they arrive at sounder answers. 

Sasha Luccioni, an AI researcher and climate lead at Hugging Face, worries that the excitement around DeepSeek could lead to a rush to insert this approach into everything, even where it’s not needed. 

“If we started adopting this paradigm widely, inference energy usage would skyrocket,” she says. “If all of the models that are released are more compute intensive and become chain-of-thought, then it completely voids any efficiency gains.”

AI has been here before. Before ChatGPT launched in 2022, the name of the game in AI was extractive—basically finding information in lots of text, or categorizing images. But in 2022, the focus switched from extractive AI to generative AI, which is based on making better and better predictions. That requires more energy. 

“That’s the first paradigm shift,” Luccioni says. According to her research, that shift has resulted in orders of magnitude more energy being used to accomplish similar tasks. If the fervor around DeepSeek continues, she says, companies might be pressured to put its chain-of-thought-style models into everything, the way generative AI has been added to everything from Google search to messaging apps. 

We do seem to be heading in a direction of more chain-of-thought reasoning: OpenAI announced on January 31 that it would expand access to its own reasoning model, o3. But we won’t know more about the energy costs until DeepSeek and other models like it become better studied.

“It will depend on whether or not the trade-off is economically worthwhile for the business in question,” says Nathan Benaich, founder and general partner at Air Street Capital. “The energy costs would have to be off the charts for them to play a meaningful role in decision-making.”

Northern Ireland Key to E.U., U.K. Fulfillment

Before Brexit, merchants could sell cross-border into the U.K. and mainland Europe with relative ease. Both belonged to the E.U. It’s now more complex and expensive, with separate customs and taxes for each region — unless the shipments come from Northern Ireland.

Through a Brexit exception, fulfillment companies (and merchants) in Northern Ireland can ship to the U.K. and the E.U. with fewer complications. It’s been a boon to John Heenan’s Belfast-based 3PL, The Distribution Solution.

I met John years ago, pre-Brexit, when Beardbrand sold products in Europe via his company. We reconnected for this episode. He explained the nuances of selling internationally in the U.K. and the E.U. and how to streamline the process.

The entire audio of our conversation is embedded below. The transcript is edited for clarity and length.

Eric Bandholz: Give us a quick rundown of who you are.

John Heenan: I own a fulfillment business in Belfast, Northern Ireland, called The Distribution Solution, or TDS. We’ve been in business for 20 years. Before that, we were Travel Distribution Services, distributing printed travel brochures. Over the years, as the internet grew, we transitioned into ecommerce.

A big advantage of being in Northern Ireland is that, due to Brexit and the rules for Northern Ireland, we can trade in both the U.K. and the E.U. without customs complications.

When the U.K. voted to leave the E.U., the situation became complex for Northern Ireland. Being part of the U.K., we still maintain a border with the Republic of Ireland, which belongs to the E.U.

Northern Ireland remains in the E.U. customs union to avoid physical borders, which means businesses can operate freely in both markets. This is a huge advantage, as it allows companies to trade seamlessly between the two regions without dealing with customs duties or additional regulations. Companies not based in Northern Ireland would need separate fulfillment centers in the U.K. and the E.U.

Bandholz: Have new fulfillment companies emerged in Northern Ireland?

Heenan: There have been a few smaller, local operators. The larger corporations have hesitated due to political instability, including the collapse of Northern Ireland’s Assembly for nearly two years. Big companies tend to avoid places where political uncertainty exists. Despite that, some local entrepreneurs have capitalized on the opportunities. Becoming a fulfillment company isn’t as simple as owning a warehouse. The software and compliance requirements are substantial.

Within the U.K., you must register as a fulfillment house, which means adhering to various regulations. The U.K. government, for instance, inspects fulfillment companies to ensure value-added tax compliance.

In the E.U., VAT is around 20%, which applies to most ecommerce sales. Before Brexit, many sellers imported products from China and avoided VAT by slipping goods into the E.U. through local postal services. It created an unfair advantage, and local businesses in Europe complained. Every fulfillment house must now report customer details, including VAT registration numbers, to the authorities to ensure payment of taxes.

Bandholz: How should American businesses approach those challenges when selling in Europe?

Heenan: The process can seem complex, but it’s manageable if you take the time to set things up correctly from the beginning. We work with accountants to ensure everything is in order, such as getting an Economic Operators Registration and Identification number — “EORI” — for importing, exporting, and registering for VAT. Once that setup is complete, it’s relatively straightforward. Europeans love bureaucracy, so you need to embrace it like a checklist. We guide you through the necessary steps.

The setup process can take a couple of months. But once everything is in place, it’s smooth sailing. You can’t start shipping goods without a VAT number because you need it to reclaim VAT on imports. For example, if you import £1,000 of goods and pay £200 VAT, you can reclaim that VAT against your sales.
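Heenan's example works out as follows. The sales figure below is hypothetical, added only to show how the reclaimed import VAT offsets the VAT collected from customers.

```python
VAT_RATE = 0.20  # roughly the rate mentioned in the conversation

import_value = 1_000.0                     # £ of goods imported
import_vat_paid = import_value * VAT_RATE  # £200 paid on import

net_sales = 3_000.0                        # hypothetical sales, excluding VAT
vat_charged = net_sales * VAT_RATE         # £600 collected from customers

# The £200 import VAT is reclaimed against the £600 owed:
vat_due = vat_charged - import_vat_paid
print(f"VAT due to the authorities: £{vat_due:.0f}")  # £400
```

Without a VAT number there is nothing to reclaim against, which is why Heenan says you cannot start shipping goods before registration is complete.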

Bandholz: What fulfillment costs and timelines can brands expect when shipping from Northern Ireland?

Heenan: There are a lot of variables, but I’ll give you rough estimates. You’re looking at around $2 per shipment for picking and packing. Shipping costs depend on factors like weight and location. For example, within the U.K., small packages can cost around £3-£4 [$3.75-$5] to ship and usually arrive within 48 hours. Shipping to Europe can range from £8-£12 [$10-$15]. One advantage of being in Northern Ireland is that shipping to the Republic of Ireland is much cheaper than other parts of the U.K. or Europe.

Bandholz: How do you typically bill American companies for fulfillment services?

Heenan: We bill in pounds sterling. However, it’s not much of an issue for American clients because they’re selling in either sterling or euros in Europe, which offsets the need for constant currency conversions. That said, the strong dollar could make it advantageous for some American companies to convert.

Costs in Europe are significantly higher than in the U.S. Labor costs, for instance, are about 50-100% higher. By law, employees get at least five and a half weeks of paid leave annually. However, companies selling in Europe can often command higher prices to offset those costs. We have clients who buy certain goods in the U.S., but after factoring in exchange rates, duties, and taxes, the final price often evens out.

Bandholz: How can people get in touch with you?

Heenan: Our website is TheDistributionSolution.co.uk. You can contact me there. I’m also on LinkedIn.

AI Overviews Data Shows Massive Changes In Search Results via @sejournal, @martinibuster

Enterprise SEO platform BrightEdge published results on current AI Search trends, showing that Google AI Overviews (AIO) has expanded its presence by up to 100% in increasingly complex search queries. The changes suggest growing confidence in AI for search, with indications that Google is relying on authoritativeness and greater precision in context awareness for matching queries to answers, particularly in relation to content modality.

The data shows that AI Overviews (AIO) has evolved from showing featured snippet style answers to being capable of handling multi-turn, complex search queries. The takeaway is that Google is increasingly comfortable with AI’s ability to surface precise answers for longer queries and this is a trend that may continue to rise.

Google AIO Presence Is Growing

Google continues to show confidence in their AI Overviews (AIO) search feature as BrightEdge has discovered that more keyword phrases are triggering AI answers now than at any point since the feature was rolled out last year.

25% of search queries using 8 words or more are displaying AI Overviews (AIO), which is a clear upward trend indicating that Google continues to refine the accuracy of AIO and is better able to handle increasingly complex search queries.

A graph shows how keywords with 8, 9, and 10 words increasingly trigger AI Overviews

Graph Representation Of AI Overviews Growth

Keyword phrases with fewer than four words continue to show an increasing amount of AIO, but the longer, more precise keywords are growing significantly faster.

Screenshot Showing Percentage Of Keywords With Google AI Overviews

Change In AIO Patterns: Gains For Authoritative Brands

BrightEdge provided additional data on specific topic categories, showing how queries for some topics are consolidating to answers from big-brand sites.

For example, in the healthcare category, where accuracy and trustworthiness are paramount, Google is increasingly showing search results from just a handful of websites. Content from authoritative medical research centers accounts for 72% of AI Overview answers, an increase from 54% of all queries at the start of January.

In B2B technology, 15-22% of AI Overview answers are derived from the top five technology companies, such as Amazon, IBM, and Microsoft.

Qualities Of AIO Answers

BrightEdge data shows that AIO answers follow certain patterns, revealing qualities that Google feels make content more relevant.

  • Excels at step-by-step and how-to answers (structured hierarchical information)
  • Shows precise real-time relevance
  • Answers lean toward general guidance

Educational Search Queries

For educational queries, AIO shows a preference for concise answers with a clean visual presentation. In the example below, Google is hiding content that includes additional information answering questions beyond the main query. This may relate to Google’s information gain patent, which is about anticipating additional information a user will be interested in after receiving the answer to their original search query.

AIO Showing Information Gain Ranked Content

Change In YouTube Citations

An interesting pattern picked up by BrightEdge is that YouTube technical tutorials have increased by 40% in AIO, while health-related queries that show YouTube videos are trending downward by 31%.

Of particular interest is that high-volume search queries (100k+ search volume) that trigger YouTube content have decreased by 18.7%. This may reflect a change in user needs and Google’s ability to identify that context and recognize when it’s not served well by video content.

What all of this means is that it’s increasingly important to think about context awareness: the appropriateness of the content to the query. The question to ask is what kind of content best serves the context, then expand that answer across modalities like images, sound, video, and text. Within those formats, think in terms of how-to, data dump, informative, and so on.

BrightEdge observes:

“Most Interesting Pattern:
AI Overviews are developing sophisticated, context-aware citation models. While YouTube citations are declining for health queries (e.g., “symptoms,” “diet”), they’re increasing for technical how-to content, jumping from 2.0% to 2.8% of citations in this category.

Pay Attention:

1. Context is King – Focus video content where it’s gaining traction (technical tutorials, DIY) and pivot to text for topics where traditional authority is preferred (health, finance)
2. Match Your Industry’s Pattern – In sectors with distributed authority (like B2B tech at 15-22% per source), focus on direct citations; in consolidated spaces (like healthcare at 72% institutional), partner with established authorities

3. Monitor Actively – With citation patterns shifting dramatically in just one month, weekly monitoring of your space is crucial to spot new opportunities before competitors”

Takeaway

A way to make sense of the data is that Google AI Overviews appear to be relying increasingly on the authoritativeness of the content as the stakes rise with more complex search queries.

Authoritativeness isn’t just about being a big brand; it may simply be about being meaningful to the Internet audience as a go-to source for a particular topic. Trustworthiness and other related factors are important, and this has nothing to do with superficial SEO activities like author bios and so on.

Read the data:
How AI Giants Are Carving Distinct Territory in the Search Landscape

Elementor Rolls Out WordPress AI Site Planner via @sejournal, @martinibuster

Elementor released a free-to-use standalone AI app called Site Planner that enables users to create a website in a step-by-step process, beginning with the most general concept of the site and ending with a complete website design down to the individual page elements. I gave it a try and was stunned by how easy and fast it was to create a website.

Intuitive Approach To Site Building

Elementor’s application of AI features an intuitive and attractive user interface; everything seems to have been considered so that at no point does one feel the need to read instructions. The questions asked at the start of the process establish a general overview of what the site is about, the necessary pages, what the goals are, and so on.

Getting started is as simple as clicking a start button, the first hint that building a site with Elementor is going to be easy.

Screenshot Of Start Of AI Site Building App

Collaborative Capabilities

The site design process may involve a designer working with a client, or multiple stakeholders in a company collaborating to roll out the next iteration of a website. Elementor’s Site Planner app recognizes this reality and, as one of the first steps of the process, offers users the option to collaborate over Google Meet or proceed alone with the AI.

Screenshot Of Collaboration Option

Generate A Website Brief

A website brief is a document that outlines the goals and expectations of a web design project. It serves as a road map and plan that guides the stakeholders through the planning and development stages of the project.

Elementor’s AI Site Planner app smartly begins with asking the right questions for putting together a website brief that serves as the backbone of what is to be created.

The site planner generates a website brief describing what the website project is, and once that’s approved, Elementor creates what it refers to as a sitemap: a site diagram or site architecture diagram that provides a high-level overview of the different pages and how they’re interlinked.

It then generates a wireframe of the entire site that can be zoomed in on to edit individual sections at an overview level, to “fine-tune” the layout.

This is how Elementor describes the process:

1 Brief
From Vision -> Brief
Start an AI-led conversation and get your project off the ground. Watch your ideas, descriptions, and notes transform before your eyes into a proper website brief.

2 Sitemap
From Brief -> Sitemap
AI Site Planner instantly maps out all your key pages and creates a complete sitemap in minutes, not hours. Easily shuffle or edit pages to fit your vision.

3 Wireframe
From Sitemap -> Wireframe
Get your first draft in minutes. Watch AI turn your sitemap into content-filled wireframes in a click.

Elementor AI Site Planner

The Elementor AI Site Planner is, in my opinion, a successful implementation of AI for planning a website. Read the full announcement.

Site Planner by Elementor AI – Generate Professional Sitemaps & Wireframes in Minutes

Featured Image by Shutterstock/Net Vector