What’s next for Chinese open-source AI

MIT Technology Review’s What’s Next series looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of them here.

The past year has marked a turning point for Chinese AI. Since DeepSeek released its R1 reasoning model in January 2025, Chinese companies have repeatedly delivered AI models that match the performance of leading Western models at a fraction of the cost. 

Just last week the Chinese firm Moonshot AI released its latest open-weight model, Kimi K2.5, which came close to top proprietary systems such as Anthropic’s Claude Opus on some early benchmarks. The difference: K2.5 is roughly one-seventh Opus’s price.

On Hugging Face, Alibaba’s Qwen family—after ranking as the most downloaded model series in 2025 and 2026—has overtaken Meta’s Llama models in cumulative downloads. And a recent MIT study found that Chinese open-source models have surpassed US models in total downloads. For developers and builders worldwide, access to near-frontier AI capabilities has never been this broad or this affordable.

But these models differ in a crucial way from most US models like ChatGPT or Claude, which you pay to access and can’t inspect. The Chinese companies publish their models’ weights—numerical values that get set when a model is trained—so anyone can download, run, study, and modify them. 
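
What that means in practice: anyone with a laptop or a cloud GPU can pull a published checkpoint and run it with standard tooling. Here is a minimal sketch, assuming the Hugging Face transformers library; the small Qwen checkpoint named below is purely an illustrative choice.

    # Minimal sketch of running an open-weight model locally.
    # Assumes the transformers and torch packages are installed; the model
    # name is an illustrative example of a small open-weight release.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "Qwen/Qwen2.5-0.5B-Instruct"  # illustrative checkpoint

    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    inputs = tokenizer("What does 'open weights' mean?", return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=50)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))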

If these open-source AI models keep getting better, they will not just offer the cheapest options for people who want access to frontier AI capabilities; they will change where innovation happens and who sets the standards. 

Here’s what may come next.

China’s commitment to open source will continue

When DeepSeek launched R1, much of the initial shock centered on its origin. Suddenly, a Chinese team had released a reasoning model that could stand alongside the best systems from US labs. But the long tail of DeepSeek’s impact had less to do with nationality than with distribution. R1 was released as an open-weight model under a permissive MIT license, allowing anyone to download, inspect, and deploy it. On top of that, DeepSeek published a paper detailing its training process and techniques. For developers who access models via an API, it also undercut competitors on price, offering access at a fraction of the cost of OpenAI’s o1, the leading proprietary reasoning model at the time.

Within days of its release, DeepSeek replaced ChatGPT as the most downloaded free app in the US App Store. The moment spilled beyond developer circles into financial markets, triggering a sharp sell-off in US tech stocks that briefly erased roughly $1 trillion in market value. Almost overnight, DeepSeek went from a little-known spin-off team backed by a quantitative hedge fund to the most visible symbol of China’s push for open-source AI.

China’s decision to lean in to open source isn’t surprising. It has the world’s second-largest concentration of AI talent after the US, plus a vast, well-resourced tech industry. After ChatGPT broke into the mainstream, China’s AI sector went through a reckoning—and emerged determined to catch up. Pursuing an open-source strategy was seen as the fastest way to close the gap by rallying developers, spreading adoption, and setting standards.

DeepSeek’s success injected confidence into an industry long used to following global standards rather than setting them. “Thirty years ago, no Chinese person would believe they could be at the center of global innovation,” says Alex Chenglin Wu, CEO and founder of Atoms, an AI agent company and prominent contributor to China’s open-source ecosystem. “DeepSeek shows that with solid technical talent, a supportive environment, and the right organizational culture, it’s possible to do truly world-class work.”

DeepSeek’s breakout moment wasn’t China’s first open-source success. Alibaba’s Qwen Lab had been releasing open-weight models for years. By September 2024, well before DeepSeek’s V3 launch, Alibaba said that global downloads had exceeded 600 million. On Hugging Face, Qwen accounted for more than 30% of all model downloads in 2024. Other institutions, including the Beijing Academy of Artificial Intelligence and the AI firm Baichuan, were also releasing open models as early as 2023.

But since the success of DeepSeek, the field has widened rapidly. Companies such as Z.ai (formerly Zhipu), MiniMax, Tencent, and a growing number of smaller labs have released models that are competitive on reasoning, coding, and agent-style tasks. The growing number of capable models has sped up progress. Capabilities that once took months to make it to the open-source world now emerge within weeks, even days.

“Chinese AI firms have seen real gains from the open-source playbook,” says Liu Zhiyuan, a professor of computer science at Tsinghua University and chief scientist at the AI startup ModelBest. “By releasing strong research, they build reputation and gain free publicity.”

Beyond commercial incentives, Liu says, open source has taken on cultural and strategic weight. “In the Chinese programmer community, open source has become politically correct,” he says, framing it as a response to US dominance in proprietary AI systems.

That shift is also reflected at the institutional level. Universities including Tsinghua have begun encouraging AI development and open-source contributions, while policymakers have moved to formalize those incentives. In August, China’s State Council released a draft policy encouraging universities to reward open-source work, proposing that students’ contributions on platforms such as GitHub or Gitee could eventually be counted toward academic credit.

With growing momentum and a reinforcing feedback loop, China’s push for open-source models is likely to continue in the near term, though its long-term sustainability still hinges on financial results, says Tiezhen Wang, who helps lead work on global AI at Hugging Face. In January, the model labs Z.ai and MiniMax went public in Hong Kong. “Right now, the focus is on making the cake bigger,” says Wang. “The next challenge is figuring out how each company secures its share.”

The next wave of models will be narrower—and better

Chinese open-source models are leading not just in download volume but also in variety. Alibaba’s Qwen has become one of the most diversified open model families in circulation, offering a wide range of variants optimized for different uses. The lineup ranges from lightweight models that can run on a single laptop to large, multi-hundred-billion-parameter systems designed for data-center deployment. It also includes many task-optimized variants: “instruct” models tuned to follow instructions and “coder” models that specialize in programming, plus a long tail of derivatives created by the community.

Although this strategy isn’t unique to Chinese labs, Qwen was the first open model family to roll out so many high-quality options that it started to feel like a full product line—one that’s free to use.

The open-weight nature of these releases also makes it easy for others to adapt them through techniques like fine-tuning and distillation, in which a smaller model is trained to mimic a larger one. According to ATOM (American Truly Open Models), a project by the AI researcher Nathan Lambert, model variations derived from Qwen accounted for “more than 40%” of new Hugging Face language-model derivatives as of August 4, 2025, while Llama had fallen to about 15%. In other words, Qwen has become the default base model for these “remixes.”
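
Distillation itself is conceptually simple. A minimal sketch, assuming PyTorch and leaving out tokenization and data loading: the smaller student model is trained to match the softened next-token distribution of the larger teacher.

    # Distillation sketch (PyTorch assumed): the student learns to match the
    # teacher's softened next-token distribution via KL divergence.
    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, temperature=2.0):
        # Soften both distributions, then penalize the divergence between them.
        student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
        teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
        return F.kl_div(student_log_probs, teacher_probs,
                        reduction="batchmean") * temperature**2

    def train_step(teacher, student, optimizer, batch):
        # `teacher`, `student`, `optimizer`, and `batch` are placeholders for
        # whatever models and data pipeline are actually used.
        with torch.no_grad():
            teacher_logits = teacher(batch).logits
        student_logits = student(batch).logits
        loss = distillation_loss(student_logits, teacher_logits)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()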

This pattern has made the case for smaller, more specialized models. “Compute and energy are real constraints for any deployment,” Liu says. He told MIT Technology Review that the rise of small models is about making AI cheaper to run and easier for more people to use. His company, ModelBest, focuses on small language models designed to run locally on devices such as phones, cars, and other consumer hardware.

While an average user might interact with AI only through the web or an app for simple conversations, power users with a technical background are experimenting with giving AI more autonomy to solve large-scale problems. OpenClaw, an open-source AI agent that recently went viral in the AI hacker world, allows AI to take over your computer—it can run 24-7, going through your emails and work tasks without supervision.

OpenClaw, like many other open-source tools, allows users to connect to different AI models via an application programming interface, or API. Within days of OpenClaw’s release, the team revealed that Kimi’s K2.5 had surpassed Claude Opus to become the most used AI model—by token count, meaning it was handling the most total text across user prompts and model responses.
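
Tools like this typically reach whichever model the user picks through an OpenAI-compatible chat API, and token usage is read straight from the response metadata, which is where counts like these come from. A rough sketch, assuming the openai Python client; the endpoint, key, and model identifier below are placeholders rather than recommendations.

    # Sketch of calling an open-weight model through an OpenAI-compatible API
    # and reading token usage. Endpoint, key, and model name are placeholders.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://openrouter.ai/api/v1",  # assumed compatible endpoint
        api_key="YOUR_API_KEY",
    )

    response = client.chat.completions.create(
        model="moonshotai/kimi-k2",  # illustrative model identifier
        messages=[{"role": "user", "content": "Summarize my unread email subjects."}],
    )

    print(response.choices[0].message.content)
    # "Usage by token count" is the sum of prompt and completion tokens.
    usage = response.usage
    print(usage.prompt_tokens, usage.completion_tokens, usage.total_tokens)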

Cost has been a major reason Chinese models have gained traction, but it would be a mistake to treat them as mere “dupes” of Western frontier systems, Wang suggests. Like any product, a model only needs to be good enough for the job at hand. 

The landscape of open-source models in China is also getting more specialized. Research groups such as Shanghai AI Laboratory have released models geared toward scientific and technical tasks; several projects from Tencent have focused specifically on music generation. Ubiquant, a quantitative finance firm like DeepSeek’s parent High-Flyer, has released an open model aimed at medical reasoning.

In the meantime, innovative architectural ideas from Chinese labs are being picked up more broadly. DeepSeek has published work exploring model efficiency and memory; techniques that compress the model’s attention “cache,” reducing memory and inference costs while mostly preserving performance, have drawn significant attention in the research community. 
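
The appeal is easy to see with a back-of-the-envelope calculation: the key-value cache a transformer keeps while generating grows with layer count, attention heads, and context length, so shrinking the per-token entry cuts serving memory roughly in proportion. The numbers below are illustrative, not the configuration of any particular model.

    # Back-of-the-envelope KV-cache size for one request. All figures are
    # illustrative, not any specific model's configuration.
    def kv_cache_bytes(layers, kv_heads, head_dim, seq_len, bytes_per_value=2):
        # 2x for keys and values, stored at every layer for every cached token.
        return 2 * layers * kv_heads * head_dim * seq_len * bytes_per_value

    full = kv_cache_bytes(layers=60, kv_heads=64, head_dim=128, seq_len=32_000)
    compact = kv_cache_bytes(layers=60, kv_heads=1, head_dim=512, seq_len=32_000)

    print(f"full-size cache: {full / 1e9:.1f} GB")     # ~62.9 GB
    print(f"compressed cache: {compact / 1e9:.1f} GB")  # ~3.9 GB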

“The impact of these research breakthroughs is amplified because they’re open-sourced and can be picked up quickly across the field,” says Wang.

Chinese open models will become infrastructure for global AI builders

The adoption of Chinese models is picking up in Silicon Valley, too. Martin Casado, a general partner at Andreessen Horowitz, has put a number on it: Among startups pitching with open-source stacks, there’s about an 80% chance they’re running on Chinese open models, according to a post he made on X. Usage data tells a similar story. OpenRouter, a middleman that tracks how people use different AI models through its API, shows Chinese open models rising from almost none in late 2024 to nearly 30% of usage in some recent weeks.

The demand is also rising globally. Z.ai limited new subscriptions to its GLM coding plan (a coding tool based on its flagship GLM models) after demand surged, citing compute constraints. What’s notable is where the demand is coming from: CNBC reports that the system’s user base is primarily concentrated in the United States and China, followed by India, Japan, Brazil, and the UK.

“The open-source ecosystems in China and the US are tightly bound together,” says Wang at Hugging Face. Many Chinese open models still rely on Nvidia and US cloud platforms to train and serve them, which keeps the business ties tangled. Talent is fluid too: Researchers move across borders and companies, and many still operate as a global community, sharing code and ideas in public.

That interdependence is part of what makes Chinese developers feel optimistic about this moment: The work travels, gets remixed, and actually shows up in products. But openness can also accelerate the competition. Dario Amodei, the CEO of Anthropic, made a version of this point after DeepSeek’s 2025 releases: He wrote that export controls are “not a way to duck the competition” between the US and China, and that AI companies in the US “must have better models” if they want to prevail. 

For the past decade, the story of Chinese tech in the West has been one of big expectations that ran into scrutiny, restrictions, and political backlash. This time the export isn’t just an app or a consumer platform. It’s the underlying model layer that other people build on. Whether that will play out differently is still an open question.

AI is already making online crimes easier. It could get much worse.

Anton Cherepanov is always on the lookout for something interesting. And in late August last year, he spotted just that. It was a file uploaded to VirusTotal, a site cybersecurity researchers like him use to analyze submissions for potential viruses and other types of malicious software, often known as malware. On the surface it seemed innocuous, but it triggered Cherepanov’s custom malware-detecting measures. Over the next few hours, he and his colleague Peter Strýček inspected the sample and realized they’d never come across anything like it before.

The file contained ransomware, a nasty strain of malware that encrypts the files it comes across on a victim’s system, rendering them unusable until a ransom is paid to the attackers behind it. But what set this example apart was that it employed large language models (LLMs). Not just incidentally, but across every stage of an attack. Once it was installed, it could tap into an LLM to generate customized code in real time, rapidly map a computer to identify sensitive data to copy or encrypt, and write personalized ransom notes based on the files’ content. The software could do this autonomously, without any human intervention. And every time it ran, it would act differently, making it harder to detect.

Cherepanov and Strýček were confident that their discovery, which they dubbed PromptLock, marked a turning point in generative AI, showing how the technology could be exploited to create highly flexible malware attacks. They published a blog post declaring that they’d uncovered the first example of AI-powered ransomware, which quickly became the object of widespread global media attention.

But the threat wasn’t quite as dramatic as it first appeared. The day after the blog post went live, a team of researchers from New York University claimed responsibility, explaining that the malware was not, in fact, a full attack let loose in the wild but a research project, merely designed to prove it was possible to automate each step of a ransomware campaign—which, they said, they had. 

PromptLock may have turned out to be an academic project, but the real bad guys are using the latest AI tools. Just as software engineers are using artificial intelligence to help write code and check for bugs, hackers are using these tools to reduce the time and effort required to orchestrate an attack, lowering the barriers for less experienced attackers to try something out. 

The likelihood that cyberattacks will now become more common and more effective over time is not a remote possibility but “a sheer reality,” says Lorenzo Cavallaro, a professor of computer science at University College London. 

Some in Silicon Valley warn that AI is on the brink of being able to carry out fully automated attacks. But most security researchers say this claim is overblown. “For some reason, everyone is just focused on this malware idea of, like, AI superhackers, which is just absurd,” says Marcus Hutchins, who is principal threat researcher at the security company Expel and famous in the security world for ending a giant global ransomware attack called WannaCry in 2017. 

Instead, experts argue, we should be paying closer attention to the much more immediate risks posed by AI, which is already speeding up and increasing the volume of scams. Criminals are increasingly exploiting the latest deepfake technologies to impersonate people and swindle victims out of vast sums of money. These AI-enhanced cyberattacks are only set to get more frequent and more destructive, and we need to be ready. 

Spam and beyond

Attackers started adopting generative AI tools almost immediately after ChatGPT exploded on the scene at the end of 2022. These efforts began, as you might imagine, with the creation of spam—and a lot of it. Last year, a report from Microsoft said that in the year leading up to April 2025, the company had blocked $4 billion worth of scams and fraudulent transactions, “many likely aided by AI content.” 

At least half of spam email is now generated using LLMs, according to estimates by researchers at Columbia University, the University of Chicago, and Barracuda Networks, who analyzed nearly 500,000 malicious messages collected before and after the launch of ChatGPT. They also found evidence that AI is increasingly being deployed in more sophisticated schemes. They looked at targeted email attacks, which impersonate a trusted figure in order to trick a worker within an organization out of funds or sensitive information. By April 2025, they found, at least 14% of those sorts of focused email attacks were generated using LLMs, up from 7.6% in April 2024.

And the generative AI boom has made it easier and cheaper than ever before to generate not only emails but highly convincing images, videos, and audio. The results are much more realistic than even just a few short years ago, and it takes much less data to generate a fake version of someone’s likeness or voice than it used to.

Criminals aren’t deploying these sorts of deepfakes to prank people or to simply mess around—they’re doing it because it works and because they’re making money out of it, says Henry Ajder, a generative AI expert. “If there’s money to be made and people continue to be fooled by it, they’ll continue to do it,” he says. In one high-profile case reported in 2024, a worker at the British engineering firm Arup was tricked into transferring $25 million to criminals via a video call with digital versions of the company’s chief financial officer and other employees. That’s likely only the tip of the iceberg, and the problem posed by convincing deepfakes is only likely to get worse as the technology improves and is more widely adopted.

Criminals’ tactics evolve all the time, and as AI’s capabilities improve, they are constantly probing how those new capabilities can help them gain an advantage over victims. Billy Leonard, tech leader of Google’s Threat Analysis Group, has been keeping a close eye on changes in the use of AI by potential bad actors (a widely used term in the industry for hackers and others attempting to use computers for criminal purposes). In the latter half of 2024, he and his team noticed prospective criminals using tools like Google Gemini the same way everyday users do—to debug code and automate bits and pieces of their work—as well as tasking it with writing the odd phishing email. By 2025, they had progressed to using AI to help create new pieces of malware and release them into the wild, he says.

The big question now is how far this kind of malware can go. Will it ever become capable enough to sneakily infiltrate thousands of companies’ systems and extract millions of dollars, completely undetected? 

Most popular AI models have guardrails in place to prevent them from generating malicious code or illegal material, but bad actors still find ways to work around them. For example, Google observed a China-linked actor asking its Gemini AI model to identify vulnerabilities on a compromised system—a request it initially refused on safety grounds. However, the attacker managed to persuade Gemini to break its own rules by posing as a participant in a capture-the-flag competition, a popular cybersecurity game. This sneaky form of jailbreaking led Gemini to hand over information that could have been used to exploit the system. (Google has since adjusted Gemini to deny these kinds of requests.)

But bad actors aren’t just focusing on trying to bend the AI giants’ models to their nefarious ends. Going forward, they’re increasingly likely to adopt open-source AI models, as it’s easier to strip out their safeguards and get them to do malicious things, says Ashley Jess, a former tactical specialist at the US Department of Justice and now a senior intelligence analyst at the cybersecurity company Intel 471. “Those are the ones I think that [bad] actors are going to adopt, because they can jailbreak them and tailor them to what they need,” she says.

The NYU team used two open-source models from OpenAI in its PromptLock experiment, and the researchers found they didn’t even need to resort to jailbreaking techniques to get the model to do what they wanted. They say that makes attacks much easier. Although these kinds of open-source models are designed with ethical alignment in mind, meaning their makers weigh certain goals and values in shaping how they respond to requests, they don’t have the same kinds of restrictions as their closed-source counterparts, says Meet Udeshi, a PhD student at New York University who worked on the project. “That is what we were trying to test,” he says. “These LLMs claim that they are ethically aligned—can we still misuse them for these purposes? And the answer turned out to be yes.”

It’s possible that criminals have already successfully pulled off covert PromptLock-style attacks and we’ve simply never seen any evidence of them, says Udeshi. If that’s the case, attackers could—in theory—have created a fully autonomous hacking system. But to do that they would have had to overcome the significant barrier that is getting AI models to behave reliably, as well as any inbuilt aversion the models have to being used for malicious purposes—all while evading detection. Which is a pretty high bar indeed.

Productivity tools for hackers

So, what do we know for sure? Some of the best data we have now on how people are attempting to use AI for malicious purposes comes from the big AI companies themselves. And their findings certainly sound alarming, at least at first. In November, Leonard’s team at Google released a report that found bad actors were using AI tools (including Google’s Gemini) to dynamically alter malware’s behavior; the malware could, for example, self-modify to evade detection. The team wrote that this ushered in “a new operational phase of AI abuse.”

However, the five malware families the report dug into (including PromptLock) consisted of code that was easily detected and didn’t actually do any harm, the cybersecurity writer Kevin Beaumont pointed out on social media. “There’s nothing in the report to suggest orgs need to deviate from foundational security programmes—everything worked as it should,” he wrote.

It’s true that this malware activity is in an early phase, concedes Leonard. Still, he sees value in making these kinds of reports public if it helps security vendors and others build better defenses to prevent more dangerous AI attacks further down the line. “Cliché to say, but sunlight is the best disinfectant,” he says. “It doesn’t really do us any good to keep it a secret or keep it hidden away. We want people to be able to know about this—we want other security vendors to know about this—so that they can continue to build their own detections.”

And it’s not just new strains of malware that would-be attackers are experimenting with—they also seem to be using AI to try to automate the process of hacking targets. In November, Anthropic announced it had disrupted a large-scale cyberattack, the first reported case of one executed without “substantial human intervention.” Although the company didn’t go into much detail about the exact tactics the hackers used, the report’s authors said a Chinese state-sponsored group had used its Claude Code assistant to automate up to 90% of what they called a “highly sophisticated espionage campaign.”

But, as with the Google findings, there were caveats. A human operator, not AI, selected the targets before tasking Claude with identifying vulnerabilities. And of 30 attempts, only a “handful” were successful. The Anthropic report also found that Claude hallucinated and ended up fabricating data during the campaign, claiming it had obtained credentials it hadn’t and “frequently” overstating its findings, so the attackers would have had to carefully validate those results to make sure they were actually true. “This remains an obstacle to fully autonomous cyberattacks,” the report’s authors wrote. 

Existing controls within any reasonably secure organization would stop these attacks, says Gary McGraw, a veteran security expert and cofounder of the Berryville Institute of Machine Learning in Virginia. “None of the malicious-attack part, like the vulnerability exploit … was actually done by the AI—it was just prefabricated tools that do that, and that stuff’s been automated for 20 years,” he says. “There’s nothing novel, creative, or interesting about this attack.”

Anthropic maintains that the report’s findings are a concerning signal of changes ahead. “Tying this many steps of an intrusion campaign together through [AI] agentic orchestration is unprecedented,” Jacob Klein, head of threat intelligence at Anthropic, said in a statement. “It turns what has always been a labor-intensive process into something far more scalable. We’re entering an era where the barrier to sophisticated cyber operations has fundamentally lowered, and the pace of attacks will accelerate faster than many organizations are prepared for.”

Some are not convinced there’s reason to be alarmed. AI hype has led a lot of people in the cybersecurity industry to overestimate models’ current abilities, Hutchins says. “They want this idea of unstoppable AIs that can outmaneuver security, so they’re forecasting that’s where we’re going,” he says. But “there just isn’t any evidence to support that, because the AI capabilities just don’t meet any of the requirements.”

Indeed, for now criminals mostly seem to be tapping AI to enhance their productivity: using LLMs to write malicious code and phishing lures, to conduct reconnaissance, and for language translation. Jess sees this kind of activity a lot, alongside efforts to sell tools in underground criminal markets. For example, there are phishing kits that compare the click-rate success of various spam campaigns, so criminals can track which campaigns are most effective at any given time. She is seeing a lot of this activity in what could be called the “AI slop landscape” but not as much “widespread adoption from highly technical actors,” she says.

But attacks don’t need to be sophisticated to be effective. Models that produce “good enough” results allow attackers to go after larger numbers of people than previously possible, says Liz James, a managing security consultant at the cybersecurity company NCC Group. “We’re talking about someone who might be using a scattergun approach phishing a whole bunch of people with a model that, if it lands itself on a machine of interest that doesn’t have any defenses … can reasonably competently encrypt your hard drive,” she says. “You’ve achieved your objective.” 

On the defense

For now, researchers are optimistic about our ability to defend against these threats—regardless of whether they are made with AI. “Especially on the malware side, a lot of the defenses and the capabilities and the best practices that we’ve recommended for the past 10-plus years—they all still apply,” says Leonard. The security programs we use to detect standard viruses and attack attempts work; a lot of phishing emails will still get caught in inbox spam filters, for example. These traditional forms of defense will still largely get the job done—at least for now. 

And in a neat twist, AI itself is helping to counter security threats more effectively. After all, it is excellent at spotting patterns and correlations. Vasu Jakkal, corporate vice president of Microsoft Security, says that every day, the company processes more than 100 trillion signals flagged by its AI systems as potentially malicious or suspicious events.

Despite the cybersecurity landscape’s constant state of flux, Jess is heartened by how readily defenders are sharing detailed information with each other about attackers’ tactics. Mitre’s Adversarial Threat Landscape for Artificial-Intelligence Systems and the GenAI Security Project from the Open Worldwide Application Security Project are two helpful initiatives documenting how potential criminals are incorporating AI into their attacks and how AI systems are being targeted by them. “We’ve got some really good resources out there for understanding how to protect your own internal AI toolings and understand the threat from AI toolings in the hands of cybercriminals,” she says.

PromptLock, the result of a limited university project, isn’t representative of how an attack would play out in the real world. But if it taught us anything, it’s that the technical capabilities of AI shouldn’t be dismissed. New York University’s Udeshi says he was taken aback at how easily AI was able to handle a full end-to-end chain of attack, from mapping and working out how to break into a targeted computer system to writing personalized ransom notes to victims: “We expected it would do the initial task very well but it would stumble later on, but we saw high—80% to 90%—success throughout the whole pipeline.”

AI is still evolving rapidly, and today’s systems are already capable of things that would have seemed preposterously out of reach just a few short years ago. That makes it incredibly tough to say with absolute confidence what it will—or won’t—be able to achieve in the future. While researchers are certain that AI-driven attacks are likely to increase in both volume and severity, the forms they could take are unclear. Perhaps the most extreme possibility is that someone makes an AI model capable of creating and automating its own zero-day exploits—highly dangerous cyberattacks that take advantage of previously unknown vulnerabilities in software. But building and hosting such a model—and evading detection—would require billions of dollars in investment, says Hutchins, meaning it would only be in the reach of a wealthy nation-state.

Engin Kirda, a professor at Northeastern University in Boston who specializes in malware detection and analysis, says he wouldn’t be surprised if this was already happening. “I’m sure people are investing in it, but I’m also pretty sure people are already doing it, especially [in] China—they have good AI capabilities,” he says. 

It’s a pretty scary possibility. But it’s one that—thankfully—is still only theoretical. A large-scale campaign that is both effective and clearly AI-driven has yet to materialize. What we can say is that generative AI is already significantly lowering the bar for criminals. They’ll keep experimenting with the newest releases and updates and trying to find new ways to trick us into parting with important information and precious cash. For now, all we can do is be careful, remain vigilant, and—for all our sakes—stay on top of those system updates. 

Why EVs are gaining ground in Africa

EVs are getting cheaper and more common all over the world. But the technology still faces major challenges in some markets, including many countries in Africa.

Some regions across the continent still have limited grid and charging infrastructure, and those that do have widespread electricity access sometimes face reliability issues—a problem for EV owners, who require a stable electricity source to charge up and get around.

But there are some signs of progress. I just finished up a story about the economic case: A recent study in Nature Energy found that EVs from scooters to minibuses could be cheaper to own than gas-powered vehicles in Africa by 2040.

If there’s one thing to know about EVs in Africa, it’s that each of the 54 countries on the continent faces drastically different needs, challenges, and circumstances. There’s also a wide range of reasons to be optimistic about the prospects for EVs in the near future, including developing policies, a growing grid, and an expansion of local manufacturing.  

Even the world’s leading EV markets fall short of Ethiopia’s aggressively pro-EV policies. In 2024, the country became the first in the world to ban the import of non-electric private vehicles.

The case is largely an economic one: Gasoline is expensive there, and the country commissioned Africa’s largest hydropower dam in September 2025, providing a new source of cheap and abundant clean electricity. The nearly $5 billion project has a five-gigawatt capacity, doubling the grid’s peak power in the country.  

Much of Ethiopia’s vehicle market is for used cars, and some drivers are still opting for older gas-powered vehicles. But this nudge could help increase the market for EVs there.  

Other African countries are also pushing some drivers toward electrification. Rwanda banned new registrations for commercial gas-powered motorbikes in the capital city of Kigali last year, encouraging EVs as an alternative. These motorbike taxis can make up over half the vehicles on the city’s streets, so the move is a major turning point for transportation there. 

Smaller two- and three-wheelers are a bright spot for EVs globally: In 2025, EVs made up about 45% of new sales for such vehicles. (For cars and trucks, the share was about 25%.)

And Africa’s local market is starting to really take off. There’s already some local assembly of electric two-wheelers in countries including Morocco, Kenya, and Rwanda, says Nelson Nsitem, lead Africa energy transition analyst at BloombergNEF, an energy consultancy. 

Spiro, a Dubai-based electric motorbike company, recently raised $100 million in funding to expand operations in Africa. The company currently assembles its bikes in Uganda, Kenya, Nigeria, and Rwanda, and as of October it has over 60,000 bikes deployed and 1,500 battery swap stations operating.

Assembly and manufacturing for larger EVs and batteries is also set to expand. Gotion High-Tech, a Chinese battery company, is currently building Africa’s first battery gigafactory. It’s a $5.6 billion project that could produce 20 gigawatt-hours of batteries annually, starting in 2026. (That’s enough for hundreds of thousands of EVs each year.)

Chinese EV companies are looking to growing markets like Southeast Asia and Africa as they attempt to expand beyond an oversaturated domestic scene. BYD, the world’s largest EV company, is aggressively expanding across South Africa and plans to have as many as 70 dealerships in the country by the end of this year. That will mean more options for people in Africa looking to buy electric. 

“You have very high-quality, very affordable vehicles coming onto the market that are benefiting from the economies of scale in China. These countries stand to benefit from that,” says Kelly Carlin, a manager in the program on carbon-free transportation at the Rocky Mountain Institute, an energy think tank. “It’s a game changer,” he adds.

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

The Download: AI-enhanced cybercrime, and secure AI assistants

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

AI is already making online crimes easier. It could get much worse.

Just as software engineers are using artificial intelligence to help write code and check for bugs, hackers are using these tools to reduce the time and effort required to orchestrate an attack, lowering the barriers for less experienced attackers to try something out.

Some in Silicon Valley warn that AI is on the brink of being able to carry out fully automated attacks. But most security researchers instead argue that we should be paying closer attention to the much more immediate risks posed by AI, which is already speeding up and increasing the volume of scams.

Criminals are increasingly exploiting the latest deepfake technologies to impersonate people and swindle victims out of vast sums of money. And we need to be ready for what comes next. Read the full story.

—Rhiannon Williams

This story is from the next print issue of MIT Technology Review magazine, which is all about crime. If you haven’t already, subscribe now to receive future issues once they land.

Is a secure AI assistant possible?

AI agents are a risky business. Even when stuck inside the chatbox window, LLMs will make mistakes and behave badly. Once they have tools that they can use to interact with the outside world, such as web browsers and email addresses, the consequences of those mistakes become far more serious.

Viral AI agent project OpenClaw, which has made headlines across the world in recent weeks, harnesses existing LLMs to let users create their own bespoke assistants. For some users, this means handing over reams of personal data, from years of emails to the contents of their hard drive. That has security experts thoroughly freaked out.

In response to these concerns, its creator warned that nontechnical people should not use the software. But there’s a clear appetite for what OpenClaw is offering, and any AI companies hoping to get in on the personal assistant business will need to figure out how to build a system that will keep users’ data safe and secure. To do so, they’ll need to borrow approaches from the cutting edge of agent security research. Read the full story.

—Grace Huckins

What’s next for Chinese open-source AI

The past year has marked a turning point for Chinese AI. Since DeepSeek released its R1 reasoning model in January 2025, Chinese companies have repeatedly delivered AI models that match the performance of leading Western models at a fraction of the cost.

These models differ in a crucial way from most US models like ChatGPT or Claude, which you pay to access and can’t inspect. The Chinese companies publish their models’ weights—numerical values that get set when a model is trained—so anyone can download, run, study, and modify them. 

If open-source AI models keep getting better, they will not just offer the cheapest options for people who want access to frontier AI capabilities; they will change where innovation happens and who sets the standards. Here’s what may come next.

—Caiwei Chen

This is part of our What’s Next series, which looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of them here.

Why EVs are gaining ground in Africa

EVs are getting cheaper and more common all over the world. But the technology still faces major challenges in some markets, including many countries in Africa.

Some regions across the continent still have limited grid and charging infrastructure, and those that do have widespread electricity access sometimes face reliability issues—a problem for EV owners, who require a stable electricity source to charge up and get around. But there are some signs of progress. Read the full story.

—Casey Crownhart

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Instagram’s head has denied that social media is “clinically addictive”  
Adam Mosseri disputed allegations the platform prioritized profits over protecting its younger users’ mental health. (NYT $)
+ Meta researchers’ correspondence seems to suggest otherwise. (The Guardian)

2 The Pentagon is pushing AI companies to drop tools’ restrictions
In a bid to make AI models available on classified networks. (Reuters)
+ The Pentagon has gutted the team that tests AI and weapons systems. (MIT Technology Review)

3 The FTC has warned Apple News not to stifle conservative content
It has accused the company’s news arm of promoting what it calls “leftist outlets.” (FT $)

4 Anthropic has pledged to minimize the impact of its data centers
By covering electricity price increases and the cost of grid infrastructure upgrades. (NBC News)
+ We did the math on AI’s energy footprint. Here’s the story you haven’t heard. (MIT Technology Review)

5 Online harassers are posting Grok-generated nude images on OnlyFans 
Kylie Brewer, a feminism-focused content creator, says the latest online campaign against her feels like an escalation. (404 Media)
+ Inside the marketplace powering bespoke AI deepfakes of real women. (MIT Technology Review)

6 Venture capitalists are hedging their AI bets
They’re breaking a cardinal rule by investing in both OpenAI and rival Anthropic. (Bloomberg $)
+ OpenAI has set itself some seriously lofty revenue goals. (NYT $)
+ AI giants are notoriously inconsistent when reporting depreciation expenses. (WSJ $)

7 We’re learning more about the links between weight loss drugs and addiction
Some patients report lowered urges for drugs and alcohol. But can it last? (New Yorker $)
+ What we still don’t know about weight-loss drugs. (MIT Technology Review)

8 Meta has patented an AI that keeps the accounts of dead users active
But it claims to have “no plans to move forward” with it. (Insider $)
+ Deepfakes of your dead loved ones are a booming Chinese business. (MIT Technology Review)

9 Slime mold is cleverer than you may think
A certain type appears able to learn, remember and make decisions. (Knowable Magazine)
+ And that’s not all—this startup thinks it can help us design better cities, too. (MIT Technology Review)

10 Meditation can actually alter your brain activity 🧘
According to a new study conducted on Buddhist monks. (Wired $)

Quote of the day

“I still try to believe that the good that I’m doing is greater than the horrors that are a part of this. But there’s a limit to what we can put up with. And I’ve hit my limit.”

—An anonymous Microsoft worker explains why they’re growing increasingly frustrated with their employer’s links to ICE, the Verge reports. 

One more thing

Motor neuron diseases took their voices. AI is bringing them back.

Jules Rodriguez lost his voice in October 2024. His speech had been deteriorating since a diagnosis of amyotrophic lateral sclerosis (ALS) in 2020, but a tracheostomy to help him breathe dealt the final blow. 

Rodriguez and his wife, Maria Fernandez, who live in Miami, thought they would never hear his voice again. Then they re-created it using AI. After feeding old recordings of Rodriguez’s voice into a tool trained on voices from film, television, radio, and podcasts, the couple were able to generate a voice clone—a way for Jules to communicate in his “old voice.”

Rodriguez is one of over a thousand people with speech difficulties who have cloned their voices using free software from ElevenLabs. The AI voice clones aren’t perfect. But they represent a vast improvement on previous communication technologies and are already improving the lives of people with motor neuron diseases. Read the full story.

—Jessica Hamzelou

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ We all know how the age of the dinosaurs ended. But how did it begin?
+ There’s only one Miss Piggy—and her fashion looks through the ages are iconic.
+ Australia’s hospital for injured and orphaned flying foxes is unbearably cute.
+ 81-year-old Juan López is a fitness inspiration to us all.

AI Content Licensing for Merchants

Recent efforts by Microsoft and Amazon to develop content-licensing marketplaces for artificial intelligence models could represent an opportunity for ecommerce marketers.

A leaked Amazon Web Services slide presentation and Microsoft’s February announcement of its Publisher Content Marketplace describe initiatives that aim to solve the AI licensing problem.

REI is an excellent example of ecommerce content marketing.

AI Content

Large language models need content. They train on it and self-evaluate against it.

Yet those AI-driven interfaces increasingly answer questions without sending users to the content source. Google’s AI Overviews makes this obvious to many businesses in the form of dwindling search traffic.

Many publishers are alarmed, having built their businesses on audience reach, page views, and advertising impressions.

When AI systems summarize articles instead of referring readers, the economic model fractures. News organizations, media companies, and independent creators argue that AI platforms derive value from their work but don’t pay.

Some large publishers have made license deals, but the problem remains.

Microsoft’s Publisher Content Marketplace is one path toward a solution. The program allows publishers to license content for AI use through a centralized system that emphasizes usage-based compensation and reporting transparency.

Rather than relying exclusively on separate agreements, publishers can theoretically expose their work to multiple AI buyers while maintaining defined licensing terms.

Amazon’s reported initiative appears conceptually similar. Publishers could sell or license content to AI developers. While unconfirmed, the effort signals a broader industry shift toward formalized access to content for AI systems rather than unstructured scraping.

Economics

These and similar marketplaces could reshape how value flows between content producers and AI builders.

For publishers, a marketplace implies more predictable compensation and greater control. For AI developers, it offers a defensible content supply chain that reduces legal uncertainty. In principle, marketplaces reduce friction by normalizing pricing, usage measurement, and participation mechanics.

Content Marketing

While the licensing debate centers on publishers, ecommerce marketers should watch closely, too.

For years, some retailers have produced publisher-like content to attract, engage, and retain shoppers. Buying guides, tutorials, recipes, and project libraries increasingly sit alongside product catalogs.

Prominent examples, discussed below, include REI, Williams Sonoma, and Rockler.

Much of ecommerce content marketing operates on the principle of reciprocity. Retailers provide useful information, and consumers reward it with trust, attention, and eventual purchases. The strategy does not depend solely on immediate transactions. It builds long-term preference and brand affinity, similar to that of publishers.

In fact, not too long ago, publishers complained that some forms of content marketing represented direct competition.

Content Traits

The distinction between two types of ecommerce marketing content is worth noting.

The first is promoting products. Content marketers and search engine optimizers work hand in glove to expose products. AI has made this more difficult.

Product feeds are a potential solution. The feeds would originate from ecommerce platforms such as Shopify or marketplaces like Walmart, which have direct relationships with AI businesses.
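
A product feed here just means structured product data that a platform could hand to an AI system directly. As a hedged illustration, the entry below uses assumed field names, not any platform’s actual schema.

    # Illustrative product-feed entry; field names are assumptions, not a real
    # Shopify or Walmart schema.
    product_feed_item = {
        "id": "SKU-12345",
        "title": "Two-Person Backpacking Tent",
        "description": "Three-season tent, 2.1 kg, aluminum poles.",
        "price": {"amount": 249.00, "currency": "USD"},
        "availability": "in_stock",
        "brand": "ExampleOutfitters",
        "product_url": "https://example.com/products/sku-12345",
        "image_url": "https://example.com/images/sku-12345.jpg",
    }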

The second type is publisher-style and reciprocity-driven: the articles, videos, and podcasts that attract shoppers. It is distinct from product-focused content and has at least three aims.

  • Relationships first. Reciprocity-based content creates value independent of short-term purchases. It’s a back door to ecommerce sales and builds customer relationships. REI’s educational posts and videos help outdoor enthusiasts develop skills, whether or not a transaction occurs immediately.
  • Brand affinity and trust. In the same way publishers seek authority, content marketers instill confidence. For example, Williams Sonoma’s recipe and entertaining collections position the retailer as an authority in cooking and hospitality. Shoppers engage with the brand through expertise, not only merchandise.
  • Audience development, wherein the marketer is akin to a media company, with content that drives search engine rankings, repeat visits, email subscribers, and consumer preferences. Rockler operates as a niche publisher with its learning center that cultivates repeat visits and sustained engagement.

Content Opportunity

When they produce publisher-style content, marketers gain access to publisher-oriented tools, including emerging AI content marketplaces.

Yet the motivation differs. Publishers seek licensing revenue, while merchants seek discovery and visibility. Thus content-license marketplaces are a potential ecommerce opportunity to expose a brand’s products and expertise across AI-driven interfaces.

Google Clarifies Its Stance On Campaign Consolidation

In the recent episode of Google’s Ads Decoded podcast, Ginny Marvin sat down with Brandon Ervin, Director of Product Management for Search Ads, to address a topic many PPC marketers have strong opinions about: campaign and ad group consolidation.

Ervin, who oversees product development across core Search and Shopping ad automation, including query matching, Smart Bidding, Dynamic Search Ads, budgeting, and AI-driven systems, made one thing clear.

Consolidation is not the end goal. Equal or better performance with less granularity is.

What Was Said

During the discussion, Ervin acknowledged that many legacy account structures were built with good reason.

“What people were doing before was quite rational,” he said.

For years, granular campaign builds gave advertisers control. Match type segmentation, tightly themed ad groups, layered bidding strategies, and regional splits all made sense in a manual or semi-automated environment.

But according to Ervin, the rise of Smart Bidding and AI has shifted that dynamic.

“The big shift we’ve seen with the rise of Smart Bidding and AI, the machine in general can do much better than most humans. Consolidation is not necessarily the goal itself. This evolution we’ve gone through allows you to get equal or better performance with a lot less granularity.”

In other words, the structure that once helped performance may now be limiting it.

Ervin also pushed back on the idea that consolidation means losing control.

“Control still exists,” he said. “It just looks different than it did before.”

Ginny Marvin described it as a “mindset shift.”

When Segmentation Still Makes Sense

Despite Google’s push toward leaner account structures, Ervin did not suggest collapsing everything into one campaign.

Segmentation still makes sense when it reflects how a business actually operates.

Examples he shared included:

  • Distinct product lines with separate budgets and bidding goals
  • Different business objectives that require their own targets or reporting
  • Regional splits if that mirrors how the company runs operations

The key distinction is intent. If structure supports real budget decisions, reporting requirements, or operational differences, it belongs. If it exists only because that was the best practice five years ago, it may be creating more friction than value.

Ervin also addressed a common concern: how do you know when you’ve consolidated enough?

His benchmark was 15 conversions over a 30-day period. Those conversions do not need to come from a single campaign. Shared budgets and portfolio bidding strategies can aggregate conversion data across campaigns to meet that threshold.

If your campaign or ad group segmentation dilutes learning and slows down bidding models, it may be time to rethink your structure.

Why This Matters

For many PPC professionals, granularity has long been associated with expertise. Highly segmented accounts, tightly themed ad groups, and cautious use of broad match were once signs of disciplined management.

In earlier versions of Google Ads, that level of control often made a measurable difference.

I used to build accounts that way, too. When I managed highly competitive and seasonal e-commerce brands, SKAG structures were common practice for good reason. It was a way to better control budget for high-volume, generic terms that performed differently than more niche, long-tail terms.

What has changed my mindset is not the importance of structure, but the role it plays in my accounts. As Smart Bidding and automation have matured, I have seen firsthand how legacy segmentation can dilute data and slow down learning.

In several accounts where consolidation was tested thoughtfully, performance stabilized and, in some cases, improved. Especially in accounts I managed that had low conversion volume as a whole. What I thought was a perfectly built account structure was actually limiting performance because I was trying to spread budget and conversion volume too thin.

After a few months of poor performance, I was essentially “forced” to test out a simpler campaign structure and let go of old habits.

Was it uncomfortable? Absolutely. When you’ve been doing PPC for years (think back to when Google Shopping was first free!), you’re essentially unlearning years of ‘best practices’ and having to learn a new way of managing accounts.

That does not mean consolidation is always the answer. It does suggest that structure should be tied directly to business logic, not inherited from best practices that were built for a different version of the platform.

Looking Ahead

If you’re in the camp of needing to start consolidating campaigns or ad groups, know that these large structural changes should not happen overnight.

For many teams, especially those managing complex accounts, restructuring can carry risk and large volatility spikes if it is done too aggressively.

A more measured approach may make sense. Start by identifying splits that clearly align with budgets, reporting requirements, or business priorities. Then evaluate the ones that exist primarily because they were once considered best practice.

In some cases, consolidation may unlock stronger data signals and steadier bidding. In others, maintaining separation may still be justified. The key is being intentional about the reason each layer exists.

The Classifier Layer: Spam, Safety, Intent, Trust Stand Between You And The Answer

Most people still think visibility is a ranking problem. That worked when discovery lived in 10 blue links. It breaks down when discovery happens inside an answer layer.

Answer engines have to filter aggressively. They are assembling responses, not returning a list. They are also carrying more risk. A bad result can become harmful advice, a scam recommendation, or a confident lie delivered in a friendly tone. So the systems that power search and LLM experiences rely on classification gates long before they decide what to rank or what to cite.

If you want to be visible in the answer layer, you need to clear those gates.

SSIT is a simple way to name what’s happening. Spam, Safety, Intent, Trust. Four classifier jobs sit between your content and the output a user sees. They sort, route, and filter long before retrieval, ranking, or citation.

Spam: The Manipulation Gate

Spam classifiers exist to catch scaled manipulation. They are upstream and unforgiving, and if you trip them, you can be suppressed before relevance even enters the conversation.

Google is explicit that it uses automated systems to detect spam and keep it out of search results. It also describes how those systems evolve over time and how manual review can complement automation.

Google has also named a system directly in its spam update documentation. SpamBrain is described as an AI-based spam prevention system that it continually improves to catch new spam patterns.

For SEOs, spam detection behaves like pattern recognition at scale. Your site gets judged as a population of pages, not a set of one-offs. Templates, footprints, link patterns, duplication, and scaling behavior all become signals. That’s why spam hits often feel unfair. Single pages look fine; the aggregate looks engineered.

If you publish a hundred pages that share the same structure, phrasing, internal links, and thin promise, classifiers see the pattern.

Google’s spam policies are a useful map of what the spam gate tries to prevent. Read them like a spec for failure modes, then connect each policy category to a real pattern on your site that you can remove.

Manual actions remain part of this ecosystem. Google documents that manual actions can be applied when a human reviewer determines a site violates its spam policies.

There is an uncomfortable SEO truth hiding in this. If your growth play relies on behaviors that resemble manipulation, you are betting your business on a classifier not noticing, not learning, and not adapting. That is not a stable bet.

Safety: The Harm And Fraud Gate

Safety classifiers are about user protection. They focus on harm, deception, and fraud. They do not care if your keyword targeting is perfect, but they do care if your experience looks risky.

Google has made public claims about major improvements in scam detection using AI, including catching more scam pages and reducing specific forms of impersonation scams.

Even if you ignore the exact numbers, the direction is clear. Safety classification is a core product priority, and it shapes visibility hardest where users can be harmed financially, medically, or emotionally.

This is where many legitimate sites accidentally look suspicious. Safety classifiers are conservative, and they work at the level of pattern and context. Monetization-heavy layouts, thin lead gen pages, confusing ownership, aggressive outbound pushes, and inflated claims can all resemble common scam patterns when they show up at scale.

If you operate in affiliate, lead gen, local services, finance, health, or any category where scams are common, you should assume the safety gate is strict. Then build your site so it reads as legitimate without effort.

That comes down to basic trust hygiene.

Make ownership obvious. Use consistent brand identifiers across the site. Provide clear contact paths. Be transparent about monetization. Avoid claims that cannot be defended. Include constraints and caveats in the content itself, not hidden in a footer.

If your site has ever been compromised, or if you operate in a neighborhood of risky outbound links, you also inherit risk. Safety classifiers treat proximity as a signal because threat actors cluster. Cleaning up your link ecosystem and site security is no longer only a technical responsibility; it’s visibility defense.

Intent: The Routing Gate

Intent classification determines what the system believes the user is trying to accomplish. That decision shapes the retrieval path, the ranking behavior, the format of the answer, and which sources get pulled into the response.

This matters more as search shifts from browsing sessions to decision sessions. In a list-based system, the user can correct the system by clicking a different result. In an answer system, the system makes more choices on the user’s behalf.

Intent classification is also broader than the old SEO debates about informational versus transactional. Modern systems try to identify local intent, freshness intent, comparative intent, procedural intent, and high-stakes intent. These intent classes change what the system considers helpful and safe. In fact, if you deep-dive into “intents,” you’ll find that so many more don’t even fit into our crisply defined, marketing-designed boxes. Most marketers build for maybe three to four intents. The systems you’re trying to win in often operate with more, and research taxonomies already show how intent explodes into dozens of types when you measure real tasks instead of neat categories.
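
As a rough sketch of what a routing gate does (real systems use learned classifiers over far more intent types than this), you can picture intent classification as choosing a retrieval path. Every cue list, class name, and path below is invented for illustration.

```python
# Toy intent router: classify a query into a coarse intent class, then
# choose a retrieval strategy. The cue lists, class names, and retrieval
# paths are all invented; real systems learn these from data.
INTENT_CUES = {
    "procedural": ["how to", "steps to", "install", "fix"],
    "comparative": ["best", " vs ", "top 10", "compare"],
    "local": ["near me", "open now", "in austin"],
    "high_stakes": ["dosage", "refinance", "symptoms of"],
}

RETRIEVAL_PATHS = {
    "procedural": "prefer pages with explicit steps and requirements",
    "comparative": "prefer pages with stated criteria and clear fit guidance",
    "local": "require location proof; pull from a local index",
    "high_stakes": "restrict to high-trust sources; tighten safety filtering",
    "informational": "general explainers",
}

def classify_intent(query: str) -> str:
    q = query.lower()
    for intent, cues in INTENT_CUES.items():
        if any(cue in q for cue in cues):
            return intent
    return "informational"

def route(query: str) -> str:
    intent = classify_intent(query)
    return f"{intent} -> {RETRIEVAL_PATHS[intent]}"

print(route("best crm for a small nonprofit"))
print(route("how to replace a water heater element"))
```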

If you want consistent visibility, make intent alignment obvious and commit each page to a primary task.

  • If a page is a “how to,” make it procedural. Lead with the outcome. Present steps. Include requirements and failure modes early.
  • If a page is a “best options” piece, make it comparative. Define your criteria. Explain who each option fits and who it does not.
  • If a page is local, behave like a local result. Include real local proof and service boundaries. Remove generic filler that makes the page look like a template.
  • If a page is high-stakes, be disciplined. Avoid sweeping guarantees. Include evidence trails. Use precise language. Make boundaries explicit.

Intent clarity also helps across classic ranking systems, and it can help reduce pogo behavior and improve satisfaction signals. More importantly for the answer layer, it gives the system clean blocks to retrieve and use.

Trust: The “Should We Use This” Gate

Trust is the gate that decides whether content is used, how much it is used, and whether it is cited. You can be retrieved and still not make the cut. You can be used and still not be cited. You can show up one day and disappear the next because the system saw slightly different context and made different selections.

Trust sits at the intersection of source reputation, content quality, and risk.

At the source level, trust is shaped by history: domain behavior over time, link graph context, brand footprint, author identity, consistency, and how often the site is associated with reliable information.

At the content level, trust is shaped by how safe it is to quote. Specificity matters. Internal consistency matters. Clear definitions matter. Evidence trails matter. So does writing that makes it hard to misinterpret.

LLM products also make classification gates explicit in their developer tooling. OpenAI’s moderation guide documents classification of text and images for safety purposes, so developers can filter or intervene.
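
For reference, calling that moderation endpoint is roughly this simple in OpenAI’s current Python SDK; treat the model name and response fields as things to verify against the guide rather than gospel.

```python
# Rough sketch of OpenAI's moderation endpoint via the Python SDK.
# Verify the model name and response fields against the current docs.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.moderations.create(
    model="omni-moderation-latest",
    input="Text you want screened before it reaches a user.",
)

result = response.results[0]
print(result.flagged)      # True if any safety category was tripped
print(result.categories)   # per-category booleans (e.g., harassment, self-harm)
```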

Even if you are not building with APIs, the existence of this tooling reflects the reality of modern systems. Classification happens before output, and policy compliance influences what can be surfaced. For SEOs, the trust gate is where most AI optimization advice gets exposed. Sounding authoritative is easy, but being safe to use takes precision, boundaries, evidence, and plain language.

Content that is safe to use also comes in blocks that can stand alone.

Answer engines extract. They reassemble, and they summarize. That means your best asset is a self-contained unit that still makes sense when it is pulled out of the page and placed into a response.

A good self-contained block typically includes a clear statement, a short explanation, a boundary condition, and either an example or a source reference. When your content has those blocks, it becomes easier for the system to use it without introducing risk.

How SSIT Flows Together In The Real World

In practice, the gates stack.

First, the system evaluates whether a site and its pages look spammy or manipulative. This can affect crawl frequency, indexing behavior, and ranking potential. Next, it evaluates whether the content or experience looks risky. In some categories, safety checks can suppress visibility even when relevance is high. Then it evaluates intent. It decides what the user wants and routes retrieval accordingly. If your page does not match the intent class cleanly, it becomes less likely to be selected.

Finally, it evaluates trust for usage. That is where decisions get made about quoting, citing, summarizing, or ignoring. The key point for AI optimization is not that you should try to game these gates. The point is that you should avoid failing them.
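
One way to internalize the stacking is to sketch it as a pipeline. The scores and thresholds below are invented placeholders, not real classifier outputs; the point is only that each gate can end the evaluation before the next one runs.

```python
# Sketch of the SSIT "gates stack" idea: a page must clear each check
# before it is even considered at the next stage. Scores and thresholds
# are invented placeholders, not real classifier outputs.
from dataclasses import dataclass

@dataclass
class Page:
    url: str
    spam_score: float    # 0 = clean, 1 = looks engineered at scale
    risk_score: float    # 0 = safe, 1 = resembles scam patterns
    intent: str          # the task the page is actually built for
    trust_score: float   # 0 = unquotable, 1 = safe to cite

def usable_in_answer(page: Page, query_intent: str) -> bool:
    if page.spam_score > 0.7:        # spam gate: suppressed before relevance
        return False
    if page.risk_score > 0.5:        # safety gate: stricter in risky categories
        return False
    if page.intent != query_intent:  # intent gate: routed away, not "outranked"
        return False
    return page.trust_score > 0.6    # trust gate: decides quoting and citation

page = Page("/replace-water-heater-element", spam_score=0.1,
            risk_score=0.1, intent="procedural", trust_score=0.8)
print(usable_in_answer(page, query_intent="procedural"))  # True
```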

Most brands lose visibility in the answer layer for boring reasons. They look like scaled templates. They hide important legitimacy signals. They publish vague content that is hard to quote safely. They try to cover five intents in one page and satisfy none of them fully.

If you address those issues, you are doing better “AI optimization” than most teams chasing prompt hacks.

Where “Classifiers Inside The Model” Fit, Without Turning This Into A Computer Science Lecture

Some classification happens inside model architectures as routing decisions. Mixture of Experts approaches are a common example, where a routing mechanism selects which experts process a given input to improve efficiency and capability. NVIDIA also provides a plain-language overview of Mixture of Experts as a concept.
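
If it helps to see the routing idea in miniature, here is a toy gating layer. The dimensions, random weights, and top-k choice are arbitrary stand-ins for what a trained router would learn; this is an illustration, not a production architecture.

```python
# Toy Mixture-of-Experts routing: a gate scores each expert for the input
# and only the top-k experts run. Dimensions and weights are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
num_experts, d_model, top_k = 4, 8, 2

gate_w = rng.normal(size=(d_model, num_experts))            # router weights
experts = [rng.normal(size=(d_model, d_model)) for _ in range(num_experts)]

def moe_layer(x: np.ndarray) -> np.ndarray:
    scores = x @ gate_w                                      # one score per expert
    probs = np.exp(scores) / np.exp(scores).sum()            # softmax over experts
    chosen = np.argsort(probs)[-top_k:]                      # keep only the top-k
    out = np.zeros_like(x)
    for i in chosen:
        out += probs[i] * (x @ experts[i])                   # weighted expert outputs
    return out

x = rng.normal(size=d_model)
print(moe_layer(x).shape)  # (8,): same shape out, but only 2 of 4 experts ran
```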

This matters because it reinforces the broader mental model. Modern AI systems rely on routing and gating at multiple layers. Not every gate is directly actionable for SEO, but the presence of gates is the point. If you want predictable visibility, you build for the gates you can influence.

What To Do With This, Practical Moves For SEOs

Start by treating SSIT like a diagnostic framework. When visibility drops in an answer engine, do not jump straight to “ranking.” Ask where you might have failed in the chain.

Spam Hygiene Improvements

Audit at the template level. Look for scaled patterns that resemble manipulation when aggregated. Remove doorway clusters and near-duplicate pages that do not add unique value. Reduce internal link patterns that exist only to sculpt anchors. Identify pages that exist only to rank and cannot defend their existence as a user outcome.

Use Google’s spam policy categories as the baseline for this audit, because they map to common classifier objectives.

Safety Hygiene Improvements

Assume conservative filtering in categories where scams are common. Strengthen legitimacy signals on every page that asks for money, personal data, a phone call, or a lead. Make ownership and contact information easy to find. Use transparent disclosures. Avoid inflated claims. Include constraints inside the content.

If you publish in YMYL-adjacent categories, tighten your editorial standards. Add sourcing. Track updates. Remove stale advice. Safety gates punish stale content because it can become harmful.

Intent Hygiene Improvements

Choose the primary job of the page and make it obvious in the first screen. Align the structure to the task. A procedural page should read like a procedure. A comparison page should read like a comparison. A local page should prove locality.

Do not rely on headers and keywords to communicate this. Make it obvious in sentences that a system can extract.

Trust Hygiene Improvements

Build citeable blocks that stand on their own. Use tight definitions. Provide evidence trails. Include boundaries and constraints. Avoid vague, sweeping statements that cannot be defended. If your content is opinion-led, label it as such and support it with rationale. If your content is claim-led, cite sources or provide measurable examples.

This is also where authorship and brand footprint matter. Trust is not only on-page. It is the broader set of signals that tell systems you exist in the world as a real entity.

SSIT As A Measurement Mindset

If you are building or buying “AI visibility” reporting, SSIT changes how you interpret what you see.

  • A drop in presence can mean a spam classifier dampened you.
  • A drop in citations can mean a trust classifier avoided quoting you.
  • A mismatch between retrieval and usage can mean intent misalignment.
  • Category-level invisibility can mean safety gating.

That diagnostic framing matters because it leads to fixes you can execute. It also stops teams from thrashing, rewriting everything, and hoping the next version sticks.

SSIT also keeps you grounded. It is tempting to treat AI optimization as a new discipline with new hacks. Most of it is not hacks. It is hygiene, clarity, and trust-building, applied to systems that filter harder than the old web did. That’s the real shift.

The answer layer is not just ranking content; it is also selecting content. That selection happens through classifiers that are trained to reduce risk and improve usefulness. When you plan for Spam, Safety, Intent, and Trust, you stop guessing. You start designing content and experiences that survive the gates.

That is how you earn a place in the answer layer, and keep it.


This post was originally published on Duane Forrester Decodes.


Featured Image: Olga_TG/Shutterstock

From Performance SEO To Demand SEO via @sejournal, @TaylorDanRW

AI is fundamentally changing what it means to do SEO. Not just in how results are presented, but in how brands are discovered, understood, and trusted inside the very systems people now rely on to learn, evaluate, and make decisions. This forces a reassessment of our role as SEOs, the tools and frameworks we use, and the way success is measured beyond legacy reporting models that were built for a very different search environment.

Continuing to rely on vanity metrics rooted in clicks and rankings no longer reflects reality, particularly as people increasingly encounter and learn about brands without ever visiting a website.

For most of its history, SEO focused on helping people find you within a static list of results. Keywords, content, and links existed primarily to earn a click from someone who already recognized a need and was actively searching for a solution.

AI disrupts that model by moving discovery into the answer itself, returning a single synthesized response that references only a small number of brands, which naturally reduces overall clicks while simultaneously increasing the number of brand touchpoints and moments of exposure that shape perception and preference. This is not a traffic loss problem, but a demand creation opportunity. Every time a brand appears inside an AI-generated answer, it is placed directly into the buyer’s mental shortlist, building mental availability even when the user has never encountered the brand before.

Why AI Visibility Creates Demand, Not Just Traffic

Traditional SEO excelled at capturing existing demand by supporting users as they moved through a sequence of searches that refined and clarified a problem before leading them towards a solution.

AI now operates much earlier in that journey, shaping how people understand categories, options, and tradeoffs before they ever begin comparing vendors, effectively pulling what we used to think of as middle and bottom-of-funnel activity further upstream. People increasingly use AI to explore unfamiliar spaces, weigh alternatives, and design solutions that fit their specific context, which means that when a brand is repeatedly named, explained, or referenced, it begins to influence how the market defines what good looks like.

This repeated exposure builds familiarity over time, so that when a decision moment eventually arrives, the brand feels known and credible rather than new and untested, which is demand generation playing out inside the systems people already trust and use daily.

Unlike above-the-line advertising, this familiarity is built natively within tools that have become deeply embedded in everyday life through smartphones, assistants, and other connected devices, making this shift not only technical but behavioral, rooted in how people now access and process information.

How This Changes The Role Of SEO

As AI systems increasingly summarize, filter, and recommend on behalf of users, SEO has to move beyond optimizing individual pages and instead focus on making a brand easy for machines to understand, trust, and reuse across different contexts and queries.

This shift is most clearly reflected in the long-running move from keywords to entities, where keywords still matter but are no longer the primary organizing principle, because AI systems care more about who a brand is, what it does, where it operates, and which problems it solves.

That pushes modern SEO towards clearly defined and consistently expressed brand boundaries, where category, use cases, and differentiation are explicit across the web, even when that creates tension with highly optimized commercial landing pages.

AI systems rely heavily on trust signals such as citations, consensus, reviews, and verifiable facts, which means traditional ranking factors still play a role, but increasingly as proof points that an AI system can safely rely on when constructing answers. When an AI cannot confidently answer basic questions about a brand, it hesitates to recommend it, whereas when it can, that brand becomes a dependable component it can repeatedly draw upon.

This changes the questions SEO teams need to ask, shifting focus away from rankings alone and toward whether content genuinely shapes category understanding, whether trusted publishers reference the brand, and whether information about the brand remains consistent wherever it appears.

Narrative control also changes, because where brands once shaped their story through pages in a list of results, AI now tells the story itself, requiring SEOs to work far more closely with brand and communication teams to reinforce simple, consistent language and a small number of clear value propositions that AI systems can easily compress into accurate summaries.

What Brands Need To Do Differently

Brands need to stop starting their strategies with keywords and instead begin by assessing their strength and clarity as an entity, looking at what search engines and other systems already understand about them and how consistent that understanding really is.

The most valuable AI moments occur long before a buyer is ready to compare vendors, at the point where they are still forming opinions about the problem space, which means appearing by name in those early exploratory questions allows a brand to influence how the problem itself is framed and to build mental availability before any shortlist exists.

Achieving that requires focus rather than breadth, because trying to appear in every possible conversation dilutes clarity, whereas deliberately choosing which problems and perspectives to own creates stronger and more coherent signals for AI systems to work with.

This represents a move away from chasing as many keywords as possible in favor of standardizing a simple brand story that uses clear language everywhere, so that what you do, who it is for, and why it matters can be expressed in one clean, repeatable sentence.

This shift also demands a fundamental change in how SEO success is measured and reported, because if performance continues to be judged primarily through rankings and clicks, AI visibility will always look underwhelming, even though its real impact happens upstream by shaping preference and intent over time.

Instead, teams need to look at patterns across branded search growth, direct traffic, lead quality, and customer outcomes, because when reporting reflects that broader reality, it becomes clear that as AI visibility grows, demand follows, repositioning SEO from a purely tactical channel into a strategic lever for long-term growth.


Featured Image: Roman Samborskyi/Shutterstock

WP Engine Complaint Adds Unredacted Allegations About Mullenweg Plan via @sejournal, @martinibuster

WP Engine recently filed its third amended complaint against WordPress co-founder Matt Mullenweg and Automattic, which includes newly unredacted allegations that Mullenweg identified ten companies to pursue for licensing fees and contacted a Stripe executive in an effort to persuade Stripe to cancel contracts and partnerships with WPE.

Mullenweg And “Nuclear War”

The defendants argued that Mullenweg did not use the phrase “nuclear war.” However, documents they produced show that he used the phrase in a message describing his response to WP Engine if it did not comply with his demands.

The footnote states:

“During the recent hearing before this Court, Defendants represented that “we have seen over and over again ‘nuclear war’ in quotes,” but Mullenweg “didn’t say it” and it “[d]idn’t happen.” August 28, 2025 Hrg. Tr. at 33. According to Defendants’ counsel, Mullenweg instead only “refers to nuclear,” not “nuclear war.””

While WPE alleges that both threats are abhorrent and wrongful, reflecting a distinction without a difference, documents recently produced by Defendants confirm that in a September 13, 2024 message sent shortly before Defendants launched their campaign against WPE, Mullenweg declared “for example with WPE . . . [i]f that doesn’t resolve well it’ll look like all-out nuclear war[.]”

Email From Matt Mullenweg To A Stripe Executive

Another newly unredacted detail is an email from Matt Mullenweg to a Stripe executive in which he asked Stripe to “cancel any contracts or partnerships with WP Engine.” Stripe is a financial infrastructure platform that enables companies to accept credit card payments online.

The new information appears in the third amended complaint:

“In a further effort to inflict harm upon WPE and the market, Defendants secretly sought to strongarm Stripe into ceasing any business dealings with WPE. Shocking documents Defendants recently produced in discovery reveal that in mid-October 2024, just days after WPE brought this lawsuit, Mullenweg emailed a Stripe senior executive, insisting that Stripe “cancel any contracts or partnerships with WP Engine,” and threatening, “[i]f you chose not to do so, we should exit our contracts.”

“Destroy All Competition”

In paragraphs 200 and 202, WP Engine alleges that Defendants acknowledged having the power to “destroy all competition” and were seeking contributions that benefited Automattic rather than the WordPress.org community. WPE argues that Mullenweg abused his roles as the head of a nonprofit foundation, the owner of critical “dot-org” infrastructure, and the CEO of a for-profit competitor, Automattic.

These paragraphs appear intended to support WP Engine’s claim that the “Five for the Future” program and other community-oriented initiatives were used as leverage to pressure competitors into funding Automattic’s commercial interests. The complaint asserts that only a monopolist could make such demands and successfully coerce competitors in this manner.

Here are the paragraphs:

“Indeed, in documents recently produced by Defendants, they shockingly acknowledge that they have the power to “destroy all competition” and would inflict that harm upon market participants unless they capitulated to Defendants’ extortionate demands.”

“…Defendants’ monopoly power is so overwhelming that, while claiming they are interested in encouraging their competitors to “contribute to the community,” internal documents recently produced by Defendants reveal the truth—that they are engaged in an anticompetitive campaign to coerce their competitors to “contribute to Automattic.” Only a monopolist could possibly make such demands, and coerce their competitors to meet them, as has occurred here.”

“They Get The Same Thing Today For Free”

Additional paragraphs allege that internal documents contradict the defendants’ claim that their trademark enforcement is legitimate by acknowledging that certain WordPress hosts were already receiving the same benefits for free.

The new paragraph states:

“Contradicting Defendants’ current claim that their enforcement of supposed trademarks is legitimate, Defendants conceded internally that “any Tier 1 host (WPE for example)” would “pushback” on agreeing to a purported trademark license because “they get the same thing today for free. They’ve never paid for [the WordPress] trademarks and won’t want to pay …”

“If They Don’t Take The Carrot We’ll Give Them The Stick”

Paragraphs 211, 214, and 215 cite internal correspondence that WP Engine alleges reflects an intention to enforce compliance using a “carrot” or “stick” approach. The complaint uses this language to support its claims of market power and exclusionary conduct, which form the basis of its coercion and monopolization allegations under the Sherman Act.

Paragraph 211:

“Given their market power, Defendants expected to be able to enforce compliance, whether with a “carrot” or a “stick.””

Paragraph 214

“Defendants’ internal discussions further reveal that if market participants did not acquiesce to the price increases via a partnership with a purported trademark license component, then “they are fair game” and Defendants would start stealing their sites, thereby effectively eliminating those competitors. As Defendants’ internal correspondence states, “if they don’t take the carrot we’ll give them the stick.””

Paragraph 215:

“As part of their scheme, Defendants initially categorized particular market participants as follows:
• “We have friends (like Newfold) who pay us a lot of money. We want to nurture and value these relationships.”
• “We have would-be friends (like WP Engine) who are mostly good citizens within the WP ecosystem but don’t directly contribute to Automattic. We hope to change this.”
• “And then there are the charlatans ( and ) who don’t contribute. The charlatans are free game, and we should steal every single WP site that they host.””

Plan To Target At Least Ten Competitors

Paragraphs 218, 219, and 220 serve to:

  • Support WP Engine’s claim that WPE was the “public example” of what the complaint describes as a broader plan to target at least ten other competitors with similar trademark-related demands.
  • Allege that certain competitors were paying what the complaint describes as “exorbitant sums” tied to trademark arrangements.

WP Engine argues that these allegations show the demands extended beyond WPE and were part of a broader pattern.

The complaint cites internal documents produced by Defendants in which Mullenweg claimed he had “shield[ed]” a competitor “from directly competitive actions,” which WP Engine cites as evidence that Defendants had and exercised the ability to influence competitive conditions through these arrangements.

In those same internal documents, proposed payments were described as “not going to work,” which the complaint uses to argue that the payment amounts were not standardized but could be increased at Defendants’ insistence.

Here are the paragraphs:

“218. Ultimately, WPE was the public example of the “stick” part of Defendants’ “trademark license” demand. But while WPE decided to stand and fight by refusing Defendants’ ransom demand, Defendants’ list included at least ten other competitors that they planned to target with similar demands to pay Defendants’ bounty.

219. Indeed, based on documents that Defendants have recently produced in discovery, other competitors such as Newfold and [REDACTED] are paying Defendants exorbitant sums as part of deals that include “the use of” Defendants’ trademarks.

220. Regarding [REDACTED], in internal documents produced by Defendants, [REDACTED] confirmed that “[t]he money we’re sending from the hosting page is going to you directly”.

In return, Mullenweg claimed he apparently “shield[ed]” [REDACTED] “from directly competitive actions from a number of places[.]”.

Mullenweg further criticized the level of contributions for the month of August 2024, claiming “I’d need 3 years of that to get a new Earthroamer”.

Confronted with Mullenweg’s demand for more, [REDACTED] described itself as “the smallest fish,” suggesting that Mullenweg “can get more money from other companies,” and asking whether [REDACTED] was “the only ones you’re asking to make this change” in an apparent reference to “whatever trademark guidelines you send over”.

Mullenweg responded “nope[.]”. Later, on November 26, 2024—the same day this Court held the preliminary injunction hearing—Mullenweg told [REDACTED] that its proposed “monthly payment of [REDACTED] and contributions to wordpress.org were not “going to work,” and wished it “[b]est of luck” in resisting Defendants’ higher demands.”

WP Engine Versus Mullenweg And Automattic

Much of the previously redacted material is presented to support WP Engine’s antitrust claims, including statements that Defendants had the power to “destroy all competition.” What happens next is up to the judge.

Featured Image by Shutterstock/Kues

EVs could be cheaper to own than gas cars in Africa by 2040

Electric vehicles could be economically competitive in Africa sooner than expected. Just 1% of new cars sold across the continent in 2025 were electric, but a new analysis finds that with solar off-grid charging, EVs could be cheaper to own than gas vehicles by 2040.

There are major barriers to higher EV uptake in many countries in Africa, including a sometimes unreliable grid, limited charging infrastructure, and a lack of access to affordable financing. As a result, some previous analyses have suggested that fossil-fuel vehicles would dominate in Africa through at least 2050.

But as batteries and the vehicles they power continue to get cheaper, the economic case for EVs is building. Electric two-wheelers, cars, larger automobiles, and even minibuses could compete in most African countries in just 15 years, according to the new study, published in Nature Energy.

“EVs have serious economic potential in most African countries in the not-so-distant future,” says Bessie Noll, a senior researcher at ETH Zürich and one of the authors of the study.

The study considered the total cost of ownership over the lifetime of a vehicle. That includes the sticker price, financing costs, and the cost of fueling (or charging). The researchers didn’t consider policy-related costs like taxes, import fees, and government subsidies, choosing to focus instead on only the underlying economics.

EVs are getting cheaper every year as battery and vehicle manufacturing improve and production scales, and the researchers found that in most cases and in most places across Africa, EVs are expected to be cheaper than equivalent gas-powered vehicles by 2040. EVs should also be less expensive than vehicles that use synthetic fuels. 

For two-wheelers like electric scooters, EVs could be the cheaper option even sooner: with smaller, cheaper batteries, these vehicles will be economically competitive by the end of the decade. On the other hand, one of the most difficult segments for EVs to compete in is small cars, says Christian Moretti, a researcher at ETH Zürich and the Paul Scherrer Institute in Switzerland.

Because some countries still have limited or unreliable grid access, charging is a major barrier to EV uptake, Noll says. So for EVs, the authors analyzed the cost of buying not only the vehicle but also a solar off-grid charging system. This includes solar panels, batteries, and the inverter required to convert the electricity into a form that can charge an EV. (The additional batteries help the system store energy for charging at times when the sun isn’t shining.)

Mini grids and other standalone systems that include solar panels and energy storage are increasingly common across Africa. It’s possible that this might be a primary way that EV owners in Africa will charge their vehicles in the future, Noll says.

One of the bigger barriers to EVs in Africa is financing costs, she adds. In some cases, the cost of financing can be more than the up-front cost of the vehicle, significantly driving up the cost of ownership.

Today, EVs are more expensive than equivalent gas-powered vehicles in much of the world. But in places where it’s relatively cheap to borrow money, that difference can be spread out across the course of a vehicle’s whole lifetime for little cost. Then, since it’s often cheaper to charge an EV than fuel a gas-powered car, the EV is less expensive over time. 

In some African countries, however, political instability and uncertain economic conditions make borrowing money more expensive. To some extent, the high financing costs affect the purchase of any vehicle, regardless of how it’s powered. But EVs are more expensive up front than equivalent gas-powered cars, and that higher up-front cost adds up to more interest paid over time. In some cases, financing an EV can also be more expensive than financing a gas vehicle—the technology is newer, and banks may see the purchase as more of a risk and charge a higher interest rate, says Kelly Carlin, a manager in the program on carbon-free transportation at the Rocky Mountain Institute, an energy think tank.
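
To see why the financing math bites, here is a deliberately crude total-cost-of-ownership sketch. Every number is invented to show the mechanics; none of it comes from the study, which models costs in far more detail than this simple-interest comparison.

```python
# Purely illustrative total-cost-of-ownership comparison. All prices,
# energy costs, and interest rates below are made up; the study's own
# inputs and results are much richer than this simple-interest model.
def total_cost_of_ownership(sticker_price, annual_energy_cost,
                            annual_interest_rate, years=10):
    """Sticker price + crude simple interest + fuel or charging costs."""
    financing = sticker_price * annual_interest_rate * years
    return sticker_price + financing + annual_energy_cost * years

# With cheap credit, the EV's lower running costs outweigh its higher price.
print(total_cost_of_ownership(9_000, 300, 0.05))  # hypothetical EV: 16500
print(total_cost_of_ownership(7_000, 900, 0.05))  # hypothetical gas car: 19500

# With expensive credit, interest on the bigger up-front cost erodes that edge.
print(total_cost_of_ownership(9_000, 300, 0.25))  # hypothetical EV: 34500
print(total_cost_of_ownership(7_000, 900, 0.25))  # hypothetical gas car: 33500
```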

The picture varies widely depending on the country, too. In South Africa, Mauritius, and Botswana, financing conditions are already close to levels required to allow EVs to reach cost parity, according to the study. In higher-risk countries (the study gives examples including Sudan, which is currently in a civil war, and Ghana, which is recovering from a major economic crisis), financing costs would need to be cut drastically for that to be the case. 

Making EVs an affordable option will be a key first step to putting more on the roads in Africa and around the world. “People will start to pick up these technologies when they’re competitive,” says Nelson Nsitem, lead Africa energy transition analyst at BloombergNEF, an energy consultancy. 

Solar-based charging systems, like the ones mentioned in the study, could help make electricity less of a constraint, bringing more EVs to the roads, Nsitem says. But there’s still a need for more charging infrastructure, a major challenge in many countries where the grid needs major upgrades for capacity and reliability, he adds. 

Globally, more EVs are hitting the roads every year. “The global trend is unmistakable,” Carlin says. There are questions about how quickly it’s happening in different places, he says, “but the momentum is there.”