The Shift To Zero-Click Searches: Is Traffic Still King? via @sejournal, @wburton27

The world of SEO has changed, especially with the rise of zero-click searches, where users get their answers directly on Google’s search results page without clicking through to any websites.

A study from SparkToro found that nearly 60% of Google searches ended without a click in 2024.

This trend will continue to reshape the digital landscape and force marketers to adapt their strategies, but is traffic still king? Let’s explore.

Before we get into it, here are the most common types of zero-click searches:

  • Featured Snippets: These are snippets of text that appear at the top of the SERP and provide direct answers to specific questions, in the form of paragraphs, lists, or tables.
  • Knowledge Panels: These information boxes appear on the side of the SERP, providing a quick overview of entities like people, places, or organizations.
  • Direct Answers: These are concise answers to simple questions, such as “How hot will it be today?” or “How many feet are in 36 inches?”
  • People Also Ask (PAA): This section displays related questions that users frequently ask, with answers provided directly on the SERP that are expandable.
  • Local Packs: For local searches, Google displays a map with business listings and information, allowing users to find what they need without clicking through to individual websites.
  • AI Overviews: Answers to queries that are generated by AI, which give a quick overview of a topic searched.
  • Calculators And Converters: Google provides built-in tools for calculations and conversions, eliminating the need to visit external websites. For example, a search for ‘calculator’ brings up a mathematical calculator in the SERPs.
  • Definitions: When searching for the meaning of a word, the dictionary definition is often displayed directly on the SERP.

Here is the evolution of zero-click searches:

Year Description
2004 Google Local was introduced.
2007 Universal Search was launched.
2008 Google Suggest (Autocomplete) was introduced.
2010 Google Instant was launched.
2012 Knowledge Panels/Graphs were introduced.
2013 Quick Answers were introduced.
2015 Local Map Packs and People Also Ask were introduced.
2017 Google enhanced Knowledge Panels, and Google Posts were introduced.
2018 Featured Snippets and People Also Ask became more prominent.
2019 Zero-click searches passed the 50% mark in browsers.
2020 Knowledge Graph and Knowledge Panels were reintroduced.
2021 Passages Ranking was introduced, and 64.82% of Google searches were zero-click.
2023 Google refined and expanded zero-click features.
2024 AI Overviews were introduced.

Can Zero-Click Searches Impact My Organic Traffic?

Yes, the rise of zero-click searches can impact your website traffic. Here is why:

  • If a user finds the answer to their query in a featured snippet or AI Overview directly in the SERPs, they don’t need to click through to your website. This can decrease organic traffic to your site.
  • For certain industries, such as news and health, this could have a detrimental impact on site traffic unless you’re optimized for AI Overviews and users click through when they need more information.
  • If you’re a brand that is well optimized, with conversational content, a great content experience, and optimization for featured snippets, you may even see an increase in organic traffic; some publishers report increased traffic from AI Overview citations.
  • The expansion of AIOs, with their in-depth answers and sheer size, takes up a great deal of organic real estate.
Screenshot from search for [what is a featured snippet], Google, February 2025

Adapting To Zero-Click Marketing

Just because your site may experience a decline in clicks, don’t throw in the towel just yet. It’s time to adapt your SEO strategy, and of course, in today’s landscape, you have to be everywhere your audience is.

Brands need to stop thinking only about Google and also consider networks like Reddit, Quora, TikTok, YouTube, and others, in addition to optimizing for AI Overviews.

While AIOs may result in fewer clicks to your website, if you show up in an AIO and someone does click through to your site, they are probably more qualified and more likely to convert.

Increase In Traffic From AI Citations

Some brands are reporting an increase in traffic from AI citations because they show up as links within AIO citations.

An example of this would be a search for AI SEO software.

Notice that brands like Backlinko benefit from a link in the AIO. This can generate more brand awareness and traffic because Backlinko is an authoritative domain that is well optimized for AIOs.

Screenshot from search for [ai seo software], Google, February 2025

Is Traffic Still King?

In my opinion, traffic is not king; it is queen unless traffic is your main key performance indicator (KPI).

Unlike paid traffic, traffic generated through organic search is still free and can provide long-term results for years to come.

Conversions are king if you have a site that depends upon converting website visitors to customers.

If your site depends upon growing the number of organic visitors, then traffic may be king based on your business model because it can increase members, drive up ad revenue, and increase subscriptions.

I spoke with a client the other day who said they got a lot of traffic from their SEO and paid search campaign. When I looked at the conversions, there were only a few over the last six months, and they are a lead-focused business.

If SEO is not driving leads and conversions and resulting in paying customers, then traffic does not matter. SEO is all about driving high-quality traffic that converts into customers.

Although zero-click results, in most cases, do not drive users directly to your website, you can still reap the benefits if you show up as an AI citation or as the snippet answer itself.

Showing up in zero-click results can improve your brand awareness, helping more users recognize your brand and potentially lifting conversions.

While zero-click results may not directly drive organic traffic to your site, the expertise you demonstrate and the authority gained from that awareness can drive higher conversion rates when users do visit your site.

SEO Is Not Dead

SEO and organic traffic are not dead; they have just evolved.

With the rise of AI Overviews and changing user behavior, end users are asking questions in social discovery channels like TikTok, YouTube, and Reddit as part of their search journey. And you need to be everywhere your audience is.

SEO can no longer sit in a vacuum all by itself; it must be part of a fully integrated strategy.

How Can I Adapt My Strategy To Win?

A good rule of thumb is to always create high-quality content that people can consume.

Focus on creating content that is conversational, directly answers user questions, is accurate and factual, and is marked up with structured data.

  1. Continue to optimize for featured snippets and knowledge panels.
  2. Create more comprehensive, conversational content that answers related questions (e.g., FAQs).
  3. Focus on branded searches.
  4. Think outside Google and focus on social discovery channels like Reddit, YouTube, etc.
  5. Optimize your local SEO and Google Business Profile listings.
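To make the structured-data advice above concrete: FAQ content is commonly marked up with schema.org’s FAQPage type in JSON-LD, one of the formats Google documents for structured data. Here is a minimal sketch in Python that generates such markup; the question and answer are placeholders, not recommendations:

```python
import json

def faq_jsonld(pairs):
    """Build a schema.org FAQPage JSON-LD object from (question, answer) pairs."""
    return {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in pairs
        ],
    }

# Placeholder Q&A; embed the output in a <script type="application/ld+json"> tag.
markup = faq_jsonld([
    ("What is a featured snippet?",
     "A highlighted answer box shown at the top of Google's search results."),
])
print(json.dumps(markup, indent=2))
```

The same pattern extends to other schema types (HowTo, Article, LocalBusiness); validating the output with a structured data testing tool before deploying is advisable.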

How To Measure Success

To measure the success of zero-click optimization, your metrics should focus on:

  • Visibility: Most of the main SEO tools provide good reporting on whether you are visible in AI Overviews and zero-click searches.
  • Impressions and conversions: As I mentioned, SEO is all about driving traffic that converts into customers.
  • Brand mentions: See if you get more mentions and citations in AI Overviews and featured snippets.
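One concrete way to read impressions and clicks together: flat or rising impressions combined with a falling click-through rate is the classic signature of zero-click exposure, meaning you are visible in the SERP but users are satisfied without visiting. A small sketch with made-up numbers (not real data) shows the calculation:

```python
def ctr(clicks, impressions):
    """Click-through rate as a percentage; guards against zero impressions."""
    return 100 * clicks / impressions if impressions else 0.0

# Hypothetical monthly Search Console-style figures: impressions hold steady
# or grow while clicks fall, so CTR declines month over month.
months = [
    ("Jan", 120_000, 4_800),  # (month, impressions, clicks)
    ("Feb", 125_000, 4_200),
    ("Mar", 130_000, 3_500),
]
for name, impressions, clicks in months:
    print(f"{name}: {impressions:>7} impressions, CTR {ctr(clicks, impressions):.2f}%")
```

Pairing this with conversion counts per month tells you whether the clicks you keep are the qualified ones.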

Wrapping Up

Optimizing for zero-click results is critical to being competitive today, especially as search engines refine their ability to answer user queries directly.

While zero-click searches are rising and becoming the new standard, especially where there are AI Overviews, SEO professionals and digital marketers must adapt and update strategies to focus on visibility, brand awareness, and providing value directly within search results and social platforms.

This is especially true as user behavior continues to change and users are expecting a faster, easier way to satisfy their information needs.

Featured Image: Dean Drobot/Shutterstock

Ex-Googler: Google Sees Publisher Traffic As A Necessary Evil via @sejournal, @martinibuster

Google says it values the open web, and a current Googler confirmed in a private conversation at the recent Search Central Live in New York that the company, including CEO Sundar Pichai, cares about the web ecosystem. But that message is contradicted by an ex-Googler, who said Google internally regards sending traffic to publishers as “a necessary evil.”

Constant Evolution Of Google Search

Elizabeth Reid, VP of Search, is profiled in Bloomberg as the one responsible for major changes at Google search beginning in 2021, particularly AI Overviews. She was previously involved in Google Maps and is the one who revealed the existence of core topicality systems at Google.

Her statements about search show how it’s changing and give an idea of how publishers and SEOs should realign their perspectives. The main takeaway is that technology enables users to interact with information in different ways, and search has to evolve to keep up with them. In her view, what’s happening is not a top-down approach in which Google imposes changes on users, but rather Google being responsive to users.

Her approach to search was said to be informed by her experience at Google Maps, where Sergey Brin pushed the team to release Maps before they felt comfortable doing so, teaching her that releasing early enabled them to understand what users really wanted faster than if they had waited.

According to Bloomberg:

“Reid refers to her approach as a “constant evolution” rather than a complete overhaul. Her team is still struggling to define the purpose of Google Search in this new era, according to interviews with 21 current and former search executives and employees…”

AI And Traditional Google Search

Google Search lost 20% of its search engineers, who went over to focus on rolling out generative AI, so perhaps it’s not surprising that she believes the search bar will lose prominence. According to the report:

“Reid predicts that the traditional Google search bar will become less prominent over time. Voice queries will continue to rise, she says, and Google is planning for expanded use of visual search, too.”

But she also said that the search bar isn’t going away:

“The search bar isn’t going away anytime soon, Reid says, but the company is moving toward a future in which Google is always hovering in the background. ‘The world will just expand,’ she says. ‘It’s as if you can ask Google as easily as you could ask a friend, only the friend is all-knowing, right?’”

Sending Traffic To Publishers Is A Necessary Evil

The article offers seemingly contradictory statements about how Google sees its relationship with the web ecosystem. An unnamed former Googler is quoted as saying that “giving” traffic to publishers is a necessary evil.

“Giving traffic to publisher sites is kind of a necessary evil. The main thing they’re trying to do is get people to consume Google services,” the former executive says. “So there’s a natural tendency to want to have people stay on Google pages, but it does diminish the sort of deal between the publishers and Google itself.”

What Current Googlers Say

At the Google Search Central Live event in New York City, I had the opportunity to have a private conversation with a Googler about Google CEO Sundar Pichai’s inability to articulate what Google does to support the web ecosystem. The Googler told me that they’ve heard Sundar Pichai express a profound recognition of Google’s relationship with publishers and said that it’s something he reflects on seriously.

That statement by the Googler was echoed in the article by something that Liz Reid and Sundar Pichai said:

“Reid says that Google cares deeply about publishers and that AI Overviews is a jumping-off point for users to conduct further research on the open web. Pichai, for his part, stresses the need to send ‘high-quality’ traffic to websites, instead of making users click around on sites that may not be relevant to them.

‘We are in the phase of making sure through this moment that we are improving the product, but in a way that prioritizes sending traffic to the ecosystem,’ he says, adding, ‘That’s been the most important goal.’”

Takeaways

  • Google is reshaping Search based on user behavior, not top-down mandates. But the fact that OpenAI’s ChatGPT pushed Google into rolling out their answer shows that other forces aside from user behaviors are in play as well.
  • The traditional search bar is becoming less central, supplemented by voice (likely on mobile devices) and visual search (also mobile). Google is multimodal, meaning it operates across multiple senses, like audio and visual. Publishers should think hard about how that affects their business and how they can become multimodal as well, so that their content is already there when Google evolves to meet users there, too.
  • AI Overviews and possibly the Gemini Personal AI Assistant could signal a shift toward Google acting as an ambient presence, not a destination.
  • Google’s relationship with publishers has never been more strained. The disconnect between the public-facing statements and those by anonymous ex-Googlers sends a signal that Google needs to be more out front about its relationship with publishers. For example, Google’s Search Central videos used to be interactive sessions with publishers, gradually dried up into scripted questions and answers, and are now gone entirely. Although I believe what the Googler told me about Pichai’s regard for publishers, because I know them to be truthful, the appearance that the search relations team has retreated behind closed doors sends a louder signal.
  • Google leadership emphasizes commitment to sending “high-quality traffic” to websites. But SEOs and publishers are freaking out that traffic is lower and the sentiment may be that Google should consider a little more give and a lot less take.

Hat tip to Glenn Gabe for calling attention to this article.

Featured Image by Shutterstock/photoschmidt

Ethically sourced “spare” human bodies could revolutionize medicine

Why do we hear about medical breakthroughs in mice, but rarely see them translate into cures for human disease? Why do so few drugs that enter clinical trials receive regulatory approval? And why is the waiting list for organ transplantation so long? These challenges stem in large part from a common root cause: a severe shortage of ethically sourced human bodies. 

It may be disturbing to characterize human bodies in such commodifying terms, but the unavoidable reality is that human biological materials are an essential commodity in medicine, and persistent shortages of these materials create a major bottleneck to progress.

This imbalance between supply and demand is the underlying cause of the organ shortage crisis, with more than 100,000 patients currently waiting for a solid organ transplant in the US alone. It also forces us to rely heavily on animals in medical research, a practice that can’t replicate major aspects of human physiology and makes it necessary to inflict harm on sentient creatures. In addition, the safety and efficacy of any experimental drug must still be confirmed in clinical trials on living human bodies. These costly trials risk harm to patients, can take a decade or longer to complete, and make it through to approval less than 15% of the time. 

There might be a way to get out of this moral and scientific deadlock. Recent advances in biotechnology now provide a pathway to producing living human bodies without the neural components that allow us to think, be aware, or feel pain. Many will find this possibility disturbing, but if researchers and policymakers can find a way to pull these technologies together, we may one day be able to create “spare” bodies, both human and nonhuman.

These could revolutionize medical research and drug development, greatly reducing the need for animal testing, rescuing many people from organ transplant lists, and allowing us to produce more effective drugs and treatments. All without crossing most people’s ethical lines.

Bringing technologies together

Although it may seem like science fiction, recent technological progress has pushed this concept into the realm of plausibility. Pluripotent stem cells, one of the earliest cell types to form during development, can give rise to every type of cell in the adult body. Recently, researchers have used these stem cells to create structures that seem to mimic the early development of actual human embryos. At the same time, artificial uterus technology is rapidly advancing, and other pathways may be opening to allow for the development of fetuses outside of the body. 

Such technologies, together with established genetic techniques to inhibit brain development, make it possible to envision the creation of “bodyoids”—a potentially unlimited source of human bodies, developed entirely outside of a human body from stem cells, that lack sentience or the ability to feel pain.

There are still many technical roadblocks to achieving this vision, but we have reason to expect that bodyoids could radically transform biomedical research by addressing critical limitations in the current models of research, drug development, and medicine. Among many other benefits, they would offer an almost unlimited source of organs, tissues, and cells for use in transplantation.

It could even be possible to generate organs directly from a patient’s own cells, essentially cloning someone’s biological material to ensure that transplanted tissues are a perfect immunological match and thus eliminating the need for lifelong immunosuppression. Bodyoids developed from a patient’s cells could also allow for personalized screening of drugs, allowing physicians to directly assess the effect of different interventions in a biological model that accurately reflects a patient’s own personal genetics and physiology. We can even envision using animal bodyoids in agriculture, as a substitute for the use of sentient animal species. 

Of course, exciting possibilities are not certainties. We do not know whether the embryo models recently created from stem cells could give rise to living people or, thus far, even to living mice. We do not know when, or whether, an effective technique will be found for successfully gestating human bodies entirely outside a person. We cannot be sure whether such bodyoids can survive without ever having developed brains or the parts of brains associated with consciousness, or whether they would still serve as accurate models for living people without those brain functions.

Even if it all works, it may not be practical or economical to “grow” bodyoids, possibly for many years, until they can be mature enough to be useful for our ends. Each of these questions will require substantial research and time. But we believe this idea is now plausible enough to justify discussing both the technical feasibility and the ethical implications. 

Ethical considerations and societal implications

Bodyoids could address many ethical problems in modern medicine, offering ways to avoid unnecessary pain and suffering. For example, they could offer an ethical alternative to the way we currently use nonhuman animals for research and food, providing meat or other products with no animal suffering or awareness. 

But when we come to human bodyoids, the issues become harder. Many will find the concept grotesque or appalling. And for good reason. We have an innate respect for human life in all its forms. We do not allow broad research on people who no longer have consciousness or, in some cases, never had it. 

At the same time, we know much can be gained from studying the human body. We learn much from the bodies of the dead, which these days are used for teaching and research only with consent. In laboratories, we study cells and tissues that were taken, with consent, from the bodies of the dead and the living.

Recently we have even begun using for experiments the “animated cadavers” of people who have been declared legally dead, who have lost all brain function but whose other organs continue to function with mechanical assistance. Genetically modified pig kidneys have been connected to, or transplanted into, these legally dead but physiologically active cadavers to help researchers determine whether they would work in living people.

In all these cases, nothing was, legally, a living human being at the time it was used for research. Human bodyoids would also fall into that category. But there are still a number of issues worth considering. The first is consent: The cells used to make bodyoids would have to come from someone, and we’d have to make sure that this someone consented to this particular, likely controversial, use. But perhaps the deepest issue is that bodyoids might diminish the human status of real people who lack consciousness or sentience.

Thus far, we have held to a standard that requires us to treat all humans born alive as people, entitled to life and respect. Would bodyoids—created without pregnancy, parental hopes, or indeed parents—blur that line? Or would we consider a bodyoid a human being, entitled to the same respect? If so, why—just because it looks like us? A sufficiently detailed mannequin can meet that test. Because it looks like us and is alive? Because it is alive and has our DNA? These are questions that will require careful thought. 

A call to action

Until recently, the idea of making something like a bodyoid would have been relegated to the realms of science fiction and philosophical speculation. But now it is at least plausible—and possibly revolutionary. It is time for it to be explored. 

The potential benefits—for both human patients and sentient animal species—are great. Governments, companies, and private foundations should start thinking about bodyoids as a possible path for investment. There is no need to start with humans—we can begin exploring the feasibility of this approach with rodents or other research animals. 

As we proceed, the ethical and social issues are at least as important as the scientific ones. Just because something can be done does not mean it should be done. Even if it looks possible, determining whether we should make bodyoids, nonhuman or human, will require considerable thought, discussion, and debate. Some of that will be by scientists, ethicists, and others with special interest or knowledge. But ultimately, the decisions will be made by societies and governments. 

The time to start those discussions is now, when a scientific pathway seems clear enough for us to avoid pure speculation but before the world is presented with a troubling surprise. The announcement of the birth of Dolly the cloned sheep back in the 1990s launched a hysterical reaction, complete with speculation about armies of cloned warrior slaves. Good decisions require more preparation.

The path toward realizing the potential of bodyoids will not be without challenges; indeed, it may never be possible to get there, or even if it is possible, the path may never be taken. Caution is warranted, but so is bold vision; the opportunity is too important to ignore.

Carsten T. Charlesworth is a postdoctoral fellow at the Institute of Stem Cell Biology and Regenerative Medicine (ISCBRM) at Stanford University.

Henry T. Greely is the Deane F. and Kate Edelman Johnson Professor of Law and director of the Center for Law and the Biosciences at Stanford University.

Hiromitsu Nakauchi is a professor of genetics and an ISCBRM faculty member at Stanford University and a distinguished university professor at the Institute of Science Tokyo.


Why the world is looking to ditch US AI models

A few weeks ago, when I was at the digital rights conference RightsCon in Taiwan, I watched in real time as civil society organizations from around the world, including the US, grappled with the loss of one of the biggest funders of global digital rights work: the United States government.

As I wrote in my dispatch, the Trump administration’s shocking, rapid gutting of the US government (and its push into what some prominent political scientists call “competitive authoritarianism”) also affects the operations and policies of American tech companies—many of which, of course, have users far beyond US borders. People at RightsCon said they were already seeing changes in these companies’ willingness to engage with and invest in communities that have smaller user bases—especially non-English-speaking ones. 

As a result, some policymakers and business leaders—in Europe, in particular—are reconsidering their reliance on US-based tech and asking whether they can quickly spin up better, homegrown alternatives. This is particularly true for AI.

One of the clearest examples of this is in social media. Yasmin Curzi, a Brazilian law professor who researches domestic tech policy, put it to me this way: “Since Trump’s second administration, we cannot count on [American social media platforms] to do even the bare minimum anymore.” 

Social media content moderation systems—which already use automation and are also experimenting with deploying large language models to flag problematic posts—are failing to detect gender-based violence in places as varied as India, South Africa, and Brazil. If platforms begin to rely even more on LLMs for content moderation, this problem will likely get worse, says Marlena Wisniak, a human rights lawyer who focuses on AI governance at the European Center for Not-for-Profit Law. “The LLMs are moderated poorly, and the poorly moderated LLMs are then also used to moderate other content,” she tells me. “It’s so circular, and the errors just keep repeating and amplifying.” 

Part of the problem is that the systems are trained primarily on data from the English-speaking world (and American English at that), and as a result, they perform less well with local languages and context. 

Even multilingual language models, which are meant to process multiple languages at once, still perform poorly with non-Western languages. For instance, one evaluation of ChatGPT’s responses to health-care queries found that results were far worse in Chinese and Hindi, which are less well represented in North American data sets, than in English and Spanish.

For many at RightsCon, this validates their calls for more community-driven approaches to AI—both in and out of the social media context. These could include small language models, chatbots, and data sets designed for particular uses and specific to particular languages and cultural contexts. These systems could be trained to recognize slang usages and slurs, interpret words or phrases written in a mix of languages and even alphabets, and identify “reclaimed language” (onetime slurs that the targeted group has decided to embrace). All of these tend to be missed or miscategorized by language models and automated systems trained primarily on Anglo-American English. The founder of the startup Shhor AI, for example, hosted a panel at RightsCon and talked about its new content moderation API focused on Indian vernacular languages.

Many similar solutions have been in development for years—and we’ve covered a number of them, including a Mozilla-facilitated volunteer-led effort to collect training data in languages other than English, and promising startups like Lelapa AI, which is building AI for African languages. Earlier this year, we even included small language models on our 2025 list of top 10 breakthrough technologies.

Still, this moment feels a little different. The second Trump administration, which shapes the actions and policies of American tech companies, is obviously a major factor. But there are others at play. 

First, recent research and development on language models has reached the point where data set size is no longer a predictor of performance, meaning that more people can create them. In fact, “smaller language models might be worthy competitors of multilingual language models in specific, low-resource languages,” says Aliya Bhatia, a visiting fellow at the Center for Democracy & Technology who researches automated content moderation. 

Then there’s the global landscape. AI competition was a major theme of the recent Paris AI Summit, which took place the week before RightsCon. Since then, there’s been a steady stream of announcements about “sovereign AI” initiatives that aim to give a country (or organization) full control over all aspects of AI development. 

AI sovereignty is just one part of the desire for broader “tech sovereignty” that’s also been gaining steam, growing out of more sweeping concerns about the privacy and security of data transferred to the United States. The European Union appointed its first commissioner for tech sovereignty, security, and democracy last November and has been working on plans for a “Euro Stack,” or “digital public infrastructure.” The definition of this is still somewhat fluid, but it could include the energy, water, chips, cloud services, software, data, and AI needed to support modern society and future innovation. All these are largely provided by US tech companies today. Europe’s efforts are partly modeled after “India Stack,” that country’s digital infrastructure that includes the biometric identity system Aadhaar. Just last week, Dutch lawmakers passed several motions to untangle the country from US tech providers. 

This all fits in with what Andy Yen, CEO of the Switzerland-based digital privacy company Proton, told me at RightsCon. Trump, he said, is “causing Europe to move faster … to come to the realization that Europe needs to regain its tech sovereignty.” This is partly because of the leverage that the president has over tech CEOs, Yen said, and also simply “because tech is where the future economic growth of any country is.”

But just because governments get involved doesn’t mean that issues around inclusion in language models will go away. “I think there needs to be guardrails about what the role of the government here is. Where it gets tricky is if the government decides ‘These are the languages we want to advance’ or ‘These are the types of views we want represented in a data set,’” Bhatia says. “Fundamentally, the training data a model trains on is akin to the worldview it develops.” 

It’s still too early to know what this will all look like, and how much of it will prove to be hype. But no matter what happens, this is a space we’ll be watching.

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

The Download: creating “spare” human bodies, and ditching US AI models

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Ethically sourced “spare” human bodies could revolutionize medicine

Many challenges in medicine stem, in large part, from a common root cause: a severe shortage of ethically sourced human bodies.

There might be a way to get out of this moral and scientific deadlock. Recent advances in biotechnology now provide a pathway to producing living human bodies without the neural components that allow us to think, be aware, or feel pain. 

Many will find this possibility disturbing, but if researchers and policymakers can find a way to pull these technologies together, we may one day be able to create “spare” bodies, both human and nonhuman.

These could revolutionize medical research and drug development, greatly reducing the need for animal testing, rescuing many people from organ transplant lists, and allowing us to produce more effective drugs and treatments. All without crossing most people’s ethical lines. Read the full story.

Why the world is looking to ditch US AI models

—Eileen Guo

A few weeks ago, when I was at the digital rights conference RightsCon in Taiwan, I watched in real time as civil society organizations from around the world, including the US, grappled with the loss of one of the biggest funders of global digital rights work: the United States government.

Some policymakers and business leaders—in Europe, in particular—are reconsidering their reliance on US-based tech and asking whether they can quickly spin up better, homegrown alternatives. This is particularly true for AI. Read the full story.

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

How to… delete your 23andMe data

Consumer DNA testing company 23andMe has filed for bankruptcy protection, following months of speculation around CEO Anne Wojcicki’s plans to take the firm private. The news means that 23andMe—and the genetic data of millions of its customers—could soon be put up for sale.

But although customers worried about the security of their DNA data can request its deletion, truly scrubbing your information from the company’s archives is easier said than done. Read the full story.

—Rhiannon Williams

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 US security leaders accidentally added a journalist to a secret Signal chat
The group used the unapproved platform to discuss classified military strikes in Yemen. (The Atlantic $)
+ It raises questions over how the US government is handling sensitive information. (Vox)
+ The Trump administration has embraced the encrypted messaging app. (WP $)

2 Donald Trump’s H-1B visa crackdown could seriously harm US tech firms
Amazon is likely to be hit particularly hard. (Rest of World)
+ US visa and green-card holders are being detained and deported. (NY Mag $)
+ Tariffs, DOGE and scams are weighing heavily on the tech industry. (Insider $)
+ America relies heavily on skilled overseas workers. (The Conversation)

3 DeepSeek’s runaway success is shaking up China’s AI startups
They’re overhauling their business models in an effort to keep up. (FT $)
+ The AI development gap between China and the US is narrowing. (Reuters)
+ How DeepSeek ripped up the AI playbook—and why everyone’s going to follow its lead. (MIT Technology Review)

4 AI companies don’t want to be regulated anymore
Emboldened by the Trump administration, the industry’s biggest firms are lobbying for fewer rules. (NYT $)

5 Colorado is experimenting with psychedelic mushrooms
It plans to administer them in ‘healing centers’ across the state. (Undark)
+ Job titles of the future: Pharmaceutical-grade mushroom grower. (MIT Technology Review)

6 Tesla sales are plummeting in Europe
As customers turn to its Chinese rival BYD. (The Guardian)
+ Elon Musk’s companies are under increasing pressure from their rivals. (Economist $)
+ BYD was one of our 2024 Climate Tech Companies to Watch. (MIT Technology Review)

7 This Indian city relies on the wind to stay cool
Palava City is a living testbed of technological innovation. (WP $)
+ No power, no fans, no AC: The villagers fighting to survive India’s deadly heatwaves. (MIT Technology Review)

8 Filming your online routine is not for the faint of heart
Absurd clips are doing the rounds on social media yet again. (NY Mag $)

9 Floating wood could help to refreeze the Arctic
By helping to seed the formation of new ice. (New Scientist $)
+ Inside a new quest to save the “doomsday glacier.” (MIT Technology Review)

10 Silicon Valley workers are ditching dating apps
Instead, they’re attending carefully vetted dating meetups IRL. (Wired $)

Quote of the day

“The path to saving TikTok should run through Capitol Hill.”

—Three Democratic senators urge Donald Trump to work with Congress to save TikTok from shutting down in the US, the Verge reports.

The big story

How AI is changing gymnastics judging


January 2024

The 2023 World Championships last October marked the first time an AI judging system was used on every apparatus in a gymnastics competition. There are obvious upsides to using this kind of technology: AI could help take the guesswork out of the judging technicalities. It could even help to eliminate biases, making the sport both more fair and more transparent.

At the same time, others fear AI judging will take away something that makes gymnastics special. Gymnastics is a subjective sport, like diving or dressage, and technology could eliminate the judges’ role in crafting a narrative.

For better or worse, AI has officially infiltrated the world of gymnastics. The question now is whether it really makes it fairer. Read the full story.

—Jessica Taylor Price

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ These plants are quite possibly math geniuses.
+ Inside the weird and wonderful world of animal art.
+ Get me on a (sustainable) trip to the Cook Islands immediately.
+ It’s officially cherry blossom season around the world! 🌸

OpenAI’s new image generator aims to be practical enough for designers and advertisers

OpenAI has released a new image generator that’s designed less for typical surrealist AI art and more for highly controllable and practical creation of visuals—a sign that OpenAI thinks its tools are ready for use in fields like advertising and graphic design. 

The image generator, which is now part of the company’s GPT-4o model, was promised by OpenAI last May but wasn’t released; until now, requests for generated images in ChatGPT were filled by an older image generator called DALL-E. OpenAI has been tweaking the new model since then and will now roll it out to all tiers of users over the coming weeks, starting today, replacing the older one.

The new model makes progress on technical issues that have plagued AI image generators for years. While most have been great at creating fantastical images or realistic deepfakes, they’ve been terrible at something called binding, which refers to the ability to identify certain objects correctly and put them in their proper place (like a sign that says “hot dogs” properly placed above a food cart, not somewhere else in the image). 

It was only a few years ago that models started to succeed at things like “Put the red cube on top of the blue cube,” a feature that is essential for any creative professional use of AI. Generators also struggle with text generation, typically creating distorted jumbles of letter shapes that look more like captchas than readable text.

Example images from OpenAI show progress here. The model is able to generate 12 discrete graphics within a single image—like a cat emoji or a lightning bolt—and place them in proper order. Another shows four cocktails accompanied by recipe cards with accurate, legible text. More images show comic strips with text bubbles, mock advertisements, and instructional diagrams. The model also allows you to upload images to be modified, and it will be available in the video generator Sora as well as in GPT-4o. 

It’s “a new tool for communication,” says Gabe Goh, the lead designer on the generator at OpenAI. Kenji Hata, a researcher at OpenAI who also worked on the tool, puts it a different way: “I think the whole idea is that we’re going away from, like, beautiful art.” It can still do that, he clarifies, but it will do more useful things too. “You can actually make images work for you,” he says, “and not just look at them.”

It’s a clear sign that OpenAI is positioning the tool to be used more by creative professionals: think graphic designers, ad agencies, social media managers, or illustrators. But in entering this domain, OpenAI has two paths, both difficult. 

One, it can target the skilled professionals who have long used programs like Adobe Photoshop, which is also investing heavily in AI tools that can fill images with generative AI. 

“Adobe really has a stranglehold on this market, and they’re moving fast enough that I don’t know how compelling it is for people to switch,” says David Raskino, the cofounder and chief technical officer of Irreverent Labs, which works on AI video generation. 

The second option is to target casual designers who have flocked to tools like Canva (which has also been investing in AI). This is an audience that may not have ever needed technically demanding software like Photoshop but would use more casual design tools to create visuals. To succeed here, OpenAI would have to lure people away from platforms built for design in hopes that the speed and quality of its own image generator would make the switch worth it (at least for part of the design process). 

It’s also possible the tool will simply be used as many image generators are now: to create quick visuals that are “good enough” to accompany social media posts. But with OpenAI planning massive investments, including participation in the $500 billion Stargate project to build new data centers at unprecedented scale, it’s hard to imagine that the image generator won’t play some ambitious moneymaking role. 

Regardless, the fact that OpenAI’s new image generator has pushed through notable technical hurdles has raised the bar for other AI companies. Clearing those hurdles likely required lots of very specific data, Raskino says, like millions of images in which text is properly displayed at lots of different angles and orientations. Now competing image generators will have to match those achievements to keep up.

“The pace of innovation should increase here,” Raskino says.

OpenAI’s new image generator aims to be practical enough for designers and advertisers

OpenAI has released a new image generator that’s designed less for typical surrealist AI art and more for highly controllable and practical creation of visuals—a sign that OpenAI thinks its tools are ready for use in fields like advertising and graphic design. 

The image generator, which is now part of the company’s GPT-4o model, was promised by OpenAI last May but went unreleased; in the meantime, requests for generated images in ChatGPT were handled by an older image generator called DALL-E. OpenAI has been tweaking the new model since then and is now rolling it out to all tiers of users over the coming weeks, starting today, replacing the older one. 

The new model makes progress on technical issues that have plagued AI image generators for years. While most have been great at creating fantastical images or realistic deepfakes, they’ve been terrible at something called binding, which refers to the ability to identify certain objects correctly and put them in their proper place (like a sign that says “hot dogs” properly placed above a food cart, not somewhere else in the image). 

It was only a few years ago that models started to succeed at instructions like “Put the red cube on top of the blue cube,” a capability that is essential for any professional creative use of AI. Generators have also struggled with text generation, typically producing distorted jumbles of letter shapes that look more like captchas than readable text.

OPENAI

Example images from OpenAI show progress here. The model is able to generate 12 discrete graphics within a single image—like a cat emoji or a lightning bolt—and place them in proper order. Another shows four cocktails accompanied by recipe cards with accurate, legible text. More images show comic strips with text bubbles, mock advertisements, and instructional diagrams. The model also allows you to upload images to be modified, and it will be available in the video generator Sora as well as in GPT-4o. 

OPENAI

It’s “a new tool for communication,” says Gabe Goh, the lead designer on the generator at OpenAI. Kenji Hata, a researcher at OpenAI who also worked on the tool, puts it a different way: “I think the whole idea is that we’re going away from, like, beautiful art.” It can still do that, he clarifies, but it will do more useful things too. “You can actually make images work for you,” he says, “and not just look at them.”

It’s a clear sign that OpenAI is positioning the tool to be used more by creative professionals: think graphic designers, ad agencies, social media managers, or illustrators. But in entering this domain, OpenAI has two paths, both difficult. 

One, it can target the skilled professionals who have long used programs like Adobe Photoshop, which is also investing heavily in AI tools that can fill images with generative AI. 

“Adobe really has a stranglehold on this market, and they’re moving fast enough that I don’t know how compelling it is for people to switch,” says David Raskino, the cofounder and chief technical officer of Irreverent Labs, which works on AI video generation. 

The second option is to target casual designers who have flocked to tools like Canva (which has also been investing in AI). This is an audience that may not have ever needed technically demanding software like Photoshop but would use more casual design tools to create visuals. To succeed here, OpenAI would have to lure people away from platforms built for design in hopes that the speed and quality of its own image generator would make the switch worth it (at least for part of the design process). 

It’s also possible the tool will simply be used as many image generators are now: to create quick visuals that are “good enough” to accompany social media posts. But with OpenAI planning massive investments, including participation in the $500 billion Stargate project to build new data centers at unprecedented scale, it’s hard to imagine that the image generator won’t play some ambitious moneymaking role. 

Regardless, the fact that OpenAI’s new image generator has pushed through notable technical hurdles has raised the bar for other AI companies. Clearing those hurdles likely required lots of very specific data, Raskino says, like millions of images in which text is properly displayed at lots of different angles and orientations. Now competing image generators will have to match those achievements to keep up.

“The pace of innovation should increase here,” Raskino says.