YouTube Answers Creator Questions On Profanity Monetization via @sejournal, @MattGSouthern

YouTube released a video to clarify how its recent update to advertiser-friendly guidelines affects creators.

The company acknowledged communication gaps and outlined what it has already done for previously affected uploads.

What Changed?

YouTube relaxed its rules about strong language at the beginning of videos, making it easier for creators to monetize their content.

The company also took a fresh look at videos that were previously demonetized due to strong language in the first few seconds and has now restored some of those videos to full monetization.

Since creators weren’t notified about these changes, YouTube recommends checking your monetization status in Studio and reaching out to support if you think something might have been missed.

YouTube said in the video:

“For content where there was strong profanity within the first 7 seconds… we identified uploads that were demonetized solely for this reason and no other, and re-reviewed them, flipping the rating to a green dollar icon.”

YouTube clarified that isolated uses of strong words do not, on their own, cause demonetization.

Limited ads are more likely when the focus of the whole video is on sensitive topics.

See the full video below:

Restrictions Remain

The seven-second flexibility doesn’t extend to graphic violence. YouTube reiterated that content with explicit or highly realistic violence tends to receive limited monetization.

That also includes video game footage when graphic violence is the focus.

Why This Matters

For creators, this clarification helps ease uncertainties and prevents unnecessary self-censorship. It also clearly defines boundaries for content involving sensitive topics.

For advertisers, this update is designed to help maintain brand suitability while allowing more creator content to be fully monetized.

It has the potential to slightly increase the amount of ad-eligible content, helping advertisers reach a broader audience, especially those who don’t exclude this type of content.

However, the overall effect will depend on individual brand-suitability choices and creator ad-blocking settings.

Looking Ahead

YouTube plans to improve how it explains retroactive reviews when policies change and is considering providing more detailed examples without establishing a strict ‘forbidden words’ list.


Featured Image: Visuals6x/Shutterstock

Pinterest Launches “Top of Search” Ads In Beta via @sejournal, @MattGSouthern

Pinterest has introduced new ad products focused on visual search, highlighted by ‘Top of Search’ ads.

Currently in beta across all monetized markets, these ads can appear within the first ten search results and in Related Pins, targeting users as they begin discovering products.

Why This Matters For Search Marketers

Pinterest is a search platform where users arrive with shopping intent, much of which remains unfulfilled.

According to the company’s data, 45% of clicks occur within the first ten results, and 96% of top searches are unbranded. That makes Top of Search placements ideal for category discovery through paid ads.

For advertising teams, this creates a new SERP-like space to compete in, combining search intent with visual creative.

Additionally, Media Network Connect integrates retailer first-party audiences and conversion data into Pinterest Ads Manager via partners such as Kroger Precision Marketing and Instacart Ads, making measurement and incrementality testing more feasible than before.

Early Results

Pinterest reports that Top of Search ads have a 29% higher average CTR compared to typical campaigns and are 32% more likely to attract new customers.

These results are based on platform data and may differ depending on the category and creative used.

Additional Updates

Local Inventory Ads Expanded

Pinterest has expanded Local Inventory Ads in shopping markets, providing real-time prices for in-stock items within a shopper’s nearby store radius.

Retailer Data In Ads Manager

A new self-service feature, Media Network Connect, allows media networks to share first-party audiences, product catalogs, and conversion data directly with advertisers within Pinterest Ads Manager.

Early U.S. partners include Kroger Precision Marketing and Instacart Ads, with additional partners upcoming.

Christine Foster, Senior Vice President at Kroger Precision Marketing, said:

“This new capability empowers advertisers with faster decision-making and control, while using purchase-based audiences direct from the retailer.”

Looking Ahead

Competition for commerce search is expanding across social media and retail platforms. Pinterest emphasizes unbranded, visual discovery and stronger retailer data integrations.

If you’re already using Pinterest Shopping or Catalog campaigns, trying the beta, despite limited inventory, can help you identify where search-related visual placements could integrate into your marketing strategy.

Why Reddit Is Driving The Conversation In AI Search – User Journey Over Short Tail via @sejournal, @brentcsutoras

The How AI Search Can Drive Sales & Boost Conversions webinar, presented recently by Bartosz Góralewicz, touched on something that I think every marketer needs to understand about how people actually make decisions today.

This isn’t just about Reddit anymore; we’re talking about the future of how brands actually connect with customers when they’re making real decisions.

Image from author, September 2025

Bartosz shared some data from Cloudflare that’s wild: 10 years ago, Google crawled two pages for every one click. Six months ago? Six pages per click. Today, it’s 18 pages for every single click! OpenAI is crawling 1,500 pages for each click they send. And get this, in 2024, 60% of Google searches ended in zero clicks, as LLMs increasingly serve answers directly on the page, according to Justin Turner, Head of Thought Leadership at Reddit.

As Bartosz put it, quoting Cloudflare’s CEO: “People trust AI more and they’re just not following the footnotes anymore.”

But here’s what everyone’s missing: Reddit is just the messenger.

What Reddit Really Shows Us

Reddit appears in nearly 98% of product review searches because it’s solving a problem that traditional marketing content can’t touch. When someone searches “iPhone 16 vs Samsung S25,” they’ll find millions of YouTube views but almost no traditional search volume data.

The conversation is happening, just not where we’ve been looking. Turner’s research shows Reddit is the No. 1 most cited domain across all major AI platforms, accounting for 3.5% of all citations across AI models, nearly three times more than Wikipedia.

What Reddit provides, and what Google and OpenAI are paying for, is authentic peer advice instead of corporate marketing messages. Users want to feel understood, not sold to. They want contextual advice that feels like someone actually gets their specific problem.

As Bartosz explained it, when someone is researching a car, they don’t want to hear from paid bloggers. They want to talk to someone who actually drives the thing every day and can tell them the radio breaks 11 times in the first year. That’s the stuff you won’t find on the company website.

The Real Journey People Take

During our webinar, Bartosz walked through this perfect example from his own experience. He bought a wool carpet, discovered he couldn’t use his Dyson on it (voids the warranty), and now needed a suction-only vacuum.

Image from author, September 2025

Bartosz showed how this creates a progression that most marketers never see:

  • Stage 1: “Why can’t I use Dyson on wool carpet?”
  • Stage 2: “Suction only vacuums for wool carpets”
  • Stage 3: “Miele C1 suction only vacuum safe”

Each answer informs the next question. As Bartosz explained, understanding this progression isn’t just about Reddit; it’s about understanding how people actually think and research!

The thing is, sometimes, this entire customer journey condenses into one perfect answer. Bartosz showed us how, when someone asked, “Why is it bad to use Dyson on wool carpet?” Perplexity immediately recommended Miele as the solution. One conversation, massive conversion potential.

But as Bartosz emphasized, you can’t manufacture this by guessing. You have to listen to actual conversations and understand the real problems people are trying to solve. This is exactly why he created ZipTie.ai, to help brands identify those critical moments in customer conversations where they can genuinely solve problems rather than just promote products.

And here’s proof that this approach actually works: Turner’s data shows users referred from ChatGPT view 42% more pages per session than those referred from Google, showing more intent, deeper curiosity, and stronger engagement.

Why This Changes Everything

I’ve been looking for this shift in marketing for years, waiting for it to come back to the actual science behind why people make decisions. The funnel is longer now, people are using more places along the way, and when you can find what people really need, honestly, content really is king again. But not content for content’s sake: problem-solving is all you really need.

Bartosz’s Miele example shows something that’s often overlooked. You wouldn’t see this in your regular website data or in traditional Google articles. It’s not visible to most brands because we’re so conditioned to look down this logical marketing path that we miss the conversations happening right in front of us.

We started seeing it more clearly when people began giving us signals by writing on Reddit. Why are they doing that? Because they want validation. When you give them that validation through genuine problem-solving, it works!

The New Success Metrics

Bartosz talked about how we need to stop chasing old metrics. Rankings, clicks, and keywords still matter, but they’re not the whole story anymore.

Image from author, September 2025

As he put it, here’s what actually matters now:

  • Are you the recommended solution throughout the customer journey?
  • Do you show contextual relevance that makes users feel understood?
  • Can you track your influence through actual conversion paths?

As Bartosz said, “The teams that are going to win nowadays are going to be the teams that are going to solve the most amount, the biggest amount of problems that users have.”

The Authenticity Problem

To be authentic, you have to talk about positives and negatives. The biggest challenge I have in discovery calls with huge brands is that they tell me, “we cannot say we don’t do this or we don’t do that.”

But that’s exactly what you need to do!

I always tell people Reddit success comes down to three overlapping areas: what Redditors expect from you, what you honestly have to give, and where your business goals align. That overlap is your area of influence.

A TikTok campaign I did years ago started with 300 messages telling me to basically get lost (the actual wording wasn’t as kind, though). But once people realized we were real humans having real conversations, everything changed. People started editing their posts, sending improvement ideas, giving us awards.

That’s the power of authentic engagement.

The Psychology Behind It All

People want to share every decision they make with somebody because it’s our nature to want to share responsibility. It’s a way of validating that we’re not total idiots; we at least explored the conversation. “I talked to my friend John and he said it was a good phone.”

But there’s more to it than just sharing responsibility. We’re also looking for validation that someone has actually experienced the issue, product, or service we’re researching and has real information to share about it.1 We want to hear from people who’ve been there, not from someone reading a spec sheet or writing content that’s been paid for, influenced, or even completely faked. There’s so little trust in traditional search results anymore because we know so much of what we find is compromised.

Also, we rarely have the right problem when we start searching. We think we need “the best vacuum” when what we really need is “a vacuum that won’t destroy my wool carpet.” It takes conversation and depth to uncover what the real problem actually is. That’s why those Reddit threads go so deep: People are working through layers of issues together.

Most importantly, we want to feel like we learned enough to come to our own decision. We don’t want someone to tell us what to buy; we want to feel smart about figuring it out ourselves with good information from people we trust.2

I’ve been talking about these concepts a lot lately, but this isn’t just my personal theory. This behavior is extensively researched across psychology, behavioral economics, and decision science. Studies consistently show that people actively seek to share decision responsibility to reduce regret and minimize the psychological burden of negative outcomes. Research demonstrates that individuals are more likely to join groups or seek validation after experiencing negative results, and that sharing responsibility helps shield people from the emotional consequences of bad decisions.

What This Means Going Forward

This approach works because it aligns with human psychology. When you understand that core element, solving users’ real problems, everything gets better. Your commercials, website copy, social media ads, customer service. Everything improves when you know what people actually need to feel comfortable making a decision.

Reddit just happens to be where these conversations are most visible right now. But the principles apply everywhere: Understand the real problems, join authentic conversations, and focus on solving issues rather than promoting solutions.

The brands that figure this out first will own the next phase of digital marketing. The ones that keep chasing traditional metrics will keep wondering why their traffic is declining while their competitors seem to effortlessly show up everywhere that matters.

Definitely, definitely take the time to understand your user’s journey. Don’t be lazy about it. Really understand what people need at each stage, what problems they’re actually trying to solve, and where they go to get that validation they need to make decisions.

It’s not complicated, but it requires you to slow down and actually listen to your customers instead of talking at them.

Sources:

  1. https://academic.oup.com/jcr/article-abstract/51/1/7/7672991?login=false
  2. https://acr-journal.com/article/consumer-trust-in-digital-brands-the-role-of-transparency-and-ethical-marketing-882/
  3. https://www.linkedin.com/pulse/convergence-product-marketing-seo-ai-search-era-ziptieai-aotnc/


Featured Image: Accogliente Design/Shutterstock

How AI and Wikipedia have sent vulnerable languages into a doom spiral

When Kenneth Wehr started managing the Greenlandic-language version of Wikipedia four years ago, his first act was to delete almost everything. It had to go, he thought, if it had any chance of surviving.

Wehr, who’s 26, isn’t from Greenland—he grew up in Germany—but he had become obsessed with the island, an autonomous Danish territory, after visiting as a teenager. He’d spent years writing obscure Wikipedia articles in his native tongue on virtually everything to do with it. He even ended up moving to Copenhagen to study Greenlandic, a language spoken by some 57,000 mostly Indigenous Inuit people scattered across dozens of far-flung Arctic villages. 

The Greenlandic-language edition was added to Wikipedia around 2003, just a few years after the site launched in English. By the time Wehr took its helm nearly 20 years later, hundreds of Wikipedians had contributed to it and had collectively written some 1,500 articles totaling tens of thousands of words. It seemed to be an impressive vindication of the crowdsourcing approach that has made Wikipedia the go-to source for information online, demonstrating that it could work even in the unlikeliest places.

There was only one problem: The Greenlandic Wikipedia was a mirage. 

Virtually every single article had been published by people who did not actually speak the language. Wehr, who now teaches Greenlandic in Denmark, speculates that perhaps only one or two Greenlanders had ever contributed. But what worried him most was something else: Over time, he had noticed that a growing number of articles appeared to be copy-pasted into Wikipedia by people using machine translators. They were riddled with elementary mistakes—from grammatical blunders to meaningless words to more significant inaccuracies, like an entry that claimed Canada had only 41 inhabitants. Other pages sometimes contained random strings of letters spat out by machines that were unable to find suitable Greenlandic words to express themselves. 

“It might have looked Greenlandic to [the authors], but they had no way of knowing,” complains Wehr.

“Sentences wouldn’t make sense at all, or they would have obvious errors,” he adds. “AI translators are really bad at Greenlandic.”  

What Wehr describes is not unique to the Greenlandic edition. 

Wikipedia is the most ambitious multilingual project after the Bible: There are editions in over 340 languages, and a further 400 even more obscure ones are being developed and tested. Many of these smaller editions have been swamped with automatically translated content as AI has become increasingly accessible. Volunteers working on four African languages, for instance, estimated to MIT Technology Review that between 40% and 60% of articles in their Wikipedia editions were uncorrected machine translations. And after auditing the Wikipedia edition in Inuktitut, an Indigenous language close to Greenlandic that’s spoken in Canada, MIT Technology Review estimates that more than two-thirds of pages containing more than several sentences feature portions created this way. 

This is beginning to cause a wicked problem. AI systems, from Google Translate to ChatGPT, learn to “speak” new languages by scraping huge quantities of text from the internet. Wikipedia is sometimes the largest source of online linguistic data for languages with few speakers—so any errors on those pages, grammatical or otherwise, can poison the wells that AI is expected to draw from. That can make the models’ translation of these languages particularly error-prone, which creates a sort of linguistic doom loop as people continue to add more and more poorly translated Wikipedia pages using those tools, and AI models continue to train from poorly translated pages. It’s a complicated problem, but it boils down to a simple concept: Garbage in, garbage out.
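That feedback loop can be sketched as a toy simulation. Everything here is invented for illustration (the page counts, the error rates, and the rule that each new model's error rate tracks its corpus average plus a fixed noise floor); it only shows the shape of the dynamic, not real measurements:

```python
# Toy model of the "doom loop": each generation, machine-translated pages
# carrying the current model's error rate are added to the training corpus,
# and the next model inherits the corpus average plus its own noise floor.
# All numbers are invented for illustration.

def doom_loop(noise_floor, human_pages, mt_pages_per_gen, generations):
    """Return the model's error rate per generation (index 0 = start)."""
    corpus = [0.0] * human_pages        # assume human-written pages are clean
    model_error = noise_floor
    history = [model_error]
    for _ in range(generations):
        # New Wikipedia pages are produced at the current model's error rate.
        corpus += [model_error] * mt_pages_per_gen
        # The next model learns from the polluted corpus: garbage in, garbage out.
        model_error = min(1.0, noise_floor + sum(corpus) / len(corpus))
        history.append(model_error)
    return history

print(doom_loop(0.3, human_pages=100, mt_pages_per_gen=200, generations=3))
```

Under these (made-up) parameters the error rate only ratchets upward: each batch of machine-translated pages raises the corpus average, which raises the next model's error rate, which degrades the next batch of pages.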

“These models are built on raw data,” says Kevin Scannell, a former professor of computer science at Saint Louis University who now builds computer software tailored for endangered languages. “They will try and learn everything about a language from scratch. There is no other input. There are no grammar books. There are no dictionaries. There is nothing other than the text that is inputted.”

There isn’t perfect data on the scale of this problem, particularly because a lot of AI training data is kept confidential and the field continues to evolve rapidly. But back in 2020, Wikipedia was estimated to make up more than half the training data that was fed into AI models translating some languages spoken by millions across Africa, including Malagasy, Yoruba, and Shona. In 2022, a research team from Germany that looked into what data could be obtained by online scraping even found that Wikipedia was the sole easily accessible source of online linguistic data for 27 under-resourced languages. 

This could have significant repercussions in cases where Wikipedia is poorly written—potentially pushing the most vulnerable languages on Earth toward the precipice as future generations begin to turn away from them. 

“Wikipedia will be reflected in the AI models for these languages,” says Trond Trosterud, a computational linguist at the University of Tromsø in Norway, who has been raising the alarm about the potentially harmful outcomes of badly run Wikipedia editions for years. “I find it hard to imagine it will not have consequences. And, of course, the more dominant position that Wikipedia has, the worse it will be.” 

Use responsibly

Automation has been built into Wikipedia since the very earliest days. Bots keep the platform operational: They repair broken links, fix bad formatting, and even correct spelling mistakes. These repetitive and mundane tasks can be automated away with little problem. There is even an army of bots that scurry around generating short articles about rivers, cities, or animals by slotting their names into formulaic phrases. They have generally made the platform better. 

But AI is different. Anybody can use it to cause massive damage with a few clicks. 

Wikipedia has managed the onset of the AI era better than many other websites. It has not been flooded with AI bots or disinformation, as social media has been. It largely retains the innocence that characterized the earlier internet age. Wikipedia is open and free for anyone to use, edit, and pull from, and it’s run by the very same community it serves. It is transparent and easy to use. But community-run platforms live and die on the size of their communities. English has triumphed, while Greenlandic has sunk. 

“We need good Wikipedians. This is something that people take for granted. It is not magic,” says Amir Aharoni, a member of the volunteer Language Committee, which oversees requests to open or close Wikipedia editions. “If you use machine translation responsibly, it can be efficient and useful. Unfortunately, you cannot trust all people to use it responsibly.” 

Trosterud has studied the behavior of users on small Wikipedia editions and says AI has empowered a subset that he terms “Wikipedia hijackers.” These users can range widely—from naive teenagers creating pages about their hometowns or their favorite YouTubers to well-meaning Wikipedians who think that by creating articles in minority languages they are in some way “helping” those communities. 

“The problem with them nowadays is that they are armed with Google Translate,” Trosterud says, adding that this is allowing them to produce much longer and more plausible-looking content than they ever could before: “Earlier they were armed only with dictionaries.” 

This has effectively industrialized the acts of destruction—which affect vulnerable languages most, since AI translations are typically far less reliable for them. There can be lots of different reasons for this, but a meaningful part of the issue is the relatively small amount of source text that is available online. And sometimes models struggle to identify a language because it is similar to others, or because some, including Greenlandic and most Native American languages, have structures that make them badly suited to the way most machine translation systems work. (Wehr notes that in Greenlandic most words are agglutinative, meaning they are built by attaching prefixes and suffixes to stems. As a result, many words are extremely context specific and can express ideas that in other languages would take a full sentence.) 
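Wehr's point about agglutination can be made concrete with a toy sketch. The morpheme gloss below is a simplified illustration, not a linguistic analysis of real Greenlandic; the point is that a single surface word packs what English spreads across a whole sentence, which is a poor fit for translation systems built around word-by-word alignment:

```python
# Toy sketch of agglutinative word-building: one "word" is a stem plus a
# chain of suffixes, each adding meaning that English expresses with
# separate words. The morphemes and gloss below are a simplified,
# assumed illustration.

def build_word(stem, suffixes):
    """Concatenate a stem and its suffixes into a single surface word."""
    return stem + "".join(suffixes)

# Hypothetical gloss: stem "angerlar-" (go home) + "-niar-" (intend to)
# + "-punga" (first person, indicative) -> one word, one English sentence.
word = build_word("angerlar", ["niar", "punga"])
print(word)  # angerlarniarpunga
```

Because so much meaning is fused into context-specific word forms, two valid sentences may share almost no identical words, leaving a statistical translator with far fewer repeated units to learn from.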

Research produced by Google before a major expansion of Google Translate rolled out three years ago found that translation systems for lower-resourced languages were generally of a lower quality than those for better-resourced ones. Researchers found, for example, that their model would often mistranslate basic nouns across languages, including the names of animals and colors. (In a statement to MIT Technology Review, Google wrote that it is “committed to meeting a high standard of quality for all 249 languages” it supports “by rigorously testing and improving [its] systems, particularly for languages that may have limited public text resources on the web.”) 

Wikipedia itself offers a built-in editing tool called Content Translate, which allows users to automatically translate articles from one language to another—the idea being that this will save time by preserving the references and fiddly formatting of the originals. But it piggybacks on external machine translation systems, so it’s largely plagued by the same weaknesses as other machine translators—a problem that the Wikimedia Foundation says is hard to solve. It’s up to each edition’s community to decide whether this tool is allowed, and some have decided against it. (Notably, English-language Wikipedia has largely banned its use, claiming that some 95% of articles created using Content Translate failed to meet an acceptable standard without significant additional work.) But it’s at least easy to tell when the program has been used; Content Translate adds a tag on the Wikipedia back end. 
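That back-end tag is what makes Content Translate edits auditable. As a sketch, a standard MediaWiki recent-changes query can be filtered to edits carrying the tool's change tag (the tag name "contenttranslation" and the example host are assumptions here, not verified against any particular edition):

```python
# Sketch: build a MediaWiki API URL listing recent edits made with the
# Content Translate tool, identified by its change tag (assumed here to
# be "contenttranslation"). Uses the standard recentchanges list module.
from urllib.parse import urlencode

def content_translate_query(wiki_host, limit=50):
    """Return a recentchanges query URL filtered to Content Translate edits."""
    params = {
        "action": "query",
        "list": "recentchanges",
        "rctag": "contenttranslation",  # assumed tag applied by the tool
        "rclimit": limit,
        "format": "json",
    }
    return f"https://{wiki_host}/w/api.php?{urlencode(params)}"

# e.g., audit the Inuktitut edition (hypothetical usage):
print(content_translate_query("iu.wikipedia.org"))
```

Fetching that URL would return the tagged edits as JSON, giving a rough count of how much of an edition's recent activity came through the tool.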

Other AI programs can be harder to monitor. Still, many Wikipedia editors I spoke with said that once their languages were added to major online translation tools, they noticed a corresponding spike in the frequency with which poor, likely machine-translated pages were created. 

Some Wikipedians using AI to translate content do occasionally admit that they do not speak the target languages. They may see themselves as providing smaller communities with rough-cut articles that speakers can then fix—essentially following the same model that has worked well for more active Wikipedia editions.  

But once error-filled pages are produced in small languages, there is usually not an army of knowledgeable people who speak those languages standing ready to improve them. There are few readers of these editions, and sometimes not a single regular editor. 

Yuet Man Lee, a Canadian teacher in his 20s, says that he used a mix of Google Translate and ChatGPT to translate a handful of articles that he had written for the English Wikipedia into Inuktitut, thinking it’d be nice to pitch in and help a smaller Wikipedia community. He says he added a note to one saying that it was only a rough translation. “I did not think that anybody would notice [the article],” he explains. “If you put something out there on the smaller Wikipedias—most of the time nobody does.” 

But at the same time, he says, he still thought “someone might see it and fix it up”—adding that he had wondered whether the Inuktitut translation that the AI systems generated was grammatically correct. Nobody has touched the article since he created it.

Lee, who teaches social sciences in Vancouver and first started editing entries in the English Wikipedia a decade ago, says that users familiar with more active Wikipedias can fall victim to this mindset, which he terms a “bigger-Wikipedia arrogance”: When they try to contribute to smaller Wikipedia editions, they assume that others will come along to fix their mistakes. It can sometimes work. Lee says he had previously contributed several articles to Wikipedia in Tatar, a language spoken by several million people mainly in Russia, and at least one of those was eventually corrected. But the Inuktitut Wikipedia is, by comparison, a “barren wasteland.” 

He emphasizes that his intentions had been good: He wanted to add more articles to an Indigenous Canadian Wikipedia. “I am now thinking that it may have been a bad idea. I did not consider that I could be contributing to a recursive loop,” he says. “It was about trying to get content out there, out of curiosity and for fun, without properly thinking about the consequences.” 

“Totally, completely no future”

Wikipedia is a project that is driven by wide-eyed optimism. Editing can be a thankless task, involving weeks spent bickering with faceless, pseudonymous people, but devotees put in hours of unpaid labor because of a commitment to a higher cause. It is this commitment that drives many of the regular small-language editors I spoke with. They all feared what would happen if garbage continued to appear on their pages.

Abdulkadir Abdulkadir, a 26-year-old agricultural planner who spoke with me over a crackling phone call from a busy roadside in northern Nigeria, said that he spends three hours every day fiddling with entries in his native Fulfulde, a language used mainly by pastoralists and farmers across the Sahel. “But the work is too much,” he said. 

Abdulkadir sees an urgent need for the Fulfulde Wikipedia to work properly. He has been suggesting it as one of the few online resources for farmers in remote villages, potentially offering information on which seeds or crops might work best for their fields in a language they can understand. If you give them a machine-translated article, Abdulkadir told me, then it could “easily harm them,” as the information will probably not be translated correctly into Fulfulde. 

Google Translate, for instance, says the Fulfulde word for January means June, while ChatGPT says it’s August or September. The programs also suggest the Fulfulde word for “harvest” means “fever” or “well-being,” among other possibilities.  

Abdulkadir said he had recently been forced to correct an article about cowpeas, a foundational cash crop across much of Africa, after discovering that it was largely illegible. 

If someone wants to create pages on the Fulfulde Wikipedia, Abdulkadir said, they should be translated manually. Otherwise, “whoever will read your articles will [not] be able to get even basic knowledge,” he tells these Wikipedians. Nevertheless, he estimates that some 60% of articles are still uncorrected machine translations. Abdulkadir told me that unless something important changes with how AI systems learn and are deployed, then the outlook for Fulfulde looks bleak. “It is going to be terrible, honestly,” he said. “Totally, completely no future.” 

Across the country from Abdulkadir, Lucy Iwuala contributes to Wikipedia in Igbo, a language spoken by several million people in southeastern Nigeria. “The harm has already been done,” she told me, opening the two most recently created articles. Both had been automatically translated via Wikipedia’s Content Translate and contained so many mistakes that she said it would have given her a headache to continue reading them. “There are some terms that have not even been translated. They are still in English,” she pointed out. She recognized the username that had created the pages as a serial offender. “This one even includes letters that are not used in the Igbo language,” she said. 

Iwuala began regularly contributing to Wikipedia three years ago out of concern that Igbo was being displaced by English. It is a worry that is common to many who are active on smaller Wikipedia editions. “This is my culture. This is who I am,” she told me. “That is the essence of it all: to ensure that you are not erased.” 

Iwuala, who now works as a professional translator between English and Igbo, said the users doing the most damage are inexperienced and see AI translations as a way to quickly increase the profile of the Igbo Wikipedia. She often finds herself having to explain at online edit-a-thons she organizes, or over email to various error-prone editors, that the results can be the exact opposite, pushing users away: “You will be discouraged and you will no longer want to visit this place. You will just abandon it and go back to the English Wikipedia.”  

These fears are echoed by Noah Ha‘alilio Solomon, an assistant professor of Hawaiian language at the University of Hawai‘i. He reports that some 35% of words on some pages in the Hawaiian Wikipedia are incomprehensible. “If this is the Hawaiian that is going to exist online, then it will do more harm than anything else,” he says. 

Hawaiian, which was teetering on the verge of extinction several decades ago, has been undergoing a recovery effort led by Indigenous activists and academics. Seeing such poor Hawaiian on such a widely used platform as Wikipedia is upsetting to Ha‘alilio Solomon. 

“It is painful, because it reminds us of all the times that our culture and language has been appropriated,” he says. “We have been fighting tooth and nail in an uphill climb for language revitalization. There is nothing easy about that, and this can add extra impediments. People are going to think that this is an accurate representation of the Hawaiian language.” 

The consequences of all these Wikipedia errors can quickly become clear. AI translators that have undoubtedly ingested these pages in their training data are now assisting in the production, for instance, of error-strewn AI-generated books aimed at learners of languages as diverse as Inuktitut and Cree, Indigenous languages spoken in Canada, and Manx, a small Celtic language spoken on the Isle of Man. Many of these have been popping up for sale on Amazon. “It was just complete nonsense,” says Richard Compton, a linguist at the University of Quebec in Montreal, of a volume he reviewed that had purported to be an introductory phrasebook for Inuktitut. 

Rather than making minority languages more accessible, AI is now creating an ever-expanding minefield for students and speakers of those languages to navigate. “It is a slap in the face,” Compton says. He worries that younger generations in Canada, hoping to learn languages in communities that have fought uphill battles against discrimination to pass on their heritage, might turn to online tools such as ChatGPT or phrasebooks on Amazon and simply make matters worse. “It is fraud,” he says.

A race against time

According to UNESCO, a language is declared extinct every two weeks. But whether the Wikimedia Foundation, which runs Wikipedia, has an obligation to the languages used on its platform is an open question. When I spoke to Runa Bhattacharjee, a senior director at the foundation, she said that it was up to the individual communities to make decisions about what content they wanted to exist on their Wikipedia. “Ultimately, the responsibility really lies with the community to see that there is no vandalism or unwanted activity, whether through machine translation or other means,” she said. Usually, Bhattacharjee added, editions were considered for closure only if a specific complaint was raised about them. 

But if there is no active community, how can an edition be fixed or even have a complaint raised? 

Bhattacharjee explained that the Wikimedia Foundation sees its role in such cases as about maintaining the Wikipedia platform in case someone comes along to revive it: “It is the space that we provide for them to grow and develop. That is where we are at.”   

Inari Saami, spoken in a single remote community in northern Finland, is a poster child for how people can take good advantage of Wikipedia. The language was headed toward extinction four decades ago; there were only four children who spoke it. Their parents created the Inari Saami Language Association in a last-ditch bid to keep it going. The efforts worked. There are now several hundred speakers, schools that use Inari Saami as a medium of instruction, and 6,400 Wikipedia articles in the language, each one copy-edited by a fluent speaker. 

This success highlights how Wikipedia can indeed provide small and determined communities with a unique vehicle to promote their languages’ preservation. “We don’t care about quantity. We care about quality,” says Fabrizio Brecciaroli, a member of the Inari Saami Language Association. “We are planning to use Wikipedia as a repository for the written language. We need to provide tools that can be used by the younger generations. It is important for them to be able to use Inari Saami digitally.” 

This has been such a success that Wikipedia has been integrated into the curriculum at the Inari Saami–speaking schools, Brecciaroli adds. He fields phone calls from teachers asking him to write up simple pages on topics from tornadoes to Saami folklore. Wikipedia has even offered a way to introduce words into Inari Saami. “We have to make up new words all the time,” Brecciaroli says. “Young people need them to speak about sports, politics, and video games. If they are unsure how to say something, they now check Wikipedia.”

Wikipedia is a monumental intellectual experiment. What’s happening with Inari Saami suggests that with maximum care, it can work in smaller languages. “The ultimate goal is to make sure that Inari Saami survives,” Brecciaroli says. “It might be a good thing that there isn’t a Google Translate in Inari Saami.” 

That may be true—though large language models like ChatGPT can be made to translate phrases into languages that more traditional machine translation tools do not offer. Brecciaroli told me that ChatGPT isn’t great in Inari Saami but that the quality varies significantly depending on what you ask it to do; if you ask it a question in the language, then the answer will be filled with words from Finnish and even words it invents. But if you ask it something in English, Finnish, or Italian and then ask it to reply in Inari Saami, it will perform better. 

In light of all this, creating as much high-quality content online as can possibly be written becomes a race against time. “ChatGPT only needs a lot of words,” Brecciaroli says. “If we keep putting good material in, then sooner or later, we will get something out. That is the hope.” This is an idea supported by multiple linguists I spoke with—that it may be possible to end the “garbage in, garbage out” cycle. (OpenAI, which operates ChatGPT, did not respond to a request for comment.)

Still, the overall problem is likely to grow and grow, since many languages are not as lucky as Inari Saami—and their AI translators will most likely be trained on more and more AI slop. Wehr, unfortunately, seems far less optimistic about the future of his beloved Greenlandic. 

Since deleting much of the Greenlandic-language Wikipedia, he has spent years trying to recruit speakers to help him revive it. He has appeared in Greenlandic media and made social media appeals. But he hasn’t gotten much of a response; he says it has been demoralizing. 

“There is nobody in Greenland who is interested in this, or who wants to contribute,” he says. “There is completely no point in it, and that is why it should be closed.” 

Late last year, he began a process requesting that the Wikipedia Language Committee shut down the Greenlandic-language edition. Months of bitter debate followed between dozens of Wikipedia bureaucrats; some seemed to be surprised that a superficially healthy-seeming edition could be gripped by so many problems. 

Then, earlier this month, Wehr’s proposal was accepted: Greenlandic Wikipedia is set to be shuttered, and any articles that remain will be moved into the Wikipedia Incubator, where new language editions are tested and built. Among the reasons cited by the Language Committee is the use of AI tools, which have “frequently produced nonsense that could misrepresent the language.”   

Nevertheless, it may be too late—mistakes in Greenlandic already seem to have become embedded in machine translators. If you prompt either Google Translate or ChatGPT to do something as simple as count to 10 in proper Greenlandic, neither program can deliver. 

Jacob Judah is an investigative journalist based in London. 

Fusion power plants don’t exist yet, but they’re making money anyway

This week, Commonwealth Fusion Systems announced it has another customer for its first commercial fusion power plant, in Virginia. Eni, one of the world’s largest oil and gas companies, signed a billion-dollar deal to buy electricity from the facility.

One small detail? That reactor doesn’t exist yet. Neither does the smaller reactor Commonwealth is building first to demonstrate that its tokamak design will work as intended.

This is a weird moment in fusion. Investors are pouring billions into the field to build power plants, and some companies are even signing huge agreements to purchase power from those still-nonexistent plants. All this comes before companies have actually completed a working reactor that can produce electricity. It takes money to develop a new technology, but all this funding could lead to some twisted expectations. 

Nearly three years ago, the National Ignition Facility at Lawrence Livermore National Laboratory hit a major milestone for fusion power. With the help of the world’s most powerful lasers, scientists heated a pellet of fuel to 100 million °C. Hydrogen atoms in that fuel fused together, releasing more energy than the lasers put in.

It was a game changer for the vibes in fusion. The NIF experiment finally showed that a fusion reactor could yield net energy. Plasma physicists’ models had certainly suggested that it should be true, but it was another thing to see it demonstrated in real life.

But in some ways, the NIF results didn’t really change much for commercial fusion. That site’s lasers used a bonkers amount of energy, the setup was wildly complicated, and the whole thing lasted a fraction of a second. To operate a fusion power plant, not only do you have to achieve net energy, but you also need to do that on a somewhat constant basis and—crucially—do it economically.

So in the wake of the NIF news, all eyes went to companies like Commonwealth, Helion, and Zap Energy. Who would be the first to demonstrate this milestone in a more commercially feasible reactor? Or better yet, who would be the first to get a power plant up and running?

So far, the answer is none of them.

To be fair, many fusion companies have made technical progress. Commonwealth has built and tested its high-temperature superconducting magnets and published research about that work. Zap Energy demonstrated three hours of continuous operation in its test system, a milestone validated by the US Department of Energy. Helion started construction of its power plant in Washington in July. (And that’s not to mention a thriving, publicly funded fusion industry in China.)  

These are all important milestones, and these and other companies have seen many more. But as Ed Morse, a professor of nuclear engineering at Berkeley, summed it up to me: “They don’t have a reactor.” (He was speaking specifically about Commonwealth, but really, the same goes for the others.)

And yet, the money pours in. Commonwealth raised over $800 million in funding earlier this year. And now it’s got two big customers signed on to buy electricity from this future power plant.

Why buy electricity from a reactor that’s currently little more than ideas on paper? From the perspective of these particular potential buyers, such agreements can be something of a win-win, says Adam Stein, director of nuclear energy innovation at the Breakthrough Institute.

By putting a vote of confidence behind Commonwealth, Eni could help the fusion startup get the capital it needs to actually build its plant. The company also directly invests in Commonwealth, so it stands to benefit from success. Getting a good rate on the capital needed to build the plant could also mean the electricity is ultimately cheaper for Eni, Stein says. 

Ultimately, fusion needs a lot of money. If fossil-fuel companies and tech giants want to provide it, all the better. One concern I have, though, is how outside observers are interpreting these big commitments. 

US Energy Secretary Chris Wright has been loud about his support for fusion and his expectations of the technology. Earlier this month, he told the BBC that it will soon power the world.

He’s certainly not the first to have big dreams for fusion, and it is an exciting technology. But despite the jaw-dropping financial milestones, this industry is still very much in development. 

And while Wright praises fusion, the Trump administration is slashing support for other energy technologies, including wind and solar power, and spreading disinformation about their safety, cost, and effectiveness. 

To meet the growing electricity demand and cut emissions from the power sector, we’ll need a whole range of technologies. It’s a risk and a distraction to put all our hopes on an unproven energy tech when there are plenty of options that actually exist. 

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

The Download: growing threats to vulnerable languages, and fact-checking Trump’s medical claims

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

How AI and Wikipedia have sent vulnerable languages into a doom spiral

Wikipedia is the most ambitious multilingual project after the Bible: There are editions in over 340 languages, and a further 400 even more obscure ones are being developed. But many of these smaller editions are being swamped with AI-translated content. Volunteers working on four African languages, for instance, estimated to MIT Technology Review that between 40% and 60% of articles in their Wikipedia editions were uncorrected machine translations.

This is beginning to cause a wicked problem. AI systems learn new languages by scraping huge quantities of text from the internet. Wikipedia is sometimes the largest source of online linguistic data for languages with few speakers—so any errors on those pages can poison the wells that AI is expected to draw from. Volunteers are being forced to go to extreme lengths to fix the issue, even deleting certain languages from Wikipedia entirely. Read the full story

Jacob Judah 

This story is part of our Big Story series: MIT Technology Review’s most important, ambitious reporting. These stories take a deep look at the technologies that are coming next and what they will mean for us and the world we live in. Check out the rest of the series here.

Trump is pushing leucovorin as a new treatment for autism. What is it? 

On Monday, President Trump claimed that childhood vaccines and acetaminophen, the active ingredient in Tylenol, are to blame for the increasing prevalence of autism. He advised pregnant women against taking the medicine. 

The administration also announced that the FDA would work to make a medication called leucovorin available as a treatment for children with autism. The president’s assertions left many dismayed. “The data cited do not support the claim that Tylenol causes autism and leucovorin is a cure, and only stoke fear and falsely suggest hope when there is no simple answer,” said the Coalition for Autism Researchers, a group of more than 250 scientists, in a statement. So what does the evidence say? Read our story to find out

Cassandra Willyard 

This is part of our MIT Technology Review Explains series, where our writers untangle the complex, messy world of technology to help you understand what’s coming next. You can read more from the series here.

Fusion power plants don’t exist yet, but they’re making money anyway

This week, Commonwealth Fusion Systems announced it has another customer for its first commercial fusion power plant, in Virginia. Eni, one of the world’s largest oil and gas companies, signed a billion-dollar deal to buy electricity from the facility.

One small detail? That reactor doesn’t exist yet. This is a weird moment in fusion. Investors are pouring billions into the field to build power plants, and companies are even signing huge agreements to purchase power from those still-nonexistent plants. 

But all this comes before companies have actually completed a working reactor that can produce electricity. It takes money to develop a new technology, but all this funding could lead to some twisted expectations. Read the full story.

—Casey Crownhart 

This story is from The Spark, our weekly newsletter all about the latest in climate change and clean tech. Sign up to receive it in your inbox every Wednesday.

The AI Hype Index: Cracking the chatbot code

Millions of us use chatbots every day, even though we don’t really know how they work or how using them affects us. In a bid to address this, the FTC recently launched an inquiry into how chatbots affect children and teenagers. Elsewhere, OpenAI has started to shed more light on what people are actually using ChatGPT for, and why it thinks its LLMs are so prone to making stuff up.

There’s still plenty we don’t know—but that isn’t stopping governments from forging ahead with AI projects. In the US, RFK Jr. is pushing his staffers to use ChatGPT, while Albania is using a chatbot for public contract procurement. Check out the latest edition of our AI Hype Index to help you sort AI reality from hyped-up fiction. 

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Huntington’s disease has been treated successfully for the first time
Gene therapy slowed the progression of the disease in patients by 75%. (The Economist $) 
+ Here’s how the gene editing tool CRISPR is changing lives. (MIT Technology Review)

2 Google says 90% of tech workers are using AI
But most of them also say they don’t trust AI models’ outputs. (CNN)
+ Why does AI hallucinate? (MIT Technology Review)

3 A MAGA TikTok takeover is coming
Just as free speech protections in the US start to look worryingly fragile. (The Atlantic $)

4 Chinese tech workers are returning from the US
A whole bunch of complex factors are both driving them to leave and luring them back. (Rest of World)
+ But it’s hard to say what the impact of the new $100,000 fee for H-1B visas will be on India’s tech sector. (WP $)
+ Europe is hoping to nab more tech talent too. (The Verge)

5 If AI can diagnose us, what are doctors for?
They need to prepare for the fact that chatbot use is becoming more and more widespread among patients. (New Yorker $)
+ This medical startup uses LLMs to run appointments and make diagnoses. (MIT Technology Review)

6 Drones have been spotted at four more airports in Denmark
It looks like a coordinated attack, but officials still haven’t worked out who is behind it. (FT $)

7 TSMC has unveiled AI-designed chips that use less energy
The AI software found better solutions than TSMC’s own human engineers, and did so much faster. (South China Morning Post)
+ These four charts sum up the state of AI and energy. (MIT Technology Review)

8 How to find love on dating apps 💑
It’s not easy, but it is possible. (The Guardian)

9 AI models can’t cope with Persian social etiquette
It involves a lot of saying ‘no’ when you mean ‘yes’, which simply doesn’t wash with computers. (Ars Technica)

10 VR headsets are better than ever, but no one seems to care
The tech industry keeps overestimating how willing people are to strap computers to their faces. (Gizmodo)

Quote of the day

“We are living through the most destructive arms race in human history.”

—Ukrainian president Volodymyr Zelenskyy tells world leaders gathered at the UN that they need to intervene to stop the escalating development of drone technology and AI, The Guardian reports.

One more thing

STUART BRADFORD

The great AI consciousness conundrum

AI consciousness isn’t just a tricky intellectual puzzle; it’s a morally weighty problem. Fail to identify a conscious AI, and you might unintentionally subjugate a being whose interests ought to matter. Mistake an unconscious AI for a conscious one, and you risk compromising human safety and happiness for the sake of an unthinking, unfeeling hunk of silicon and code.

Over the past few decades, a small research community has doggedly attacked the question of what consciousness is and how it works. The effort has yielded real progress. And now, with the rapid advance of AI technology, these insights could offer our only guide to the untested, morally fraught waters of artificial consciousness. Read the full story.

—Grace Huckins

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ It’s Fat Bear Week! Who gets your vote this year?
+ Learn about Lord Woodbine, the forgotten sixth Beatle
+ There are some truly wild and wacky recipes in this Medieval Cookery collection. Venison porridge, anyone? 
+ Pessimism about technology is as old as technology itself, as this archive shows.

Shoplifters could soon be chased down by drones

Flock Safety, whose drones were once reserved for police departments, is now offering them for private-sector security, the company announced today, with potential customers including businesses intent on curbing shoplifting. 

Companies in the US can now place Flock’s drone docking stations on their premises. If the company has a waiver from the Federal Aviation Administration to fly beyond visual line of sight (these are becoming easier to get), its security team can fly the drones within a certain radius, often a few miles. 

“Instead of a 911 call [that triggers the drone], it’s an alarm call,” says Keith Kauffman, a former police chief who now directs Flock’s drone program. “It’s still the same type of response.”

Kauffman walked through how the drone program might work in the case of retail theft: If the security team at a store like Home Depot, for example, saw shoplifters leave the store, then the drone, equipped with cameras, could be activated from its docking station on the roof.

“The drone follows the people. The people get in a car. You click a button,” he says, “and you track the vehicle with the drone, and the drone just follows the car.” 

The video feed of that drone might go to the company’s security team, but it could also be automatically transmitted directly to police departments.

The company says it’s in talks with large retailers but doesn’t yet have any signed contracts. The only private-sector company Kauffman named as a customer is Morning Star, a California tomato processor that uses drones to secure its distribution facilities. Flock will also pitch the drones to hospital campuses, warehouse sites, and oil and gas facilities. 

It’s worth noting that the FAA is currently drafting new rules for how it grants approval to pilots flying drones out of sight, and it’s not clear if Flock’s use case would be allowed under the currently proposed guidance.

The company’s expansion to the private sector follows the rise of programs launched by police departments around the country to deploy drones as first responders. In such programs, law enforcement sends drones to a scene to provide visuals faster than an officer can get there. 

Flock has arguably led this push, and police departments have claimed drone-enabled successes, like a supply drop to a boy lost in the Colorado wilderness. But the programs have also sparked privacy worries, concerns about overpolicing in minority neighborhoods, and lawsuits charging that police departments should not block public access to drone footage. 

Other technologies Flock offers, like license plate readers, have drawn recent criticism for the ease with which federal US immigration agencies, including ICE and CBP, could look at data collected by local police departments amid President Trump’s mass deportation efforts.

Flock’s expansion into private-sector security is “a logical step, but in the wrong direction,” says Rebecca Williams, senior strategist for the ACLU’s privacy and data governance unit. 

Williams cited a growing erosion of Fourth Amendment protections—which prevent unlawful search and seizure—in the online era, in which the government can purchase private data that it would otherwise need a warrant to acquire. Proposed legislation to curb that practice has stalled, and Flock’s expansion into the private sector would exacerbate the issue, Williams says.

“Flock is the Meta of surveillance technology now,” Williams says, referring to the amount of personal data that company has acquired and monetized. “This expansion is very scary.”

New Ecommerce Tools: September 25, 2025

This week’s rundown of new products and services for ecommerce merchants includes updates on agentic commerce, multichannel fulfillment, email and SMS marketing, automated sales tax, sponsored ads, social commerce, AI assistants, and ecommerce accelerators.

Got an ecommerce product release? Email releases@practicalecommerce.com.

New Tools for Merchants

ReFiBuy launches Commerce Intelligence Engine. ReFiBuy, a company focused on leveraging agentic AI for ecommerce, has announced the availability of its Commerce Intelligence Engine. The tool helps retailers, brands, and agencies evaluate, enrich, distribute, and monitor their products on generative AI platforms such as ChatGPT, Google Gemini, Perplexity, Meta.ai, Microsoft Copilot, Grok, and Claude. ReFiBuy’s engine integrates with merchants’ existing systems.

Home page of ReFiBuy

ReFiBuy

Amazon Multi-Channel Fulfillment expands to Shein, Shopify, and Walmart. Amazon has announced the expansion of its logistics service, Amazon Multi-Channel Fulfillment, to support merchants on Shein, Shopify, and Walmart. By year-end, Shein merchants can fulfill their orders through the “Amazon MCF for Shein” app. Shopify merchants can now select Amazon MCF as their fulfillment partner via Shopify’s Fulfillment Network. Merchants can also use Amazon MCF for Walmart Marketplace orders.

Google releases store widget to build shopper confidence and drive sales. Google has released a store widget to embed on any page of an existing ecommerce site. The widget displays current store ratings as well as details such as shipping, return policies, and customer reviews. The widget is available in three tiers, depending on a store’s eligibility.

Fast Simon launches three AI agents for ecommerce brands. Fast Simon, an ecommerce optimization platform, has released three AI agents for automation and personalization. AI Shopping Assistant delivers personalized product discovery, interpreting consumer intent and guiding shoppers through complex catalogs. AI Merchandising Assistant enables merchandisers to manage fast-changing assortments and product displays. AI Analytics Assistant provides merchants with conversational access to insights and reports.

Home page of Fast Simon

Fast Simon

Omnisend launches AI-powered assistants, advanced SMS, and enhanced reporting. Omnisend, an email and SMS marketing platform for ecommerce brands, has announced a platform update featuring personalization tools, AI-powered assistants, advanced SMS capabilities, and enhanced reporting. With its Personalize Content suite, Omnisend automatically displays recently viewed products inside campaign emails, suggests complementary products based on purchase history, and dynamically displays content blocks. It also (i) enables AI-driven form creation based on events, brand assets, and tone, and (ii) facilitates branded domain shortlinks and enhanced reporting.

Numeral raises $35 million to automate sales tax with AI. Numeral, an AI-powered sales tax compliance platform, has announced a $35 million funding round by Mayfield, with participation from Benchmark, Uncork Capital, Y Combinator, and Mantis. The funding will fuel product innovation, global expansion, and AI-driven automation. Numeral says it serves 2,000 ecommerce and SaaS brands, including Ridge, Eight Sleep, Graza, Grüns, and Manus. The company supports tax compliance across 60 countries.

Amazon provides access to Marketing Cloud for sponsored ads advertisers. Amazon has opened Marketing Cloud, providing instant, self-service access for any sponsored ads advertiser directly within their Amazon Ads account. This enables advertisers to use Marketing Cloud’s analytics suite immediately upon logging in, without requiring partner intervention. Through an enhanced interface featuring no-code templates and AI-powered assistance, advertisers can analyze campaign performance and create high-intent audiences to improve relevance and effectiveness in their sponsored ads campaigns, per Amazon.

Yuno launches Nova AI agents to reduce payment friction. Yuno, a financial infrastructure platform for global payments, has launched Nova, a series of AI agents that reduce payment friction. Yuno says Nova turns card declines, abandoned checkouts, and missed payments into AI-powered customer conversations via phone and WhatsApp. The agents choose the optimal outreach method, generate the right script, localize conversations across 70 languages, and adapt dynamically based on a customer’s responses and preferences.

Home page of Yuno

Yuno

Social commerce platform Whop launches payments with smart routing. Whop, a social commerce platform, has launched a comprehensive payment infrastructure. Whop Payments connects to multiple processors simultaneously, automatically rerouting declined transactions through alternative providers to complete them, and operates across 241 regions worldwide. Sellers receive instant payouts through traditional bank transfers, Bitcoin, stablecoins, or digital wallets such as Venmo.

Amazon Ads launches agentic AI tool for creating ads. Amazon Ads has announced an agentic AI tool to create ads for campaigns. Within Creative Studio, advertisers click “chat” to access a conversational, AI-powered partner to research, brainstorm ideas, develop storyboarded concepts, and produce video and display ads. The tool combines shopper signals with information from the advertiser’s product pages, Brand Store, and website.

Google and PayPal partner on commerce features. Google and PayPal have announced a multiyear partnership focused on advancing several commerce features. PayPal’s features, including branded checkout, Hyperwallet, and payouts, will integrate with various Google products. PayPal Enterprise Payments will be one of the key card processors across Google Cloud, Google Ads, and Google Play. PayPal will also partner with Google Cloud to reimagine its technology foundations, applications, and infrastructure.

Bolt launches founder-first program to accelerate digital commerce. Bolt, a financial technology company specializing in one-click checkouts, has launched Activate to help ecommerce founders bring ideas to market faster and scale. The program targets (i) early-stage direct-to-consumer businesses, (ii) gaming, fintech, consumer health, and creator-led brands, and (iii) physical retail companies expanding into ecommerce. Participants gain access to Bolt’s one-click checkout and product suite, as well as a curated global founder network.

Home page of Bolt

Bolt

Gemini 2.5 Flash Update: Clearer Answers, Better Image Understanding via @sejournal, @MattGSouthern

Google updates Gemini 2.5 Flash with clearer step-by-step help, more structured responses, and stronger image understanding, now live in the Gemini app.

  • Gemini 2.5 Flash adds step-by-step guidance aimed at homework and complex topics.
  • Responses are formatted with headers, lists, and tables for faster scanning.
  • Image understanding can explain detailed diagrams and turn notes into flashcards.
Marketing Is 4th Most Exposed To GenAI, Indeed Study Finds via @sejournal, @MattGSouthern

Marketing professionals face one of the highest levels of potential AI disruption across all occupations, with 69% of marketing job skills positioned for transformation by generative AI, according to new data from Indeed.

The analysis evaluated nearly 2,900 work skills against U.S. job postings and found that marketing is the fourth most exposed profession, trailing only software development, data and analytics, and accounting.

The Shift From Doing To Directing

Indeed’s GenAI Skill Transformation Index groups skills into four levels: minimal, assisted, hybrid, and full transformation.

For marketing professionals, the majority of affected skills fall into hybrid transformation, where AI handles routine execution while humans provide oversight, validation, and strategic direction.

Indeed writes:

“Human oversight will remain critical when applying these skills, but GenAI can already perform a significant portion of routine work.”

That covers tasks AI can complete reliably in standard cases, with people stepping in to manage exceptions, interpret ambiguous situations, and ensure quality control.

What Marketing Skills Are Most at Risk?

Administrative, documentation, and text-processing tasks show high transformation potential, where AI already performs well at information retrieval, drafting, and analysis.

Communication-related work sits in the hybrid zone for many occupations. In one example from the report, communication skills appear in 23% of nursing postings and are classified as “hybrid.” This illustrates how routine language tasks are increasingly AI-assistable while human judgment remains essential.

How the Study Scored Skills

The study used multiple large language models and based its ratings on consistent results from OpenAI’s GPT-4.1 and Anthropic’s Claude Sonnet 4, noting that model performance varies.

The team evaluated each skill on two dimensions: problem-solving requirements and physical necessity. Marketing scores high on problem-solving and low on physical necessity, making many skills strong candidates for AI transformation.
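To make the two-dimension idea concrete, here is a minimal illustrative sketch in Python. The thresholds, labels, and scoring logic below are hypothetical assumptions for illustration only; they are not Indeed's actual methodology, which the report does not publish at this level of detail.

```python
# Hypothetical sketch of scoring a skill on two dimensions, each 0.0-1.0.
# All cutoffs below are invented for illustration, not Indeed's real values.

def transformation_level(problem_solving: float, physical_necessity: float) -> str:
    """Map a skill's two scores to a rough transformation level.

    High problem-solving plus low physical necessity suggests stronger
    GenAI transformation potential, mirroring the report's observation
    about cognitive, screen-based marketing work.
    """
    if physical_necessity >= 0.7:
        return "minimal"   # hands-on work resists automation
    if problem_solving >= 0.7 and physical_necessity <= 0.3:
        return "full"      # cognitive, well-structured, screen-based
    if problem_solving >= 0.4:
        return "hybrid"    # AI executes routine cases, humans oversee
    return "assisted"      # AI helps, humans still drive the task

# A marketing-style skill: high problem-solving, low physical necessity.
print(transformation_level(0.8, 0.1))  # full

# A nursing-style skill: high physical necessity keeps it human-centered.
print(transformation_level(0.6, 0.9))  # minimal
```

Under this kind of scheme, most marketing skills would cluster in the "hybrid" and "full" bands, which matches the distribution the report describes.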

A Change From Previous Research

Earlier Hiring Lab work found zero skills “very likely” to be fully replaced by GenAI.

In this update, the report identifies 19 skills (0.7% of the ~2,900 analyzed) that cross that “very likely” threshold. The authors frame this as incremental progress toward end-to-end automation for narrow, well-structured tasks, not broad replacement.

The Broader Employment Picture

Across the labor market, 26% of jobs posted on Indeed show high potential for GenAI transformation, 54% show moderate potential, and 20% show low exposure.

These are measures of potential transformation. Actual outcomes depend on adoption, workflow design, and reskilling.

The report notes:

“Any realized impacts will depend entirely on whether and how businesses adopt and integrate GenAI tools…”

Marketing vs. Other Professions

Software development tops the list with 81% of skills facing transformation, followed by data and analytics (79%) and accounting (74%).

On the other end, nursing shows 33% skill transformation, with core patient-care responsibilities remaining human-centered.

Marketing’s position reflects its reliance on cognitive, screen-based work that AI can increasingly assist.

Not All AI Models Are Equal

The report emphasizes that model choice matters. Different models varied in output quality and stability, so teams should test tools against their own use cases rather than assume uniform performance.

Looking Ahead

The report’s authors, Annina Hering and Arcenis Rojas, created the GenAI Skill Transformation Index to reflect the level of transformation rather than simple replacement.

They advise developing skills that complement AI, such as strategy, creative problem-solving, and the ability to validate and interpret AI-generated outputs.

The timeline for these changes will vary by company size, industry, and digital maturity.

But the overall trend is clear: roles are shifting from hands-on task execution toward overseeing AI and setting strategy. Marketers who adopt hybrid workflows early will likely be best positioned.


Featured Image: Roman Samborskyi/Shutterstock