Studies Suggest How To Rank On Google’s AI Overviews via @sejournal, @AdamHeitzman

Google’s AI Overviews (AIOs) are AI-generated responses that appear at the top of the search engine results page (SERP).

Unlike traditional search results, AIOs summarize information from multiple sources to provide direct answers to user queries while offering relevant links.

These overviews are displayed prominently: the AI Overview appears on the left, with relevant links to sources on the right.

Screenshot for search for [why is my cheese not melting], Google, November 2024

Google determines which sources to include based on their credibility and relevance to the user’s search intent. This is where SEO plays a critical role.

Why Are AI Overviews Important For SEO?

Being cited in an AI Overview boosts visibility since it’s the first result users see after their query. This positioning can significantly increase click-through rates (CTR), even for pages that aren’t ranked in the top 10 of the SERP.

Studies indicate that 52% of sources mentioned in AI Overviews rank in the top 10 results, meaning nearly half are pulled from beyond the first page.

This means that even if you don’t rank on the first page, you can still be featured on AI Overviews.

In addition to my own research with our clients, I studied different reports to better understand how you can rank on AI Overviews. Some of these reports include:

How To Rank In AI Overviews: 11 Tips For Organic Visibility

While you can’t directly control whether your pages are cited in an AI Overview, you can improve your chances by following these tips.

1. Add More Context To Your Articles

AI Overviews are designed to answer user queries directly. This means Google rewards content that is well-contextualized and written in a simple, easy-to-read format.

One thing to remember is that AIOs are triggered by informational search intent keywords 99.2% of the time, according to Ahrefs. If you’re writing an article on an informational keyword, focus on writing in a simple, easy-to-read format and add enough context to answer the query fully.

The Surfer SEO study shows that Google focuses on context over keywords. When AIOs show results to a user’s query, they mention exact keyword phrases only 5.4% of the time. Which means keywords are less important in AIOs.

In the example below, the query is [best month to visit Canada], but the AIO doesn’t emphasize the best month in its response. It’s the best time.

Screenshot from search for [best month to visit canada], Google, November 2024

Tips:

  • Use tools like Ahrefs to find AIO-triggering keywords with high-traffic potential. (Use the Ahrefs AI Overview SERP feature, and navigate to the intent filter to choose Informational as the search intent. It finds long-tail keywords for you, and you can write specific answers to these search queries.)
Screenshot from Ahrefs, November 2024
  • Structure your content to answer questions fully, incorporating related topics naturally.
  • Use tools like Google Autocomplete or People Also Ask to identify common questions users have about your topic. (See example below.)
Screenshot from search for [can dogs eat chocolate], Google, November 2024

2. Use Long-Tail Keywords

AI Overviews are more likely to be triggered by specific, long-tail keywords than by generic, short-tail ones.

According to Ahrefs, they’re triggered more for queries with three to four words than for queries with one- to three-word queries.

Screenshot from Ahrefs, November 2024

These keywords often align closely with user intent.

How To Find Them:

  • Use the “Questions” section in keyword tools like AnswerThePublic.
  • Leverage Google Autocomplete to identify conversational search terms.

3. Leverage Structured Markup

Implementing structured data, such as Schema.org markup, helps search engines understand the context and structure of your content. This makes your page more likely to be included in AI Overviews.

Key Markup Types To Use:

  • FAQ schema for question-based content.
  • Article schema for blog posts and informational pieces.
  • Breadcrumb schema to improve navigation signals.

4. Optimize On-Page SEO

On-page SEO remains foundational for ranking in both traditional SERPs and AI Overviews. 52% of AI Overviews sources come from the top 10 search results. This means you have a better chance of getting cited if your page ranks for that keyword.

Best Practices:

  • Use primary and secondary keywords in titles, headings, and subheadings.
  • Write compelling meta descriptions to boost CTR.
  • Ensure your content meets E-E-A-T (expertise, experience, authoritativeness, trustworthiness) guidelines.

5. Target Keywords With Low Difficulty

Focus on keywords with low competition (Keyword Difficulty < 20).

These are often high-intent, long-tail phrases that are easier to rank for and align well with informational search queries.

According to Ahrefs, AIO keywords have an average difficulty of 12. An example is the keyword phrase “Can dogs have cinnamon?” which has a KD of 12.

Screenshot from Ahrefs, November 2024

If you’re using Ahrefs, use the AI Overview SERP feature filter. Filter out keywords above 50 and go through keywords relevant to your topic.

Screenshot from Ahrefs, November 2024

6. Build Brand Credibility

From our experience optimizing content for AI Overviews, we’ve observed that sources frequently mentioned in authoritative publications or regularly cited by others are more likely to be included. While this aligns with Google’s emphasis on E-E-A-T, our firsthand results reinforce this approach.

Having a consistent presence in credible and trusted outlets has, in our experience, improved the likelihood of being featured in AI Overviews. Building this presence strengthens your site’s perceived authority.

Action Steps:

  • Engage in digital PR campaigns to secure mention in reputable publications.
  • Monitor mentions of your brand on platforms like Quora and Reddit to ensure positive associations.

7. Optimize For Mobile SEO

With mobile-first indexing, Google evaluates your site’s mobile performance when determining rankings.

According to Ahrefs study, mobile traffic accounts for 81% of AI Overview citations.

Tips:

  • Use responsive design to ensure your site displays well on all devices.
  • Improve page load speed for mobile users using tools like Google PageSpeed Insights.

8. Format Content For Easy Scanning

From firsthand analysis of sites that frequently rank in AI Overviews, we’ve found that well-structured content – using bullet points, lists, and clear sections – is often favored.

Formatting plays a critical role in helping AI parse information quickly.

Best Practices:

  • Use bullet points, numbered lists, and short paragraphs.
  • Structure content with clear headings and subheadings.
  • Break up long blocks of text with visual elements like charts or images.

9. Focus On Simplicity

Content written in plain, accessible language tends to perform better in AI Overviews. This is something we’ve consistently seen when optimizing content for diverse audiences and industries.

Tools:

  • Use Hemingway Editor or Grammarly to ensure your content is readable and concise.
Screenshot from Hemingway, November 2024

10. Acquire High-Quality Backlinks

While strong backlinks are widely recognized as important for SEO, our experience suggests they are equally critical for increasing the likelihood of being cited in AI Overviews.

Prioritizing quality over quantity in link building is key. Use strategic link building campaigns to improve your domain authority and visibility in AI Overviews.

11. Publish Timely, Relevant Content

AI Overviews often favor fresh, up-to-date information. Regularly update your articles and blog posts to ensure they remain current.

Do AI Overviews Affect SEO?

Yes, AI Overviews impact SEO strategies by shifting the focus from traditional rankings to citation opportunities.

While they can increase visibility and CTR for cited sources, they may also reduce traffic for pages that are not directly cited, even if they rank well organically.

FAQs About AI Overviews:

Are AI Overviews Accurate?

AI Overviews are generally reliable but not 100% accurate. These AI-generated summaries pull information from multiple web sources, which means their accuracy depends on the quality and timeliness of the source content.

Google has conducted extensive tests, though. It discovered that the accuracy rates of AI Overviews “is on par” with those of Featured Snippets, which is a trustworthy feature for quick information.

Where Do AI Overviews Get Their Information?

AI Overviews gather information from multiple credible sources across Google’s search results pages. It uses:

  • Top-ranking websites.
  • Authority websites.
  • Content relevance through sources that directly answer the user’s query, even if they don’t rank on the first page.
  • Recent content.

Key Takeaway For Ranking In Google’s AI Overviews

Ranking in Google’s AI Overviews requires a multi-faceted approach: creating well-structured, mobile-friendly content, targeting specific long-tail keywords, and building brand credibility.

Leveraging tools like structured markup and keeping your content updated can further boost your chances.

More Resources:


Featured Image: Roman Samborskyi/Shutterstock

39% Of Skills May Be Obsolete By 2030, WEF Jobs Report Warns via @sejournal, @MattGSouthern

A new report shows the most in-demand jobs as AI and automation change industries worldwide.

The World Economic Forum’s (WEF) Future of Jobs report (PDF link) lists the jobs expected to grow the most in the next five years.

Here’s what you need to know.

AI’s Impact On Job Market

The report surveyed over 1,000 global executives, representing over 14 million workers in 55 economies.

Most executives—86%—believe AI and related technologies will significantly change their businesses by 2030.

Key points include:

  • AI & Information Processing: This technology is expected to create about 11 million new jobs while displacing around 9 million, leading to net job growth in AI fields.
  • Robotics and Autonomous Systems: While some jobs may be replaced, many positions will support robotic tasks.
  • Broadening Digital Access: 60% of businesses see this as essential to their operations.

Despite advances in AI, human workers are still crucial. New job opportunities will emerge in big data, cybersecurity, and human-focused roles such as talent management and customer service.

The Fastest-Growing Jobs

According to the report, technology-related roles are expected to grow most by 2030.

Leading the pack are positions like:

  1. Big Data Specialists
  2. FinTech Engineers
  3. AI and Machine Learning Specialists
  4. Software and Applications Developers
  5. Security Management Specialists
  6. Data Warehousing Specialists
  7. Autonomous and Electric Vehicle Specialists
  8. UI and UX Designers
  9. Light Truck or Delivery Services Drivers
  10. Internet of Things Specialists
  11. Data Analysts and Scientists
  12. Environmental Engineers
  13. Information Security Analysts
  14. DevOps Engineer
  15. Renewable Energy Engineers

The demand for tech workers is increasing as businesses adopt AI, information processing technologies, and robotics.

The report notes that “AI and big data are the fastest-growing skills,” followed by networks, cybersecurity, and technology literacy.

Green jobs, like Electric Vehicle Specialists and Environmental Engineers, are also among the fastest-growing roles due to efforts to reduce carbon emissions.

While tech jobs grow the fastest in percentage terms, the largest increase in actual job numbers is expected in traditional frontline roles.

These include:

  1. Farmworkers, Labourers, and Other Agricultural Workers
  2. Light Truck or Delivery Services Drivers
  3. Software and Applications Developers
  4. Building Framers, Finishers, and Related Trades Workers
  5. Shop Salespersons
  6. Food Processing and Related Trades Workers
  7. Car, Van and Motorcycle Drivers
  8. Nursing Professionals
  9. Food and Beverage Serving Workers
  10. General and Operations Managers
  11. Social Work and Counselling Professionals
  12. Project Managers
  13. University and Higher Education Teachers
  14. Secondary Education Teachers
  15. Personal Care Aides

Care economy jobs, such as nursing professionals, social workers, counselors, and personal care aides, are also expected to grow significantly.

The Most In Demand Skills

As job roles transform, so do the skills required to perform them successfully.

The Future of Jobs Report finds that, on average, workers can expect 39% of their core skills to become outdated over the next five years.

However, this “skill instability” has slowed compared to the predictions in previous editions of the report, potentially due to increasing employee reskilling and upskilling rates.

Employers surveyed identified the following as the top skills workers will need in 2025 and beyond:

  • Analytical thinking
  • Resilience, flexibility, and agility
  • Leadership and social influence
  • AI and big data
  • Networks and cybersecurity
  • Technological literacy
  • Creative thinking
  • Curiosity and lifelong learning
  • Environmental stewardship
  • Systems thinking

Skills such as manual dexterity, endurance, precision, and basic skills such as reading, writing, and math are expected to be in less demand.

The report notes:

“Manual dexterity, endurance, and precision stand out with notable net declines in skills demand, with 24% of respondents foreseeing a decrease in their importance.”

Preparing The Workforce

The report highlights the need to upskill and reskill workers due to upcoming skill changes. Employers can upskill 29% of their staff and redeploy 19%, but 11% may not receive the necessary training.

The report states:

“If the world’s workforce was made up of 100 people, 59 would need training by 2030.”

To address these challenges, 85% of employers plan to focus on upskilling current workers, 70% will hire new staff with needed skills, and 50% aim to move workers from declining jobs to growing ones.

Saadia Zahidi, the Managing Director at the World Economic Forum, emphasized the need for collective action:

“The disruptions of recent years have underscored the importance of foresight and collective action. We hope this report will inspire an ambitious, multistakeholder agenda—one that equips workers, businesses, governments, educators, and civil society to navigate the complex transitions ahead.”

What Does This Mean?

The rise of AI and data-driven marketing is reshaping SEO roles.

Here’s what matters:

  1. SEO pros need AI basics. Understanding machine learning (ML), natural language processing (NLP), and analytics tools is becoming essential for managing automated systems and content optimization.
  2. While AI helps create content, success needs human insight. Focus on storytelling and brand strategy that connects with users and satisfies search intent.
  3. Better tools mean more data. Winners will be those who can turn metrics into effective campaigns and prove ROI.
  4. Privacy and data protection knowledge sets you apart. Expect more overlap with security teams.
  5. SEO isn’t solo work anymore. Success means working well with devs, AI teams, and product managers.

Bottom line: Blend AI and analytics skills with human creativity and strategy to stay competitive.


Featured Image: Lightspring/Shutterstock

Reuters: Publishers Pivot To Video As AI Disrupts Search Traffic via @sejournal, @MattGSouthern

A new report from the Reuters Institute examines the influence of AI overviews and Google Discover, which have changed how people access information.

Additionally, the report finds publishers relying more on video and social platforms like YouTube and TikTok to reach audiences.

These trends suggest the need to refine strategies and embrace new technologies to remain competitive.

Here are all the need-to-know highlights from the report.

AI Disruption & Zero-Click Search

A major threat to publishers is AI-driven search.

Platforms like Google and OpenAI provide direct answers to user questions, often making it unnecessary for users to click on links. This creates a “zero-click” search environment.

74% of publishers are concerned about losing traffic, prompting many to seek new strategies.

Larger publishers have made licensing deals with AI aggregators like ChatGPT or Perplexity, while smaller ones are still finding ways to gain visibility.

Building audience relationships through newsletters, subscriptions, or apps can help publishers withstand disruption from AI search.

Google Discover Traffic Grows

As social media referral traffic from platforms like Facebook and X continues to decline—67% and 50% drops over the past two years—publishers are increasingly turning to Google Discover.

The Reuters Institute notes that Discover grew by 12% year over year, and many publishers now rely on it as their primary referral source.

Its personalized recommendations have made it a focus for publishers looking to replace lost traffic from other platforms.

For SEOs, technical optimizations like structured data and engaging visuals are key to maximizing Discover’s potential.

However, the feed’s algorithmic nature means results can be unpredictable, requiring constant monitoring.

Video & Social Media

Video platforms like YouTube, TikTok, and Instagram are essential for publishers who want to connect with younger audiences.

The Reuters Institute reports that publishers plan to invest more in these platforms, with YouTube (+52%), TikTok (+48%), and Instagram (+43%) showing the biggest increases in focus.

Short-form videos are effective for engagement, but they have challenges. Making quality videos requires resources, and earning money on platforms like TikTok is hard.

For publishers, this means creating strategies optimized for each platform’s algorithm while driving traffic back to your websites or apps.

Cross-Team Collaboration

The Reuters Institute stresses the need for cross-team collaboration. As newsrooms adopt more AI tools, teams will need to work together to streamline content creation.

For instance, AI tools like automated headlines and fact-checking can enhance workflows. However, they depend on support from editorial teams, which many publishers find challenging.

Fostering good relationships between different departments will be necessary for continued success.

Broader Context

The Reuters Institute’s findings match those in the NewzDash 2025 News SEO Survey. They both highlight AI disruption, Google Discover, and a lack of resources as major challenges.

Together, these reports show an industry facing rapid change.

The key takeaways for publishers and SEO professionals are: embrace AI-driven search, make the most of Google Discover, and focus on video and social media platforms.


Featured Image: Inside Creative House/Shutterstock

TikTok Ban Update: Will The Supreme Court Pull The Plug? via @sejournal, @MattGSouthern

The U.S. Supreme Court heard arguments on January 10 over a law requiring ByteDance, TikTok’s Chinese parent company, to sell the app or face a U.S. ban by January 19.

The law, passed last year, is based on national security concerns related to TikTok’s data practices and its ties to the Chinese government.

The case will decide TikTok’s future in the U.S., which has 170 million users and is a major platform for creators and businesses.

Government: TikTok Is A Security Threat

The U.S. government argued that TikTok gives the Chinese government potential access to sensitive user data and a platform for covert influence.

Solicitor General Elizabeth Prelogar said:

“TikTok’s immense data set would give the PRC a powerful tool for harassment, recruitment, and espionage.”

Prelogar warned that China could use data collected from millions of Americans for blackmail or other purposes.

Referencing Chinese laws that require companies like ByteDance to share information with the government, Prelogar said:

“The Chinese government could weaponize TikTok at any time to harm the United States.”

Justice Brett Kavanaugh echoed these concerns, saying:

“China was accessing information about millions of Americans… including teenagers, people in their 20s.”

Kavanaugh warned that such data could be used to “develop spies, to turn people, to blackmail people.”

Chief Justice John Roberts emphasized that the law focuses on ByteDance’s ownership, not TikTok’s content.

Roberts stated:

“Congress doesn’t care about what’s on TikTok… They’re saying that the Chinese have to stop controlling TikTok.”

TikTok: The Law Violates Free Speech

TikTok’s legal team argued the law violates the First Amendment by targeting its ability to operate.

Attorney Noel Francisco compared TikTok’s algorithm to editorial decision-making, calling it protected speech.

Francisco said

“The government’s real target, rather, is the speech itself.”

He adds:

“There is no evidence that TikTok has engaged in covert content manipulation in this country.”

Francisco proposed alternatives, such as banning TikTok from sharing user data with ByteDance or requiring user risk disclosures.

He argued these measures would address security concerns without violating free speech.

Justice Neil Gorsuch questioned the government’s approach, asking:

“Isn’t that a pretty paternalistic point of view? Don’t we normally assume that the best remedy for problematic speech is counter-speech?”

Are Alternatives Feasible?

The justices also debated whether less drastic measures could work.

Justice Sonia Sotomayor questioned why Congress didn’t simply block TikTok from sharing data with ByteDance.

Sotomayor asks:

“If the concern is data security, why wouldn’t Congress simply prohibit TikTok from sharing sensitive user data with anyone?”

Prelogar countered that ByteDance’s control over TikTok’s core algorithm makes such measures ineffective.

Prelogar responded:

“There is no reasonable way to create a true firewall that would prevent the U.S. subsidiary from sharing data with the corporate parent.”

Prelogar explains that TikTok relies on data flows between the U.S. and China.

Justice Amy Coney Barrett questioned whether TikTok could operate without ByteDance’s algorithm.

Barrett said:

“It seems to me like we are saying to ByteDance, ‘We want to shut you up.’”

Barrett suggests that separating TikTok from ByteDance may fundamentally change the app.

What’s Next?

If the law is upheld and ByteDance doesn’t divest, TikTok could be banned in the U.S. by January 19.

TikTok’s legal team warned that such a ban would set a dangerous precedent.

Francisco said:

“If the First Amendment means anything, it means that the government cannot restrict speech in order to protect us from speech.”

The government argues the law is narrowly focused on security risks and doesn’t target speech.

Prelogar said:

“The Act leaves all of that speech unrestricted once TikTok is freed from foreign adversary control.”

The Supreme Court is expected to rule before the deadline. This decision could shape how foreign-owned tech platforms are handled in the U.S. in the future.


Featured Image: bella1105/Shutterstock

A New York legislator wants to pick up the pieces of the dead California AI bill

The first Democrat in New York history with a computer science background wants to revive some of the ideas behind the failed California AI safety bill, SB 1047, with a new version in his state that would regulate the most advanced AI models. It’s called the RAISE Act, an acronym for “Responsible AI Safety and Education.”

Assemblymember Alex Bores hopes his bill, currently an unpublished draft—subject to change—that MIT Technology Review has seen, will address many of the concerns that blocked SB 1047 from passing into law.

SB 1047 was, at first, thought to be a fairly modest bill that would pass without much fanfare. In fact, it flew through the California statehouse with huge margins and received significant public support.

However, before it even landed on Governor Gavin Newsom’s desk for signature in September, it sparked an intense national fight. Google, Meta, and OpenAI came out against the bill, alongside top congressional Democrats like Nancy Pelosi and Zoe Lofgren. Even Hollywood celebrities got involved, with Jane Fonda and Mark Hamill expressing support for the bill. 

Ultimately, Newsom vetoed SB 1047, effectively killing regulation of so-called frontier AI models not just in California but, with the lack of laws on the national level, anywhere in the US, where the most powerful systems are developed.

Now Bores hopes to revive the battle. The main provisions in the RAISE Act include requiring AI companies to develop safety plans for the development and deployment of their models. 

The bill also provides protections for whistleblowers at AI companies. It forbids retaliation against an employee who shares information about an AI model in the belief that it may cause “critical harm”; such whistleblowers can report the information to the New York attorney general. One way the bill defines critical harm is the use of an AI model to create a chemical, biological, radiological, or nuclear weapon that results in the death or serious injury of 100 or more people. 

Alternatively, a critical harm could be a use of the AI model that results in 100 or more deaths or at least $1 billion in damages in an act with limited human oversight that if committed by a human would constitute a crime requiring intent, recklessness, or gross negligence.

The safety plans would ensure that a company has cybersecurity protections in place to prevent unauthorized access to a model. The plan would also require testing of models to assess risks before and after training, as well as detailed descriptions of procedures to assess the risks associated with post-training modifications. For example, some current AI systems have safeguards that can be easily and cheaply removed by a malicious actor. A safety plan would have to address how the company plans to mitigate these actions.

The safety plans would then be audited by a third party, like a nonprofit with technical expertise that currently tests AI models. And if violations are found, the bill empowers the attorney general of New York to issue fines and, if necessary, go to the courts to determine whether to halt unsafe development. 

A different flavour of bill

The safety plans and external audits were elements of SB 1047, but Bores aims to differentiate his bill from the California one. “We focused a lot on what the feedback was for 1047,” he says. “Parts of the criticism were in good faith and could make improvements. And so we’ve made a lot of changes.” 

The RAISE Act diverges from SB 1047 in a few ways. For one, SB 1047 would have created the Board of Frontier Models, tasked with approving updates to the definitions and regulations around these AI models, but the proposed act would not create a new government body. The New York bill also doesn’t create a public cloud computing cluster, which SB 1047 would have done. The cluster was intended to support projects to develop AI for the public good. 

The RAISE Act doesn’t have SB 1047’s requirement that companies be able to halt all operations of their model, a capability sometimes referred to as a “kill switch.” Some critics alleged that the shutdown provision of SB 1047 would harm open-source models, since developers can’t shut down a model someone else may now possess (even though SB 1047 had an exemption for open-source models).

The RAISE Act avoids the fight entirely. SB 1047 referred to an “advanced persistent threat” associated with bad actors trying to steal information during model training. The RAISE Act does away with that definition, sticking to addressing critical harms from covered models.

Focusing on the wrong issues?

Bores’ bill is very specific with its definitions in an effort to clearly delineate what this bill is and isn’t about. The RAISE Act doesn’t address some of the current risks from AI models, like bias, discrimination, and job displacement. Like SB 1047, it is very focused on catastrophic risks from frontier AI models. 

Some in the AI community believe this focus is misguided. “We’re broadly supportive of any efforts to hold large models accountable,” says Kate Brennan, associate director of the AI Now Institute, which conducts AI policy research.

“But defining critical harms only in terms of the most catastrophic harms from the most advanced models overlooks the material risks that AI poses, whether it’s workers subject to surveillance mechanisms, prone to workplace injuries because of algorithmically managed speed rates, climate impacts of large-scale AI systems, data centers exerting massive pressure on local power grids, or data center construction sidestepping key environmental protections,” she says.

Bores has worked on other bills addressing current harms posed by AI systems, like discrimination and lack of transparency. That said, Bores is clear that this new bill is aimed at mitigating catastrophic risks from more advanced models. “We’re not talking about any model that exists right now,” he says. “We are talking about truly frontier models, those on the edge of what we can build and what we understand, and there is risk in that.” 

The bill would cover only models that pass a certain threshold for how many computations their training required, typically measured in FLOPs (floating-point operations). In the bill, a covered model is one that requires more than 1026 FLOPs in its training and costs over $100 million. For reference, GPT-4 is estimated to have required 1025 FLOPs. 

This approach may draw scrutiny from industry forces. “While we can’t comment specifically on legislation that isn’t public yet, we believe effective regulation should focus on specific applications rather than broad model categories,” says a spokesperson at Hugging Face, a company that opposed SB 1047.

Early days

The bill is in its nascent stages, so it’s subject to many edits in the future, and no opposition has yet formed. There may already be lessons to be learned from the battle over SB 1047, however. “There’s significant disagreement in the space, but I think debate around future legislation would benefit from more clarity around the severity, the likelihood, and the imminence of harms,” says Scott Kohler, a scholar at the Carnegie Endowment for International Peace, who tracked the development of SB 1047. 

When asked about the idea of mandated safety plans for AI companies, assemblymember Edward Ra, a Republican who hasn’t yet seen a draft of the new bill yet, said: “I don’t have any general problem with the idea of doing that. We expect businesses to be good corporate citizens, but sometimes you do have to put some of that into writing.” 

Ra and Bores co chair the New York Future Caucus, which aims to bring together lawmakers 45 and under to tackle pressing issues that affect future generations.

Scott Wiener, a California state senator who sponsored SB 1047, is happy to see that his initial bill, even though it failed, is inspiring further legislation and discourse. “The bill triggered a conversation about whether we should just trust the AI labs to make good decisions, which some will, but we know from past experience, some won’t make good decisions, and that’s why a level of basic regulation for incredibly powerful technology is important,” he says.

He has his own plans to reignite the fight: “We’re not done in California. There will be continued work in California, including for next year. I’m optimistic that California is gonna be able to get some good things done.”

And some believe the RAISE Act will highlight a notable contradiction: Many of the industry’s players insist that they want regulation, but when any regulation is proposed, they fight against it. “SB 1047 became a referendum on whether AI should be regulated at all,” says Brennan. “There are a lot of things we saw with 1047 that we can expect to see replay in New York if this bill is introduced. We should be prepared to see a massive lobbying reaction that industry is going to bring to even the lightest-touch regulation.”

Wiener and Bores both wish to see regulation at a national level, but in the absence of such legislation, they’ve taken the battle upon themselves. At first it may seem odd for states to take up such important reforms, but California houses the headquarters of the top AI companies, and New York, which has the third-largest state economy in the US, is home to offices for OpenAI and other AI companies. The two states may be well positioned to lead the conversation around regulation. 

“There is uncertainty at the direction of federal policy with the transition upcoming and around the role of Congress,” says Kohler. “It is likely that states will continue to step up in this area.”

Wiener’s advice for New York legislators entering the arena of AI regulation? “Buckle up and get ready.”

2025 is a critical year for climate tech

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

I love the fresh start that comes with a new year. And one thing adding a boost to my January is our newest list of 10 Breakthrough Technologies.

In case you haven’t browsed this year’s list or a previous version, it features tech that’s either breaking into prominence or changing society. We typically recognize a range of items running from early-stage research to consumer technologies that folks are getting their hands on now.

As I was looking over the finished list this week, I was struck by something: While there are some entries from other fields that are three or even five years away, all the climate items are either newly commercially available or just about to be. It’s certainly apt, because this year in particular seems to be bringing a new urgency to the fight against climate change. We’re facing global political shifts and entering the second half of the decade. It’s time for these climate technologies to grow up and get out there.

Green steel

Steel is a crucial material for buildings and vehicles, and making it accounts for around 8% of global greenhouse-gas emissions. New manufacturing methods could be a huge part of cleaning up heavy industry, and they’re just on the cusp of breaking into the commercial market.

One company, called Stegra, is close to starting up the world’s first commercial green steel plant, which will make the metal using hydrogen from renewable sources. (You might know this company by its former name, H2 Green Steel, as we included it on our 2023 list of Climate Tech Companies to Watch.)

When I first started following Stegra a few years ago, its plans for a massive green steel plant felt incredibly far away. Now the company says it’s on track to produce steel at the factory by next year.

The biggest challenge in this space is money. Building new steel plants is expensive—Stegra has raised almost $7 billion. And the company’s product will be more expensive than conventional material, so it’ll need to find customers willing to pay up (so far, it has).

There are other efforts to clean up steel that will all face similar challenges around money, including another play in Sweden called Hybrit and startups like Boston Metal and Electra, which use different processes. Read more about green steel, and the potential obstacles it faces as we enter a new phase of commercialization, in this short blurb and in this longer feature about Stegra.

Cow burp remedies

Humans love burgers and steaks and milk and cheese, so we raise a whole bunch of cows. The problem is, these animals are among a group with a funky digestion process that produces a whole lot of methane (a powerful greenhouse gas). A growing number of companies are trying to develop remedies that help cut down on their methane emissions.

This is one of my favorite items on the list this year (and definitely my favorite illustration—at the very least, check out this blurb to enjoy the art).

There’s already a commercially available option right now: a feed additive called Bovaer from DSM-Firmenich that the company says can cut methane emissions by 30% in dairy cattle, and more in beef cattle. Startups are right behind with their own products, some of which could prove even better.

A key challenge all these companies face moving forward is acceptance: from regulatory agencies, farmers, and consumers. Some companies still need to go through lengthy and often expensive tests to show that their products are safe and effective. They’ll also need to persuade farmers to get on board. Some might also face misinformation that’s causing some consumers to protest these new additives.

Cleaner jet fuel

While planes crisscrossing the world are largely powered by fossil fuels, some alternatives are starting to make their appearance in aircraft.

New fuels, today mostly made from waste products like used cooking oil, can cut down emissions from air travel. In 2024, they made up about 0.5% of the fuel supply. But new policies could help these fuels break into new prominence, and new options are helping to widen their supply.

The key challenge here is scale. Global demand for jet fuel was about 100 billion gallons last year, so we’ll need a whole lot of volume from new producers to make a dent in aviation’s emissions.

To illustrate the scope, take LanzaJet’s new plant, opened in 2024. It’s the first commercial-scale facility that can make jet fuel with ethanol, and it has a capacity of about 9 million gallons annually. So we would need about 10,000 of those plants to meet global demand—a somewhat intimidating prospect. Read more in my write-up here.

From cow burps to jet fuel to green steel, there’s a huge range of tech that’s entering a new stage of deployment and will need to face new challenges in the next few years. We’ll be watching it all—thanks for coming along.


Now read the rest of The Spark

Related reading

Check out our full list of 2025’s Breakthrough Technologies here. There’s also a poll where you can vote for what you think the 11th item should be. I’m not trying to influence anyone’s vote, but I think methane-detecting satellites are pretty interesting—just saying … 

This package is part of our January/February print issue, which also includes stories on: 

A Polestar electric car prepares to park at an EV charging station on July 28, 2023 in Corte Madera, California.

JUSTIN SULLIVAN/GETTY

Another thing 

EVs are (mostly) set for solid growth in 2025, as my colleague James Temple covers in his newest story. Check it out for more about what’s next for electric vehicles, including what we might expect from a new administration in the US and how China is blowing everyone else out of the water. 

Keeping up with climate  

Winter used to be the one time of year that California didn’t have to worry about wildfires. A rapidly spreading fire in the southern part of the state is showing that’s not the case anymore. (Bloomberg)

Tesla’s annual sales decline for the first time in over a decade. Deliveries were lower than expected for the final quarter of the year. (Associated Press)

Meanwhile, in China, EVs are set to overtake traditional cars in sales years ahead of schedule. Forecasts suggest that EVs could account for 50% of car sales this year. (Financial Times)

KoBold metals raised $537 million in funding to use AI to mine copper. The funding pushes the startup’s valuation to $2.96 billion. (TechCrunch)
→ Read this profile of the company from 2021 for more. (MIT Technology Review)

We finally have the final rules for a tax credit designed to boost hydrogen in the US. The details matter here. (Heatmap)

China just approved the world’s most expensive infrastructure project. The hydroelectric dam could produce enough power for 300 million people, triple the capacity of the current biggest dam. (Economist)

In 1979, President Jimmy Carter installed 32 solar panels on the White House’s roof. Although they came down just a few years later, the panels lived multiple lives afterward. I really enjoyed reading about this small piece of Carter’s legacy in the wake of his passing. (New York Times)

An open pit mine in California is the only one in the US mining and extracting rare earth metals including neodymium and praseodymium. This is a fascinating look at the site. (IEEE Spectrum
→ I wrote about efforts to recycle rare earth metals, and what it means for the long-term future of metal supply, in a feature story last year. (MIT Technology Review)

How the US is preparing for a potential bird flu pandemic

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

This week marks a strange anniversary—it’s five years since most of us first heard about a virus causing a mysterious “pneumonia.” A virus that we later learned could cause a disease called covid-19. A virus that swept the globe and has since been reported to have been responsible for over 7 million deaths—and counting.

I first covered the virus in an article published on January 7, 2020, which had the headline “Doctors scramble to identify mysterious illness emerging in China.” For that article, and many others that followed it, I spoke to people who were experts on viruses, infectious disease, and epidemiology. Frequently, their answers to my questions about the virus, how it might spread, and the risks of a pandemic were the same: “We don’t know.”

We are facing the same uncertainty now with H5N1, the virus commonly known as bird flu. This virus has been decimating bird populations for years, and now a variant is rapidly spreading among dairy cattle in the US. We know it can cause severe disease in animals, and we know it can pass from animals to people who are in close contact with them. As of this Monday this week, we also know that it can cause severe disease in people—a 65-year-old man in Louisiana became the first person in the US to die from an H5N1 infection.

Scientists are increasingly concerned about a potential bird flu pandemic. The question is, given all the enduring uncertainty around the virus, what should we be doing now to prepare for the possibility? Can stockpiled vaccines save us? And, importantly, have we learned any lessons from a covid pandemic that still hasn’t entirely fizzled out?

Part of the challenge here is that it is impossible to predict how H5N1 will evolve.

A variant of the virus caused disease in people in 1997, when there was a small but deadly outbreak in Hong Kong. Eighteen people had confirmed diagnoses, and six of them died. Since then, there have been sporadic cases around the world—but no large outbreaks.

As far as H5N1 is concerned, we’ve been relatively lucky, says Ali Khan, dean of the college of public health at the University of Nebraska. “Influenza presents the greatest infectious-disease pandemic threat to humans, period,” says Khan. The 1918 flu pandemic was caused by a type of influenza virus called H1N1 that appears to have jumped from birds to people. It is thought to have infected a third of the world’s population, and to have been responsible for around 50 million deaths.

Another H1N1 virus was responsible for the 2009 “swine flu” pandemic. That virus hit younger people hardest, as they were less likely to have been exposed to similar variants and thus had much less immunity. It was responsible for somewhere between 151,700 and 575,400 deaths that year.

To cause a pandemic, the H5N1 variants currently circulating in birds and dairy cattle in the US would need to undergo genetic changes that allow them to spread more easily from animals to people, spread more easily between people, and become more deadly in people. Unfortunately, we know from experience that viruses need only a few such changes to become more easily transmissible.

And with each and every infection, the risk that a virus will acquire these dangerous genetic changes increases. Once a virus infects a host, it can evolve and swap chunks of genetic code with any other viruses that might also be infecting that host, whether it’s a bird, a pig, a cow, or a person. “It’s a big gambling game,” says Marion Koopmans, a virologist at the Erasmus University Medical Center in Rotterdam, the Netherlands. “And the gambling is going on at too large a scale for comfort.”

There are ways to improve our odds. For the best chance at preventing another pandemic, we need to get a handle on, and limit, the spread of the virus. Here, the US could have done a better job at limiting the spread in dairy cows, says Khan. “It should have been found a lot earlier,” he says. “There should have been more aggressive measures to prevent transmission, to recognize what disease looks like within our communities, and to protect workers.”

States could also have done better at testing farm workers for infection, says Koopmans. “I’m surprised that I haven’t heard of an effort to eradicate it from cattle,” she adds. “A country like the US should be able to do that.”

The good news is that there are already systems in place for tracking the general spread of flu in people. The World Health Organization’s Global Influenza Surveillance and Response System collects and analyzes samples of viruses collected from countries around the world. It allows the organization to make recommendations about seasonal flu vaccines and also helps scientists track the spread of various flu variants. That’s something we didn’t have for the covid-19 virus when it first took off.

We are also better placed to make vaccines. Some countries, including the US, are already stockpiling vaccines that should be at least somewhat effective against H5N1 (although it is difficult to predict exactly how effective they will be against some future variant). The US Administration for Strategic Preparedness and Response plans to have “up to 10 million doses of prefilled syringes and multidose vials” prepared by the end of March, according to an email from a representative.

The US Department of Health and Human Services has also said it will provide the pharmaceutical company Moderna with $176 million to create mRNA vaccines for pandemic influenza—using the same quick-turnaround vaccine production technology used in the company’s covid-19 vaccines.

Some question whether these vaccines should have already been offered to dairy farm workers in affected parts of the US. Many of these individuals have been exposed to the virus, a good chunk of them appear to have been infected with it, and some of them have become ill. If the decision had been up to Khan, he says, they would have been offered the H5N1 vaccine by now. And we should ensure they are offered seasonal flu vaccines in order to limit the risk that the two flu viruses will mingle inside one person, he adds.

Others worry that 10 million vaccine doses aren’t enough for a country with a population of around 341 million. But health agencies “walk a razor-thin line between having too much vaccine for something and not having enough,” says Khan. If an outbreak never transpires, 340 million doses of vaccine will feel like an enormous waste of resources.

We can’t predict how well these viruses will work, either. Flu viruses mutate all the time, and even seasonal flu vaccines are notoriously unpredictable in their efficacy. “I think we’ve become a little bit spoiled with the covid vaccines,” says Koopmans. “We were really, really lucky [to develop] vaccines with high efficacy.”

One vaccine lesson we should have learned from the covid-19 pandemic is the importance of equitable access to vaccines around the world. Unfortunately, it’s unlikely that we have. “It is doubtful that low-income countries will have early access to [a pandemic influenza] vaccine unless the world takes action,” Nicole Lurie of the Coalition for Epidemic Preparedness Innovations (CEPI) said in a recent interview for Gavi, a public-private alliance for vaccine equity.

And another is the impact of vaccine hesitancy. Making vaccines might not be a problem—but convincing people to take them might be, says Khan. “We have an incoming administration that has lots of vaccine hesitancy,” he points out. “So while we may end up having … vaccines available, it’s not very clear to me if we have the political and social will to actually implement good public health measures.”

This is another outcome that is impossible to predict, and I won’t attempt to do so. But I am hoping that the relevant administrations will step up our defenses. And that this will be enough to prevent another devastating pandemic.


Now read the rest of The Checkup

Read more from MIT Technology Review‘s archive

Bird flu has been circulating in US dairy cows for months. Virologists are worried it could stick around on US farms forever.

As the virus continues to spread, the risk of a pandemic continues to rise. We still don’t really know how the virus is spreading, but we do know that it is turning up in raw milk. (Please don’t drink raw milk.)

mRNA vaccines helped us through the covid-19 pandemic. Now scientists are working on mRNA flu vaccines—including “universal” vaccines that could protect against multiple flu viruses.

The next generation of mRNA vaccines is on the way. These vaccines are “self-amplifying” and essentially tell the body how to make more mRNA. 

Maybe there’s an alternative to dairy farms of the type that are seeing H5N1 in their cattle. Scientists are engineering yeasts and plants with bovine genes so they can produce proteins normally found in milk, which can be used to make spreadable cheeses and ice cream. The cofounder of one company says a factory of bubbling yeast vats could “replace 50,000 to 100,000 cows.”

From around the web

My colleagues and I put together an annual list of what we think are the breakthrough technologies of that year. This year’s list includes long-acting HIV prevention medicines and stem-cell treatments that actually work. Check out the full list here.

Calico, the Google biotech company focused on “tackling aging,” has released results from the trial of a drug to treat amyotrophic lateral sclerosis (ALS). The drug failed. (STAT

Around the world, birth rates are falling. The more concerned nations become about this fact, the greater the risk to gender rights, writes Angela Saini. (Wired)

Brooke Eby, a 36-year-old with ALS, is among a niche group of content creators documenting their journeys with terminal illness on social media platforms like TikTok. “I’m glad that I’m sharing my journey. I wish someone had come before me and shared, start to finish …,” she said. “I’m just going to post all this, because maybe it’ll help someone who’s like a year behind me in their progression.” (New York Times)

Do we each have 30 trillion genomes? A growing understanding of genetic mutations that occur in adults is changing the way doctors diagnose and treat disease. (The Atlantic)

Anthropic’s chief scientist on 5 ways agents will be even better in 2025

Agents are the hottest thing in tech right now. Top firms from Google DeepMind to OpenAI to Anthropic are racing to augment large language models with the ability to carry out tasks by themselves. Known as agentic AI in industry jargon, such systems have fast become the new target of Silicon Valley buzz. Everyone from Nvidia to Salesforce is talking about how they are going to upend the industry. 

“We believe that, in 2025, we may see the first AI agents ‘join the workforce’ and materially change the output of companies,” Sam Altman claimed in a blog post last week.

In the broadest sense, an agent is a software system that goes off and does something, often with minimal to zero supervision. The more complex that thing is, the smarter the agent needs to be. For many, large language models are now smart enough to power agents that can do a whole range of useful tasks for us, such as filling out forms, looking up a recipe and adding the ingredients to an online grocery basket, or using a search engine to do last-minute research before a meeting and producing a quick bullet-point summary.

In October, Anthropic showed off one of the most advanced agents yet: an extension of its Claude large language model called computer use. As the name suggests, it lets you direct Claude to use a computer much as a person would, by moving a cursor, clicking buttons, and typing text. Instead of simply having a conversation with Claude, you can now ask it to carry out on-screen tasks for you.

Anthropic notes that the feature is still cumbersome and error-prone. But it is already available to a handful of testers, including third-party developers at companies such as DoorDash, Canva, and Asana.

Computer use is a glimpse of what’s to come for agents. To learn what’s coming next, MIT Technology Review talked to Anthropic’s cofounder and chief scientist Jared Kaplan. Here are five ways that agents are going to get even better in 2025.

(Kaplan’s answers have been lightly edited for length and clarity.)

1/ Agents will get better at using tools

“I think there are two axes for thinking about what AI is capable of. One is a question of how complex the task is that a system can do. And as AI systems get smarter, they’re getting better in that direction. But another direction that’s very relevant is what kinds of environments or tools the AI can use. 

“So, like, if you go back almost 10 years now to [DeepMind’s Go-playing model] AlphaGo, we had AI systems that were superhuman in terms of how well they could play board games. But if all you can work with is a board game, then that’s a very restrictive environment. It’s not actually useful, even if it’s very smart. With text models, and then multimodal models, and now computer use—and perhaps in the future with robotics—you’re moving toward bringing AI into different situations and tasks, and making it useful. 

“We were excited about computer use basically for that reason. Until recently, with large language models, it’s been necessary to give them a very specific prompt, give them very specific tools, and then they’re restricted to a specific kind of environment. What I see is that computer use will probably improve quickly in terms of how well models can do different tasks and more complex tasks. And also to realize when they’ve made mistakes, or realize when there’s a high-stakes question and it needs to ask the user for feedback.”

2/ Agents will understand context  

“Claude needs to learn enough about your particular situation and the constraints that you operate under to be useful. Things like what particular role you’re in, what styles of writing or what needs you and your organization have.

Jared Kaplan

ANTHROPIC

“I think that we’ll see improvements there where Claude will be able to search through things like your documents, your Slack, etc., and really learn what’s useful for you. That’s underemphasized a bit with agents. It’s necessary for systems to be not only useful but also safe, doing what you expected.

“Another thing is that a lot of tasks won’t require Claude to do much reasoning. You don’t need to sit and think for hours before opening Google Docs or something. And so I think that a lot of what we’ll see is not just more reasoning but the application of reasoning when it’s really useful and important, but also not wasting time when it’s not necessary.”

3/ Agents will make coding assistants better

“We wanted to get a very initial beta of computer use out to developers to get feedback while the system was relatively primitive. But as these systems get better, they might be more widely used and really collaborate with you on different activities.

“I think DoorDash, the Browser Company, and Canva are all experimenting with, like, different kinds of browser interactions and designing them with the help of AI.

“My expectation is that we’ll also see further improvements to coding assistants. That’s something that’s been very exciting for developers. There’s just a ton of interest in using Claude 3.5 for coding, where it’s not just autocomplete like it was a couple of years ago. It’s really understanding what’s wrong with code, debugging it—running the code, seeing what happens, and fixing it.”

4/ Agents will need to be made safe

“We founded Anthropic because we expected AI to progress very quickly and [thought] that, inevitably, safety concerns were going to be relevant. And I think that’s just going to become more and more visceral this year, because I think these agents are going to become more and more integrated into the work we do. We need to be ready for the challenges, like prompt injection. 

[Prompt injection is an attack in which a malicious prompt is passed to a large language model in ways that its developers did not foresee or intend. One way to do this is to add the prompt to websites that models might visit.]

“Prompt injection is probably one of the No.1 things we’re thinking about in terms of, like, broader usage of agents. I think it’s especially important for computer use, and it’s something we’re working on very actively, because if computer use is deployed at large scale, then there could be, like, pernicious websites or something that try to convince Claude to do something that it shouldn’t do.

“And with more advanced models, there’s just more risk. We have a robust scaling policy where, as AI systems become sufficiently capable, we feel like we need to be able to really prevent them from being misused. For example, if they could help terrorists—that kind of thing.

“So I’m really excited about how AI will be useful—it’s actually also accelerating us a lot internally at Anthropic, with people using Claude in all kinds of ways, especially with coding. But, yeah, there’ll be a lot of challenges as well. It’ll be an interesting year.”

Ugmonk Brings Design to Desk Tools

Jeff Sheldon is a designer turned entrepreneur. He started Ugmonk, a Pennsylvania-based direct-to-consumer brand, in 2008 as a seller of graphic-inspired t-shirts. His desktop organizers, which he added in 2020, are seemingly unrelated until realizing he designed both — the t-shirt graphics and the desk tools.

Jeff first appeared on the podcast in 2020. He had just moved t-shirt fulfillment in-house and launched a Kickstarter campaign for his first desktop tool.

In our recent conversation, he addressed phasing out the t-shirts, expanding the desktop line, and the dilemma of selling on Amazon. Our entire audio is embedded below. The transcript is edited for clarity and length.

Eric Bandholz: Tell our listeners who you are.

Jeff Sheldon: I’m the founder of Ugmonk, a 16-year-old direct-to-consumer brand. Initially, we sold t-shirts, but we’ve evolved into well-designed, functional desk and organization products. One of our standout items is Analog, our desktop note card system to stay organized and reduce digital distractions. Ugmonk is known for design — aesthetics and functionality.

Folks associated Ugmonk with graphic tees for our first 12 years. I designed the graphics, and we eventually moved to manufacturing the shirts for improved quality. Working with a manufacturer in Los Angeles, we created a better garment. Despite the manufacturing challenges, we found a good rhythm and built a customer base around those shirts.

However, about two years ago, we stopped making apparel. Our business saw its highest single revenue day when we announced that change. Customers bought 20 to 50 shirts, not wanting to miss out. While leaving that part of the business behind was tough, I knew it was the right decision.

Bandholz: Have your apparel customers transitioned to desk products?

Sheldon: I haven’t dug deeply into the analytics, but surprisingly, many customers who bought our t-shirts have also purchased our desk products. At first glance, this might seem odd — how do t-shirts and desk accessories relate? However, Ugmonk attracts customers who appreciate design and functionality. Many customers who have been with us since the t-shirt days have moved into careers where they need quality, well-designed tools for their workspaces.

When I started Ugmonk, graphic tees were huge. Platforms like Threadless were popular, and many of my customers were in their teens and twenties, buying shirts and posters. Fast forward to now, and many customers have desk jobs or work from home. So, while some of our old customers still miss the shirts, many have moved on to the Analog system, which is now more popular than our peak apparel days.

The online t-shirt market is incredibly saturated. Everyone sells t-shirts, and countless brands use drop shipping to offer generic products. In the early years of Ugmonk, we thrived on organic growth — email lists and social media before it became pay-to-play. However, it was tough when we tried, in 2017, to scale using ads. Selling t-shirts through a Facebook ad, especially when competing against a sea of similar products, is difficult. We didn’t see much success.

In contrast, we launched the Analog system on Kickstarter in 2020 with immediate success. We raised almost half a million dollars from over 5,500 backers. We decided to invest in paid acquisition for the product, and it worked. It’s a visual product that solves a real problem — people are distracted by their devices, and the Analog system offers a tangible way to stay organized. Compared to t-shirts, selling Analog through advertising has been more scalable. It’s an example of a good product-market fit.

Bandholz: Has your role in the company changed?

Sheldon: My role has evolved, but I still handle many of the tasks I did in the early days. For instance, I still shoot most photos because I’m passionate about capturing our products in a way that tells their story. I could outsource photography, but I enjoy the creative aspect. Plus it’s a core part of our brand’s identity.

Our team has grown. It used to be just me. Then, I added an employee. Now we have two full-time employees and a part-time staff of two to five people, depending on our needs.

We’ve scaled operations with our in-house warehouse and fulfillment. I outsource some aspects of the business, like advertising, yet I’m still hands-on with organic marketing and writing most emails and my monthly “Five Things I’m Digging” newsletter, which has become a fan favorite.

Managing the creative and operational sides of the business is stressful, but it’s all part of the journey.

Bandholz: Ugmonk’s products are not on Amazon.

Sheldon: Amazon is a love-hate relationship for me, similar to Meta. In 2017, we tested our Gather desk organizers there but didn’t see much traction. So we pulled back. Amazon is flooded with cheap, knockoff products, making it hard for customers to distinguish between quality and subpar items.

I’ve become more open-minded lately. The reality is folks are shopping on Amazon — it’s where a significant percentage of ecommerce searches start.

I buy consumable items, like coffee filters, on Amazon for convenience. We’re considering selling refill cards for the Analog system there for the same reason. It’s about meeting people where they are. I still value owning our customer experience directly on our site, but Amazon can be complementary for certain products.

Bandholz: So listeners should go to your site to buy products.

Sheldon: Yes, at Ugmonk.com. They can find me on X and Instagram.

Automattic Turns Against WordPress Community Itself via @sejournal, @martinibuster

Automattic announced it is minimizing support for the WordPress.org CMS project, using words and phrases that present the withdrawal of support as a positive change to make WordPress stronger, while casting blame on WP Engine for its decision to minimize contributions.

The entire statement uses double-speak, pretextual statements and passive-aggressive language to portray itself as a victim of WP Engine and framing the withdrawal of support as the unavoidable consequences of WPE’s lawsuit against Automattic, saying:

“Additionally, we’re having to spend significant time and money to defend ourselves against the legal attacks started by WP Engine and funded by Silver Lake, a large private equity firm.

…We’ve made the decision to reallocate resources due to the lawsuits from WP Engine.

…This legal action diverts significant time and energy that could otherwise be directed toward supporting WordPress’s growth and health.

…We remain hopeful that WP Engine will reconsider this legal attack, allowing us to refocus our efforts on contributions that benefit the broader WordPress ecosystem.”

At no point in the statement does Automattic acknowledge its role in creating the conflict, instead portraying itself as forced to go down the path of Mullenweg’s self-described “nuclear” war with WP Engine when in fact there has always been time to engage in constructive dialogue.

Automattic Turns Against The WordPress Community Itself

A stunning feature of Automattic’s statement is that this is the first time that it points a finger at the WordPress community itself as part of the reason for pulling back resources. It wraps the word “community” in quotation marks in a manner that seems to undermine the legitimacy of the critics, which has the subtext of portraying the critics as not true members of the WordPress community.

There is an undertone of contempt for the criticisms against Mullenweg, which to be fair started out as timid expressions of hope that things would work themselves out then gradually increased to outright calls for new a new governance structure that reflects the diversity of the entire WordPress community and a move away from the so-called “benevolent dictatorship” of Matt Mullenweg.

Automattic’s statements targeted the WordPress community itself:

“We’ve also faced intense criticism and even personal attacks against a number of Automatticians from members of the ‘community’ who want Matt and others to step away from the project.

…Automatticians who contributed to core will instead focus on for-profit projects within Automattic, such as WordPress.com, Pressable, WPVIP, Jetpack, and WooCommerce. Members of the ‘community’ have said that working on these sorts of things should count as a contribution to WordPress.”

Use Of Doublespeak

Lastly, Automattic’s statement uses language that seems to cross the line into doublespeak. Doublespeak is the use of language in a way that is deceptive and manipulative as opposed to a rhetorical approach that seeks to persuade. Doublespeak obscures and distorts reality and masks the real meaning and intent of a statement.

Example of doublespeak:

“To recalibrate and ensure our efforts are as impactful as possible, Automattic will reduce its sponsored contributions to the WordPress project. This is not a step we take lightly. It is a moment to regroup, rethink, and strategically plan how Automatticians can continue contributing in ways that secure the future of WordPress for generations to come. “

The portrayal of the withdrawal of support as a way of securing “the future of WordPress for generations to come” is manipulative and hides the reality that those actions have the opposite effect.

It also claims:

“This realignment is not an end, but a new beginning—one that will ultimately strengthen the foundation of WordPress.”

That’s an example of how Automattic’s statement portrays the actual weakening WordPress.org as a way to strengthen it.

There are many other examples of how the statement portrays Automattic as the victim, WP Engine as the aggressor and the WordPress community itself as complicit in undermining itself.

Read the statement here:

Aligning Automattic’s Sponsored Contributions to WordPress

Featured image by Shutterstock/Wirestock Creators