AI’s search for more energy is growing more urgent

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

If you drove by one of the 2,990 data centers in the United States, you’d probably think little more than “Huh, that’s a boring-looking building.” You might not even notice it at all. However, these facilities underpin our entire digital world, and they are responsible for tons of greenhouse-gas emissions. New research shows just how much those emissions have skyrocketed during the AI boom. 

Since 2018, carbon emissions from data centers in the US have tripled, according to new research led by a team at the Harvard T.H. Chan School of Public Health. That puts data centers slightly below domestic commercial airlines as a source of this pollution.

That leaves a big problem for the world’s leading AI companies, which are caught between pressure to meet their own sustainability goals and the relentless competition in AI that’s leading them to build bigger models requiring tons of energy. The trend toward ever more energy-intensive new AI models, including video generators like OpenAI’s Sora, will only send those numbers higher. 

A growing coalition of companies is looking toward nuclear energy as a way to power artificial intelligence. Meta announced on December 3 it was looking for nuclear partners, and Microsoft is working to restart the Three Mile Island nuclear plant by 2028. Amazon signed nuclear agreements in October. 

However, nuclear plants take ages to come online. And though public support has increased in recent years, and president-elect Donald Trump has signaled support, only a slight majority of Americans say they favor more nuclear plants to generate electricity. 

Though OpenAI CEO Sam Altman pitched the White House in September on an unprecedented effort to build more data centers, the AI industry is looking far beyond the United States. Countries in Southeast Asia, like Malaysia, Indonesia, Thailand, and Vietnam, are all courting AI companies, hoping to be their new data center hubs. 

In the meantime, AI companies will continue to draw power from their current sources, which are far from renewable. Since so many data centers are located in coal-producing regions, like Virginia, the “carbon intensity” of the energy they use is 48% higher than the national average. The researchers found that 95% of data centers in the US are built in places with sources of electricity that are dirtier than the national average. Read more about the new research here.


Deeper Learning

We saw a demo of the new AI system powering Anduril’s vision for war

We’re living through the first drone wars, but AI is poised to change the future of warfare even more drastically. I saw that firsthand during a visit to a test site in Southern California run by Anduril, the maker of AI-powered drones, autonomous submarines, and missiles. Anduril has built a way for the military to command much of its hardware—from drones to radars to unmanned fighter jets—from a single computer screen. 

Why it matters: Anduril, other companies in defense tech, and growing numbers of people within the Pentagon itself are increasingly adopting a new worldview: A future “great power” conflict—military jargon for a global war involving multiple countries—will not be won by the entity with the most advanced drones or firepower, or even the cheapest firepower. It will be won by whoever can sort through and share information the fastest. The Pentagon is betting lots of energy and money that AI—despite its flaws and risks—will be what puts the US and its allies ahead in that fight. Read more here.

Bits and Bytes

Bluesky has an impersonator problem 

The platform’s rise has brought with it a surge of crypto scammers, as my colleague Melissa Heikkilä experienced firsthand. (MIT Technology Review)

Tech’s elite make large donations to Trump ahead of his inauguration 

Leaders in Big Tech, who have been lambasted by Donald Trump, have made sizable donations to his ​​inauguration committee. (The Washington Post)

Inside the premiere of the first commercially streaming AI-generated movies

The films, according to writer Jason Koebler, showed the telltale flaws of AI-generated video: dead eyes, vacant expressions, unnatural movements, and a reliance on voice-overs, since dialogue doesn’t work well. The company behind the films is confident viewers will stomach them anyway. (404 Media)

Meta asked California’s attorney general to stop OpenAI from becoming for-profit

Meta now joins Elon Musk in alleging that OpenAI has improperly enjoyed the benefits of nonprofit status while developing its technology. (Wall Street Journal)

How Silicon Valley is disrupting democracy

Two books explore the price we’ve paid for handing over unprecedented power to Big Tech—and explain why it’s imperative we start taking it back. (MIT Technology Review)

The 8 worst technology failures of 2024

They say you learn more from failure than success. If so, this is the story for you: MIT Technology Review’s annual roll call of the biggest flops, flimflams, and fiascos in all domains of technology.

Some of the foul-ups were funny, like the “woke” AI which got Google in trouble after it drew Black Nazis. Some caused lawsuits, like a computer error by CrowdStrike that left thousands of Delta passengers stranded. We also reaped failures among startups that raced to expand from 2020 to 2022, a period of ultra-low interest rates. But then the economic winds shifted. Money wasn’t free anymore. The result? Bankruptcy and dissolution for companies whose ambitious technological projects, from vertical farms to carbon credits, hadn’t yet turned a profit and might never do so.

Read on.

Woke AI blunder

ai-generated image of a female pope

GOOGLE GEMINI VIA X.COM/END WOKENESS

People worry about bias creeping into AI. But what if you add bias on purpose? Thanks to Google, we know where that leads: Black Vikings and female popes.

Google’s Gemini AI image feature, launched last February, had been tuned to zealously showcase diversity, damn the history books. Ask Google for a picture of German soldiers from World War II, and it would create a Benetton ad in Wehrmacht uniforms. 

Critics pounced and Google beat an embarrassed retreat. It paused Gemini’s ability to draw people and agreed its well-intentioned effort to be inclusive had “missed the mark.” 

The free version of Gemini still won’t create images of people. But paid versions will. When we asked for an image of 12 CEOs of public biotech companies, the software produced a photographic-quality image of middle-aged white men. Less than ideal. But closer to the truth. 

More: Is Google’s Gemini chatbot woke by accident, or by design? (The Economist), Gemini image generation got it wrong. We’ll do better. (Google)


Boeing Starliner

Boeing CST-100 Starliner

THE BOEING COMPANY VIA NASA

Boeing, we have a problem. And it’s your long-delayed reusable spaceship, the Starliner, which stranded NASA astronauts Sunita “Suni” Williams and Barry “Butch” Wilmore on the International Space Station.

The June mission was meant to be a quick eight-day round trip to test Starliner before it embarked on longer missions. But, plagued by helium leaks and thruster problems, it had to come back empty. 

Now Butch and Suni won’t return to Earth until 2025, when a craft from Boeing competitor SpaceX is scheduled to bring them home. 

Credit Boeing and NASA with putting safety first. But this wasn’t Boeing’s only malfunction during 2024. The company began the year with a door blowing off one of its planes midflight, faced a worker strike, agreed to a major fine for misleading the government about the safety of its 737 Max airplane (which made our 2019 list of worst technologies), and saw its CEO step down in March.

After the Starliner fiasco, Boeing fired the chief of its space and defense unit. “At this critical juncture, our priority is to restore the trust of our customers and meet the high standards they expect of us to enable their critical missions around the world,” Boeing’s new CEO, Kelly Ortberg, said in a memo.

More: Boeing’s beleaguered space capsule is heading back to Earth without two NASA astronauts (NY Post), Boeing’s space and defense chief exits in new CEO’s first executive move (Reuters), CST-100 Starliner (Boeing)


CrowdStrike outage

MITTR / ENVATO

The motto of the cybersecurity company CrowdStrike is “We stop breaches.” And it’s true: No one can breach your computer if you can’t turn it on.

That’s exactly what happened to many people on July 19, when thousands of Windows computers at airlines, TV stations, and hospitals started displaying the “blue screen of death.” 

The cause wasn’t hackers or ransomware. Instead, those computers were stuck in a boot loop because of a bad update shipped by CrowdStrike itself. CEO George Kurtz jumped on X to say the “issue” had been identified as a “defect” in a single computer file.

So who is liable? CrowdStrike customer Delta Air Lines, which canceled 7,000 flights, is suing for $500 million. It alleges that the security firm caused a “global catastrophe” when it took “uncertified and untested shortcuts.” 

CrowdStrike countersued. It says Delta’s management is to blame for its troubles and that the airline is due little more than a refund. 

More: “CrowdStrike is working with customers” (George Kurtz), How to fix a Windows PC affected by the global outage (MIT Technology Review), Delta Sues CrowdStrike Over July Operations Meltdown (WSJ)


Vertical farms

a blighted brown leaf of lettuce

MITTR / ENVATO

Grow lettuce in buildings using robots, hydroponics, and LED lights. That’s what Bowery, a “vertical farming” startup, raised over $700 million to do. But in November, Bowery went bust, making it the biggest startup failure of the year, according to the business analytics firm CB Insights. 

Bowery claimed that vertical farms were “100 times more productive” per square foot than traditional farms, since racks of plants could be stacked 40 feet high. In reality, the company’s lettuce was more expensive, and when a stubborn plant infection spread through its East Coast facilities, Bowery had trouble delivering the green stuff at any price.

More: How a leaf-eating pathogen, failed deals brought down Bowery Farming (Pitchbook), Vertical farming “unicorn” Bowery to shut down (Axios)


Exploding pagers

an explosion behind a pager

MITTR / ADOBE STOCK

They beeped, and then they blew up. Across Lebanon, fingers and faces were shredded in what was called Israel’s “surprise opening blow in an all-out war to try to cripple Hezbollah.” 

The deadly attack was diabolically clever. Israel set up shell companies that sold thousands of pagers packed with explosives to the Islamic faction, which was already worried that its phones were being spied on. 

A coup for Israel’s spies. But was it a war crime? A 1996 treaty prohibits intentionally manufacturing “apparently harmless objects” designed to explode. The New York Times says nine-year-old Fatima Abdullah died when her father’s booby-trapped beeper chimed and she raced to take it to him.

More: Israel conducted Lebanon pager attack… (Axios), A 9-Year-Old Girl Killed in Pager Attack Is Mourned in Lebanon (New York Times), Did Israel break international law? (Middle East Eye)


23andMe

The 23 and me logo protruding from a cardboard box of desk items held by an office worker.

MITTR / ADOBE STOCK

The company that pioneered direct-to-consumer gene testing is sinking fast. Its stock price is going toward zero, and a plan to create valuable drugs is kaput after that team got pink slips this November.

23andMe always had a celebrity aura, bathing in good press. Now, though, the press is all bad. It’s a troubled company in the grip of a controlling founder, Anne Wojcicki, after its independent directors resigned en masse this September. Customers are starting to worry about what’s going to happen to their DNA data if 23andMe goes under.

23andMe says it created “the world’s largest crowdsourced platform for genetic research.” That’s true. It just never figured out how to turn a profit. 

More: 23andMe’s fall from $6 billion to nearly $0 (Wall Street Journal), How to…delete your 23andMe data (MIT Technology Review), 23andMe Financial Report, November 2024 (23andMe)


AI slop

ai-generated image of a representation of Jesus with outspread arms and body composed of shrimp parts

AUTHOR UNKNOWN VIA WIKIMEDIA COMMONS

Slop is the scraps and leftovers that pigs eat. “AI slop” is what you and I are increasingly consuming online now that people are flooding the internet with computer-generated text and pictures.  

AI slop is “dubious,” says the New York Times, and “dadaist,” according to Wired. It’s frequently weird, like Shrimp Jesus (don’t ask if you don’t know), or deceptive, like the picture of a shivering girl in a rowboat, supposedly showing the US government’s poor response to Hurricane Helene.

AI slop is often entertaining. AI slop is usually a waste of your time. AI slop is not fact-checked. AI slop exists mostly to get clicks. AI slop is that blue-check account on X posting 10-part threads on how great AI is—threads that were written by AI. 

Most of all, AI slop is very, very common. This year, researchers claimed that about half the long posts on LinkedIn and Medium were partly AI-generated.

More: First came ‘Spam.’ Now, With A.I., We’ve got ‘Slop’ (New York Times), AI Slop Is Flooding Medium (Wired)


Voluntary carbon markets

a spindly tree with a cloud of emissions hovering around it

MITTR / ENVATO

Your business creates emissions that contribute to global warming. So why not pay to have some trees planted or buy a more efficient cookstove for someone in Central America? Then you could reach net-zero emissions and help save the planet.

Neat idea, but good intentions aren’t enough. This year the carbon marketplace Nori shut down, and so did Running Tide, a firm trying to sink carbon into the ocean. “The problem is the voluntary carbon market is voluntary,” Running Tide’s CEO wrote in a farewell post, citing a lack of demand.

While companies like to blame low demand, it’s not the only issue. Sketchy technology, questionable credits, and make-believe offsets have created a credibility problem in carbon markets. In October, US prosecutors charged two men in a $100 million scheme involving the sale of nonexistent emissions savings. 

More: The growing signs of trouble for global carbon markets (MIT Technology Review), Running Tide’s ill-fated adventure in ocean carbon removal (Canary Media), Ex-carbon offsetting boss charged in New York with multimillion-dollar fraud (The Guardian) 

The Download: 2024’s biggest technology flops, and AI’s search for energy

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

The 8 worst technology failures of 2024

They say you learn more from failure than success. If so, this is the story for you: MIT Technology Review’s annual roll call of the biggest flops, flimflams, and fiascos in all domains of technology.

Some of the foul-ups were funny, like the “woke” AI which got Google in trouble after it drew Black Nazis. Some caused lawsuits, like a computer error by CrowdStrike that left thousands of Delta passengers stranded. And we also reaped failures among startups that raced to expand from 2020 to 2022, a period of ultra-low interest rates. Check out what made our list of this year’s biggest technology failures.

—Antonio Regalado

Antonio will be discussing this year’s worst failures with our executive editor Niall Firth in a subscriber-exclusive online Roundtable event today at 12.00 ET. Register here to make sure you don’t miss out. If you haven’t already, subscribe.

AI’s search for more energy is growing more urgent

If you drove by one of the 2,990 data centers in the United States, you’d probably think little more than “Huh, that’s a boring-looking building.” You might not even notice it at all. However, these facilities underpin our entire digital world, and they are responsible for tons of greenhouse-gas emissions. New research shows just how much those emissions have skyrocketed during the AI boom.

That leaves a big problem for the world’s leading AI companies, which are caught between pressure to meet their own sustainability goals and the relentless competition in AI that’s leading them to build bigger models requiring tons of energy. And the trend toward ever more energy-intensive new AI models will only send those numbers higher. Read the full story.

—James O’Donnell

This story originally appeared in The Algorithm, our weekly newsletter on AI. Sign up to receive it in your inbox every Monday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 TikTok has asked the US Supreme Court for a lifeline   
It’s asked the justices to intervene before the proposed ban kicks in on January 19. (WP $)
+ TikTok CEO Shou Zi Chew reportedly met with Donald Trump yesterday. (NBC News)
+ Trump will take office the following day, on January 20. (WSJ $)
+ Meanwhile, the EU is investigating TikTok’s role in Romania’s election. (Politico)

2 Waymo’s autonomous cars are heading to Tokyo
In the first overseas venture for the firm’s vehicles. (The Verge)
+ The cars will require human safety drivers initially. (CNBC)
+ What’s next for robotaxis in 2024. (MIT Technology Review)

3 China’s tech workers are still keen to work in the US
But securing the right to work there is much tougher than it used to be. (Rest of World)

4 Digital license plates are vulnerable to hacking
And they’re already legal to buy in multiple US states. (Wired $)

5 We’re all slaves to the algorithms
From the mundane (Spotify) to the essential (housing applications). (The Atlantic $)
+ How a group of tenants took on screening systems—and won. (The Guardian)
+ The coming war on the hidden algorithms that trap people in poverty. (MIT Technology Review)

6 How to build an undetectable submarine
The race is on to stay hidden from the competition. (IEEE Spectrum)
+ How underwater drones could shape a potential Taiwan-China conflict. (MIT Technology Review)

7 How Empower became a viable rival to Uber
Its refusal to cooperate with authorities is straight out of Uber’s early playbook. (NYT $)

8 Even airlines are using AirTags to find lost luggage 🧳
Which raises the question: how were they looking for missing bags before? (Bloomberg $)
+ Here’s how to keep tabs on your suitcase as you travel. (Forbes $)

9 You’re reading your blood pressure all wrong
Keep your feet flat on the floor and ditch your phone, for a start. (WSJ $)

10 The rise and rise of the group chat 
Expressing yourself publicly on social media is so last year. (Insider $)
+ How to fix the internet. (MIT Technology Review)

Quote of the day

“Where are the adults in the room?”

—Francesca Marano, a long-time contributor to WordPress, lambasts the platform’s decision to require users to check a box reading “Pineapple is delicious on pizza” to log in, 404 Media reports.

The big story

Responsible AI has a burnout problem

October 2022

Margaret Mitchell had been working at Google for two years before she realized she needed a break. Only after she spoke with a therapist did she understand the problem: she was burnt out.

Mitchell, who now works as chief ethics scientist at the AI startup Hugging Face, is far from alone in her experience. Burnout is becoming increasingly common in responsible AI teams.

All the practitioners MIT Technology Review interviewed spoke enthusiastically about their work: it is fueled by passion, a sense of urgency, and the satisfaction of building solutions for real problems. But that sense of mission can be overwhelming without the right support. Read the full story.

—Melissa Heikkilä

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ This timelapse of a pine tree growing from a tiny pinecone is pretty special 🎄
+ Shaboozey’s A Bar Song (Tipsy) is one of 2024’s biggest hits. But why has it struck such a chord?
+ All hail London’s campest Christmas tree!
+ Stay vigilant, Oregon’s googly eye bandit has struck again 👀

A woman in the US is the third person to receive a gene-edited pig kidney

Towana Looney, a 53-year-old woman from Alabama, has become the third living person to receive a kidney transplant from a gene-edited pig. 

Looney, who donated one of her kidneys to her mother back in 1999, developed kidney failure several years later following a pregnancy complication that caused high blood pressure. She started dialysis treatment in December of 2016 and was put on a waiting list for a kidney transplant soon after, in early 2017. 

But it was difficult to find a match. So Looney’s doctors recommended the experimental pig organ as an alternative. After eight years on the waiting list, Looney was authorized to receive the kidney under the US Food and Drug Administration’s expanded access program, which allows people with serious or life-threatening conditions to try experimental treatments.

The pig in question was developed by Revivicor, a United Therapeutics company. The company’s technique involves making 10 gene edits to a pig cell. The edits are made to prevent too much organ growth, curb inflammation, and, importantly, stop the recipient’s immune system from rejecting the organ. The edited pig cell is then placed into a pig egg cell that has had its nucleus removed, and the egg is transferred to the uterus of a sow, which eventually gives birth to a gene-edited piglet.

JOE CARROTTA FOR NYU LANGONE HEALTH

In theory, once the piglet has grown, its organs can be used for human transplantation. Pig organs are similar in size to human ones, after all. A few years ago, David Bennett Sr. became the first person to receive a heart transplant from such a pig. He died two months after the operation, and the heart was later found to have been infected with a pig virus.

Richard Slayman was the first person to get a gene-edited pig kidney, which he received in early 2024. He died two months after his surgery, although the hospital treating him said in a statement that it had “no indication that it was the result of his recent transplant.” In April, Lisa Pisano was reported to be the second person to receive such an organ. Pisano also received a heart pump alongside her kidney transplant. Her kidney failed because of an inadequate blood supply and was removed the following month. She died in July.

Looney received her pig kidney during a seven-hour operation that took place at NYU Langone Health in New York City on November 25. The surgery was led by Jayme Locke of the US Health Resources & Services Administration and Robert Montgomery of the NYU Langone Transplant Institute.

Looney was discharged from the hospital 11 days after her surgery, to an apartment in New York City. She’ll stay in New York for another three months so she can check in with doctors at the hospital for evaluations.

“It’s a blessing,” Looney said in a statement. “I feel like I’ve been given another chance at life. I cannot wait to be able to travel again and spend more quality time with my family and grandchildren.”

Looney’s doctors are hopeful that her kidney will last longer than those of her predecessors. For a start, Looney was in better health to begin with—she had chronic kidney disease and required dialysis, but unlike previous recipients, she was not close to death, Montgomery said in a briefing. He and his colleagues plan to start clinical trials within the next year.

There is a huge unmet need for organs. In the US alone, more than 100,000 people are waiting for one, and 17 people on the waiting list die every day. Researchers hope that gene-edited animals might provide a new source of organs for such individuals.

Revivicor isn’t the only company working on this. Rival company eGenesis, which has a different approach to gene editing, has used CRISPR to create pigs with around 70 gene edits.

“Transplant is one of the few therapies that can cure a complex disease overnight, yet there are too few organs to provide a cure for all in need,” Locke said in a statement. “The thought that we may now have a solution to the organ shortage crisis for others who have languished on our waiting lists invokes the most welcome of feelings: pure joy!”

Today, Looney is the only person living with a pig organ. “I am full of energy. I got an appetite I’ve never had in eight years,” she said at a briefing. “I can put my hand on this kidney and feel it buzzing.”

This story has been updated with additional information after a press briefing.

Roundtables: The Worst Technology Failures of 2024

Recorded on December 17, 2024

The Worst Technology Failures of 2024

Speakers: Antonio Regalado, senior editor for biomedicine, and Niall Firth, executive editor.

MIT Technology Review publishes an annual list of the worst technologies of the year. This year, The Worst Technology Failures of 2024 list was unveiled live by our editors. Hear from MIT Technology Review executive editor Niall Firth and senior editor for biomedicine Antonio Regalado as they discuss each of the 8 items on this list.

Related Coverage

11 Books to Help Navigate Risk

Entrepreneurs and executives face risks daily. Competitors, markets, technology, politics, climate, health — all can dramatically impact a business and career. Here’s a rundown of new, forthcoming, and classic books on recognizing and mitigating risks. The authors are notable researchers, leaders, and risk-avoidance practitioners.

The Art of Uncertainty: How to Navigate Chance, Ignorance, Risk and Luck

Cover of The Art of Uncertainty

The Art of Uncertainty

by David Spiegelhalter

Coming in March, this book is already hailed as lively, entertaining, insightful, and witty — not terms often applied to numbers! The author, knighted for his work on medical statistics, illuminates life’s uncertainties — balancing risks and benefits of medical treatments, predicting sports wins and losses, facing the unknowable — with real-world examples and more than 60 illustrations.

Risk-Proof Your Business: The Complete Guide to Smart Insurance Choices

Cover of Risk-Proof Your Business

Risk-Proof Your Business

by Michael Gay and Patrick Wraight

The right insurance policies are key to reducing risks such as lawsuits, accidents, and other losses. But how can you be sure you have the right kind and amount of coverage for your situation? Gay and Wraight explain all aspects of insurance clearly, providing an understandable guide to navigating and mitigating business risks.

On the Edge: The Art of Risking Everything

Cover of On the Edge

On the Edge

By Nate Silver

In his third bestseller, Silver focuses on “professional risk-takers” — poker players, hedge fund managers, crypto mavens, art collectors — and the common traits that have made them wealthy and powerful and how their (sometimes flawed) mindsets are important drivers of technology and the global economy.

Shocks, Crises, and False Alarms: How to Assess True Macroeconomic Risk

Cover of Shocks, Crises, False Alarms

Shocks, Crises, False Alarms

by Philipp Carlsson-Szlezak and Paul Swartz

A Financial Times Best Book of 2024, Shocks, Crises aims to help business leaders avoid the contradictory traps of being fooled by false alarms or failing to recognize real changes in local and global markets and economies.

Playing with Reality: How Games Have Shaped Our World

Cover of Playing with Reality

Playing with Reality

by Kelly Clancy

The Economist calls this book “provocative and fascinating” and, along with The Guardian, included it in the Best Books of 2024. Clancy, a neuroscientist and physicist, reviews how games have shaped human culture from the Enlightenment to today, showing that game theory still underlies many assumptions in economics, politics, and technology.

How to Listen When Markets Speak: Risks, Myths, and Investment Opportunities

Cover of How to Listen When Markets Speak

How to Listen When Markets Speak

by Lawrence G. McDonald and James Patrick Robinson

McDonald is a former Lehman Brothers vice president and author of the bestseller about its collapse, “A Colossal Failure of Common Sense.” In this new book, he challenges old assumptions about economics and offers thought-provoking insights on the factors that will shape the financial future in what he believes is a radically altered world economy.

Management of Political Risks: Fundamentals and Tools for Executives and Entrepreneurs

Cover of Management of Political Risks

Management of Political Risks

by Marc-Felix Otto

Geopolitical risks can endanger companies, shake up entire industries, and even threaten national economies. Otto, a strategy and management consultant with international expertise, shares his approach to identifying, avoiding, and managing such risks while finding ways to turn them into competitive advantages.

A Crash Course on Crises: Macroeconomic Concepts for Run-Ups, Collapses, and Recoveries

Cover of A Crash Course on Crises

A Crash Course on Crises

by Markus K. Brunnermeier and Ricardo Reis

Writing clearly on the latest cutting-edge research, two top economists explain what we know about financial crises and how they can spread and intensify, drawing lessons from real-life case studies.

Risk: A User’s Guide

Cover of Risk: A User's Guide

Risk: A User’s Guide

by General Stanley McChrystal with Anna Butrico

The author, a retired U.S. four-star general, presents a system for detecting and responding to risk developed from his extensive military and business experience. Using a simple framework that defines 10 key risk dimensions, he provides practical exercises to help readers address each.

The Biggest Bluff: How I Learned to Pay Attention, Master Myself, and Win

Cover of The Biggest Bluff

The Biggest Bluff

by Maria Konnikova

After a run of personal bad luck, psychologist Konnikova became a tournament-winning professional poker player. She shares what she learned about human nature, making good decisions, and luck in this acclaimed New York Times bestseller.

Against the Gods: The Remarkable Story of Risk

Cover of Against the Gods

Against the Gods

by Peter L. Bernstein

Though published in 1998, this worldwide bestseller still holds the top spot in risk management titles on Amazon. Bernstein’s lively and engaging history argues that the idea of risk propelled humankind from primitive belief in soothsayers and oracles to the creation of today’s sophisticated risk-management methods and tools.

Google Formalizes Decade-Old Faceted Navigation Guidelines via @sejournal, @MattGSouthern

Google has updated its guidelines on faceted navigation by turning an old blog post into an official help document.

What started as a blog post in 2014 is now official technical documentation.

This change reflects the complexity of ecommerce and content-heavy websites, as many sites adopt advanced filtering systems for larger catalogs.

Faceted Navigation Issues

Ever used filters on an e-commerce site to narrow down products by size, color, and price?

That’s faceted navigation – the system allowing users to refine search results using multiple filters simultaneously.

While this feature is vital for users, it can create challenges for search engines, prompting Google to release new official documentation on managing these systems.

Modern Challenges

The challenge with faceted navigation lies in the mathematics of combinations: each additional filter option multiplies the potential URLs a search engine might need to crawl.

For example, a simple product page with options for size (5 choices), color (10 choices), and price range (6 ranges) could generate 300 unique URLs – for just one product.
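The multiplication effect is easy to verify. Here is a minimal Python sketch of the arithmetic above; the facet names and URL pattern are illustrative, not taken from Google’s documentation:

```python
from itertools import product

# Illustrative facet options for a single product page
sizes = [f"size-{i}" for i in range(5)]     # 5 size choices
colors = [f"color-{i}" for i in range(10)]  # 10 color choices
prices = [f"price-{i}" for i in range(6)]   # 6 price ranges

# Every combination of filter values yields a distinct crawlable URL
urls = [
    f"/product?size={s}&color={c}&price={p}"
    for s, c, p in product(sizes, colors, prices)
]

print(len(urls))  # 300 unique URLs for one product
```

Add a fourth facet with, say, four options, and the count jumps to 1,200; the growth is multiplicative, not additive.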

According to Google Analyst Gary Illyes, this multiplication effect makes faceted navigation the leading cause of overcrawling issues reported by website owners.
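The multiplication effect is easy to verify. Here is a minimal sketch (the facet values and URL pattern are hypothetical) that enumerates every crawlable combination for a single product page:

```python
from itertools import product

# Hypothetical facet values for one product page
sizes = ["xs", "s", "m", "l", "xl"]            # 5 choices
colors = [f"c{i}" for i in range(10)]          # 10 choices
price_ranges = [f"p{i}" for i in range(6)]     # 6 ranges

# Every combination becomes a distinct crawlable URL
urls = [
    f"/product?size={s}&color={c}&price={p}"
    for s, c, p in product(sizes, colors, price_ranges)
]
print(len(urls))  # 5 * 10 * 6 = 300
```

Add a fourth facet with, say, four options, and the count quadruples to 1,200 – which is why crawl demand on faceted sites grows so quickly.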

The impact includes:

  • Wasting Server Resources: Many websites use too much computing power on unnecessary URL combinations.
  • Inefficient Crawl Budget: Crawlers may take longer to find important new content because they are busy with faceted navigation.
  • Weakening SEO Performance: Having several URLs for the same content can hurt a website’s SEO.

What’s Changed?

The new guidance is similar to the 2014 blog post, but it includes some important updates:

  1. Focus on Performance: Google now clearly warns about the costs of using computing resources.
  2. Clear Implementation Options: The documentation gives straightforward paths for different types of websites.
  3. Updated Technical Recommendations: Suggestions now account for single-page applications and modern SEO practices.

Implementation Guide

For SEO professionals managing sites with faceted navigation, Google now recommends a two-track approach:

Non-Critical Facets:

  • Block via robots.txt
  • Use URL fragments (#)
  • Implement consistent rel="nofollow" attributes
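For the robots.txt route, a minimal sketch might look like the following. The parameter names here are hypothetical; substitute the ones your site actually generates:

```
User-agent: *
# Block crawling of non-critical facet parameters
Disallow: /*?*sort=
Disallow: /*?*view=
Disallow: /*?*sessionid=
```

Google supports the `*` wildcard in Disallow patterns, so each rule above matches any URL whose query string contains that parameter.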

Business-Critical Facets:

  • Maintain standardized parameter formats
  • Implement proper 404 handling
  • Use strategic canonical tags
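For business-critical facets, a canonical tag can consolidate signals from filtered variants. A sketch, with hypothetical URLs:

```html
<!-- On a filtered page such as /shirts?color=blue&size=m,
     point crawlers at the preferred version of the page -->
<link rel="canonical" href="https://www.example.com/shirts" />
```

Note that Google treats rel="canonical" as a hint rather than a directive, so pair it with consistent internal linking to the preferred URLs.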

Looking Ahead

This documentation update suggests Google is preparing for increasingly complex website architectures.

SEO teams should evaluate their current faceted navigation against these guidelines to ensure optimal crawling efficiency and indexing performance.


Featured Image: Shutterstock/kenchiro168

Google Refreshes Generative AI Prohibited Use Policy via @sejournal, @MattGSouthern

Google has updated its Generative AI Prohibited Use Policy to clarify the proper use of its generative AI products and services.

The update simplifies the language and lists prohibited behaviors with examples of unacceptable conduct.

Key Updates To Policy

The updated policy clarifies existing rules without adding new restrictions.

It specifically bans using Google’s AI tools to create or share non-consensual intimate images or to conduct security breaches through phishing or malware.

The policy states:

“We expect you to engage with [generative AI models] in a responsible, legal, and safe manner.”

Prohibited activities include dangerous, illegal, sexually explicit, violent, hateful, or deceptive actions, as well as content related to child exploitation, violent extremism, self-harm, harassment, and misinformation.

Prohibited Activities

The policy prohibits using Google’s generative AI for an expansive range of dangerous, illegal, and unethical activities:

  • Illegal Activities: Engaging in or facilitating child exploitation, violent extremism, terrorism, non-consensual intimate imagery, self-harm, or other illegal activities.
  • Security Violations: Compromising security through phishing, malware, spam, infrastructure abuse, or circumventing safety protections.
  • Explicit and Harmful Content: Generating sexually explicit content, hate speech, harassment, violence incitement, or other abusive content.
  • Deception and Misinformation: Impersonation without disclosure, misleading claims of expertise, misrepresenting content provenance, or spreading misinformation related to health, governance, and democratic processes.

Exceptions Allowed

New language in the policy carves out exceptions for some restricted activities in particular contexts.

Educational, documentary, scientific, artistic, and journalistic uses may be permitted, as well as other cases “where harms are outweighed by substantial benefits to the public.”

Why This Matters

The policy update addresses the rapid advancement of generative AI technologies that create realistic text, images, audio, and video.

This progress raises concerns about ethics, misuse, and societal impact.

Looking Ahead

Google’s updated policy is now in effect, and the old and new versions are publicly available.

Leading AI companies like OpenAI and Microsoft have released their own usage rules. However, awareness of these rules, and consistent enforcement of them, still needs to improve.

As generative AI becomes more common, creating clear usage guidelines is essential to ensure responsible practices and reduce harm.


Featured Image: Algi Febri Sugita/Shutterstock

2024 Annual Review: How 2024 Went For Me And What Changes In 2025 via @sejournal, @Kevin_Indig

It’s that time of the year! I can write about myself again without feeling guilty ;-).

Over the last few years, I’ve made it a habit to share how the year went for me and what next year looks like. This really seems to resonate with you, so I’ll keep doing it until you tell me to stop.

Image Credit: Kevin Indig

Previous annual reviews:

I had five big goals for 2024:

  1. Hit 1,000 paid growth memo subscribers
  2. Keep income above a certain level
  3. Become a better speaker
  4. Create more time and space
  5. Keep my weight within a certain range, work out a minimum of four times a week, and pick up MMA.

I’m happy to say that I met all goals except for No. 1.

The Advisory is going really well.

In early 2024, I made an effort to focus on larger, more in-depth engagements and worked with phenomenal brands: Reddit, Alltrails, About You, Toast, and Hims, just to mention a few.

For 2025, I’m opening my calendar again for low-touch engagements. I’ll keep around three large clients, but I’ve found real success and high demand for sparring/office-hours-style engagements.

Speaking: Shifting Gears

Image Credit: Kevin Indig

This year, I managed to speak at 10 conferences:

  • Recommerce, London (UK).
  • Friends of Search, Amsterdam (NL).
  • NYC SEO Meetup, NYC.
  • SaaStock, Austin.
  • Digital Olympus, Eindhoven (NL).
  • SMX Advanced, Berlin (DE).
  • SEO Campixx, Berlin (DE).
  • SEOktoberfest, Kitzbühel (AT).
  • Tech SEO Connect, Raleigh.
  • SEOkomm, Salzburg (AT).

Phew, that was a lot! I don’t know how others do it, but that burned me out a little.

My mistake was probably bringing a new deck and a new topic to every event. Anyway.

Image Credit: Kevin Indig

My hypothesis for taking on many speaking engagements in 2024 was to grow my subscriber base. Unfortunately, the impact wasn’t as strong as I had hoped.

I had a ton of fun and met awesome people, but any speaking gig was outmatched by a good post. The SEOzempic Memo led to almost 150 new subscribers, which is about as much as a good presentation would drive.

That’s why I’m changing gears next year.

In 2025, I will set a strict limit of five conferences, focus on non-SEO conferences, and ask for a speaking fee of $5,000 + travel cost + accommodation (with one to two exceptions).

The reason is simple: It takes a lot of effort! Speaking takes a ton of time that I could spend with my family and invest in client work.

The ROI of speaking for free (or travel cost covered) isn’t there when you consider about 30 to 50 hours of preparation plus travel time.

I’m not even talking about paying experts for help with research and working with a speaking coach (who has been amazing).

I neither get nor want any business from SEO conferences. Speaking has been fun-positive but ROI-negative for me.

Growth Memo: Going Video-First And Other Changes

Image Credit: Kevin Indig

How it’s going:

Growth Memo has done well this year. The free newsletter topped 16,000 subscribers, which is more than I projected (13,700 to 15,300).

Top 10 most-read articles (all available on Substack):

  1. How to craft a winning SEO strategy – A simple 5-step framework to build your own SEO strategy.
  2. SEOzempic – Quality over quantity for Google indexing.
  3. The traffic impact of AI Overviews – An analysis of 1,675 keywords shows AIOs could reduce organic clicks.
  4. Universe (#242) – A better alternative to keyword research.
  5. Information Gainz (#250) – Prioritizing information gain = rethinking how we create content.
  6. The cookie crumbles (#236) – The final death blow for third-party cookies might bring more value to SEO.
  7. Internal Link Optimization with TIPR – Internal link optimization is incomplete without factoring in backlinks; this article introduces the TIPR model for optimizing your site’s internal link graph.
  8. 2024 predictions (#234) – AI, organic growth, and winner/loser predictions.
  9. AI on Innovation – An analysis of 546,000+ AI Overviews.
  10. Chat GPT Search – Chat GPT Search may have a shot at Google.

I also started an experimental WhatsApp group that already has ~450 members, where I share my research as I find it (check it out). I feel so grateful and proud about how well it’s growing!

Image Credit: Kevin Indig

I missed my goal for the paid newsletter by a wide margin, though. I had hoped for 1,000 paying subscribers and barely made it over 300.

Don’t get me wrong – 300 premium subscribers is basically an average American salary. However, that number needs to grow.

I make 80% of my income from advising, but writing and researching take up about two days of my week.

Here’s how I plan to grow that number, in three ways:

1. New: Video First

When I started to do live streaming sessions for premium subscribers in which I shared my latest research and observations, numbers started to pick up.

I also strongly believe that video is where the action is: LinkedIn added a video tab, and YouTube is the No. 1 podcast platform.

Lots of stats that I shared in my annual review show that even B2B buyers want more videos from companies.

Probably most important is the fact that LLMs are getting so good at writing that video creates a much more human connection with audiences.

So, starting in January,

  1. The premium offering consists of two live sessions a month.
  2. The free Memo comes with a video in the email and on YouTube.
  3. I’ll experiment with short-form videos throughout the year.

What I’m working towards is not just an easier way to consume the Growth Memo, but also a way for you to participate.

I don’t know what this will look like exactly, but I want to turn the experience of broadcasting into collaboration. Stay tuned.

If you’re wondering whether Kevin has gone full creator mode, the answer is no. I’m still an advisor who shares his insights and experience. But I think video is the format du jour. And it’s time to adapt.

2. New: Publishing Calendar For The Free And Premium Version

I’m taking off two weeks in the summer and two weeks in the winter to recharge and get a head start on publishing.

In fact, I’m already taking off for a winter break this year. The next Memo comes out on January 6.

In 2025, I’ll likely take two weeks off in August and the last two weeks of the year again.

I’ve created an overview of free Memos and (paid) Live Sessions: 2025 publishing calendar.

3. New Topics

Let’s be real: SEO is not what it used to be. Yes, tech SEO is still important for large sites, and too much mediocre content still bites sites in the rear end during Core updates – nothing new on the Western front.

However, with brands getting preferential treatment in many verticals, the rise of Reddit, rogue core updates, AI Overviews, and AI chatbot search, the landscape of Search has made a profound leap.

Not only are new user interfaces powered by AI sprouting up left and right, but Google itself is evolving after almost 20 years of incremental changes.

On top of that, YouTube gets more attention than ever before, Reddit is one of the largest sites on the web, and podcasts enter a second wave of popularity.

If I want to continue covering Organic Growth, my attempt to find a term for all the non-paid marketing channels and activities, I need to expand the scope of this newsletter.

So, expect more diverse topics in non-paid marketing in 2025. I know that most Growth Memo readers care deeply about SEO.

I won’t abandon SEO, but I will introduce new topics to help you become a better marketer.

And with that, I’m checking out for the rest of the year.

Until 2025,

Kevin


Featured Image: Paulo Bobita/Search Engine Journal

Searchquake: Consumers Now Consider ChatGPT A Real Google Alternative via @sejournal, @gsterling

In just two years, ChatGPT has managed to do something no company has done in the last 20 years: present a viable challenge to Google.

There’s evidence that people are using it instead of traditional search in an increasing number of cases.

For example, ChatGPT’s traffic recently surpassed Bing, and its referral traffic has been growing by triple digits.

Yet, Google’s search volumes and market share appear to be unaffected. Is it a question of scale, and is ChatGPT’s impact still too small to register? If so, perhaps not for much longer.

There have been several consumer surveys asking about current perceptions of search quality, and others exploring AI adoption. But there haven’t been any studies that look closely at whether AI impacts consumer attitudes toward Google and their usage of Search.

So, we decided to create one to answer a range of direct questions we were curious about:

  • Is it easier or harder to find what you’re looking for on Google vs. three years ago?
  • What’s your “go-to” AI tool, and how often do you use it?
  • What do you like about AI?
  • Are AI applications and search engines basically interchangeable or different?
  • Has using AI changed how much you use Google?
  • Does AI or search provide a better experience (across multiple categories)?
  • If you had to choose only one tool (Search or AI), what would it be?
  • Will AI replace traditional search engines in the next three years?

My research program, Dialog, asked these and numerous other questions to an online consumer panel last month. We qualified potential respondents using two criteria:

  1. They had to be at least weekly search users.
  2. They must have used at least one AI application “ever” (on a list of 11).

We recruited more than 2,200 respondents and disqualified over half of them, most often because they didn’t answer yes to the AI screening question.

In the end, we had 1,000 U.S. respondents who roughly mirrored U.S. Census data.

Key Survey Findings

Here are some of the survey’s major findings:

  • While Google is dominant, consumers use multiple sites to make purchase decisions.
  • 44% of U.S. adults have used AI applications at least once (100% of respondents had).
  • 77% of survey respondents said it had become easier to find things on Google.
  • 57% use AI daily; roughly half of them use it multiple times a day.
  • 49% see AI and search as essentially interchangeable.
  • 67% think AI will likely replace traditional search engines within three years.

Search Is Fragmenting

It’s important to point out that the often binary discussion of Search vs. AI misses the fact that people have been using numerous other sites for search and discovery for some time.

Some people might be surprised, for example, that a majority of U.S. adults on TikTok are looking for product reviews and recommendations.

Dialog’s survey suggests that people routinely use multiple sites to conduct pre-purchase research, though Google is the most widely used.

The precise percentages are less important than the fact that so many sites were named.

Image from author, December 2024

Search Today Is ‘Much Easier’

The general consensus in the SEO community and tech press is that Google’s search quality has declined for several years.

If you don’t believe this, just Google “Is Google getting worse?” (There’s a longer debate as to why this might be.)

We fully expected consumers to express a similar sentiment. But they didn’t.

In fact, 77% said that they thought it was easier or “much easier” to find what they were looking for on Google today vs. three years ago.

While this doesn’t explicitly address search quality, it reflects a positive user experience.

Image from author, December 2024

We didn’t follow up on this question, so we don’t have a good explanation for the finding.

One potential theory is that much of search activity today is brand-related or navigational, which Google does a good job with.

Another theory is that users have become more capable searchers. But neither is fully persuasive.

Search And AI Are ‘Interchangeable’

As mentioned, we disqualified potential respondents who said they’d never used an AI application.

Among our sample, however, there were very few infrequent AI users; 92% said they used AI at least weekly, and 57% were daily users, with a substantial minority using it multiple times a day.

ChatGPT was the dominant AI tool, although Gemini was not far behind – and these are regular searchers, with 64% using Search/Google multiple times a day.

We also wanted to understand whether consumers saw Search and AI as similar tools or different.

Roughly half of our respondents said that Search and AI were indeed similar and that they used them in similar ways. The other half said that they were different or weren’t sure.

Image from author, December 2024

The broad significance of this finding is that a meaningful number of relatively heavy search users are potentially open to substituting AI (ChatGPT) for Google.

Beyond this, our respondents said they liked many things about AI/ChatGPT:

  1. Ability to ask follow-up questions – 44%
  2. Direct answers vs. website links – 42%
  3. Overall quality of answers – 40%
  4. ‘Conversational’ interaction – 38%
  5. More comprehensive information – 37%
  6. Lack of ads – 35%
  7. Other (please specify) – 1%

While the majority said they found AI content trustworthy, there were still concerns about privacy and information accuracy.

Search Beats ChatGPT – Or Does It?

We asked consumers to decide whether they thought search or AI would provide a better experience and outcome across a range of content categories and use cases.

Across the board, Google/Search won. Some categories were closer than others (i.e., recipes, product research, and financial planning).

Image from author, December 2024

This is a Rorschach-like, “half empty-half full” chart.

If you’re rooting for Search, you can take comfort in Google’s seemingly clear victory. But, the other side of this is that a substantial number of people thought AI would do a better job.

Presenting consumers with a list of 11 Search and Search-adjacent tools, including Google, Amazon, Yahoo, Perplexity, ChatGPT, and others, we then asked, “If you had to choose only one of these for all your research and purchase decision-making needs, which would it be?”

If you had to choose only one, most people chose Google. Image from author, December 2024

The largest group, 36%, chose Google, as you would expect. ChatGPT was second, and Gemini came in third.

When you combine the ChatGPT and Gemini respondents, Google only prevails by a slim two-point margin.

Conclusion: AI Inevitability?

More than two-thirds of these consumers answered “likely” or “very likely” to the question, “Will AI replace search in the next three years?”

Only 12% said it was unlikely, and the rest weren’t sure. Again, this is a group that likes Google and thinks it delivers a better experience than AI in most cases.

Will Google be displaced in three years? Not a chance.

But, the fact that a majority believe it’s possible may impact their expectations and behavior – it also indicates their potential openness to switching. Google has been seen as invulnerable until now.

Feeling competitive pressure, Google is rapidly evolving and leaning on AI to beat back the ChatGPT threat.

In doing so, the Google SERP may increasingly come to mimic the AI user experience.

Google CEO Sundar Pichai recently proclaimed that the search experience would “continue to change profoundly in 2025.”

What we know for sure is that the next phase of search will be quite different, and that the search landscape may, in fact, be fragmenting.

Regardless, Google and AI “answer engines” will co-exist, and the customer journey will undoubtedly become even more complex.

Marketers will need to be flexible and ready. Business as usual is over.

More Resources:


Featured Image: Pickadook/Shutterstock