At Yoast, we talk a lot about writing and readability. We consider it an essential part of SEO. Your text needs to be easy to follow and it needs to satisfy your users’ needs. This focus on your user will help your rankings. However, we rarely talk about how search engines like Google read and understand these texts. In this post, we’ll explore what we know about how Google analyzes your online content.
Are we sure Google understands text?
We know that Google understands text to some degree. Just think about it. One of the most important things Google has to do is match what someone types into the search bar to a suitable search result. User signals (like click-through and bounce rates) alone won’t help Google to do this properly. Moreover, we know that it’s possible to rank for a keyword that you don’t use in your text (although it’s still good practice to identify and use one or more specific keywords). So clearly, Google does something to actually read and assess your text in some way or another.
How Google understands text
Back to our initial question: how does Google understand text? To be honest, we don’t know this in detail. Unfortunately, that information isn’t freely available. We also know that Google is continuously evolving its ability to understand text online. But there are some clues we can draw conclusions from. We know that Google has taken big steps when it comes to understanding context. We also know that the search engine tries to determine how words and concepts are related to each other. How do we know this? By keeping an eye on any news surrounding Google’s algorithm and considering how the actual search results pages have changed.
Word embeddings
One interesting technique Google has filed patents for and worked on is called word embedding. The goal is to find out what words are closely related to other words. A computer program is fed a certain amount of text. It then analyzes the words in that text and determines what words tend to appear together. Then, it translates every word into a series of numbers. This allows the words to be represented as a point in space in a diagram, like a scatter plot. This diagram shows what words are related in what ways. More accurately, it shows the distance between words, sort of like a galaxy made up of words. So for example, a word like “keywords” would be much closer to “copywriting” than it would be to, say, “kitchen utensils”.
Interestingly, this can also be done for phrases, sentences and paragraphs. The bigger the dataset you feed the program, the better it will be able to categorize and understand words and work out how they’re used and what they mean. And, what do you know, Google has a database of the entire internet. With a dataset like that, it’s possible to create very reliable models that predict and assess the value of text and context.
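To make the idea concrete, here is a minimal sketch in Python. The three-dimensional vectors below are invented purely for illustration; real embedding models such as word2vec learn vectors with hundreds of dimensions from enormous text corpora. The one thing that carries over is the measurement: “closeness” between words is usually computed as cosine similarity between their vectors.

```python
import math

# Toy 3-dimensional "embeddings" -- invented for illustration only.
# Real models learn hundreds of dimensions from co-occurrence statistics.
embeddings = {
    "keywords":         [0.9, 0.8, 0.1],
    "copywriting":      [0.8, 0.9, 0.2],
    "kitchen utensils": [0.1, 0.2, 0.9],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# "keywords" sits far closer to "copywriting" than to "kitchen utensils"
print(cosine_similarity(embeddings["keywords"], embeddings["copywriting"]))
print(cosine_similarity(embeddings["keywords"], embeddings["kitchen utensils"]))
```

With these toy vectors, the first similarity comes out near 1.0 and the second much lower, which is exactly the “galaxy of words” intuition: related words cluster together in the space.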
From word embeddings, it’s only a small step to the concept of related entities. Let’s take a look at the search results to illustrate what related entities are. If you type in “types of pasta”, this is what you’ll see right at the top of the SERP: a heading called “pasta varieties”, with a number of rich results that include a ton of different types of pasta. These pasta varieties are even subcategorized into “ribbon pasta”, “tubular pasta”, and other subtypes of pasta. And there are lots of similar SERPs that reflect how words and concepts are related to each other.
After typing [types of pasta] Google now shows this entity-based rich result
The related entities patent that Google has filed actually mentions the related entities index database. This is a database that stores concepts or entities, like pasta. These entities also have characteristics. Lasagna, for example, is a pasta. It’s also made of dough. And it’s food. Now, by analyzing the characteristics of entities, they can be grouped and categorized in all kinds of different ways. This allows Google to understand how words are related, and, therefore, to understand context.
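The patent doesn’t disclose how the related entities index is actually structured, but the grouping idea can be sketched with a toy stand-in. Everything here (the schema, the attribute names, the entries) is invented for illustration:

```python
# A toy stand-in for a "related entities index" -- the real database
# and its schema are not public; this only illustrates the grouping idea.
entities = {
    "lasagna":  {"is_a": "pasta",   "made_of": "dough", "category": "food"},
    "farfalle": {"is_a": "pasta",   "made_of": "dough", "category": "food"},
    "bucatini": {"is_a": "pasta",   "made_of": "dough", "category": "food"},
    "whisk":    {"is_a": "utensil", "made_of": "metal", "category": "kitchenware"},
}

def related(entity, index):
    """Return other entities that share the same 'is_a' characteristic."""
    kind = index[entity]["is_a"]
    return sorted(e for e, attrs in index.items()
                  if attrs["is_a"] == kind and e != entity)

print(related("lasagna", entities))  # the other pasta varieties
```

Grouping by a different characteristic (say, `made_of` or `category`) yields different, overlapping clusters, which is how a single entity like lasagna can simultaneously be “a pasta”, “made of dough”, and “a food”.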
Google has heavily invested in NLP
Natural language processing is the understanding of language by machines. It is one of the hardest parts of computer science and one where some of the biggest advances are being made. Today, with a world increasingly powered by AI-driven systems, proper language understanding is key. Google understands this and invests heavily in the development of NLP models. One key system was BERT, a model that looks at the words both before and after each word in a sentence. This way, the system has the full context of a sentence to make proper sense of its meaning. What BERT did is awesome, but Google is doing more. Meet MUM.
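The bidirectional-context idea can be illustrated with a toy sketch. This is only an illustration of the concept, not how BERT is actually implemented (BERT is a transformer trained with masked-word prediction):

```python
def context_window(tokens, index, size=2):
    """Return the words before and after tokens[index].

    A left-to-right model only conditions on the 'before' part;
    a bidirectional model like BERT sees both sides at once.
    Toy illustration only -- not BERT's actual mechanism.
    """
    before = tokens[max(0, index - size):index]
    after = tokens[index + 1:index + 1 + size]
    return before, after

tokens = "the river bank was muddy today".split()
before, after = context_window(tokens, tokens.index("bank"))
print(before, after)  # ['the', 'river'] ['was', 'muddy']
```

Seeing “river” on one side and “muddy” on the other is what lets a model with full-sentence context read “bank” as a riverbank rather than a financial institution.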
MUM: Google’s language model
In 2021, Google introduced a new language model that can multitask: MUM. This means that this model can read text, understand its meaning, form a deeper knowledge about the subject, use other media to enrich that knowledge, get insights from more than 75 languages and translate everything into content that answers complex search queries. All at the same time.
A visual representation of how Google MUM works (image from Google’s blog)
Does the rise of AI change all of this?
Over the past year, we’ve seen a lot of developments in the area of AI. Naturally, Google could not stay behind and introduced their own set of tools including the well-known AI model Gemini. Most recently, they introduced AI overviews in their search engine. And you might have already guessed it, but natural language processing models come in handy when you’re developing AI features. So Google’s ongoing research into NLP and machine learning is not slowing down anytime soon.
Practical conclusions
So, how does Google understand text exactly? What we know leads us to two very important points:
1. Context is key
If Google understands context, it’s likely to assess and judge context as well. The better your copy matches Google’s notion of the context, the better its chances of ranking well. So thin copy with a limited scope is going to be at a disadvantage. You need to cover your topics properly and in enough detail. And on a larger scale, covering related concepts and presenting a full body of work on your site will reinforce your authority on the topic you write about and specialize in.
2. Write for your reader
Texts that are easy to read and reflect relationships between concepts don’t just benefit your readers, they help Google as well. Difficult, inconsistent and poorly structured writing is more difficult to understand for both humans and machines. You can help the search engine understand your texts by focusing on:
Readability: making your text as easy to read as possible without compromising your message.
Good content: adding clear explanations that show how what you’re saying relates to what’s already known about a topic.
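One common way to put a number on readability is the Flesch Reading Ease formula (which Yoast’s own readability analysis draws on, among other checks). Here is a rough sketch; the formula itself is standard, but the syllable counter is a crude vowel-group heuristic, so treat the scores as approximate:

```python
import re

def count_syllables(word):
    """Crude heuristic: count runs of vowels (good enough for a demo)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    """Flesch Reading Ease: higher scores (up to ~100+) mean easier text.

    score = 206.835 - 1.015 * (words/sentences) - 84.6 * (syllables/words)
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

easy = "We write short words. We keep it clear."
hard = "Utilizing multisyllabic terminology unnecessarily complicates comprehension."
print(flesch_reading_ease(easy) > flesch_reading_ease(hard))  # True
```

Shorter sentences and shorter words push the score up, which matches the practical advice above: keep your text as easy to read as possible without compromising your message.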
The better you do, the easier it will be for both your users and Google to understand your text and what it’s trying to achieve. That also helps you rank with the right pages when a user types in a certain search query, especially because Google is essentially building a model that mimics the way humans process language and information.
Google wants to be a reader
In the end, it boils down to this: Google is becoming more and more like an actual reader. By writing rich content that is well-structured and easy to read and embedded into the context of the topic at hand, you’ll improve your chances of doing well in the search results.
Last week, Google Ads emailed advertisers in the United States who use Google Ads or Google Analytics with upcoming compliance changes.
Many are asking, “Why now?”
With user privacy concerns at an all-time high, many individual states have enacted their own privacy law provisions, which will take effect later this year.
Read on to learn how Google is preparing its products for compliance, what this means for advertisers, and whether you need to take action.
What’s Changing In Privacy?
Two major updates are coming to the privacy landscape for the United States this year.
Five states have privacy law provisions going into effect. These states include Florida, Texas, Oregon, Montana, and Colorado.
Colorado Privacy Act (CPA) enforcements. The state of Colorado will begin enforcement of its Universal Opt-Out Mechanism (UOOM) provisions.
This means that individual states are cracking down on user privacy and how their residents’ data is processed.
Google’s central compliance tool here is “Restricted Data Processing” (RDP), which it developed in 2019 to help advertisers comply with various country and state privacy laws.
What is Google Doing To Help Advertisers Comply?
Due to these upcoming state changes, Google is making several changes to protect data and ensure advertisers are compliant.
For new U.S. State Laws going into effect
In the email sent to advertisers, Google says it is updating the language of these existing terms:
Google Ads Data Processing Terms
Google Ads Controller-Controller Data Protection Terms
Google Measurement Controller-Controller Data Protection Terms
U.S. State Privacy Laws Addendum
If you’ve already agreed to the online data protection terms in your Google Ads account, you need not take any further action on this update.
Additionally, Google states it will act as your service provider or processor while Restricted Data Processing (RDP) is enabled for the states above. What’s nice about this at a product control level in Google Ads is that if it’s turned on, the RDP functionality will expand as other states enact their own privacy laws.
For partners who operate in Colorado
This change is more specific to advertisers who operate in the state of Colorado.
In the upcoming Colorado Privacy Act, the Universal Opt-Out Mechanism requires that Global Privacy Control (GPC) signals opt the user out of Ad Targeting.
When users or potential customers create or receive a Global Privacy Control, they can send that signal to Google as a Privacy Parameter (like RDP mentioned above) to turn off things like:
Ad Targeting
Sale of data
Sharing of data
To comply with this law, Google can receive GPC signals directly from users and will engage in RDP mode on their behalf.
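Technically, Global Privacy Control is just a signal the browser attaches to requests: per the GPC proposal, a browser with GPC enabled sends the `Sec-GPC: 1` HTTP header (and exposes `navigator.globalPrivacyControl` to scripts). A minimal server-side check might look like this; the function name is our own, and a production implementation would also need to honor the signal downstream:

```python
def gpc_opt_out(headers):
    """Return True if the request carries a Global Privacy Control signal.

    Per the GPC proposal, a browser with GPC enabled sends 'Sec-GPC: 1'
    on every request. HTTP header names are case-insensitive, so we
    normalize before looking the header up.
    """
    normalized = {k.lower(): v.strip() for k, v in headers.items()}
    return normalized.get("sec-gpc") == "1"

print(gpc_opt_out({"Sec-GPC": "1"}))  # True -> treat this user as opted out
print(gpc_opt_out({}))                # False
```

Detecting the signal is the easy part; the legal weight of the Colorado provisions is in what the recipient must then do with it, which is exactly what Google’s RDP mode handles on the advertiser’s behalf.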
What Does This Mean For Advertisers?
While the legal language above is extensive, let’s examine how these state law changes and Google’s response to them may affect advertisers.
#1: Less personalized ad inventory
One of the first apparent updates will be less personalized ad inventory.
Because of the restricted data processing updates and opt-out mechanisms, it’s now easier for users to opt out of ad targeting.
When users opt out, advertisers lose the ability to target them personally, which shrinks the available personalized ad inventory.
That, in turn, can affect inventory, targeting efficiency, and the bidding strategies advertisers use in their campaigns.
#2: Customer Match will be impacted
Similar to the above, the match rate on Customer Match lists and other Remarketing lists will likely decline. This is mainly due to the Global Privacy Controls update.
Users must have given consent to receive marketing updates from a brand. Additionally, they won’t be tracked if they’re not logged into their Google account—or if they decline to be tracked while logged in.
If you use Customer Match lists, watch those match rates when reviewing performance volatility.
#3: Performance reporting will likely be impacted
According to Navah Hopkins’ LinkedIn post about this update, advertisers are in for a “wild summer.”
If advertisers’ ability to serve personalized ads or fully utilize Customer Match and other remarketing capabilities is limited, performance will undoubtedly be impacted.
This could mean volatility or fluctuations in reporting regarding conversions, attributions back to campaigns, ROAS, or CPA metrics.
Navah makes a great point in the comments of her post on this matter, stating that advertisers should “get away from hard numbers” when it comes to reporting.
Simply put, there will be further limitations on what advertisers can and cannot report on, and the performance reports shouldn’t be the “end all be all” when making strategic campaign decisions.
In Summary
This isn’t Google’s first rodeo regarding user privacy laws and compliance.
What started in 2018 with compliance tools for GDPR updates in the EEA and U.K. is undoubtedly making its way to the United States.
In the ever-changing world of user privacy and data regulations, advertisers can breathe a small sigh of relief at these welcome updates. They show that Google is being proactive about individual state compliance policies and is giving advertisers a heads-up before any action is needed.
Ultimately, advertisers must remember that they are responsible for ensuring compliance for their company and on behalf of the companies they advertise for.
After ten years of watching Google Marketing Live virtually from my computer, I had the opportunity to attend the event in person this year.
If you’re not familiar with it, Google Marketing Live is an annual event put on by Google that showcases its major product announcements for Google advertisers.
This year, Google unveiled 30 product updates for Google Ads! While not all of them were mentioned during the keynote, there was one core theme of the updates:
AI.
The majority of product updates can be categorized under these main components of Google Ads:
Creatives.
Media.
Measurement.
Now that the event is over, I’ve had time to digest these platform announcements and really think about how to use them as an advertiser—or at the very least, what these updates can mean in the broader scope of marketing.
Below are my top product updates I’m most looking forward to in 2024.
AI Creative Announcements
As expected, Google announced multiple updates specifically around ad creatives and functionality for advertisers. These include:
AI-powered image editing. Advertisers can edit images (including product feed images), and AI-generated images will appear in recommendations for Performance Max campaigns. You can add and remove objects, extend backgrounds, and resize images to fit many sizes, orientations, and aspect ratios.
New brand standard guidelines. Advertisers can provide Google AI with brand guidelines, colors, and fonts. The option to upload example images as a reference point for the generative AI will also be available.
Animated image ads for Demand Gen campaigns. If you have a Merchant Center account, you can serve animated image ads on YouTube Shorts and video ads. The animated images will be automatically generated from your product feed images.
Integrations with creative platforms. Google now has partnerships with Canva, Smartly, and Pencil Pro to integrate into the Google Ads asset library. This means you can create assets on those platforms and export them to Google Ads.
As far as timing goes, not everything announced is available right away.
According to Google, the new brand standard guidelines will roll out later this year.
AI Media Announcements
During the live keynote, one of the liveliest parts was Philipp Schindler’s demo of Google’s generative AI technology.
It sparked imagination around non-traditional ways to search and interact to find new information.
Photo taken by author, May 2024
With that comes some new ad space available for advertisers.
Some of the key media announcements within Google Ads include:
Shopping ads to appear in Google Lens and Circle to Search. This is a new ad space and experience in the visual search results. Your shopping ad will showcase products when they’re relevant to the photos and screenshots users search with.
Search and Shopping ads to appear in AI Overviews. Another new placement is unlocked – in the United States for now. Ads from existing AI-powered Search and Performance Max campaigns may be able to show in the AI Overview “Sponsored” section when they’re relevant to both the user search and the output from AI Overview.
Conversational campaign creation. While this was announced last year, Google published more data showing that SMBs who use the conversational tool are 63% more likely to publish search campaigns with ‘Good’ or ‘Excellent’ ad strength.
New loyalty promotions. Advertisers running Performance Max and Standard Shopping campaigns will have a way to tailor promotions like special pricing or member-only exclusive offers.
Similar to the creative announcements, not all these are live immediately.
Shopping ads and loyalty promotions will roll out over the next few months in visual search results.
By now, we’ve all likely heard some shocking outputs from AI Overviews. Google is starting this ad placement rollout as a small experiment in the United States and will take its learnings from all inputs to provide a better experience as it matures.
AI Measurement Announcements
While I thought there would be more hype around measurement improvements, there was only one brief callout about more in-depth reporting on Performance Max campaigns.
A more robust outline of measurement announcements included:
Asset-level reporting in Performance Max. In the next few months, you’ll be able to see conversion metrics for each creative asset. This is a huge win for advertisers after years of being unable to report at this level of detail.
Performance Insights in Merchant Center Next. More tailored recommendations and generated insights are coming to Merchant Center Next, along with conversational reporting requests that surface more visual insights faster.
New audience signals in App campaigns. Another huge win for app advertisers! Advertisers can now link a wide range of audience lists, including first-party data audiences. Google AI will learn from the audience inputs to show ads to users more likely to convert.
New Measurement Diagnostics hub. Advertisers can validate and troubleshoot any measurement setup within Google Ads. You’ll also see your measurement durability and readiness in this hub to build and activate first-party data audiences.
Marketing Mix Models attribution. Google announced its newest open-source Marketing Mix Model (MMM), Meridian. It will help you measure the impact of your entire marketing strategy and make informed campaign decisions.
Better iOS measurement with SKAdNetwork enhancements. New reporting updates are available across the Google Ads API and third-party app attribution partners, and a new beta is available to integrate your SKAdNetwork conversion value schema into Google Ads. This will help enhance any iOS app campaigns using tCPA or tROAS bidding strategies.
Summary: What This Means For Advertisers
In the short hour and 15-minute keynote, we were presented with a wide variety of new updates. As you can see below, these updates affect many different campaign types and are either available today or will be coming in the next few months.
Photo taken by author, May 2024
So, with these updates, is there a larger theme at play?
In the grand scheme of the Google Marketing Live updates, a few major themes stood out to me.
1. Video content is key.
A lot of updates from Google Marketing Live revolved around content creation, automation, and animation.
Essentially, Google seems “all-in” on video content—specifically YouTube Shorts.
If you don’t incorporate video content into your marketing strategy, you may fall behind your competitors and see lower engagement.
Some advertisers shy away from video content because it seems tedious or too complicated, or because they lack the resources.
With Google’s updates, anyone can start with video content from their phone. Google’s tools can turn images into video content.
Incorporating video into your Google Ads strategy can help you connect better with your target audience.
2. AI strategies are a must-have.
During the keynote, Google said, “AI is not a marketer.”
Photo taken by author, May 2024.
Basically, your jobs are safe right now, advertisers.
So, while AI is not a marketer, we marketers must have an AI strategy to keep up with these changes.
Trying to outsmart the Google Ads system or retaining too much control will prevent your campaigns from growing and scaling as they need to.
To have a successful AI strategy, think of a few of these main points:
Always ensure the goal and measurement setup is correct.
Set realistic bidding strategies so Google can learn from these inputs.
Send appropriate signals to Google within your campaign setup.
AI outputs are typically only as good as the inputs they get. That’s where you, the marketer, come in.
When used correctly and strategically, these new AI tools can help scale your workload, learn faster, and make more real-time decisions to optimize your campaigns.
You can find the full recap of Google Ads product announcements in the official article from Google.
Joe Pulizzi is the author of seven books on content marketing, host of two podcasts, and a popular speaker. He’s the founder of multiple content-marketing businesses, including The Tilt (website, newsletter, publishing services) and the annual Content Entrepreneur Expo.
His latest book, “The Content Entrepreneur,” is a 40-chapter guide from 33 seasoned content marketers. Topics include choosing a content niche, technology and platforms, audience-building, revenue channels, and long-term growth. The book is available in hardcover ($45), paperback ($25), and as an ebook ($10).
I asked Pulizzi about the book, the industry, authorship, and more.
Practical Ecommerce: What’s a content entrepreneur?
Joe Pulizzi: A content entrepreneur is a mini media company. This means first building an audience that knows, likes, and trusts you. What are the members’ pain points? What keeps them up at night? If you solve their challenges consistently, you’ll be a content entrepreneur in no time. Produce a compelling newsletter, podcast, or YouTube show. Then monetize.
It’s different from launching a product — the content is the product, but you can’t make money until you build the right audience.
An entrepreneur’s life has crazy ups and downs — tremendous successes and incredible failures on the same day.
Joe Pulizzi, publisher of “The Content Entrepreneur”
Pulizzi: My VIP group from Content Entrepreneur Expo wanted to collaborate on a project. We believed a resource was needed for content entrepreneurs. We met at CEX 2023 and assembled 33 authors to create the book, which we launched before CEX 2024 in early May.
Collaboration among 33 contributors is incredibly difficult. I wouldn’t recommend it. Luckily, we have a group of content pros who know their areas well. Most are authors in their own right. A fantastic editorial and project management team through Tilt Publishing made it happen.
PEC: You’ve now published seven books since 2008 on content marketing.
Pulizzi: The book business has changed. Most authors assume they must publish and sell through external channels. That’s no longer the case.
Affordable technology can create amazing book experiences without a traditional publisher and sell directly from authors’ websites. Why create an audience and send members elsewhere to buy the book? It makes no sense.
I created Tilt Publishing to help authors create excellent books and then sell them directly to subscribers, fans, and followers. It’s essential for long-term success.
Tilt Publishing is a hybrid — not a traditional publisher but not a vanity (custom) press, either. We assist with the backend editing, design, and distribution. Authors no longer need publishers for credibility.
Yet books remain vital. The data doesn’t lie. In 2023, almost 1 billion books were sold worldwide.
A nonfiction book is a content entrepreneur’s mission statement. What do I stand for? What do I believe?
Pulizzi: Innovative companies will continue focusing on building or buying their own audiences. Ads will never go away. They serve a purpose and still work. But why rent when you can own? To own, you need to tell a compelling story. If you do it well, you can dominate your industry. It’s not a bad place.
In the early 2010s, electricity seemed poised for a hostile takeover of your doctor’s office. Research into how the nervous system controls the immune response was gaining traction. And that had opened the door to the possibility of hacking into the body’s circuitry and thereby controlling a host of chronic diseases, including rheumatoid arthritis, asthma, and diabetes, as if the immune system were as reprogrammable as a computer.
To do that you’d need a new class of implant: an “electroceutical,” formally introduced in an article in Nature in 2013. “What we are doing is developing devices to replace drugs,” coauthor and neurosurgeon Kevin Tracey told Wired UK. These would become a “mainstay of medical treatment.” No more messy side effects. And no more guessing whether a drug would work differently for you and someone else.
There was money behind this vision: the British pharmaceutical giant GlaxoSmithKline announced a $1 million research prize, a $50 million venture fund, and an ambitious program to fund 40 researchers who would identify neural pathways that could control specific diseases. And the company had an aggressive timeline in mind. As one GlaxoSmithKline executive put it, the goal was to have “the first medicine that speaks the electrical language of our body ready for approval by the end of this decade.”
In the 10 years or so since, around a billion dollars has accreted around the effort by way of direct and indirect funding. Some implants developed in that electroceutical push have trickled into clinical trials, and two companies affiliated with GlaxoSmithKline and Tracey are ramping up for splashy announcements later this year. We don’t know much yet about how successful the trials now underway have been. But widespread regulatory approval of the sorts of devices envisioned in 2013—devices that could be applied to a broad range of chronic diseases—is not imminent. Electroceuticals are a long way from fomenting a revolution in medical care.
At the same time, a new area of science has begun to cohere around another way of using electricity to intervene in the body. Instead of focusing only on the nervous system—the highway that carries electrical messages between the brain and the body—a growing number of researchers are finding clever ways to electrically manipulate cells elsewhere in the body, such as skin and kidney cells, more directly than ever before. Their work suggests that this approach could match the early promise of electroceuticals, yielding fast-healing bioelectric bandages, novel approaches to treating autoimmune disorders, new ways of repairing nerve damage, and even better treatments for cancer. However, such ventures have not benefited from investment largesse. Investors tend to understand the relationship between biology and electricity only in the context of the nervous system. “These assumptions come from biases and blind spots that were baked in during 100 years of neuroscience,” says Michael Levin, a bioelectricity researcher at Tufts University.
Electrical implants have already had success in targeting specific problems like epilepsy, sleep apnea, and catastrophic bowel dysfunction. But the broader vision of replacing drugs with nerve-zapping devices, especially ones that alter the immune system, has been slower to materialize. In some cases, perhaps the nervous system is not the best way in. Looking beyond this singular locus of control might open the way for a wider suite of electromedical interventions—especially if the nervous system proves less amenable to hacking than originally advertised.
How it started
GSK’s ambitious electroceutical venture was a response to an increasingly onerous problem: 90% of drugs fall down during the obstacle race through clinical trials. A new drug that does squeak by can cost $2 billion or $3 billion and take 10 to 15 years to bring to market, a galling return on investment. The flaw is in the delivery system. The way we administer healing chemicals hasn’t had much of a conceptual overhaul since the Renaissance physician Paracelsus: ingest or inject. Both approaches have built-in inefficiencies: it takes a long time for the drugs to build up in your system, and they can disperse widely before arriving in diluted form at their target, which may make them useless where they are needed and toxic elsewhere. Tracey and Kristoffer Famm, a coauthor on the Nature article who was then a VP at GlaxoSmithKline, explained on the publicity circuit that electroceuticals would solve these problems—acting more quickly and working only in the precise spot where the intervention was needed. After 500 years, finally, here was a new idea.
Well … new-ish. Electrically stimulating the nervous system had racked up promising successes since the mid-20th century. For example, the symptoms of Parkinson’s disease had been treated via deep brain stimulation, and intractable pain via spinal stimulation. However, these interventions could not be undertaken lightly; the implants needed to be placed in the spine or the brain, a daunting prospect to entertain. In other words, this idea would never be a money spinner.
The vagus nerve runs from the brain through the body
WELLCOME COLLECTION
What got GSK excited was recent evidence that health could be more broadly controlled, and by nerves that were easier to access. By the dawn of the 21st century it had become clear you could tap the nervous system in a way that carried fewer risks and more rewards. That was because of findings suggesting that the peripheral nervous system—essentially, everything but the brain and spine—had much wider influence than previously believed.
The prevailing wisdom had long been that the peripheral nervous system had only one job: sensory awareness of the outside world. This information is ferried to the brain along many little neural tributaries that emerge from the extremities and organs, most of which converge into a single main avenue at the torso: the vagus nerve.
Starting in the 1990s, research by Linda Watkins, a neuroscientist leading a team at the University of Colorado, Boulder, suggested that this main superhighway of the peripheral nervous system was not a one-way street after all. Instead it seemed to carry message traffic in both directions, not just into the brain but from the brain back into all those organs. Furthermore, it appeared that this comms link allows the brain to exert some control over the immune system—for example, stoking a fever in response to an infection.
And unlike the brain or spinal cord, the vagus nerve is comparatively easy to access: its path to and from the brain stem runs close to the surface of the neck, along a big cable on either side. You could just pop an electrode on it—typically on the left branch—and get zapping.
Meddling with the flow of traffic up the vagus nerve in this way had successfully treated issues in the brain, specifically epilepsy and treatment-resistant depression (and electrical implants for those applications were approved by the FDA around the turn of the millennium). But the insights from Watkins’s team put the down direction in play.
It was Kevin Tracey who joined all these dots, after which it did not take long for him to become the public face of research on vagus nerve stimulation. During the 2000s, he showed that electrically stimulating the nerve calmed inflammation in animals. This “inflammatory reflex,” as he came to call it, implied that the vagus nerve could act as a switch capable of turning off a wide range of diseases, essentially hacking the immune system. In 2007, while based at what is now called the Feinstein Institutes for Medical Research, in New York, he spun his insights off into a Boston startup called SetPoint Medical. Its aim was to develop devices to flip this switch and bring relief, starting with inflammatory bowel disease and rheumatoid arthritis.
By 2012, a coordinated relationship had developed between GSK, Tracey, and US government agencies. Tracey says that Famm and others contacted him “to help them on that Nature article.” A year later the electroceuticals road map was ready to be presented to the public.
The story the researchers told about the future was elegant and simple. It was illustrated by a tale Tracey recounted frequently on the publicity circuit, of a first-in-human case study SetPoint had coordinated at the University of Amsterdam’s Academic Medical Center. That team had implanted a vagus nerve stimulator in a man suffering from rheumatoid arthritis. The stimulation triggered his spleen to release a chemical called acetylcholine. This in turn told the cells in the spleen to switch off production of inflammatory molecules called cytokines. For this man, the approach worked well enough to let him resume his job, play with his kids, and even take up his old hobbies. In fact, his overenthusiastic resumption of his former activities resulted in a sports injury, as Tracey delighted in recounting for reporters and conferences.
Such case studies opened the money spigot. The combination of a wider range of disease targets and less risky surgical targets was an investor’s love language. Where deep brain stimulation and other invasive implants had been limited to rare, obscure, and catastrophic problems, this new interface with the body promised many more customers: the chronic diseases now on the table are much more prevalent, including not only rheumatoid arthritis but diabetes, asthma, irritable bowel syndrome, lupus, and many other autoimmune disorders. GSK launched an investment arm it dubbed Action Potential Venture Capital Limited, with $50 million in the coffers to invest in the technologies and companies that would turn the futuristic vision of electroceuticals into reality. Its inaugural investment was a $5 million stake in SetPoint.
If you were superstitious, what happened next might have looked like an omen. The word “electroceutical” already belonged to someone else—a company called Ivivi Technologies had trademarked it in 2008. “I am fairly certain we sent them a letter soon after they started that campaign, to alert them of our trademark,” says Sean Hagberg, a cofounder and then chief science officer at the company. Today neither GSK nor SetPoint can officially call its tech “electroceuticals,” and both refer to the implants they are developing as “bioelectronic medicine.” However, this umbrella term encompasses a wide range of other interventions, some quite well established, including brain implants, spine implants, hypoglossal nerve stimulation for sleep apnea (which targets a motor nerve running through the vagus), and other peripheral-nervous-system implants, including those for people with severe gastric disorders.
Kevin Tracey has been one of the leading proponents of using electrical stimulation to target inflammation in the body.
MIKE DENORA VIA WIKIPEDIA
The next problem appeared in short order: how to target the correct nerve. The vagus nerve has roughly 100,000 fibers packed tightly within it, says Kip Ludwig, who was then with the US National Institutes of Health and now co-directs the Wisconsin Institute for Translational Neuroengineering at the University of Wisconsin, Madison. These myriad fibers connect to many different organs, including the larynx and lower airways, and electrical fields are not precise enough to hit a single one without hitting many of its neighbors (as Ludwig puts it, “electric fields [are] really promiscuous”).
This explains why a wholesale zap of the entire bundle had long been associated with unpredictable “on-target effects” and unpleasant “off-target effects,” which is another way of saying it didn’t always work and could carry side effects that ranged from the irritating, like a chronic cough, to the life-altering, including headaches and a shortness of breath that is better described as air hunger. Singling out the fibers that led to the particular organ you were after was hard for another reason, too: the existing maps of the human peripheral nervous system were old and quite limited. Such a low-resolution road map wouldn’t be sufficient to get a signal from the highway all the way to a destination.
In 2014, to remedy this and generally advance the field of peripheral nerve stimulation, the NIH announced a research initiative known as SPARC—Stimulating Peripheral Activity to Relieve Conditions—with the aim of pouring $248 million into research on new ways to exploit the nervous system’s electrical pathways for medicine. “My job,” says Gene Civillico, who managed the program until 2021, “was to do a program related to electroceuticals that used the NIH policy options that were available to us to try to make something catalytic happen.” The idea was to make neural anatomical maps and sort out the consequences of following various paths. After the organs were mapped, Civillico says, the next step was to figure out which nerve circuit would stimulate them, and settle on an access point—“And the access point should be the vagus nerve, because that’s where the most interest is.”
Two years later, as SPARC began to distribute its funds, companies moved forward with plans for the first generation of implants. GSK teamed up with Verily (formerly Google Life Sciences) on a $715 million research initiative they called Galvani Bioelectronics, with Famm at its helm as president. SetPoint, which had relocated to Valencia, California, moved to an expanded location, a campus that had once housed a secret Lockheed R&D facility.
How it’s going
Ten years after electroceuticals entered (and then quickly departed) the lexicon, the SPARC program has yielded important information about the electrical particulars of the peripheral nervous system. Its maps have illuminated nodes that are both surgically attractive and medically relevant. It has funded a global constellation of academic researchers. But its insights will be useful for the next generation of implants, not those in trials today.
Today’s implants, from SetPoint and Galvani, will be in the news later this year. Though SetPoint estimates that an extended study of its phase III clinical trial will conclude in 2027, the primary outcomes will be released this summer, says Ankit Shah, a marketing VP at SetPoint. And while Galvani’s trial will conclude in 2029, Famm says, the company is “coming to an exciting point” and will publish patient data later in 2024.
The results could be interpreted as a referendum on the two companies’ different approaches. Both devices treat rheumatoid arthritis, and both target the immune system via the peripheral nervous system, but that’s where the similarities end. SetPoint’s device uses a clamshell design that cuffs around the vagus nerve at the neck. It stimulates for just one minute, once per day. SetPoint representatives say they have never seen the sorts of side effects that have resulted from using such stimulators to treat epilepsy. And even if a patient did experience the side effects described by other researchers—vomiting and headaches among them—these might be tolerable if they lasted only a minute.
But why not avoid the vagus nerve entirely? Galvani is using a more precise implant that targets the “end organ” of the spleen. If the vagus nerve can be considered the main highway of the peripheral nervous system, an end organ is essentially a particular organ’s “driveway.” Galvani’s target is the point where the splenic nerve (having split off from a system connected to the vagus highway) meets the spleen.
To zero in on such a specific target, the company has sacrificed ease of access. Its implant, which is about the size of a house key, is laparoscopically injected into the body through the belly button. Famm says if this approach works for rheumatoid arthritis, then it will likely translate for all autoimmune disorders. Highlighting this clinical trial in 2022, he told Nature Reviews: “This is what makes the next 10 years exciting.”
The Galvani device and system targets the splenic nerve.
GALVANI VIA BUSINESSWIRE
Perhaps more so for researchers than for patients, however. Even as Galvani and SetPoint prepare talking points, other SPARC-funded groups are still pursuing research questions that suggest the best technological interface with the immune system is far from settled. At the moment, electroceuticals are in the spotlight, but they have a long way to go, says Vaughan Macefield, a neurophysiologist at Monash University in Australia, whose work is funded by a more recent $21 million SPARC grant: “It’s an elegant idea, [but] there are conflicting views.”
Macefield doesn’t think zapping the entire bundle is a good idea. Many researchers are working on ways to get more selective about which particular fibers of the vagus nerve they stimulate. Some are designing novel electrodes that will penetrate specific fibers rather than clamping around all of them. Others are trying to hit the vagus at deeper points in the abdomen. Indeed, some aren’t sure either electricity or an implant is a necessary ingredient of the “electroceutical.” Instead, they are pivoting from electrical stimulation to ultrasound.
The sheer range of these approaches makes it pretty clear that the electroceutical’s final form is still an open research question. Macefield says we still don’t know the nitty-gritty of how vagus nerve stimulation works.
However, Tracey thinks the variety of approaches being developed doesn’t undermine the merits of the basic idea. How tech companies will make this work in the clinic, he says, is a separate business and IP question: “Can you do it with focused ultrasound? Can you do it with a device implanted with abdominal surgery? Can you do it with a device implanted in the neck? Can you do it with a device implanted in the brain, even? All of these strategies are enabled by the idea of the inflammatory reflex.” Until clinical trial data is in, he says, there’s no point arguing about the best way to manipulate the mechanism—and if one approach fails to work, that is not a referendum on the validity of the inflammatory reflex.
After stepping down from SetPoint’s board to resume a purely consulting role in 2011, Tracey focused on his lab work at the Feinstein Institutes, which he directs, to deepen understanding of this pathway. The research there is wide-ranging. Several researchers under his remit are exploring a type of noninvasive, indirect manipulation called transcutaneous auricular vagus nerve stimulation, which stimulates the skin of the ear with a wearable device. Tracey says it’s a “malapropism” to call this approach vagus nerve stimulation. “It’s just an ear buzzer,” he says. It may stimulate a sensory branch of the vagus nerve, which may engage the inflammatory reflex. “But nobody knows,” he says. Nonetheless, several clinical trials are underway.
SetPoint’s device is cuffed around the vagus nerve within the neck of a patient.
SETPOINT MEDICAL
“These things take time,” Tracey says. “It is extremely difficult to invent and develop a completely revolutionary new thing in medicine. In the history of medicine, anything that was truly new and revolutionary takes between 20 and 40 years from the time it’s invented to the time it’s widely adopted.”
“As the discoverer of this pathway,” he says, “what I want to see is multiple therapies, helping millions of people.” This vision will hinge on bigger trials conducted over many more years. Such trials tend to be about as hard for devices as they are for drugs, and many results that look compelling in early rounds disappoint in later ones. It will be possible, says Ludwig, “for them to pass a short-duration FDA trial yet still really not be a major improvement over the drug solutions.” Even after FDA approval, should it come, yet more studies will be needed to determine whether the implants are subject to the same issues that plague drugs, including habituation.
This vision of electroceuticals seems to have placed about a billion eggs into the single basket of the peripheral nervous system. In some ways, this makes sense. After all, the received wisdom has it that these nervous signals are the only way to exert electrical control of the other cells in the body. Those other trillions—the skin cells, the immune cells, the stem cells—are beyond the reach of direct electrical intervention.
Except in the past 20 years it’s become abundantly clear that they are not.
Other cells speak electricity
At the end of the 19th century, the German physiologist Max Verworn watched as a single-celled marine creature was drawn across the surface of his slide as if captured by a tractor beam. It had been, in a way: under the influence of an electric field, it squidged over to the cathode (the pole that attracts positive charge). Many other types of cells could be coaxed to obey the directional wiles of an electric field, a phenomenon known as galvanotaxis.
But this was too weird for biology, and charlatans already occupied too much of the space in the Venn diagram where electricity met medicine. (The association was formalized in 1910 in the Flexner Report, commissioned to improve the dismal state of American medical schools, which sent electrical medicine into exile along with the likes of homeopathy.) Everyone politely forgot about galvanotaxis until the 1970s and ’80s, when the peculiar behavior resurfaced. Yeast, fungi, bacteria, you name it—they all liked a cathode. “We were pulling every kind of cell along on petri dishes with an electric field,” says Ann Rajnicek of the University of Aberdeen in Scotland, who was among the first group of researchers who tried to discover the mechanism when scientific interest reawakened.
Galvanotaxis would have raised few eyebrows if the behavior had been confined to neurons. Those cells have evolved receptors that sense electric fields; they are a fundamental aspect of the mechanism the nervous system uses to send its information. Indeed, the reason neurons are so amenable to electrical manipulation in the first place is that electric implants hijack a relatively predictable mechanism. Zap a nerve or a muscle and you are forcing it to “speak” a language in which it is already fluent.
Non-excitable cells such as those found in skin and bone don’t share these receptors, but it has become increasingly clear that they somehow still sense and respond to electric fields.
Why? We keep finding more reasons. Galvanotaxis, for example, is increasingly understood to play a crucial role in wound healing. In every species studied, injury to the skin produces an instant, internally generated electric field, and there’s overwhelming evidence that it guides patch-up cells to the center of the wound to start the rebuilding process. But galvanotaxis is not the only way these cells are led by electricity. During development, immature cells seem to sense the electric properties of their neighbors, which plays a role in their future identity—whether they become neurons, skin cells, fat cells, or bone cells.
Early experiments showed that paramecia on a wet plate will orient themselves in the direction of a cathode.
PUBLIC DOMAIN
Intriguing as this all was, no one had much luck turning such insights into medicine. Even attempts to go after the lowest-hanging fruit—by exploiting galvanotaxis for novel bandages—were for many years at best hit or miss. “When we’ve come upon wounds that are intractable, resistant, and will not heal, and we apply an electric field, only 50% or so of the cases actually show any effect,” says Anthony Guiseppi-Elie, a senior fellow with the American International Institute for Medical Sciences, Engineering, and Innovation.
However, in the past few years, researchers have found ways to make electrical stimulation outside the nervous system less of a coin toss.
That’s down to steady progress in our understanding of how exactly non-neural cells pick up on electric fields, which has helped calm anxieties around the mysticism and the Frankenstein associations that have attended biological responses to electricity.
The first big win came in 2006, with the identification of specific genes in skin cells that get turned on and off by electric fields. When skin is injured, the body’s native electric field orients cells toward the center of the wound, and the physiologist Min Zhao and his colleagues found important signaling pathways that are turned on by this field and mobilized to move cells toward this natural cathode. He also found associated receptors, and other scientists added to the catalogue of changes to genes and gene regulatory networks that get switched on and off under an electric field.
What has become clear since then is that there is no simple mechanism waiting at the end of the rainbow. “There isn’t one single master protein, as far as anybody knows, that regulates responses [to an electric field],” says Daniel Cohen, a bioengineer at Princeton University. “Every cell type has a different cocktail of stuff sticking out of it.”
But recent years have brought good news, in both experimental and applied science. First, the experimental platforms to investigate gene expression are in the middle of a transformation. One advance was unveiled last year by Sara Abasi, Guiseppi-Elie, and their colleagues at Texas A&M and the Houston Methodist Research Institute: their carefully designed research platform kept track of pertinent cellular gene expression profiles and how they change under electric fields—specifically, ones tuned to closely mimic what you find in biology. They found evidence for the activation of two proteins involved in tissue growth along with increased expression of a protein called CD-144, a specific version of what’s known as a cadherin. Cadherins are important physical structures that enable cells to stick to each other, acting like little handshakes between cells. They are crucial to the cells’ ability to act en masse instead of individually.
The other big improvement is in tools that can reveal just how cells work together in the presence of electric fields.
A different kind of electroceutical
A major limit on past experiments was that they tended to test the effects of electrical fields either on single cells or on whole animals. Neither is quite the right scale to offer useful insights, explains Cohen: measuring these dynamics in animals is too “messy,” but in single cells, the dynamics are too artificial to tell you much about how cells behave collectively as they heal a wound. That behavior emerges only at relevant scales, like bird flocks, schools of fish, or road traffic. “The math is identical to describe these types of collective dynamics,” he says.
In 2020, Cohen and his team came up with a solution: an experimental setup that strikes the balance between single cell (tells you next to nothing) and animal (tells you too many things at once). The device, called SCHEEPDOG, can reveal what is going on at the tissue level, which is the relevant scale for investigating wound healing.
It uses two sets of electrodes—a bit the way you might twiddle the dials on an Etch A Sketch—placed in a closed bioreactor, which better approximates how electric fields operate in biology. With this setup, Cohen and his colleagues can precisely tune the electrical environment of tens of thousands of cells at a time to influence their behavior.
In this time-lapse, SCHEEPDOG maneuvers epithelial cells with electric fields.
COHEN ET AL
Their subsequent “healing-on-a-chip” platform yielded an interesting discovery: skin cells’ response to an electric field depends on their maturity. The less mature, the easier they were to control.
The culprit? Those cadherins that Abasi and Guiseppi-Elie had also observed changing under electric fields. In mature cells, these little handshakes had become so strong that a competing electric field, instead of gently guiding the cells, caused them to rip apart. The immature skin cells followed the electric field’s directions without complaint.
After they found a way to dial down the cadherins with an antibody drug, all the cells synchronized. For Cohen, the lesson was that it’s more important to look at the system, and the collective dynamics that govern a behavior like wound healing, than at what is happening in any single cell. “This is really important because many clinical attempts at using electrical stimulation to accelerate wound healing have failed,” says Guiseppi-Elie, and it had never become clear why some worked and others didn’t.
Cohen’s team is now working to translate these findings into next-generation bioelectric plasters. They are far from alone, and the payoff is more than skin deep. A lot of work is going on, some of it open and some behind closed doors with patents being closely guarded, says Cohen.
At Stanford, the University of Arizona, and Northwestern, researchers are creating smart electric bandages that can be implanted under the skin. They can also monitor the state of the wound in real time, increasing the stimulation if healing is too slow. More challenging, says Rajnicek, are ways to interface with less accessible areas of the body. However, here too new tools are revealing intriguing creative solutions.
Electric fields don’t have to directly change cells’ gene expression to be useful. There is another way their application can be turned to medical benefit. Electric fields evoke reactive oxygen species (ROS) in biological cells. Normally, these charged molecules are a by-product of a cell’s everyday metabolic activities. If you induce them purposefully using an external DC current, however, they can be hijacked to do your bidding.
Starting in 2020, the Swiss bioengineer Martin Fussenegger and an international team of collaborators began to publish investigations into using this mechanism to power gene expression. He and his team engineered human kidney cells to be hypersensitive to induced ROS, in quantities too minute for normal cells to detect. When these were generated by DC electrodes, the engineered kidney cells could sense them just fine.
Using this approach, in 2023 they were able to create a tiny, wearable insulin factory. The designer kidney cells were built with a synthetic promoter—an engineered sequence of DNA that can drive expression of a target gene—which reacted to those faint induced ROS by activating a cascade of genetic changes that opened a tap for insulin production on demand.
Then they packaged this electrogenetic contraption into a wearable device that worked for a month in a living mouse, which had been engineered to be diabetic (Fussenegger says that “others have shown that implanted designer cells can generally be active for over a year”). The designer cells in the wearable are kept alive by algae gelatine but are fed by the mouse’s own vascular system, permitting the exchange of nutrients and protein. The cells can’t get out, but the insulin they secrete can, seeping straight into the mouse’s bloodstream. Ten seconds a day of electrical stimulation delivered via needles connected to three AAA batteries was enough to make the implant perform like a pancreas, returning the mouse’s blood sugar to nondiabetic levels. Given how easy it would be to generalize the mechanism, Fussenegger says, there’s no reason insulin should be the only drug such a device can generate. He is quick to stress that this wearable device is very much in the proof-of-concept stage, but others outside the team are excited about its potential. It could provide a more direct electrical alternative to the solution electroceuticals promised for diabetes.
Escaping neurochauvinism
Before the concerted push around branding electroceuticals, efforts to tap the peripheral nervous system were fragmented and did not share much data. Today, thanks to SPARC, which is winding down, data-sharing resources have been centralized. And money, both direct and indirect, for the electroceuticals project has been lavish. Therapies—especially vagus nerve stimulation—have been the subject of “a steady increase in funding and interest,” says Imran Eba, a partner at GSK’s bioelectronics investment arm Action Potential Venture Capital. Eba estimates that the initial GSK seed of $50 million at Action Potential has grown to about $200 million in assets under management.
Whether you call it bioelectronic medicine or electroceuticals, some researchers would like to see the definition take on a broader remit. “It’s been an extremely neurocentric approach,” says Daniel Cohen.
Neurostimulation has not yet shown success against cancer. Other forms of electrical stimulation, however, have proved surprisingly effective. In one study on glioblastoma, tumor-treating fields offered an electrical version of chemotherapy: an electric field blasts a brain tumor, preferentially killing only cells whose electrical identity marks them as dividing (which cancer cells do, pathologically—but neurons, being fully differentiated, do not). A study recently published in The Lancet Oncology suggests that these fields could also work in lung cancer to boost existing drugs and extend survival.
All of this points to more sophisticated interventions than a zap to a nerve. “The complex things that we need to do in medicine will be about communicating with the collective decision-making and problem-solving of the cells,” says Michael Levin. He has been working to repurpose already-approved drugs so they can be used to target the electrical communication between cells. In a funny twist, he has taken to calling these drugs electroceuticals, which has ruffled some feathers. But he would certainly find support from researchers like Cohen. “I would describe electroceuticals much more broadly as anything that manipulates cellular electrophysiology,” Cohen says.
Even interventions with the nervous system could be helped by expanding our understanding of the ways nerve cells react to electricity beyond action potentials. Kim Gokoffski, a professor of clinical ophthalmology at the University of Southern California, is working with galvanotaxis as a possible means of repairing damage to the optic nerve. In prior experiments that involve regrowing axons—the cables that carry messages out of neurons—these new nerve fibers tend to miss the target they’re meant to rejoin. Existing approaches “are all pushing the gas pedal,” she says, “but no one is controlling the steering wheel.” So her group uses electric fields to guide the regenerating axons into position. In rodent trials, this has worked well enough to partially restore sight.
And yet, Cohen says, “there’s massive social stigma around this that is significantly hampering the entire field.” That stigma has dramatically shaped research direction and funding. For Gokoffski, it has led to difficulties with publishing. She also recounts hearing a senior NIH official refer to her lab’s work on reconnecting optic nerves as “New Age–y.” It was a nasty surprise: “New Age–y has a very bad connotation.”
However, there are signs of more support for work outside the neurocentric model of bioelectric medicine. The US Defense Department funds projects in electrical wound healing (including Gokoffski’s). Action Potential’s original remit—confined to targeting peripheral nerves with electrical stimulation—has expanded. “We have a broader approach now, where energy (in any form, be it electric, electromagnetic, or acoustic) can be directed to regulate neuronal or other cellular activities in the body,” Eba wrote in an email. Three of the companies now in their portfolio focus on areas outside neurostimulation. “While we don’t have any investments targeting wound healing or regenerative medicine specifically, there is no explicit exclusion here for us,” he says.
This suggests that the “social stigma” Cohen described around electrical medicine outside the nervous system is slowly beginning to abate. But if such projects are to really flourish, the field needs to be supported, not just tolerated—perhaps with its own road map and dedicated NIH program. Whether or not bioelectric medicine ends up following anything like the original electroceuticals road map, SPARC ensured a flourishing research community, one that is in hot pursuit of promising alternatives.
The use of electricity outside the nervous system needs a SPARC program of its own. But if history is any guide, first it needs a catchy name. It can’t be “electroceuticals.” And the researchers should definitely check the trademark listings before rolling it out.
This article is from The Spark, MIT Technology Review’s weekly climate newsletter.
SUVs are taking over the world—larger vehicle models made up nearly half of new car sales globally in 2023, a new record for the segment.
There are a lot of reasons to be nervous about the ever-expanding footprint of vehicles, from pedestrian safety and road maintenance concerns to higher greenhouse-gas emissions. But in a way, SUVs also represent a massive opportunity for climate action, since pulling the worst gas-guzzlers off the roads and replacing them with electric versions could be a big step in cutting pollution.
It’s clear that we’re heading toward a future with bigger cars. Here’s what it might mean for the climate, and for our future on the road.
SUVs accounted for 48% of global car sales in 2023, according to a new analysis from the International Energy Agency. This is a continuation of a trend toward bigger cars—just a decade ago, SUVs only made up about 20% of new vehicle sales.
Big vehicles mean big emissions numbers. Last year there were more than 360 million SUVs on the roads, and they produced a billion metric tons of carbon dioxide. If SUVs were a country, they’d have the fifth-highest emissions of any nation on the planet—more than Japan. Of all the energy-related emissions growth last year, over 20% can be attributed to SUVs.
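The per-vehicle rate implied by those IEA figures is easy to check. A quick back-of-envelope sketch, using only the fleet size and emissions total cited above:

```python
# Back-of-envelope check of the IEA figures cited above.
# Both inputs are the article's numbers, not new data.
suv_fleet = 360e6        # SUVs on the road worldwide in 2023
total_co2_tons = 1e9     # metric tons of CO2 they emitted that year

per_suv_tons = total_co2_tons / suv_fleet
print(f"~{per_suv_tons:.1f} t CO2 per SUV per year")  # ~2.8
```

Roughly 2.8 metric tons of carbon dioxide per vehicle per year, which is why electrifying this segment in particular moves the needle.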
There are several factors driving the world’s move toward larger vehicles. Larger cars tend to have higher profit margins, so companies may be more likely to make and push those models. And drivers are willing to jump on the bandwagon. I understand the appeal—I learned to drive in a huge SUV, and being able to stretch out my legs and float several feet above traffic has its perks.
Electric vehicles are very much following the trend, with several companies unveiling larger models in the past few years. Some of these newly released electric SUVs are seeing massive success. The Tesla Model Y, released in 2020, was far and away the most popular EV last year, with over 1.2 million units sold in 2023. The BYD Song (also an SUV) took second place with 630,000 sold.
Globally, SUVs made up nearly 50% of new EV sales in 2023, compared to just under 20% in 2018, according to the IEA’s Global EV Outlook 2024. There’s also been a shift away from small cars (think the size of the Fiat 500) and toward large ones (similar to the BMW 7-series).
And big-car obsession is a global phenomenon. The US is the land of the free and the home of the massive vehicle—SUVs made up 65% of new electric-vehicle sales in the country in 2023. But other major markets aren’t all that far behind: in Europe, the share was 52%, and in China, it was 36%.
So it’s clear that we’re clamoring for bigger cars. Now what?
One way of looking at this whole thing is that SUVs offer up an incredible opportunity for climate action. EVs will reduce emissions over their life span relative to gas-powered versions of the same model, so electrifying the biggest emitters on the roads would have an outsize impact. If all gas-powered and hybrid SUVs sold in 2023 were instead electric vehicles, about 770 million metric tons of carbon dioxide would be avoided over the lifetime of those vehicles, according to the IEA report. That’s equivalent to all of China’s road emissions last year.
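For scale, the 770-million-ton figure can be related back to the fleet numbers earlier in this piece. Only the 770 Mt total comes from the IEA report; the per-vehicle rate is derived from the fleet figures above, and the vehicle lifetime is an illustrative assumption, not an IEA input:

```python
# Rough scale of the avoided-emissions claim. Only the 770 Mt figure
# is the IEA's; the per-vehicle rate and lifetime are illustrative.
avoided_total_tons = 770e6   # lifetime CO2 avoided if 2023 SUV sales were electric
per_suv_tons_year = 2.8      # ~1 Gt / 360 M SUVs, from the fleet figures above
lifetime_years = 15          # assumed average vehicle lifetime (illustrative)

implied_fleet = avoided_total_tons / (per_suv_tons_year * lifetime_years)
print(f"~{implied_fleet / 1e6:.0f} million vehicles' lifetime tailpipe emissions")
```

Under these assumptions, the savings amount to the lifetime tailpipe emissions of around 18 million conventional SUVs—the same order of magnitude as a single year’s sales, once partial rather than total per-vehicle savings are accounted for.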
I previously wrote a somewhat hesitant defense of large EVs for this reason—electric SUVs aren’t perfect, but they could still help us address climate change. If some drivers are willing to buy an EV but aren’t willing to downsize their cars, then having larger electric options available could be a huge lever for climate action.
But there are several very legitimate reasons why not everyone is welcoming the future of massive cars (even electric ones) with open arms. Larger vehicles are harder on roads, making upkeep more expensive. SUVs and other big vehicles are way more dangerous for pedestrians, too. Vehicles with higher front ends and blunter profiles are 45% more likely to cause fatalities in crashes with pedestrians.
Bigger EVs could also have a huge effect on the amount of mining we’ll need to do to meet demand for metals like lithium, nickel, and cobalt. One 2023 study found that larger vehicles could increase the amount of mining needed by more than 50% by 2050, relative to the amount that would be necessary if people drove smaller vehicles. Given that mining is energy intensive and can come with significant environmental harms, it’s not an unreasonable worry.
New technologies could help reduce the mining we need to do for some materials: LFP batteries that don’t contain nickel or cobalt are quickly growing in market share, especially in China, and they could help reduce demand for those metals.
Another potential solution is reducing the demand for bigger cars in the first place. Policies have historically had a hand in pushing people toward larger cars and could help us make a U-turn on car bloat. Some countries, including Norway and France, now charge more in taxes or registration for larger vehicles. Paris recently jacked up parking rates for SUVs.
For now, our vehicles are growing, and if we’re going to have SUVs on the roads, then we should have electric options. But bigger isn’t always better.
Now read the rest of The Spark
Related reading
I’ve defended big EVs in the past—SUVs come with challenges, but electric ones are hands-down better for emissions than gas-guzzlers. Read this 2023 newsletter for more.
The average size of batteries in EVs has steadily ticked up in recent years, as I touched on in this newsletter from last year.
Electric cars are still cars, and smaller, safer EVs, along with more transit options, will be key to hitting our climate goals, Paris Marx argued in this 2022 op-ed.
Keeping up with climate
We might be underestimating how much power transmission lines can carry. Sensors can give grid operators a better sense of capacity based on factors like temperature and wind speed, which could help projects hook up to the grid faster. (Canary Media)
North America could be in for an active fire season, though it’s likely not going to rise to the level of 2023. (New Scientist)
Climate change is making some types of turbulence more common, and that could spell trouble for flying. Studying how birds move might provide clues about dangerous spots. (BBC)
The perceived slowdown for EVs in the US is looking more like a temporary blip than an ongoing catastrophe. Tesla is something of an outlier with its recent slump—most automakers saw greater than 50% growth in the first quarter of this year. (Bloomberg)
This visualization shows just how dominant China is in the EV supply chain, from mining materials like graphite to manufacturing battery cells. (Cipher News)
Climate change is coming for our summer oysters. The varieties that have been bred to be eaten year-round are sensitive to extreme heat, making their future rocky. (The Atlantic)
The US has new federal guidelines for carbon offsets. It’s an effort to fix up an industry that studies and reports have consistently shown doesn’t work very well. (New York Times)
The most stubborn myth about heat pumps is that they don’t work in cold weather. Heat pumps are actually more efficient than gas furnaces in cold conditions. (Wired)
This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.
Here in the US, bird flu has now infected cows in nine states, millions of chickens, and—as of last week—a second dairy worker. There’s no indication that the virus has acquired the mutations it would need to jump between humans, but the possibility of another pandemic has health officials on high alert. Last week, they said they are working to get 4.8 million doses of H5N1 bird flu vaccine packaged into vials as a precautionary measure.
The good news is that we’re far more prepared for a bird flu outbreak than we were for covid. We know so much more about influenza than we did about coronaviruses. And we already have hundreds of thousands of doses of a bird flu vaccine sitting in the nation’s stockpile.
The bad news is we would need more than 600 million doses to cover everyone in the US, at two shots per person. And the process we typically use to produce flu vaccines takes months and relies on massive quantities of chicken eggs. Yes, chickens. One of the birds that’s susceptible to avian flu. (Talk about putting all our eggs in one basket. #sorrynotsorry)
This week in The Checkup, let’s look at why we still use a cumbersome, 80-year-old vaccine production process to make flu vaccines—and how we can speed it up.
The idea to grow flu virus in fertilized chicken eggs originated with Frank Macfarlane Burnet, an Australian virologist. In 1936, he discovered that if he bored a tiny hole in the shell of a chicken egg and injected flu virus between the shell and the inner membrane, he could get the virus to replicate.
Even now, we still grow flu virus in much the same way. “I think a lot of it has to do with the infrastructure that’s already there,” says Scott Hensley, an immunologist at the University of Pennsylvania’s Perelman School of Medicine. It’s difficult for companies to pivot.
The process works like this: Health officials provide vaccine manufacturers with a candidate vaccine virus that matches circulating flu strains. That virus is injected into fertilized chicken eggs, where it replicates for several days. The virus is then harvested, killed (for most use cases), purified, and packaged.
Making flu vaccine in eggs has a couple of major drawbacks. For a start, the virus doesn’t always grow well in eggs. So the first step in vaccine development is creating a virus that does. That happens through an adaptation process that can take weeks or even months. This process is particularly tricky for bird flu: Viruses like H5N1 are deadly to birds, so the virus might end up killing the embryo before the egg can produce much virus. To avoid this, scientists have to develop a weakened version of the virus by combining genes from the bird flu virus with genes typically used to produce seasonal flu virus vaccines.
And then there’s the problem of securing enough chickens and eggs. Right now, many egg-based production lines are focused on producing vaccines for seasonal flu. They could switch over to bird flu, but “we don’t have the capacity to do both,” Amesh Adalja, an infectious disease specialist at Johns Hopkins University, told KFF Health News. The US government is so worried about its egg supply that it keeps secret, heavily guarded flocks of chickens peppered throughout the country.
Most of the flu virus used in vaccines is grown in eggs, but there are alternatives. The seasonal flu vaccine Flucelvax, produced by CSL Seqirus, is grown in a cell line derived in the 1950s from the kidney of a cocker spaniel. The virus used in the seasonal flu vaccine FluBlok, made by Protein Sciences, isn’t grown; it’s synthesized. Scientists engineer an insect virus to carry the gene for hemagglutinin, a key component of the flu virus that triggers the human immune system to create antibodies against it. That engineered virus turns insect cells into tiny hemagglutinin production plants.
And then we have mRNA vaccines, which wouldn’t require vaccine manufacturers to grow any virus at all. There aren’t yet any approved mRNA vaccines for influenza, but many companies are fervently working on them, including Pfizer, Moderna, Sanofi, and GSK. “With the covid vaccines and the infrastructure that’s been built for covid, we now have the capacity to ramp up production of mRNA vaccines very quickly,” says Hensley. This week, the Financial Times reported that the US government will soon close a deal with Moderna to provide tens of millions of dollars to fund a large clinical trial of a bird flu vaccine the company is developing.
There are hints that egg-free vaccines might work better than egg-based vaccines. A CDC study published in January showed that people who received Flucelvax or FluBlok had more robust antibody responses than those who received egg-based flu vaccines. That may be because viruses grown in eggs sometimes acquire mutations that help them grow better in eggs. Those mutations can change the virus so much that the immune response generated by the vaccine doesn’t work as well against the actual flu virus that’s circulating in the population.
Hensley and his colleagues are developing an mRNA vaccine against bird flu. So far they’ve only tested it in animals, but the shot performed well, he claims. “All of our preclinical studies in animals show that these vaccines elicit a much stronger antibody response compared with conventional flu vaccines.”
No one can predict when we might need a pandemic flu vaccine. But just because bird flu hasn’t made the jump to a pandemic doesn’t mean it won’t. “The cattle situation makes me worried,” Hensley says. Humans are in constant contact with cows, he explains. While there have only been a couple of human cases so far, “the fear is that some of those exposures will spark a fire.” Let’s make sure we can extinguish it quickly.
I don’t have to tell you that mRNA vaccines are a big deal. In 2021, MIT Technology Review highlighted them as one of the year’s 10 breakthrough technologies. Antonio Regalado explored their massive potential to transform medicine. Jessica Hamzelou wrote about the other diseases researchers are hoping to tackle. I followed up with a story after two mRNA researchers won a Nobel Prize. And earlier this year I wrote about a new kind of mRNA vaccine that’s self-amplifying, meaning it not only works at lower doses, but also sticks around for longer in the body.
From around the web
Researchers installed a literal window into the brain, allowing for ultrasound imaging that they hope will be a step toward less invasive brain-computer interfaces. (Stat)
People who carry antibodies against the common viruses used to deliver gene therapies can mount a dangerous immune response if they’re re-exposed. That means many people are ineligible for these therapies and others can’t get a second dose. Now researchers are hunting for a solution. (Nature)
More good news about Ozempic. A new study shows that the drug can cut the risk of kidney complications, including death, in people with diabetes and chronic kidney disease. (NYT)
Must read: This story, the second in a series on the denial of reproductive autonomy for people with sickle-cell disease, examines how the US medical system undermines a woman’s right to choose. (Stat)
When Google announced it was rolling out its artificial-intelligence-powered search feature earlier this month, the company promised that “Google will do the googling for you.” The new feature, called AI Overviews, provides brief, AI-generated summaries highlighting key information and links on top of search results.
Unfortunately, AI systems are inherently unreliable. Within days of AI Overviews’ release in the US, users were sharing examples of responses that were strange at best. It suggested that users add glue to pizza or eat at least one small rock a day, and that former US president Andrew Johnson earned university degrees between 1947 and 2012, despite dying in 1875.
On Thursday, Liz Reid, head of Google Search, announced that the company has been making technical improvements to the system to make it less likely to generate incorrect answers, including better detection mechanisms for nonsensical queries. It is also limiting the inclusion of satirical, humorous, and user-generated content in responses, since such material could result in misleading advice.
But why is AI Overviews returning unreliable, potentially dangerous information? And what, if anything, can be done to fix it?
How does AI Overviews work?
In order to understand why AI-powered search engines get things wrong, we need to look at how they’ve been optimized to work. We know that AI Overviews uses a new generative AI model in Gemini, Google’s family of large language models (LLMs), that’s been customized for Google Search. That model has been integrated with Google’s core web ranking systems and designed to pull out relevant results from its index of websites.
Most LLMs simply predict the next word (or token) in a sequence, which makes them appear fluent but also leaves them prone to making things up. They have no ground truth to rely on, but instead choose each word purely on the basis of a statistical calculation. That leads to hallucinations. It’s likely that the Gemini model in AI Overviews gets around this by using an AI technique called retrieval-augmented generation (RAG), which allows an LLM to check specific sources outside of the data it’s been trained on, such as certain web pages, says Chirag Shah, a professor at the University of Washington who specializes in online search.
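To see why pure next-word prediction has no ground truth, consider a toy sketch. This is an invented illustration, not Gemini: a bigram model that picks whichever word most often followed the previous one in its (made-up) training text, with nothing checking whether the continuation is true.

```python
from collections import Counter, defaultdict

# Toy illustration (not Google's Gemini): a bigram model chooses the next
# word purely from co-occurrence counts in its training text. Nothing in
# this process checks whether the continuation is factually correct.
training_text = (
    "the cheese sticks to the pizza "
    "the cheese sticks to the bread "
    "the cheese melts on the pizza "
    "the glue sticks to the paper"
)

# Count how often each word follows each other word.
counts = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word."""
    return counts[word].most_common(1)[0][0]

print(predict_next("cheese"))  # prints "sticks": picked by frequency, not truth
```

Real LLMs work over vastly larger contexts and vocabularies, but the underlying move is the same: each word is chosen by statistics, which is why fluency and accuracy can come apart.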
Once a user enters a query, it’s checked against the documents that make up the system’s information sources, and a response is generated. Because the system is able to match the original query to specific parts of web pages, it’s able to cite where it drew its answer from—something normal LLMs cannot do.
One major upside of RAG is that the responses it generates to a user’s queries should be more up to date, more factually accurate, and more relevant than those from a typical model that just generates an answer based on its training data. The technique is often used to try to prevent LLMs from hallucinating. (A Google spokesperson would not confirm whether AI Overviews uses RAG.)
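The retrieve-then-generate loop described above can be sketched in a few lines. Everything here is a hypothetical stand-in: the URLs and documents are invented, and a crude word-overlap scorer plays the role of the learned retrieval and LLM generation a real system would use.

```python
# Hypothetical sketch of retrieval-augmented generation (RAG). The URLs,
# documents, and word-overlap scorer are invented stand-ins; a production
# system would use learned embeddings for retrieval and an LLM to generate.
documents = {
    "https://example.com/pizza-tips":
        "Cheese does not stick to pizza when the sauce is too oily",
    "https://example.com/joke-post":
        "Add glue to the sauce so the cheese sticks",
}

def retrieve(query, docs):
    """Retrieval step: rank sources by crude word overlap with the query."""
    q = set(query.lower().split())
    return max(docs, key=lambda url: len(q & set(docs[url].lower().split())))

def generate(query, source_url, docs):
    """Generation step: compose an answer that cites the retrieved source."""
    return f"{docs[source_url]} (source: {source_url})"

query = "why does cheese not stick to pizza"
best = retrieve(query, documents)
print(generate(query, best, documents))
```

Note that the citation comes for free: because the answer is grounded in a specific retrieved document, the system knows exactly which source to point to. It also shows the weak link: if the scorer had ranked the joke post highest, the generation step would have repeated it just as confidently.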
So why does it return bad answers?
But RAG is far from foolproof. In order for an LLM using RAG to come up with a good answer, it has to both retrieve the information correctly and generate the response correctly. A bad answer results when one or both parts of the process fail.
In the case of AI Overviews’ recommendation of a pizza recipe that contains glue—drawing from a joke post on Reddit—it’s likely that the post appeared relevant to the user’s original query about cheese not sticking to pizza, but something went wrong in the retrieval process, says Shah. “Just because it’s relevant doesn’t mean it’s right, and the generation part of the process doesn’t question that,” he says.
Similarly, if a RAG system comes across conflicting information, like a policy handbook and an updated version of the same handbook, it’s unable to work out which version to draw its response from. Instead, it may combine information from both to create a potentially misleading answer.
“The large language model generates fluent language based on the provided sources, but fluent language is not the same as correct information,” says Suzan Verberne, a professor at Leiden University who specializes in natural-language processing.
The more specific a topic is, the higher the chance of misinformation in a large language model’s output, she says, adding: “This is a problem in the medical domain, but also education and science.”
According to the Google spokesperson, in many cases when AI Overviews returns incorrect answers it’s because there’s not a lot of high-quality information available on the web to show for the query—or because the query most closely matches satirical sites or joke posts.
The spokesperson says the vast majority of AI Overviews provide high-quality information and that many of the examples of bad answers were in response to uncommon queries, adding that AI Overviews containing potentially harmful, obscene, or otherwise unacceptable content came up in response to less than one in every 7 million unique queries. Google is continuing to remove AI Overviews on certain queries in accordance with its content policies.
It’s not just about bad training data
Although the pizza glue blunder is a good example of a case where AI Overviews pointed to an unreliable source, the system can also generate misinformation from factually correct sources. Melanie Mitchell, an artificial-intelligence researcher at the Santa Fe Institute in New Mexico, googled “How many Muslim presidents has the US had?” AI Overviews responded: “The United States has had one Muslim president, Barack Hussein Obama.”
While Barack Obama is not Muslim, making AI Overviews’ response wrong, it drew its information from a chapter in an academic book titled Barack Hussein Obama: America’s First Muslim President? So not only did the AI system miss the entire point of the essay, it interpreted it in exactly the opposite of the intended way, says Mitchell. “There’s a few problems here for the AI; one is finding a good source that’s not a joke, but another is interpreting what the source is saying correctly,” she adds. “This is something that AI systems have trouble doing, and it’s important to note that even when it does get a good source, it can still make errors.”
Can the problem be fixed?
Ultimately, we know that AI systems are unreliable, and so long as they are using probability to generate text word by word, hallucination is always going to be a risk. And while AI Overviews is likely to improve as Google tweaks it behind the scenes, we can never be certain it’ll be 100% accurate.
Google has said that it’s adding triggering restrictions for queries where AI Overviews were not proving to be especially helpful and has added additional “triggering refinements” for queries related to health. The company could add a step to the information retrieval process designed to flag a risky query and have the system refuse to generate an answer in these instances, says Verberne. Google doesn’t aim to show AI Overviews for explicit or dangerous topics, or for queries that indicate a vulnerable situation, the company spokesperson says.
Techniques like reinforcement learning from human feedback, which incorporates such feedback into an LLM’s training, can also help improve the quality of its answers.
Similarly, LLMs could be trained specifically for the task of identifying when a question cannot be answered, and it could also be useful to instruct them to carefully assess the quality of a retrieved document before generating an answer, Verberne says: “Proper instruction helps a lot!”
Although Google has added a label to AI Overviews answers reading “Generative AI is experimental,” it should consider making it much clearer that the feature is in beta and emphasizing that it is not ready to provide fully reliable answers, says Shah. “Until it’s no longer beta—which it currently definitely is, and will be for some time—it should be completely optional. It should not be forced on us as part of core search.”
For years Brandon Eley’s online shoe company, 2BigFeet, prospered by reselling prominent brands. But the profit margins slowly narrowed, as did the sources for large footwear sizes, Eley’s niche.
The solution is Michael Ellis Footwear, Eley’s direct-to-consumer brand, launched in 2021. “We finally decided to take matters into our own hands,” he told me.
In this second appearance on the podcast, he and I discussed the evolution of 2BigFeet, the launch of Michael Ellis, custom manufacturing, and more. Our entire audio is embedded below. The transcript is edited for clarity and length.
Eric Bandholz: Tell us about your business.
Brandon Eley: I started a company 24 years ago that sells shoes to guys with big feet, up to size 21. It’s called 2BigFeet. Our most popular sizes are 16, 17, and 18.
We added Michael Ellis Footwear, our own shoe brand, in 2021. We have been selling roughly 40 other brands for years. It’s always been a struggle to find items that our customers want. We beg and plead with manufacturers, these big national, international brands, multi-billion dollar companies. But they will not invest in extreme sizes and widths for the popular style. After years of begging, we finally decided to take matters into our own hands.
Bandholz: Are custom shoes an option for extreme sizes?
Eley: The cheapest custom shoes we’ve seen are around $600 a pair. They’re handmade leather soles, hand-stitched with hand-cut leathers. The manufacturers make cardboard templates and cut the leather by hand. It takes a long time, many months. They’re ugly shoes, not anything a young guy would want to wear.
Parents call us desperate to find shoes for their kids. Clothing is a problem if you’re 6’8″, 300 pounds, with an enormous foot. It’s hard on a lot of these kids going to school. We empathize with them. Those sizes — 21 and higher — are not huge moneymakers, but we want to say we’ve got shoes for everybody.
Bandholz: So anyone sized 21 and up has a limited choice.
Eley: Yes. To my knowledge, Bogs is the only company besides us that makes a 21. The rumor is it’s for Shaquille O’Neal, the basketball player. He’s reportedly a 23, but that’s in Nike, and they’re not true-to-size. Shaq, or somebody like Shaq, wanted a pair of hunting boots, and Bogs made them. They already went to 18 and now have sizes 19, 20, and 21. But that’s it. And it’s only in medium width.
We make ours in four widths: medium, wide, extra wide, and extra-extra wide. And then each whole size up to 25.
Bandholz: How do you know they won’t sit in inventory, unsold?
Eley: We order only a few pairs of each color, size, and width. We know the customers who will buy them. It takes a while to manufacture those units, and then they sit on a boat for a month and a half. We don’t want to be out of stock for long, but we’re also not going to order a dozen pairs of each.
It’s a small market. There’s a good chance we’ll never recoup our initial mold costs on sizes 22, 23, 24, and 25. Again, that’s not the goal. We want to be known as the company that has shoes for everybody with big feet.
Bandholz: You’ve found a niche.
Eley: We fell into it. My former business partner has big feet. It was a struggle for him, but that was the idea for our business. Building a successful footwear company targeting everybody, all sizes, would be much more difficult. There are thousands of those brands. Many are venture-backed and spend millions of dollars on advertising.
We’ve worked at our Michael Ellis brand for several years. It’s a slow and expensive process, and we’ve invested a lot of money. It’s a risky proposition, but we saw the margins shrinking in reselling name-brand products. We’ve been in business for 24 years. At about 18 years, making a decent profit with pure retail started getting harder. We’re a small family-owned business with fewer than 10 employees.
Bandholz: What are the lessons thus far with an in-house brand?
Eley: We started our manufacturing journey in India. We had intermediaries between us and the factory, not knowing what we didn’t know. We skipped a few steps in quality control and then tossed a good portion of the first container of merchandise. The quality was bad, and the sizing was off on at least a third of the pairs. And then there were the delays from the back-and-forth on creating samples, testing sizes, and, again, quality control.
We missed much of our fourth-quarter sales last year with our sneaker launch. The container got here on Cyber Monday. It easily cost us $150,000 in revenue. We still made money on them, but it will take time to turn that inventory because the first half of the year is much slower. That means less cash flow going into other styles.
We quickly learned the importance of personal relationships at the factories. The factories we work with now, in China and Brazil, are second- and third-generation owners, and the founders, their children, and their grandchildren are active in the business. They employ highly skilled workers — craftsmen and artisans.
As Google rolls out its new AI Overviews, SEO professionals and website owners must adapt to these changes to maintain and improve their search rankings. Google’s AI Overview, powered by the advanced Gemini model, integrates generative AI into the SERP. It provides users with summarized answers to complex queries at the top — well above the organic results. Google intends to improve user experience by offering quick, comprehensive answers; however, this also changes how site owners must prepare for and interact with these AI Overviews.
Understanding Google’s AI Overviews
Google’s AI Overviews are designed to summarize information from multiple web pages to answer user queries more efficiently. These overviews appear at the top of search results. They include links to the source websites, potentially driving more traffic to diverse sites. According to reports, users tend to click on links within AI Overviews more frequently than on traditional search listings. This suggests an opportunity for increased visibility for well-optimized content. AI Overviews are widely available in the US and will gradually roll out worldwide.
Steps to prepare for Google’s AI Overview rollout
Enhance content quality:
Focus on creating high-quality, comprehensive content that addresses user queries in depth. Google’s AI will pull from various sources, so ensure your content is detailed and authoritative. Your ability to influence or to be cited in the summary will be directly related to your existing authority on the subject.
Use structured data to help Google understand and categorize your content accurately.
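As a concrete illustration of the structured-data advice above, a page’s question-and-answer content can be marked up with schema.org FAQPage JSON-LD. The question and answer text below are invented for the example, and it is built with Python only for clarity; in practice a plugin such as Yoast SEO generates this markup for you.

```python
import json

# Hypothetical example: schema.org FAQPage markup exposes question/answer
# pairs in a machine-readable form that search engines can parse. The
# question and answer text here are invented for illustration.
faq_markup = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "What is an AI Overview?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "An AI-generated summary shown above the organic search results.",
            },
        }
    ],
}

# The JSON-LD is embedded in the page's <head> inside a script tag.
script_tag = (
    '<script type="application/ld+json">'
    + json.dumps(faq_markup)
    + "</script>"
)
print(script_tag[:60])
```

Marking content up this way gives a search engine an unambiguous map of your questions and answers, rather than forcing it to infer that structure from your prose.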
Optimize for user intent:
Identify and target the specific user intents behind searches. Use tools to determine your audience’s questions and tailor your content to provide clear, concise answers.
Incorporate FAQs and summaries that Google’s AI can easily extract for overviews.
Leverage Yoast SEO features:
Content analysis: Use Yoast SEO’s readability and SEO analysis tools to ensure your articles are well-optimized with relevant keywords, meta descriptions, and readability scores.
Schema markup: Implement structured data with Yoast SEO’s schema markup feature. This makes your content more accessible and understandable to search engines.
Internal linking: Improve your site’s internal linking structure to guide users and search engines to your most important content.
Monitor performance:
Keep an eye on your Google Search Console for changes in traffic and visibility. Google plans to include AI Overview impressions and clicks in Search Console reports. Unfortunately, these won’t be distinguished from traditional search results (SiliconANGLE, Search Engine Land).
Use this data to adjust your strategies and optimize underperforming content.
Stay informed and adaptive:
Update your SEO knowledge regularly and stay informed about changes in Google’s algorithms and AI capabilities.
Participate in SEO communities and forums to share insights and strategies with peers.
How Yoast SEO can help
Yoast SEO can help you navigate the upcoming AI upheaval. Here’s how:
Content optimization: Yoast SEO’s analysis tools help you create content that meets high-quality standards. Quality is crucial if you want to feature in AI Overviews.
Readability checks: Ensure your content is easy to read and understand. This increases the chance of AI summarizing your content effectively.
Keyword optimization: Yoast helps you target the right keywords and phrases that align with user intent. As a result, you improve your chances of getting into AI-driven summaries.
Structured data implementation: Use Yoast SEO to add structured data to your pages. With structured data, you make it easier for Google to extract and present your content in AI Overviews.
Use Yoast SEO and be proactive
Use Yoast SEO and stay proactive with your SEO strategies. This way, you prepare for and thrive in today’s world of AI-driven search results. Stay adaptable and focus on high-quality content. Use the powerful tools at your disposal to maintain your visibility and traffic in this new era of search.
Carolyn Shelby
Carolyn Shelby is a seasoned SEO expert with a knack for making complex digital strategies accessible to everyone. Specializing in technical SEO, she excels at optimizing web architectures and crafting strategies that enhance visibility and drive growth. Carolyn views SEO as a powerful tool to narrate a brand’s journey, aligning content with core values to engage and convert audiences. Her approach is data-driven yet distinctly human. She uses straightforward, innovative methods to engage audiences and see real results.