Google Explains Why Low-Quality Traffic Can’t Impact Rankings via @sejournal, @martinibuster

Google’s Martin Splitt addressed a question about website trustworthiness and whether competitors can negatively impact it. He explained how Google assesses site trustworthiness and clarified why factors like links and site traffic don’t have a negative influence on Google’s perception of trustworthiness.

Trustworthiness

Googlers, research papers, and patents all mention the trustworthiness of websites, but there is no actual trust metric in use at Google. Google confirmed long ago that multiple signals together indicate whether a site can be trusted, but that is not a trust algorithm; those are just signals.

When Googlers talk about whether a site is trustworthy, it's probably best not to overthink it: they simply mean whether the site can be trusted.

Can A Competitor Create Negative Trustworthiness Signals?

The person asking the question was worried about a competitor that was sending bot traffic to their site in what they felt was an effort to make their site appear to be untrustworthy by Google’s algorithm.

That may be a reference to the SEO idea that Google uses click metrics to rank web pages. Most research papers about clicks, however, describe using clicks to validate search results, not to rank web pages; it's generally a quality-assurance function.

This is the question that was asked:

“Do I have to be concerned about bad actors trying to make our site appear untrustworthy by sending spam or fake traffic to my site? Since site trustworthiness is binary.”

Binary means something is one of two states. In this case, the person asking the question probably means a site is either trustworthy or untrustworthy, with no gray area in between.

Martin Splitt downplayed the idea of a binary quality to trustworthiness and outright denied that traffic could influence how Google sees a site.

He answered:

“It’s not really binary and just by sending traffic from questionable sources to a site, that won’t be ‘tainted’.”

“Spam or fake traffic” is not something that can negatively influence trust.

Martin explained that if a site itself is spammy then it’s going to be seen as spammy. He then confirmed that what other sites do in terms of linking or traffic has no effect on whether a site looks spammy or not.

He answered:

“If a site itself does shady things, such as spam, malware, sure, that’s a problem, but nobody gets to choose or control where traffic or links are coming from, so that’s not something Google Search will look at to judge a website’s trustworthiness.”

Bot Traffic Doesn’t Affect How Google Sees A Site

Pretty much every website experiences high levels of bot traffic from hackers probing for vulnerabilities, and some bots repeatedly hit a site requesting non-existent pages. That's simply the state of the web; every site experiences it.

So what Martin said about third parties being unable to make another site appear untrustworthy makes sense, especially once it's understood that all sites receive low-quality inbound links and low-quality bot traffic.

Watch the SEO Office Hours podcast at the 18:48 minute mark:

Featured Image by Shutterstock/Krakenimages.com

How To Maintain Brand Visibility And Thrive In This Evolving Search Landscape via @sejournal, @marthavanberkel

As of July 2024, ChatGPT had approximately 2.9 billion monthly visits, compared to Google’s 130.1 billion, Facebook’s 13.2 billion, X’s 4.1 billion, and Amazon’s 3.2 billion.

While ChatGPT’s monthly visits might seem like peanuts in comparison, it has built a strong user base in months and will likely continue to dominate as AI search gains popularity.

Similarly, we’re seeing these major search engines and social media platforms incorporate generative AI into the user experience.

What does this mean for your customers and prospects? Consumers can use new and existing platforms to interact with your content in a more natural conversation style.

They can ask questions, build on ideas, and get answers without leaving the interface.

So, how can brands maintain their visibility on these platforms and control how their content is understood in the face of generative AI? The answer boils down to three things:

  • Investing in great content.
  • Developing a content knowledge graph.
  • Having a growth mindset.

Together, these three pillars will ensure that people and machines understand your brand and content, enabling your team to thrive in this new world of generative AI search.

Let’s start with content.

Investing In Great Content

Over the last two years, we’ve seen Google place heavy emphasis on “helpful, reliable, people-first content.”

This means creating helpful, reliable content that focuses primarily on your audience’s needs – not the search engines.

While this seems logical, it’s a big shift from the old way of developing website content for search engines.

Your content should seek to provide a comprehensive answer to your audience’s questions, address any additional information they might need to know, and demonstrate your brand’s first-hand expertise or experience in the topic.

I love how people-first content brings us back to content as a service. Your content answers questions, helps make decisions, points people in the right direction, and delights them.

For brands to thrive in this new age of AI, they must continue to invest in great content. This requires your team to take the time to understand your audience and their changing needs, find gaps in your current content strategy, and iterate on existing content to continue delighting your audience.

Apart from investing in and evolving your website content to delight your audience, you must also ensure that search engines, AI, and other machines can easily understand your content and infer answers from your data.

You can do this by implementing advanced schema markup on your site to develop a content knowledge graph.

Developing A Content Knowledge Graph

Earlier this year, Gartner published its research on the 30 Emerging Technologies That Will Guide Your Business Decisions in 2024.

It identified generative AI and knowledge graphs as two emerging technologies organizations should start investing in to be ready to utilize when they mature. While generative AI has gained widespread popularity through its productivity potential, knowledge graphs remain less well-known.

A knowledge graph is a collection of relationships between things defined using a standardized vocabulary, from which new knowledge can be gained through inferencing.
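To make “inferencing” concrete, here is a minimal Python sketch (all entity names are hypothetical) that derives a new fact from a small set of subject-predicate-object triples:

```python
# A tiny knowledge graph stored as (subject, predicate, object) triples.
# All entity names here are hypothetical examples.
triples = {
    ("ExampleBlog", "publishedBy", "ExampleCo"),
    ("ExampleCo", "foundedIn", "Guelph"),
    ("Guelph", "locatedIn", "Canada"),
}

def infer_located_in(triples):
    """Derive new 'locatedIn' facts by chaining foundedIn -> locatedIn."""
    inferred = set()
    for s, p, o in triples:
        if p == "foundedIn":
            for s2, p2, o2 in triples:
                if p2 == "locatedIn" and s2 == o:
                    inferred.add((s, "locatedIn", o2))
    return inferred

print(infer_located_in(triples))
# {('ExampleCo', 'locatedIn', 'Canada')}
```

Real knowledge graphs use standardized vocabularies such as schema.org and far richer inference rules, but the principle is the same: explicit relationships allow machines to derive facts that were never stated directly.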

In fact, Google and Microsoft have been using knowledge graphs to generate knowledge panels and infer answers in search since 2012.

Many research papers, including the article Unifying Large Language Models and Knowledge Graphs: A Roadmap, explain that knowledge graphs “can enhance LLMs by providing external knowledge for inference and interpretability,” thereby reducing the risk of hallucinations in LLMs.

A content knowledge graph is a knowledge graph built from the relationships between the entities on your website. Many digital marketing organizations have invested in building content knowledge graphs for their web data by implementing advanced schema markup (aka structured data).

Schema markup, also known as structured data, is machine-readable data that helps search engines understand the content on a page and show the content as a rich result on the search engine result page. However, the value of schema markup goes beyond that.

Your website content is filled with entities about your business – for example, your location, your employees, your products, etc.

When you describe the entities on your page and define their relationship with other entities in your business and things in the world using schema markup, you’ve effectively created a content knowledge graph.
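As a rough illustration, schema markup is usually expressed as JSON-LD using the schema.org vocabulary. The sketch below builds a minimal Organization description in Python; the names and URLs are hypothetical:

```python
import json

# A minimal JSON-LD sketch describing an organization and linking it to
# related entities. The names and URLs are hypothetical examples.
markup = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Co",
    "url": "https://www.example.com",
    "location": {
        "@type": "Place",
        "address": {"@type": "PostalAddress", "addressLocality": "Guelph"},
    },
    "employee": [{"@type": "Person", "name": "Jane Doe"}],
}

# This string would be embedded in a <script type="application/ld+json"> tag.
print(json.dumps(markup, indent=2))
```

Each nested entity (the Place, the Person) can itself link to further entities, and those connections are what turn isolated markup into a content knowledge graph.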

Your content knowledge graph, filled with factual data based on your website content, can help AI and search engines infer things about your brand and provide users with accurate answers to their queries.

It can also help your content team identify areas where content is lacking or could be improved, thus supporting your content strategy.

Having A Growth Mindset

The third pillar to thriving in this new world of AI and search is the mindset of your team.

The marketing industry is experiencing unprecedented speed of change, propelled by AI.

As AI technology grows exponentially more powerful, those who don’t keep up will have to play catch up and potentially lose out to competition.

Therefore, teams that want to thrive must keep their learning hats on at all times and be ready to adapt and embrace innovation.

Furthermore, silos within organizations must be busted.

Take the two capabilities I mentioned earlier: schema markup and content knowledge graphs. Their success depends on cross-functional work across marketing, SEO, IT, digital, and support.

Together, these teams will need to try something new, measure what they can, see if it drives the business’s desired outcomes, and then iterate on what they’ve learned.

At Schema App, we say that we are either “winning or learning,” and this is what I view as having a growth mindset.

In the Harvard Business Review article “What Having a ‘Growth Mindset’ Actually Means,” author Carol Dweck defined having a growth mindset as:

“[individuals] who believe their talents can be developed (through hard work, good strategies, and input from others)… they worry less about looking smart and put more energy into learning.”

Organizations that fully embrace the growth mindset also see greater collaboration and innovation.

In Summary

As the world of AI and search continues to evolve, there are things you can do now, like invest in content and develop a content knowledge graph to control how AI and search engines understand your brand and content.

However, fostering collaboration and innovation within your team is the key to thriving in this ever-changing world of AI and search.

When your team embraces a growth mindset, no matter the changes disrupting search, content, or AI, you’ll be ready.

Featured Image: Gracia Chua/Schema App

5 Predictions for the 2024 Holiday Shopping Season

Get ready for a short and competitive 2024 Christmas shopping season as mobile devices, the election, email, and social commerce take center stage.

What follows are five holiday-shopping predictions, input from AI, and a report card of my forecasts last year.

54% on Mobile

In 2023, slightly more than half of U.S. online purchases between November 1 and December 25 came from mobile devices, accounting for approximately $113.5 billion, according to Adobe. This year, mobile will represent 54% of holiday ecommerce sales.

Three factors drive the growth: age, marketplaces, and buy-now pay-later options.

First, there is the age factor. Gen Z shoppers (ages 18 to 25) are more likely to purchase from a mobile device, and some 15% of Gen Z consumers buy directly from social media sites, according to Emarketer. Social commerce is mobile-friendly, doubling the impact.

Second, Amazon and other massive marketplaces accounted for more than half of holiday sales last year. These marketplaces offer native mobile apps that make shopping from a smartphone effortless.

Third, BNPL is increasingly popular for Christmas purchases. It’s often associated with relatively large retailers and marketplaces, which may offer a better mobile shopping experience.

5% Increase in Ecommerce

Early predictions from Emarketer, ecommerce platform makers, and others had the 2024 Christmas shopping season growing to $271 billion, a 22% increase from $222.1 billion in 2023.

That is too optimistic, given the contentious U.S. election, inflation, and other economic woes. I’m predicting a 5% increase this year.

Email Volume Grows 25%

In 2024, we will continue to see growth in email marketing.

For example, last year Sinch Mailgun, which provides email infrastructure services, reported a 16% year-over-year increase in email volume for the Black Friday to Cyber Monday weekend. For all of November 2023, the volume rose 22% over the prior year.

The Christmas shopping season is relatively short in 2024 — just 29 days — which could make digital advertising more competitive and expensive. Marketers will turn to email to boost sales.

40% of Gen Zs Use Social

A recent survey of Gen Z shoppers found that about four in 10 will use some form of social commerce during the 2024 Christmas shopping season.

TikTok Shop, launched in September 2023, generated $11.9 billion in gross merchandise volume in 2023. TikTok does not typically share performance results, but it is reasonable to assume growth. Add the increase on other platforms, and social commerce in 2024 could be significant.

BNPL 9% of Retail Sales

For the second consecutive year, I predict that BNPL will represent 9% of total U.S. online holiday sales.

I overestimated BNPL's share last year, which came in at 7.4%. However, a recent Experian survey found that about 25% of American consumers plan to use BNPL this year. Thus I'll stick with 9% for 2024.

Moreover, the ratio of in-store to ecommerce sales is leveling out. In 2023, ecommerce represented 34% of U.S. holiday sales, up one percentage point from the prior year. Brick-and-click retailers will find more ways to get shopping into physical locations. One of those ways is to promote BNPL offers in-store.

AI Predictions

After completing my list of five 2024 holiday predictions, I asked three generative artificial intelligence tools to do the same, using this prompt:

“Based on historical data and recent projections, make five bold predictions about the 2024 holiday shopping season for an audience of retail and ecommerce professionals.”

ChatGPT-4o (by OpenAI)


  • Expect social commerce to account for nearly 20% of all online holiday transactions.
  • Sustainability will become a critical factor for consumers during the 2024 holiday season. Expect a 25% increase in sales for retailers offering carbon-neutral shipping, recyclable packaging, and sustainably sourced products.
  • Omnichannel fulfillment will dominate, with same-day delivery as the norm.
  • Mobile shopping will outpace desktop and in-store experiences, with more than 70% of holiday ecommerce transactions occurring on smartphones.

Grok-2 Beta (by xAI)


  • Mobile commerce will dominate. Over 60% of all ecommerce transactions during the holiday season will occur on mobile devices, driven by enhanced mobile payment solutions and an increase in mobile-first consumer behavior.
  • Social commerce will rise. Retailers who integrate seamless in-app purchasing experiences could see up to a 30% increase in sales directly through social media.
  • Despite the holiday season being structurally shorter, the intensity of shopping activities will spike earlier and more sharply. Retailers will start their holiday promotions as early as late October, with a significant emphasis on pre-Black Friday deals.
  • AI personalization and predictive shopping will blossom. Retailers will use AI to not only suggest products but to predict what consumers might want before they explicitly search for it, potentially increasing conversion rates by offering personalized “surprise and delight” deals. This could lead to AI influencing over 25% of online holiday purchases.
  • With increasing awareness about environmental issues, sustainability will become a significant factor in purchasing decisions.

Gemini (by Google)


  • We’ll see mobile devices account for over 60% of all online holiday shopping traffic and close to 50% of total online sales.
  • AI-powered personalization will go mainstream, with retailers using customer data and predictive analytics to create highly targeted product recommendations, offers, and experiences.
  • Social media platforms will become major players in the holiday shopping landscape.
  • Sustainability drives purchase decisions. Consumer awareness of environmental and social issues will reach new heights, with shoppers increasingly choosing brands and products that align with their values.
  • Supply chain disruptions will continue. Despite efforts to mitigate disruptions, ongoing challenges like port congestion, labor shortages, and material scarcity will persist, impacting product availability and pricing.

Last Year’s Predictions

Since 2013 I have predicted ecommerce trends and sales for the coming holiday season. Here’s how my 2023 forecast went.

Holiday spending grows less than 5% — correct. The National Retail Federation put total holiday sales growth at 3.8% in 2023. Emarketer set the 2023 holiday season growth rate at 3.9% while others had it as high as 4.9%, which was below the 5% prediction. By comparison, in 2022 year-over-year U.S. holiday spending grew 8.1%.

Buy-now pay-later accounts for 9% of retail sales — wrong. While the number of BNPL transactions grew 14% last year, they totaled just $16.6 billion, according to Adobe Analytics — 7.4% of sales.

Discounting drives sales — correct. This prediction was easy, with some estimates concluding that 60% of Christmas orders in 2023 had at least one sale item. Emarketer put it this way, “Deep discounts drove consumer spending across the Cyber Five in 2023.”

Shoppers wait for those discounts — correct. While U.S. Veterans Day (November 11) sales remained relatively flat from 2022 to 2023, Black Friday retail sales enjoyed a 7.5% year-over-year increase, and ecommerce leaped about 19%. Shoppers waited for the Black Friday and Cyber Monday discounts.

Generative AI impacts product discovery — unknown. This prediction is difficult to measure. I could not find data on how generative search results had impacted holiday purchase behavior.

Google Shows How To Block Bots And Boost Site Performance via @sejournal, @martinibuster

Google’s Martin Splitt answered a question about malicious bots that impact site performance, offering suggestions every SEO and site owner should know and put into action.

Malicious Bots Are An SEO Problem

Many SEOs who perform site audits overlook security and bot traffic because it's not widely understood by digital marketers that security events affect site performance and can explain why a site is inadequately crawled. Improving Core Web Vitals will do nothing for site performance when a poor security posture is contributing to the problem.

Every website is under attack, and excessive crawling can trigger a “500 server error” response code, signaling the server's inability to serve pages and hindering Google's ability to crawl them.
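One way to see this in your own data is to scan the server access log for clients generating error spikes. The sketch below assumes Common Log Format; the IP addresses and paths are hypothetical:

```python
import re
from collections import Counter

# Count error responses per client IP in a Common Log Format access log,
# to spot bots hammering non-existent pages. The log format is an assumption.
LINE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "[^"]*" (\d{3}) ')

def error_counts(log_lines, statuses=("404", "500", "503")):
    hits = Counter()
    for line in log_lines:
        m = LINE.match(line)
        if m and m.group(2) in statuses:
            hits[m.group(1)] += 1
    return hits

sample = [
    '203.0.113.9 - - [01/Oct/2024:12:00:01 +0000] "GET /wp-login.php HTTP/1.1" 404 162',
    '203.0.113.9 - - [01/Oct/2024:12:00:02 +0000] "GET /xmlrpc.php HTTP/1.1" 404 162',
    '198.51.100.7 - - [01/Oct/2024:12:00:03 +0000] "GET / HTTP/1.1" 200 5120',
]
print(error_counts(sample))
# Counter({'203.0.113.9': 2})
```

A client that racks up hundreds of 404s probing paths like /wp-login.php is almost certainly a vulnerability scanner, not a visitor.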

How To Defend Against Bot Attacks

The person asking the question wanted Google’s advice on how to fight back against the waves of scraper bots impacting their server performance.

This is the question asked:

“Our website is experiencing significant disruptions due to targeted scraping by automated software, leading to performance issues, increased server load, and potential data security concerns. Despite IP blocking and other preventive measures, the problem persists. What can we do?”

Google’s Martin Splitt suggested identifying the service that is serving as the source of the attacks and notifying them of an abusive use of their services. He also recommended the firewall capabilities of a CDN (Content Delivery Network).

Martin answered:

“This sounds like somewhat of a distributed denial-of-service issue if the crawling is so aggressive that it causes performance degradation.

You can try identifying the owner of the network where the traffic is coming from, thank “their hoster” and send an abuse notification. You can use WHOIS information for that, usually.

Alternatively, CDNs often have features to detect bot traffic and block it and by definition they take the traffic away from your server and distribute it nicely, so that’s a win. Most CDNs recognize legitimate search engine bots and won’t block them but if that’s a major concern for you, consider asking them before starting to use them.”

Will Google’s Advice Work?

Identifying the cloud provider or server data center that’s hosting the malicious bots is good advice. But there are many scenarios where that won’t work.
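For the cases where it does work, the abuse contact can often be pulled straight out of WHOIS output. This sketch parses a hypothetical WHOIS response; the field names and addresses are illustrative:

```python
import re

def find_abuse_contacts(whois_text: str) -> list[str]:
    """Pull likely abuse-report email addresses out of raw WHOIS output."""
    emails = re.findall(r"[\w.+-]+@[\w.-]+\.\w+", whois_text)
    # Prefer addresses that look abuse-specific (e.g. abuse@provider.example).
    return [e for e in emails if e.lower().startswith("abuse")] or emails

# Hypothetical WHOIS snippet for illustration:
sample = """
OrgAbuseEmail: abuse@hosting-provider.example
OrgTechEmail:  noc@hosting-provider.example
"""
print(find_abuse_contacts(sample))
# ['abuse@hosting-provider.example']
```

In practice you would feed this the output of a `whois` lookup on the offending IP's network block; as discussed below, though, VPNs and botnets often make that origin meaningless.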

Three Reasons Why Contacting Resource Providers Won’t Work

1. Many Bots Are Hidden

Bots often use VPNs and the open-source Tor network to hide their origin, defeating attempts to identify the cloud service or web host providing their infrastructure. Hackers also hide behind compromised home and business computers, called botnets, to launch their attacks. There's no practical way to identify them.

2. Bots Switch IP Addresses

Some bots respond to IP blocking by instantly switching to a different network and resuming their attack. An attack may originate from a German server and, when blocked, switch to a network provider in Asia.

3. Inefficient Use Of Time

Contacting network providers about abusive users is futile when the source of the traffic is obfuscated or from hundreds of sources. Many site owners and SEOs might be surprised to discover how intensive the attacks on their websites are. Even taking action against a small group of offenders is an inefficient use of time because there are literally millions of other bots that will replace the ones blocked by a cloud provider.

And what about botnets made up of thousands of compromised computers around the world? Think you have time to notify all of those ISPs?

Those are three reasons why notifying infrastructure providers is not a viable approach to stopping bots that impact site performance. Realistically, it’s a futile and inefficient use of time.

Use A WAF To Block Bots

Using a Web Application Firewall (WAF) is a good idea, and that's the function Martin Splitt was pointing to when he mentioned using a CDN (content delivery network). A CDN, like Cloudflare, sends browsers and crawlers the requested web page from a server located closest to them, speeding up site performance and reducing server resources for the site owner.

A CDN also has a WAF (Web Application Firewall) which automatically blocks malicious bots. Martin’s suggestion for using a CDN is definitely a good option, especially because it has the additional benefit of improving site performance.

An option that Martin didn't mention is a WordPress WAF plugin like Wordfence, which automatically shuts down bots based on their behavior. For example, if a bot requests an excessive number of pages, it will automatically create a temporary IP block; if the bot rotates to another IP address, it will identify the crawling behavior and block it again.
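The behavior-based blocking described above boils down to rate limiting per IP. This is a simplified sketch of the general technique, not Wordfence's actual implementation; the thresholds are illustrative:

```python
from collections import defaultdict, deque

# A minimal sketch of behavior-based bot blocking: temporarily ban any IP
# that exceeds a request threshold within a sliding window. The thresholds
# are illustrative; this is the general technique, not Wordfence's code.
WINDOW_SECONDS = 60
MAX_REQUESTS = 120      # more than ~2 requests/second looks bot-like
BLOCK_SECONDS = 600     # length of the temporary IP block

recent = defaultdict(deque)  # ip -> timestamps of recent requests
blocked_until = {}           # ip -> time the block expires

def allow(ip: str, now: float) -> bool:
    """Return True if the request should be served, False if blocked."""
    if blocked_until.get(ip, 0.0) > now:
        return False                      # still inside a temporary block
    q = recent[ip]
    q.append(now)
    while q and q[0] < now - WINDOW_SECONDS:
        q.popleft()                       # drop timestamps outside the window
    if len(q) > MAX_REQUESTS:
        blocked_until[ip] = now + BLOCK_SECONDS
        return False
    return True
```

A bot that rotates IP addresses gets a fresh counter each time, which is why production WAFs also fingerprint request behavior rather than relying on IPs alone.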

Another solution to consider is a SaaS platform like Sucuri that offers a WAF and a CDN to speed up performance. Both Wordfence and Sucuri are trustworthy providers of WordPress security and they come with limited but effective free versions.

Listen to the question and answer at the 6:36 minute mark of the Google SEO Office Hours podcast:

Featured Image by Shutterstock/Krakenimages.com

Google Says Title Tags “Maybe” Impact Rankings via @sejournal, @martinibuster

Google’s John Mueller offered a surprising explanation about the ranking impact of title tags. His answer challenged the SEO belief that title tags are a critical ranking factor and clarified their actual role.

Mueller also discussed the proper use of meta descriptions.

Title Elements

The purpose of title tags is to provide a general description of the topic of a webpage.

Google’s SEO Starter Guide shows how to write titles:

“…a good title is unique to the page, clear and concise, and accurately describes the contents of the page. For example, your title could include the name of your website or business, other bits of important information like the physical location of the business, and maybe some information about what the particular page has to offer for users.”

The official W3C documentation defines the purpose of the title tag like this:

“The title element represents the document’s title or name. Authors should use titles that identify their documents even when they are used out of context, for example in a user’s history or bookmarks, or in search results. The document’s title is often different from its first heading, since the first heading does not have to stand alone when taken out of context.”

Meta Description

The meta description describes the web page (that’s why it’s called a meta description).

The official W3C HTML documentation says:

“description
The value must be a free-form string that describes the page. The value must be appropriate for use in a directory of pages, e.g. in a search engine. There must not be more than one meta element with its name attribute set to the value description per document.”
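Those rules are mechanical enough to check automatically. This sketch uses Python's standard-library HTML parser to count titles and meta descriptions; the page content is a hypothetical example:

```python
from html.parser import HTMLParser

class HeadAudit(HTMLParser):
    """Count <title> and <meta name="description"> elements; a rough check
    of the W3C rule that a document has at most one meta description."""
    def __init__(self):
        super().__init__()
        self.titles = 0
        self.meta_descriptions = 0

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self.titles += 1
        elif tag == "meta" and dict(attrs).get("name", "").lower() == "description":
            self.meta_descriptions += 1

# Hypothetical page head for illustration:
page = """<head>
<title>Acme Widgets: Handmade Widgets in Guelph</title>
<meta name="description" content="Acme Widgets sells handmade widgets...">
</head>"""

audit = HeadAudit()
audit.feed(page)
print(audit.titles, audit.meta_descriptions)  # 1 1
```

A count above one for either element, or a count of zero, is the kind of error worth fixing before worrying about wording.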

Google’s SEO Starter Guide explains that meta descriptions are “occasionally” used for generating the snippet shown in the search results:

“The snippet is sourced from the actual content of the page the search result is linking to… Occasionally the snippet may be sourced from the contents of the meta description tag, which is typically a succinct, one- or two-sentence summary of the page. A good meta description is short, unique to one particular page, and includes the most relevant points of the page.”

And Google’s Meta Description Best Practices recommends:

“A meta description tag generally informs and interests users with a short, relevant summary of what a particular page is about. They are like a pitch that convinces the user that the page is exactly what they’re looking for.”

Notice that neither the official HTML documentation nor Google recommends using the title tag or meta description as a place to park keywords, or as a call to action to lure people to the site. Those are practices that SEOs invented, and they're probably why so many title tags get rewritten: because they're done incorrectly.

The Ranking Impact Of Title Tags

Google's John Mueller confirmed that changing the title element can "maybe" impact rankings. Notably, he didn't say anything about the title tag being a ranking factor; he said only that changing it may affect rankings.

This is the question:

“We have a website with satisfying ranks and now our product added new features. We need to modify the page meta title & description, does that affect the current rankings?”

John Mueller answered that maybe changing the title tag could change the rankings in the search results.

Here’s his answer:

“Yes, or better, maybe. Changing things like titles or headings on pages can result in changes in Search.”

Why did Mueller say maybe?

He didn’t explain why.

But his answer confirms that changing your title elements doesn’t automatically cause a change in rankings.

My opinion (based on my experience) is that title elements are just content and changing your content can affect rankings.

Mueller's answer about meta descriptions implied that they don't have an effect on rankings but can affect the snippet shown in the search results, something the digital marketing community already accepts.

This is what he said about meta descriptions:

“Similarly, changing the meta description on a page can result in changes with how the snippet of a page is shown in Search. This is expected, and usually something that SEOs or site-owners focus on in an attempt to improve things in search.”

Title Tags Maybe Can Change Rankings

Mueller's answer might come as a surprise to some, because the belief that the title tag is an important ranking factor has been part of SEO tradition since the discipline's beginning over 20 years ago, when search engines were relatively primitive compared to today. Some in the SEO community, for whatever reason, continue the decades-old tradition of treating title elements as a huge ranking factor.

Mueller confirmed that changing the title element may impact search results, which matches what some people have experienced, in either direction of rankings.

Listen to the question and answer at the 19:29 minute mark:

Featured Image by Shutterstock/Krakenimages.com

Move over, text: Video is the new medium of our lives

The other day I idly opened TikTok to find a video of a young woman refinishing an old hollow-bodied electric guitar.

It was a montage of close-up shots—looking over her shoulder as she sanded and scraped the wood, peeled away the frets, expertly patched the cracks with filler, and then spray-painted it a radiant purple. She compressed days of work into a tight 30-second clip. It was mesmerizing.

Of course, that wasn’t the only video I saw that day. In barely another five minutes of swiping around, I saw a historian discussing the songs Tolkien wrote in The Lord of the Rings; a sailor puzzling over a capsized boat he’d found deep at sea; a tearful mother talking about parenting a child with ADHD; a Latino man laconically describing a dustup with his racist neighbor; and a linguist discussing how Gen Z uses video-game metaphors in everyday life.

I could go on. I will! And so, probably, will you. This is what the internet looks like now. It used to be a preserve of text and photos—but increasingly, it is a forest of video.

This is one of the most profound technology shifts that will define our future: We are entering the age of the moving image.

For centuries, when everyday people had to communicate at a distance, they really had only two options. They could write something down; they could send a picture. The moving image was too expensive to shoot, edit, and disseminate. Only pros could wield it.

The smartphone, the internet, and social networks like TikTok have rapidly and utterly transformed this situation. It’s now common, when someone wants to hurl an idea into the world, not to pull out a keyboard and type but to turn on a camera and talk. For many young people, video might be the prime way to express ideas.

As media thinkers like Marshall McLuhan have intoned, a new medium changes us. It changes the way we learn, the way we think—and what we think about. When mass printing emerged, it helped create a culture of news, mass literacy, and bureaucracy, and—some argue—the very idea of scientific evidence. So how will mass video shift our culture?

For starters, I’d argue, it is helping us share knowledge that used to be damnably hard to capture in text. I’m a long-distance cyclist, for example, and if I need to fix my bike, I don’t bother reading a guide. I look for a video explainer. If you’re looking to express—or absorb—knowledge that’s visual, physical, or proprioceptive, the moving image nearly always wins. Athletes don’t read a textual description of what they did wrong in the last game; they watch the clips. Hence the wild popularity, on video platforms, of instructional video—makeup tutorials, cooking demonstrations. (Or even learn-to-code material: I learned Python by watching coders do it.)

Video also is no longer about mere broadcast, but about conversation—it’s a way to respond to others, notes Raven Maragh-Lloyd, the author of Black Networked Resistance and a professor of film and media studies at Washington University. “We’re seeing a rise of audience participation,” she notes, including people doing “duets” on TikTok or response videos on YouTube. Everyday creators see video platforms as ways to talk back to power.


There’s also an increasingly sophisticated lexicon of visual styles. Today’s video creators riff on older film aesthetics to make their points. Brianna Wiens, an assistant professor of digital media and rhetoric at the University of Waterloo, says she admired how a neuroscientist used stop-motion video, a technique from the early days of film, to produce TikTok discussions of vaccines during the height of the covid-19 pandemic. Or consider the animated GIF, which channels the “zoetrope” of the 1800s, looping a short moment in time to examine over and over.

Indeed, as video becomes more woven into the vernacular of daily life, it’s both expanding and contracting in size. There are streams on Twitch where you can watch someone for hours—and viral videos where someone compresses an idea into mere seconds. Those latter ones have a particular rhetorical power because they’re so ingestible. “I was teaching a class called Digital Lives, and my students were like, If there’s a video over seven seconds, we’re not watching it,” Wiens says, laughing.

Are there dangers ahead as use of the moving image grows? Possibly. Maybe it will too powerfully reward people with the right visual and physical charisma. (Not necessarily a novel danger: Text and radio had their own versions.) More subtly, video is technologically still adolescent. It’s not yet easy to search, or to clip and paste and annotate and collate—to use video for quietly organizing our thoughts, the way we do with text. Until those tool sets emerge (and you can see that beginning), its power will be limited. Lastly, maybe the moving image will become so common and go-to that it’ll kill off print culture.

Media scholars are not terribly stressed about this final danger. New forms of media rarely kill off older ones. Indeed, as the late priest and scholar Walter Ong pointed out, creating television and radio requires writing plenty of text—all those scripts. Today’s moving-media culture is possibly even more saturated with writing. Videos on Instagram and TikTok often include artfully arranged captions, “diegetic” text commenting on the action, or data visualizations. You read while you watch; write while you shoot.

“We’re getting into all kinds of interesting hybrids and relationships,” notes Lev Manovich, a professor at the City University of New York. The tool sets for sculpting and editing video will undoubtedly improve too, perhaps using AI to help auto-edit, redact, summarize. 

One firm, Reduct, already offers a clever trick: You alter a video by editing the transcript. Snip out a sentence, and it snips out the related visuals. Public defenders use it to parse and edit police videos. They’re often knee-deep in the stuff—the advent of body cameras worn by officers has produced an ocean of footage, as Reduct’s CEO, Robert Ochshorn, tells me. 

Meanwhile, generative AI will make it easier to create a film out of pure imagination. This means, of course, that we’ll see a new flood of visual misinformation. We’ll need to develop a sharper culture of finding the useful amid the garbage. It took print a couple of centuries to do that, as scholars of the book will tell you—centuries during which the printing press helped spark untold war and upheaval. We’ll be living through the same process with the moving image.

So strap yourselves in. Whatever else happens, it’ll be interesting. 

Clive Thompson is the author of Coders: The Making of a New Tribe and the Remaking of the World.

Google’s Revamped Documentation Shows 4 Reasons To Refresh Content via @sejournal, @martinibuster

Google significantly revamped its documentation about ranking pages that contain video content. While the changelog lists three areas that changed, a review of the content provides a case study of four considerations for digital marketers and publishers when refreshing content to improve relevance for site visitors—and Google.

What Changed

The documentation that was updated relates to ranking web pages that contain videos. The purpose of the documentation is to communicate best practices for optimizing videos for higher visibility in Google’s search results.

Google’s changelog indicated that there were three major changes to the Video SEO best practices page.

  • Clarified video indexing criteria
  • Updated technical requirements
  • Added a new section about dedicated watch pages for each video

This is what the changelog shows was changed:

“Improving the Video SEO documentation

What: Overhauled the video SEO best practices. Notably, we clarified the video indexing criteria and technical requirements, added a new watch page section, and expanded our examples.

Why: Based on feedback submissions, we revisited our video SEO guidance to clarify what’s eligible for a video result and how site owners can make it easier for Google to find their videos.”

Four Reasons To Refresh Content

There’s a common misinterpretation that encourages changing content annually because “Google loves fresh content,” which is a gross misunderstanding of the Freshness Algorithm. Content shouldn’t be changed without purpose—otherwise, it’s just “rearranging the furniture” instead of truly “redesigning the space.”

Google’s reasons for updating the content offer a mini case study of four things publishers and businesses should consider when freshening up their content.

These are the four reasons for changing the Video SEO content:

  1. Remove Outdated Content
  2. Improve Information Density
  3. Add Fresh Information
  4. Update For Brevity And Clarity

1. Remove Outdated Content

The old version of the documentation was written when video as web content was a “growing format” and the changes reflect that the times have changed, rendering the old content out of date.

“Video is a growing format for content creation and consumption on the web, and Google indexes videos from millions of different sites to serve to users. “

Video is no longer a growing format; it is an established one. The editors of the web page were right to remove that passage because it no longer made any sense.

Takeaway: Always keep up to date with how your readers perceive the topic. Failure to do this will make the content look less authoritative and trustworthy.

2. Improve Information Density

Information density in this context describes the ability of content to communicate ideas and topics with the least amount of words and with the highest amount of clarity.

An opening sentence should reflect the topic of the entire web page, but the original opening sentence did a poor job of communicating that. It stated that “Video is a growing format,” which did not reflect the topic of the page.

This is the new opening sentence:

“If you have videos on your site, following these video SEO best practices can help more people find your site through video results on Google.”

The new sentence accurately describes what the entire web page is about in roughly two dozen words. Here’s something really cool: The second sentence remains exactly the same between the old and revised versions.

Takeaway: The lesson here is to revise what needs to be revised and don’t make changes when the original works just fine.

3. Add Fresh Information

An important change that all publishers should consider is updating content with fresh information that reflects how topics evolve over time. Products, laws, how consumers use services and products—everything undergoes some kind of change over time.

Google added content about tools available in Google Search Console that enable publishers to monitor the performance of their video content pages.

4. Update For Brevity And Clarity

The fourth reason for changing some of the content was to make it more concise and easier to read, with simplified language. One of the subtle changes was swapping the phrase “landing page” for “watch page.” This seemingly small change clarifies the meaning by making it clear that they are referring to a page where videos are watched. Previously the documentation made zero references to “watch page”; now it makes 21 references to that phrase, introducing consistency in the message of the web page.

Many Reasons To Update Content

Every publisher should consider reviewing their content on a regular basis. Whether that’s once a year for a smaller site or tackling different sections on a monthly basis for a larger one, a content review is a great way to keep content relevant to users and to discover new topics for content. Sometimes it’s better to break out a topic from a web page and create a dedicated page for it.

Read the updated documentation:
Video SEO best practices

Compare it to the old documentation at Archive.org:
Video SEO best practices

Featured Image by Shutterstock/Cast Of Thousands

Google Cautions On Improper 404 Handling via @sejournal, @martinibuster

Google’s John Mueller addressed whether numerous 404 errors negatively impact rankings and provided a clear explanation of the best practices for handling them.

404 (Not Found) Status Code

404 is the code that a server sends when a browser or a crawler requests a web page that the server couldn’t find. It only means that the page was not found.

The official HTTP specification doesn’t use the word “error” in its definition of 404. That said, the 400 series of codes (400, 404, 410, etc.) is classified as Client Error Responses. A client is a browser or a crawler, so a client error response means that the server is telling the browser or crawler that their request is in error. It doesn’t mean that the website is in error.

This is the official definition of the 404 (Not Found) response from the HTTP specification (RFC 9110):

“The 404 (Not Found) status code indicates that the origin server did not find a current representation for the target resource or is not willing to disclose that one exists. A 404 status code does not indicate whether this lack of representation is temporary or permanent; the 410 (Gone) status code is preferred over 404 if the origin server knows, presumably through some configurable means, that the condition is likely to be permanent.”
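The spec’s 404-versus-410 distinction boils down to a simple routing decision. Here is a minimal sketch in Python; the page sets and paths are hypothetical, not from Google’s documentation:

```python
# Minimal sketch (hypothetical page sets): choose the right status code,
# following the spec's 404-vs-410 distinction.

EXISTING_PAGES = {"/", "/spoons"}          # pages the site serves
PERMANENTLY_GONE = {"/discontinued-cup"}   # removed for good, no replacement

def status_for(path):
    if path in EXISTING_PAGES:
        return 200  # OK: the page exists and is served
    if path in PERMANENTLY_GONE:
        return 410  # Gone: we know the removal is permanent
    return 404      # Not Found: nothing at this URL

print(status_for("/spoons"))            # → 200
print(status_for("/discontinued-cup"))  # → 410
print(status_for("/no-such-page"))      # → 404
```

The point of the sketch: 404 is the default honest answer, and 410 is an optional upgrade for pages the server positively knows are gone forever.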

Will 404 Errors Affect Rankings?

The person asking the question wanted to know if a lot of 404 responses will affect rankings. Google’s John Mueller answered the question, then explained the right way to “fix” 404 error responses and cautioned about when not to “fix” them. I put “fix” in quotation marks because 404 responses are not always something that needs fixing.

Here’s the question:

“My website has a lot of 404s. Would I lose my site’s rankings if I don’t redirect them?”

John Mueller answered:

“First off, the 404s wouldn’t affect the rest of your site’s rankings.”

Addressing 404s With Redirects

Mueller next discussed the use of redirects for stopping 404 responses from happening. A redirect is a server response that tells the client that the web page they are requesting has been moved to another URL. A 301 redirect tells the browser or crawler that the URL has permanently moved to another URL.
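Mechanically, a 301 is nothing more than a status code plus a Location header naming the permanent replacement URL. A minimal sketch in Python (the URLs are hypothetical, echoing Mueller’s cup example):

```python
# Minimal sketch (hypothetical URLs): a 301 response is a status code
# plus a Location header pointing at the permanent replacement URL.

REDIRECTS = {"/old-cup": "/new-cup"}  # genuine replacement product

def respond(path):
    """Return (status_code, headers) for a moved or missing page."""
    if path in REDIRECTS:
        # 301 Moved Permanently: clients follow Location to the new URL,
        # and search engines consolidate signals onto it.
        return 301, {"Location": REDIRECTS[path]}
    # No genuine replacement exists: a plain 404 is the honest answer.
    return 404, {}

print(respond("/old-cup"))  # → (301, {'Location': '/new-cup'})
print(respond("/retired"))  # → (404, {})
```

The design choice mirrors Mueller’s advice: the redirect map only contains true replacements, and everything else falls through to 404 rather than being funneled to a vaguely similar page.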

When To Use Redirects For 404s

Redirecting a web page that no longer exists to another web page is sometimes the right way to handle 404 page not found responses.

This is how Mueller explains the proper use of redirects for “fixing” 404 responses:

“Redirects can play a role in dealing with old pages, but not always. For example, if you have a genuine replacement product, such as a new cup that functionally replaces a cup which is no longer produced, then redirecting is fine.”

When Not To Use Redirects For 404s

Next he explained when not to use redirects for 404s, explaining that it’s a crummy experience to show a web page that is irrelevant to what the site visitors are expecting to see.

Mueller explains:

“On the other hand, if you just have similar pages, then don’t redirect. If the user clicked on your site in search of a knife, they would be frustrated to only see spoons. It’s a terrible user-experience, and doesn’t help in search. “

It’s Okay To Show 404 Responses

Mueller next explained that it’s okay to show 404 responses because it’s the right response for when a browser or crawler asks for a page that doesn’t exist on a server anymore.

He explained:

“Instead, return an HTTP 404 result code. Make a great 404 page. Maybe even make a 404 page that explains why spoons are superior to knives, if you can make that argument. Just don’t blindly redirect to a similar page, a category page, or your homepage. If you’re unsure, don’t redirect. Accept that 404s are fine, they’re a normal part of a healthy website.”

Always Investigate Error Responses

Something that Mueller didn’t mention is that 404 responses should always be investigated. Don’t stop investigating just because the page doesn’t exist and there’s no other page to redirect it to. Sometimes there’s a real problem that needs solving.

404 By Internal Links

For example, some 404s are caused by broken internal linking where a URL is misspelled. You can “fix” that by redirecting the wrong URL to the correct URL but that’s not fixing the problem because the real problem is the broken link itself.

404 Caused By Outgoing Links

Some 404s are caused by linking to pages that no longer exist. Linking to pages that don’t exist makes the page look abandoned. It’s a poor user experience to link to a non-existent web page, and it is never a “normal part of a healthy website.” So either link to the right page, link to something else, or don’t link to anything at all.
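Auditing outgoing links is easy to script. The sketch below uses only Python’s standard library (the page URL and markup are placeholders for whatever your site serves); it collects a page’s external links and flags the ones that come back 404:

```python
from html.parser import HTMLParser
from urllib.error import HTTPError
from urllib.request import Request, urlopen

class LinkCollector(HTMLParser):
    """Collects external href values from <a> tags in an HTML document."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value and value.startswith("http"):
                    self.links.append(value)

def broken_links(page_url):
    """Return the external links on page_url that respond with 404."""
    html = urlopen(page_url).read().decode("utf-8", "replace")
    collector = LinkCollector()
    collector.feed(html)
    broken = []
    for link in collector.links:
        try:
            # HEAD avoids downloading the whole target page.
            urlopen(Request(link, method="HEAD"))
        except HTTPError as err:
            if err.code == 404:
                broken.append(link)
    return broken
```

Run periodically against your own pages, a script like this surfaces dead outgoing links so they can be repointed or removed, rather than left to signal abandonment.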

404s Caused By Inbound Links

There is another type of 404 response that Mueller didn’t talk about that needs looking into. Sometimes another site misspells a URL when linking to you, and when that happens the right response is a 301 redirect to the correct URL. You can try contacting the site to ask them to fix their mistake, but it’s easier to just add the redirect and move on with your life.

Listen to the question and answer at the 2:08 minute mark:

Featured Image by Shutterstock/Krakenimages.com

Beyond gene-edited babies: the possible paths for tinkering with human evolution

In 2016, I attended a large meeting of journalists in Washington, DC. The keynote speaker was Jennifer Doudna, who just a few years before had co-invented CRISPR, a revolutionary method of changing genes that was sweeping across biology labs because it was so easy to use. With its discovery, Doudna explained, humanity had achieved the ability to change its own fundamental molecular nature. And that capability came with both possibility and danger. One of her biggest fears, she said, was “waking up one morning and reading about the first CRISPR baby”—a child with deliberately altered genes baked in from the start.  

As a journalist specializing in genetic engineering—the weirder the better—I had a different fear. A CRISPR baby would be a story of the century, and I worried some other journalist would get the scoop. Gene editing had become the biggest subject on the biotech beat, and once a team in China had altered the DNA of a monkey to introduce customized mutations, it seemed obvious that further envelope-pushing wasn’t far off. 

If anyone did create an edited baby, it would raise moral and ethical issues, among the profoundest of which, Doudna had told me, was that doing so would be “changing human evolution.” Any gene alterations made to an embryo that successfully developed into a baby would get passed on to any children of its own, via what’s known as the germline. What kind of scientist would be bold enough to try that? 

Two years and nearly 8,000 miles in an airplane seat later, I found the answer. At a hotel in Guangzhou, China, I joined a documentary film crew for a meeting with a biophysicist named He Jiankui, who appeared with a retinue of advisors. During the meeting, He was immensely gregarious and spoke excitedly about his research on embryos of mice, monkeys, and humans, and about his eventual plans to improve human health by adding beneficial genes to people’s bodies from birth. Still imagining that such a step must lie at least some way off, I asked if the technology was truly ready for such an undertaking. 

“Ready,” He said. Then, after a laden pause: “Almost ready.”

Why wait 100,000 years for natural selection to do its job? For a few hundred dollars in chemicals, you could try to install these changes in an embryo in 10 minutes.

Four weeks later, I learned that he’d already done it, when I found data that He had placed online describing the genetic profiles of two gene-edited human fetuses—that is, “CRISPR babies” in gestation—as well as an explanation of his plan, which was to create humans immune to HIV. He had targeted a gene called CCR5, which in some people has a variation known to protect against HIV infection. It’s rare for numbers in a spreadsheet to make the hair on your arms stand up, although maybe some climatologists feel the same way seeing the latest Arctic temperatures. It appeared that something historic—and frightening—had already happened. In our story breaking the news that same day, I ventured that the birth of genetically tailored humans would be something between a medical breakthrough and the start of a slippery slope of human enhancement.

For his actions, He was later sentenced to three years in prison, and his scientific practices were roundly excoriated. The edits he made, on what proved to be twin girls (and a third baby, revealed later), had in fact been carelessly imposed, almost in an out-of-control fashion, according to his own data. And I was among a flock of critics—in the media and academia—who would subject He and his circle of advisors to Promethean-level torment via a daily stream of articles and exposés. Just this spring, Fyodor Urnov, a gene-editing specialist at the University of California, Berkeley, lashed out on X, calling He a scientific “pyromaniac” and comparing him to a Balrog, a demon from J.R.R. Tolkien’s The Lord of the Rings. It could seem as if He’s crime wasn’t just medical wrongdoing but daring to take the wheel of the very processes that brought you, me, and him into being. 

Futurists who write about the destiny of humankind have imagined all sorts of changes. We’ll all be given auxiliary chromosomes loaded with genetic goodies, or maybe we’ll march through life as a member of a pod of identical clones. Perhaps sex will become outdated as we reproduce exclusively through our stem cells. Or human colonists on another planet will be isolated so long that they become their own species. The thing about He’s idea, though, is that he drew it from scientific realities close at hand. Just as some gene mutations cause awful, rare diseases, others are being discovered that lend a few people the ability to resist common ones, like diabetes, heart disease, Alzheimer’s—and HIV. Such beneficial, superpower-like traits might spread to the rest of humanity, given enough time. But why wait 100,000 years for natural selection to do its job? For a few hundred dollars in chemicals, you could try to install these changes in an embryo in 10 minutes. That is, in theory, the easiest way to go about making such changes—it’s just one cell to start with. 

Editing human embryos is restricted in much of the world—and making an edited baby is flatly illegal in most countries surveyed by legal scholars. But advancing technology could render the embryo issue moot. New ways of adding CRISPR to the bodies of people already born—children and adults—could let them easily receive changes as well. Indeed, if you are curious what the human genome could look like in 125 years, it’s possible that many people will be the beneficiaries of multiple rare, but useful, gene mutations currently found in only small segments of the population. These could protect us against common diseases and infections, but eventually they could also yield frank improvements in other traits, such as height, metabolism, or even cognition. These changes would not be passed on genetically to people’s offspring, but if they were widely distributed, they too would become a form of human-directed self-evolution—easily as big a deal as the emergence of computer intelligence or the engineering of the physical world around us.

I was surprised to learn that even as He’s critics take issue with his methods, they see the basic stratagem as inevitable. When I asked Urnov, who helped coin the term “genome editing” in 2005, what the human genome could be like in, say, a century, he readily agreed that improvements using superpower genes will probably be widely introduced into adults—and embryos—as the technology to do so improves. But he warned that he doesn’t necessarily trust humanity to do things the right way. Some groups will probably obtain the health benefits before others. And commercial interests could eventually take the trend in unhelpful directions—much as algorithms keep his students’ noses pasted, unnaturally, to the screens of their mobile phones. “I would say my enthusiasm for what the human genome is going to be in 100 years is tempered by our history of a lack of moderation and wisdom,” he said. “You don’t need to be Aldous Huxley to start writing dystopias.”

Editing early

At around 10 p.m. Beijing time, He’s face flicked into view over the Tencent videoconferencing app. It was May 2024, nearly six years after I had first interviewed him, and he appeared in a loftlike space with a soaring ceiling and a wide-screen TV on a wall. Urnov had warned me not to speak with He, since it would be like asking “Bernie Madoff to opine about ethical investing.” But I wanted to speak to him, because he’s still one of the few scientists willing to promote the idea of broad improvements to humanity’s genes. 

Of course, it’s his fault everyone is so down on the idea. After his experiment, China formally made “implantation” of gene-edited human embryos into the uterus a crime. Funding sources evaporated. “He created this blowback, and it brought to a halt many people’s research. And there were not many to begin with,” says Paula Amato, a fertility doctor at Oregon Health and Science University who co-leads one of only two US teams that have ever reported editing human embryos in a lab.  “And the publicity—nobody wants to be associated with something that is considered scandalous or eugenic.”

After leaving prison in 2022, the Chinese biophysicist surprised nearly everyone by seeking to make a scientific comeback. At first, he floated ideas for DNA-based data storage and “affordable” cures for children who have muscular dystrophy. But then, in summer 2023, he posted to social media that he intended to return to research on how to change embryos with gene editing, with the caveat that “no human embryo will be implanted for pregnancy.” His new interest was a gene called APP, or amyloid precursor protein. It’s known that people who possess a very rare version, or “allele,” of this gene almost never develop Alzheimer’s disease.

In our video call, He said the APP gene is the main focus of his research now and that he is determining how to change it. The work, he says, is not being conducted on human embryos, but rather on mice and on kidney cells, using an updated form of CRISPR called base editing, which can flip individual letters of DNA without breaking the molecule. 

“We just want to expand the protective allele from small amounts of lucky people to maybe most people,” He told me. And if you made the adjustment at the moment an egg is fertilized, you would only have to change one cell in order for the change to take hold in the embryo and, eventually, everywhere in a person’s brain. Trying to edit an individual’s brain after birth “is as hard as delivering a person to the moon,” He said. “But if you deliver gene editing to an embryo, it’s as easy as driving home.”

In the future, He said, human embryos will “obviously” be corrected for all severe genetic diseases. But they will also receive “a panel” of “perhaps 20 or 30” edits to improve health. (If you’ve seen the sci-fi film Gattaca, it takes place in a world where such touch-ups are routine—leading to stigmatization of the movie’s hero, a would-be space pilot who lacks them.) One of these would be to install the APP variant, which involves changing a single letter of DNA. Others would protect against diabetes, and maybe cancer and heart disease. He calls these proposed edits “genetic vaccines” and believes people in the future “won’t have to worry” about many of the things most likely to kill them today.  

Is He the person who will bring about this future? Last year, in what seemed to be a step toward his rehabilitation, he got a job heading a gene center at Wuchang University of Technology, a third-tier institution in Wuhan. But He said during our call that he had already left the position. He didn’t say what had caused the split but mentioned that a flurry of press coverage had “made people feel pressured.” One item, in a French financial paper, Les Echos, was titled “GMO babies: The secrets of a Chinese Frankenstein.” Now he carries out research at his own private lab, he says, with funding from Chinese and American supporters. He has early plans for a startup company. Could he tell me names and locations? “Of course not,” he said with a chuckle. 

Illustration: little girl holding a snake. MICHAEL BYERS

It could be there is no lab, just a concept. But it’s a concept that is hard to dismiss. Would you give your child a gene tweak—a swap of a single genetic letter among the 3 billion that run the length of the genome—to prevent Alzheimer’s, the mind thief that’s the seventh-leading cause of death in the US? Polls find that the American public is about evenly split on the ethics of adding disease resistance traits to embryos. A sizable minority, though, would go further. A 2023 survey published in Science found that nearly 30% of people would edit an embryo if it enhanced the resulting child’s chance of attending a top-ranked college. 

The benefits of the genetic variant He claims to be working with were discovered by the Icelandic gene-hunting company deCode Genetics. Twenty-six years ago, in 1998, its founder, a doctor named Kári Stefánsson, got the green light to obtain medical records and DNA from Iceland’s citizens, allowing deCode to amass one of the first large national gene databases. Several similar large biobanks now operate, including one in the United Kingdom, which recently finished sequencing the genomes of 500,000 volunteers. These biobanks make it possible to do computerized searches to find relationships between people’s genetic makeup and real-life differences like how long they live, what diseases they get, and even how much beer they drink. The result is a statistical index of how strongly every possible difference in human DNA affects every trait that can be measured. 

In 2012, deCode’s geneticists used the technique to study a tiny change in the APP gene and determined that the individuals who had it rarely developed Alzheimer’s. They otherwise seemed healthy. In fact, they seemed particularly sharp in old age and appeared to live longer, too. Lab tests confirmed that the change reduces the production of brain plaques, the abnormal clumps of protein that are a hallmark of the disease. 

“This is beginning to be about the essence of who we are as a species.”

Kári Stefánsson, founder and CEO, deCode Genetics

One way evolution works is when a small change or error appears in one baby’s DNA. If the change helps that person survive and reproduce, it will tend to become more common in the species—eventually, over many generations, even universal. This process is slow, but it’s visible to science. In 2018, for example, researchers determined that the Bajau, a group indigenous to Indonesia whose members collect food by diving, possess genetic changes associated with bigger spleens. This allows them to store more oxygenated red blood cells—an advantage in their lives. 

Even though the variation in the APP gene seems hugely beneficial, it’s a change that benefits old people, way past their reproductive years. So it’s not the kind of advantage natural selection can readily act on. But we could act on it. That is what technology-assisted evolution would look like—seizing on a variation we think is useful and spreading it. “The way, probably, that enhancement will be done will be to look at the population, look at people who have enhanced capabilities—whatever those might be,” the Israeli medical geneticist Ephrat Levy-Lahad said during a gene-editing summit last year. “You are going to be using variations that already exist in the population that you already have information on.”

One advantage of zeroing in on advantageous DNA changes that already exist in the population is that their effects are pretested. The people located by deCode were in their 80s and 90s. There didn’t seem to be anything different about them—except their unusually clear minds. Their lives—as seen from the computer screens of deCode’s biobank—served as a kind of long-term natural experiment. Yet scientists could not be fully confident placing this variant into an embryo, since the benefits or downsides might differ depending on what other genetic factors are already present, especially other Alzheimer’s risk genes. And it would be difficult to run a study to see what happens. In the case of APP, it would take 70 years for the final evidence to emerge. By that time, the scientists involved would all be dead. 

When I spoke with Stefánsson last year, he made the case both for and against altering genomes with “rare variants of large effect,” like the change in APP. “All of us would like to keep our marbles until we die. There is no question about it. And if you could, by pushing a button, install the kind of protection people with this mutation have, that would be desirable,” he said. But even if the technology to make this edit before birth exists, he says, the risks of doing so seem almost impossible to gauge: “You are not just affecting the person, but all their descendants forever. These are mutations that would allow for further selection and further evolution, so this is beginning to be about the essence of who we are as a species.”

Editing everyone

Some genetic engineers believe that editing embryos, though in theory easy to do, will always be held back by these grave uncertainties. Instead, they say, DNA editing in living adults could become easy enough to be used not only to correct rare diseases but to add enhanced capabilities to those who seek them. If that happens, editing for improvement could spread just as quickly as any consumer technology or medical fad. “I don’t think it’s going to be germline,” says George Church, a Harvard geneticist often sought out for his prognostications. “The 8 billion of us who are alive kind of constitute the marketplace.” For several years, Church has been circulating what he calls “my famous, or infamous, table of enhancements.” It’s a tally of gene variants that lend people superpowers, including APP and another that leads to extra-hard bones, which was found in a family that complained of not being able to stay afloat in swimming pools. The table is infamous because some believe Church’s inclusion of the HIV-protective CCR5 variant inspired He’s effort to edit it into the CRISPR babies.

Church believes novel gene treatments for very serious diseases, once proven, will start leading the way toward enhancements and improvements to people already born. “You’d constantly be tweaking and getting feedback,” he says—something that’s hard to do with the germline, since humans take so long to grow up. Changes to adult bodies would not be passed down, but Church thinks they could easily count as a form of heredity. He notes that railroads, eyeglasses, cell phones—and the knowledge of how to make and use all these technologies—are already all transmitted between generations. “We’re clearly inheriting even things that are inorganic,” he says. 

The biotechnology industry is already finding ways to emulate the effects of rare, beneficial variants. A new category of heart drugs, for instance, mimics the effect of a rare variation in a gene, called PCSK9, that helps maintain cholesterol levels. The variation, initially discovered in a few people in the US and Zimbabwe, blocks the gene’s activity and gives them ultra-low cholesterol levels for life. The drugs, taken every few weeks or months, work by blocking the PCSK9 protein. One biotech company, though, has started trying to edit the DNA of people’s liver cells (the site of cholesterol metabolism) to introduce the same effect permanently. 

For now, gene editing of adult bodies is still challenging and is held back by the difficulty of “delivering” the CRISPR instructions to thousands, or even billions of cells—often using viruses to carry the payloads. Organs like the brain and muscles are hard to access, and the treatments can be ordeals. Fatalities in studies aren’t unheard-of. But biotech companies are pouring dollars into new, sleeker ways to deliver CRISPR to hard-to-reach places. Some are designing special viruses that can home in on specific types of cells. Others are adopting nanoparticles similar to those used in the covid-19 vaccines, with the idea of introducing editors easily, and cheaply, via a shot in the arm. 

At the Innovative Genomics Institute, a center established by Doudna in Berkeley, California, researchers anticipate that as delivery improves, they will be able to create a kind of CRISPR conveyor belt that, with a few clicks of a mouse, allows doctors to design gene-editing treatments for any serious inherited condition that afflicts children, including immune deficiencies so uncommon that no company will take them on. “This is the trend in my field. We can capitalize on human genetics quite quickly, and the scope of the editable human will rapidly expand,” says Urnov, who works at the institute. “We know that already, today—and forget 2124, this is in 2024—we can build enough CRISPR for the entire planet. I really, really think that [this idea of] gene editing in a syringe will grow. And as it does, we’re going to start to face very clearly the question of how we equitably distribute these resources.” 

For now, gene-editing interventions are so complex and costly that only people in wealthy countries are receiving them. The first such therapy to get FDA approval, a treatment for sickle-cell disease, is priced at over $2 million and requires a lengthy hospital stay. Because it’s so difficult to administer, it’s not yet being offered in most of Africa, even though that is where sickle-cell disease is most common. Such disparities are now propelling efforts to greatly simplify gene editing, including a project jointly paid for by the Gates Foundation and the National Institutes of Health that aims to design “shot in the arm” CRISPR, potentially making cures scalable and “accessible to all.” A gene editor built along the lines of the covid-19 vaccine might cost only $1,000. The Gates Foundation sees the technology as a way to widely cure both sickle-cell and HIV—an “unmet need” in Africa, it says. To do that, the foundation is considering introducing into people’s bone marrow the exact HIV-defeating genetic change that He tried to install in embryos. 

Scientists can foresee great benefits ahead—even a “final frontier of molecular liberty,” as Christopher Mason, a “space geneticist” at Weill Cornell Medicine in New York, characterizes it. Mason works with newer types of gene editors that can turn genes on or off temporarily. He is using these in his lab to make cells resistant to radiation damage. The technology could be helpful to astronauts or, he says, for a weekend of “recreational genomics”—say, boosting your repair genes in preparation to visit the site of the Chernobyl power plant. The technique is “getting to be, I actually think it is, a euphoric application of genetic technologies,” says Mason. “We can say, hey, find a spot on the genome and flip a light switch on or off on any given gene to control its expression at a whim.”  

Easy delivery of gene editors to adult bodies could give rise to policy questions just as urgent as the ones raised by the CRISPR babies. Whether we encourage genetic enhancement—in particular, free-market genome upgrades—is one of them. Several online health influencers have already been touting an unsanctioned gene therapy, offered in Honduras, that its creators claim increases muscle mass. Another risk: If changing people’s DNA gets easy enough, gene terrorists or governments could do it without their permission or knowledge. One genetic treatment for a skin disease, approved in the US last year, is formulated as a cream—the first rub-on gene therapy (though not a gene editor). 

Some scientists believe new delivery tools should be kept purposefully complex and cumbersome, so that only experts can use them—a biological version of “security through obscurity.” But that’s not likely to happen. “Building a gene editor to make these changes is no longer, you know, the kind of technology that’s in the realm of 100 people who can do it. This is out there,” says Urnov. “And as delivery improves, I don’t know how we will be able to regulate that.”

In our conversation, Urnov frequently returned to that list of superpowers—genetic variants that make some people outliers in one way or another. There is a mutation that allows people to get by on five hours of sleep a night, with no ill effects. There is a woman in Scotland whose genetic peculiarity means she feels no pain and is perpetually happy, though also forgetful. Then there is Eero Mäntyranta, the cross-country ski champion who won three medals at the 1964 Winter Olympics and who turned out to have an inordinate number of red blood cells thanks to an alteration in a gene called the EPO receptor. It’s basically a blueprint for anyone seeking to join the Enhanced Games, the libertarian plan for a pro-doping international sports competition that critics call “borderline criminal” but which has the backing of billionaire Peter Thiel, among others. 

All these are possibilities for the future of the human genome, and we won’t even necessarily need to change embryos to get there. Some researchers even expect that with some yet-to-be-conceived technology, updating a person’s DNA could become as simple as sending a document via Wi-Fi, with today’s viruses or nanoparticles becoming anachronisms like floppy disks. I asked Church for his prediction about where gene-editing technology is going in the long term. “Eventually you’d get shot up with a whole bunch of things when you’re born, or it could even be introduced during pregnancy,” he said. “You’d have all the advantages without the disadvantages of being stuck with heritable changes.” 

And that will be evolution too.

Want to understand the future of technology? Take a look at this one obscure metal.

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

On a sunny morning in late spring, I found myself carefully examining an array of somewhat unassuming-looking rocks at the American Museum of Natural History. 

I’ve gotten to see some cutting-edge technologies as a reporter, from high-tech water treatment plants to test nuclear reactors. Peering at samples of dusty reddish monazite and speckled bastnäsite, I saw the potential for innovation there, too. That’s because all the minerals spread out across the desk contain neodymium, a rare earth metal that’s used today in all sorts of devices, from speakers to wind turbines. And it’s likely going to become even more crucial in the future. 

By the time I came to the museum to see some neodymium for myself, I’d been thinking (or perhaps obsessing) about the metal for months—basically since I’d started reporting a story for our upcoming print issue that is finally out online. The story takes a look at what challenges we’ll face with materials for the next century, and neodymium is center stage. Let’s take a look at why I spent so long thinking about this obscure metal, and why I think it reveals so much about the future of technology. 

In the new issue of our print magazine, MIT Technology Review is celebrating its 125th anniversary. But rather than look back to our 1899 founding, the team decided to look forward to the next 125 years. 

I’ve been fascinated by topics like mining, recycling, and alternative technologies ever since I started reporting on climate. So when I started thinking about the distant future, my mind immediately went to materials. What kind of stuff will we need? Will there be enough of it? How does tech advancement change the picture?

Zooming out to the 2100s and beyond changed the stakes and altered how I thought about some of the familiar topics I’ve been reporting on for years. 

For example, we have enough of the stuff we need to power our world with renewables. But in theory, there is some future point at which we could burn through our existing resources. What happens then? As it turns out, there’s more uncertainty about the amount of resources available than you might imagine. And we can learn a lot from previous efforts to project when fossil-fuel production will peak and begin to decline, a concept known as peak oil. 

We can set up systems to reuse and recycle the metals that are most important for our future. These facilities could eventually help us mine less and make material supply steadier and even cheaper. But what happens when the technology these facilities are designed to recycle inevitably changes, possibly rendering old setups obsolete? Predicting what materials will be important, and adjusting efforts to make and reuse them, is complicated to say the least. 

To try to answer these massive questions, I took a careful look at one particular metal: neodymium. It’s a silvery-white rare earth metal, central to powerful magnets that are at the heart of many different technologies, both in the energy sector and beyond. 

Neodymium can stand in for many of the challenges and opportunities we face with materials in the coming century. We’re going to need a lot more of it in the near future, and we could run into some supply constraints as we race to mine enough to meet our needs. It’s possible to recycle the metal to cut down on the extraction needed in the future, and some companies are already trying to set up the infrastructure to do so. 

The world is well on its way to adapting to conditions that are a lot more neodymium-centric. But at the same time, efforts are already underway to build technologies that wouldn’t need neodymium at all. If companies are able to work out an alternative, it could totally flip all our problems, as well as efforts to solve them, upside down. 

Advances in technology can shift the materials we need, and our material demands can push technology to develop in turn. It’s a loop, one that we need to attempt to understand and untangle as we move forward. I hope you’ll read my attempt to start doing that in my feature story here. 


Now read the rest of The Spark

Related reading

For a more immediate look at the race to produce rare earth metals, check out this feature story by Mureji Fatunde from January. 

I started thinking more deeply about material demand when I was reporting stories about recycling, including this 2023 feature on the battery recycling company Redwood Materials. 

For one example of how companies are trying to develop new technologies that’ll change the materials we need in the future, check out this story about rare-earth-free magnets from earlier this year. 

Another thing

“If we rely on hope, we give up agency. And that may be seductive, but it’s also surrender.”

So writes Lydia Millet, author of over a dozen books, in a new essay about the emotions behind fighting for a future beyond climate change. It was just published online this week. It’s also featured in our upcoming print issue, and I’d highly recommend it. 

Keeping up with climate  

For a look inside what it’s really like to drive a hydrogen car, this reporter rented one and took it on a road trip, speaking to drivers along the way. (The Verge)

→ Here’s why electric vehicles are beating out hydrogen-powered ones in the race to clean up transportation. (MIT Technology Review)

As temperatures climb, we’ve got a hot steel problem on our hands. Heat can cause steel, as well as other materials like concrete, to expand or warp, which can cause problems from slowing down trains to reducing the amount of electricity that power lines can carry. (The Atlantic)

Oakland is the first city in the US running all-electric school buses. And the vehicles aren’t only ferrying kids around; they’re also able to use their batteries to help the grid when it’s needed. (Electrek)

Form Energy plans to build the largest battery installation in the world in Maine. The system, which will use the company’s novel iron-air chemistry, will be capable of storing 8,500 megawatt-hours’ worth of energy. (Canary Media)

→ We named Form one of our 15 Climate Tech companies to watch in 2023. (MIT Technology Review)

In one of the more interesting uses I’ve seen for electric vehicles, Brussels has replaced horse-drawn carriages with battery-powered ones. They look a little like old-timey cars, and operators say business hasn’t slowed down since the switch. (New York Times)

Homeowners are cashing in on billions of dollars in tax credits in the US. The money, which rewards use of technologies that help make homes more energy efficient and cut emissions, is disproportionately going to wealthier households. (E&E News)

Airlines are making big promises about using new jet fuels that can help cut emissions. Much of the industry aims to reach 10% alternative fuel use by the end of the decade. Actual rates hit 0.17% in 2023. (Bloomberg)

Solar farms can’t get enough sheep—they’re great landscaping partners. Soon, 6,000 sheep will be helping keep the grass in check between panels in what will be the largest solar grazing project in the US. (Canary Media)