WordPress 6.1 Is Jam-Packed With Performance Upgrades via @sejournal, @martinibuster

WordPress 6.1, arriving November 2022, is jam-packed with performance improvements, signaling that this may be the update that many have been waiting for.

WordPress performance scores from the beginning of 2022 until the end of September have been relatively stable, rising a few percentage points in the spring, then dropping in the summer, and ending September roughly 3 percentage points higher than at the start of the year.

WordPress Core Web Vitals Performance in 2022

WordPress Core Web Vitals Performance graph.

Most other platforms have enjoyed more substantial performance improvements month after month in 2022:

  • Wix up by 13 points
  • Duda up by 8 points
  • Squarespace up by 7 points
  • Joomla up by 7 points
  • Drupal up by 5 points

WordPress publishers hoping for performance improvements had little good news this year.

But that’s about to change in a big way.

WordPress 6.1 comes packed with substantial improvements, some of which are referred to as “massive.”

“Massive Improvement” To WP_Query Performance

An important performance boost is the addition of caching for database queries made through WP_Query.

What that means is that when a visitor requests a webpage, the data retrieved from the database for that request is stored in a cache, an in-memory record of what was fetched.

When another visitor requests the same webpage, WordPress retrieves the data from the cache (memory) rather than querying the database again. That results in faster performance.
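To put that in concrete terms, here is a minimal sketch of an ordinary WP_Query call. Nothing about how the query is written needs to change to benefit in 6.1; the `cache_results` argument shown is the documented per-query opt-out.

```php
<?php
// Minimal sketch, assuming a standard WordPress 6.1 install: the query
// runs as usual, but 6.1 caches the underlying database results so an
// identical query later can be served from the object cache instead of MySQL.
$recent_posts = new WP_Query( array(
	'post_type'      => 'post',
	'posts_per_page' => 10,
	'cache_results'  => true, // the 6.1 default; set to false to opt out per query
) );

while ( $recent_posts->have_posts() ) {
	$recent_posts->the_post();
	the_title( '<h2>', '</h2>' );
}
wp_reset_postdata();
```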

The WordPress core contributor who worked on this improvement recently tweeted:

“In WordPress 6.1, there is a massive improvement to database performance.

Database queries in WP_Query are now cached.

A ticket I have been working on for 5+ years was merged.

This should result in billions of less repeated database queries”

Improvement to REST API Database Queries

When a webpage is requested, WordPress makes multiple database queries for the various components of that page. This improvement streamlines the process by reducing the number of those queries.

WordPress explains:

“When running profiling tools against the responses of REST API requests, it was discovered that post controllers request a lot of linked data to each post.

For example, when returning a post in a REST API response, linked data such as author (user), featured image, and parent post were all requested.

As these linked items were not primed in caches, it could mean that for each post in the REST API response there would be 3 separate database queries: one for the user, one for the featured image, and another for the parent post.

In WordPress 6.1 all the caches are primed in a single database query.”

Additional Caching Improvements

There are other cache-related improvements that speed up the delivery of webpages.

Improvements to Cache API

There are two improvements here.

In simple terms, the first, called “Check cache key types,” addresses a problem created by plugins that pass invalid cache keys.

It fixes an issue in which an invalid key could cause anything from slightly odd behavior to outright failure.

This improvement makes it easier for plugin developers to notice the problem.

The developer note on this fix states:

“This commit introduces a quick type check on the given cache keys and adds a _doing_it_wrong() message that should help plugin developers to notice these issues quicker.”
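To illustrate the class of bug this targets, here is a hedged, hypothetical sketch; the plugin function and meta key names are invented for illustration.

```php
<?php
// Hypothetical plugin code. get_post_meta() returns an empty string when
// the meta key is missing, so $cache_key can silently end up empty.
function myplugin_cache_value( $post_id, $value ) {
	$cache_key = get_post_meta( $post_id, 'myplugin_cache_id', true );

	// Before 6.1 an empty key could fail in quiet, confusing ways; in 6.1
	// this call triggers a _doing_it_wrong() notice in debug output, so
	// the mistake is much easier to spot.
	wp_cache_set( $cache_key, $value, 'myplugin' );
}
```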

The second improvement is called “Remove `@access private` from cache priming functions.”

In plain English, this improvement gives theme and plugin developers access to certain functions that result in fewer database queries, which in turn speeds up site performance.
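As a rough sketch of what that opens up, the helpers below are the cache-priming functions referenced in the 6.1 notes; the function_exists() guards are there because the exact names and availability should be verified against your WordPress version.

```php
<?php
// Hedged sketch: warm linked-data caches for a batch of posts up front,
// so the loop that renders them doesn't trigger one extra query per post.
$query = new WP_Query( array(
	'post_type'      => 'post',
	'posts_per_page' => 20,
) );

if ( function_exists( 'update_post_author_caches' ) ) {
	update_post_author_caches( $query->posts ); // one query for all authors
}
if ( function_exists( 'update_post_parent_caches' ) ) {
	update_post_parent_caches( $query->posts ); // one query for all parent posts
}
```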

Post And Post Type Improvements

This improves performance for websites with a large number of custom taxonomies (like tags and categories).

Multisite Improvements

This reduces database queries in multisite environments.

Media Handling Improvement

This performance boost comes from deferring how images are decoded so that they don’t slow down webpage retrieval and display.

WordPress describes it like this:

“Recently I have been playing around with another special attribute to the img tag which is basically decoding=”async”.

After implementing async decoding to the images in a page, the page load became, even more, faster and images are decoded asynchronously by the browser, loading the contents almost instantly and also reducing page render time. This is a huge performance booster to any webpages which has a lot of images (so basically most sites).”
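In practice, WordPress 6.1 adds the attribute to content images automatically, producing markup like `<img src="photo.jpg" loading="lazy" decoding="async">`. For developers who need to change the default, here is a hedged sketch using the filter named in the 6.1 media dev notes; verify the hook and its signature in your version before relying on it.

```php
<?php
// Hedged sketch: adjust the decoding attribute WordPress 6.1 adds to images.
// 'wp_img_tag_add_decoding_attr' is the filter named in the 6.1 dev notes.
add_filter( 'wp_img_tag_add_decoding_attr', function ( $value, $image, $context ) {
	// Valid values are 'async', 'sync', and 'auto'; return false to omit the attribute.
	return 'async';
}, 10, 3 );
```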

Improved PHP performance for Core Blocks Registration

This improves how “block registration” is handled, that is, how blocks are identified and loaded.

WordPress describes it:

“…reduce filesystem reads and processing of block.json files, which should benefit all WordPress sites and improve performance…”
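For context, core and plugin blocks alike are registered from block.json metadata files, and the improvement reduces how often those files are read and parsed. Here is a minimal sketch of the standard registration pattern, with hypothetical plugin and block names; the registration code itself is unchanged in 6.1.

```php
<?php
// Minimal sketch: register a block from the directory holding its block.json.
// The function and path names here are hypothetical.
function myplugin_register_blocks() {
	register_block_type( __DIR__ . '/build/my-block' );
}
add_action( 'init', 'myplugin_register_blocks' );
```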

New Site Health Checks

WordPress 6.1 comes with two new site health checks. While this isn’t a performance improvement itself, it does help publishers identify whether they can improve their site performance with the use of a Persistent Object Cache and/or a Full Page Cache.

Persistent Object Cache

The Persistent Object Cache holds frequently requested webpage parts in a cache (in memory), speeding up the display of a webpage and reducing server load.
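In code terms, this is the standard Object Cache API pattern; a persistent backend such as Redis or Memcached simply makes the stored value survive between requests. A minimal sketch, with hypothetical key and group names:

```php
<?php
// Minimal sketch of the Object Cache API. With a persistent backend the
// cached value survives across requests; without one, it lasts only for
// the current page load.
global $wpdb;

$count = wp_cache_get( 'published_post_count', 'myplugin' );
if ( false === $count ) {
	$count = (int) $wpdb->get_var(
		"SELECT COUNT(*) FROM {$wpdb->posts} WHERE post_status = 'publish'"
	);
	wp_cache_set( 'published_post_count', $count, 'myplugin', HOUR_IN_SECONDS );
}
```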

Full Page Cache

A Full Page Cache is a cache of the entire webpage, essentially making a webpage behave more like a static HTML page. A full page cache is generally not useful on a site that is dynamic and contains personalization.
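Full page caching is typically implemented by a caching plugin through the advanced-cache.php drop-in, which WordPress loads very early when WP_CACHE is defined as true in wp-config.php. Here is a deliberately oversimplified sketch of that mechanism; real caching plugins also handle invalidation, cookies, logged-in users, and query strings.

```php
<?php
// wp-content/advanced-cache.php: an oversimplified sketch of a full page
// cache. Loaded early when wp-config.php defines WP_CACHE as true.
if ( 'GET' === ( $_SERVER['REQUEST_METHOD'] ?? '' ) ) {
	$cache_file = WP_CONTENT_DIR . '/cache/pages/' . md5( $_SERVER['REQUEST_URI'] ) . '.html';
	if ( is_readable( $cache_file ) ) {
		readfile( $cache_file ); // serve the stored page and skip WordPress entirely
		exit;
	}
}
```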

Cron API

This is partly a bug fix, but it’s also an improvement to wp-cron.php, the task scheduler for things like backups, scheduled posts, and updates.

The update will speed up performance by making it compatible (again) with LiteSpeed Web Server plus LSAPI. Technical details here.
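For reference, this is how tasks are attached to the Cron API that wp-cron.php executes; the hook and callback names here are hypothetical.

```php
<?php
// Minimal sketch of the Cron API. wp-cron.php fires scheduled events on
// page loads (or via a real server cron job that requests it).
add_action( 'myplugin_nightly_task', 'myplugin_run_nightly_task' );

function myplugin_run_nightly_task() {
	// e.g., run a backup or publish queued content (hypothetical work).
}

// Schedule once; wp_next_scheduled() prevents duplicate events.
if ( ! wp_next_scheduled( 'myplugin_nightly_task' ) ) {
	wp_schedule_event( time(), 'daily', 'myplugin_nightly_task' );
}
```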

Many More Performance Improvements

There are many more performance improvements but the above listed updates are the most notable.

In addition to performance improvements, WordPress 6.1 will also arrive with improvements to accessibility and the block editor, more flexibility and options for theme developers, external libraries updated to their latest versions, new functions, new filters, and even a new oEmbed provider for podcasts.

Updates are generally incremental, but WordPress 6.1 is shaping up to be a fairly substantial one. Look for it to be released on November 1, 2022.

Citation

Performance Field Guide for WordPress 6.1

Content Personalization: What Is It? via @sejournal, @rio_seo

Are customers finding relevant, informative content when they search for your business or visit your website?

How do you create a more seamless experience devoid of friction from the start of the sales journey?

Creating meaningful experiences through personalized content is a great place to start.

If you’re not consistently testing, analyzing, and refining your customer experience strategy, you risk losing your current and potential customer base.

One of the most integral components of the customer experience is content marketing.

Yet, many content marketers neglect to create relevant and useful content, instead focusing on how the content benefits their business rather than the customer.

In this post, we’ll explore what exactly personalized content is, how it benefits the end user, examples of personalization, and how to create a successful content personalization strategy.

Let’s get started.

What Is Personalized Content?

Customers crave personalization in every aspect of life – from their shopping preferences to the types of food they eat and the home decor styles they desire.

They are more likely to spend their time and money on products and services that align with their preferences, wants, and needs.

For example, say you’re shopping for black winter boots on a retailer’s website and view multiple product pages featuring different boots, but don’t actually purchase anything.

When you exit the page, you’re later sent a promotional email for 20% off the retailer’s winter jackets.

In this instance, you might ignore the retailer’s email and even unsubscribe entirely from its email list, as you are being served irrelevant content.

This example halts the user’s journey rather than moving them further down the sales funnel.

It would have been a more worthwhile strategy to deliver engaging content based on the customer’s predetermined shopping preferences and the items they are actually looking for.

The Case For Building A Content Personalization Strategy

A content personalization strategy entails leveraging online consumer data insights to deliver relevant content.

By consistently monitoring and analyzing this data, brands can, in turn, better understand their end users’ interests and motivations.

Surfacing relevant and timely information improves the online user experience, leading to higher conversions and sales.

Research shows that 80% of consumers are more likely to buy from a company that provides a tailored experience, furthering the need for a personalized content marketing strategy.

The results of content personalization are tangible for businesses too.

Ninety-seven percent of marketers report a measurable lift from their personalization efforts.

Additionally, a separate study found that 51% of retailers with an end-to-end personalization strategy earned 300% ROI or more.

Knowing personalization can improve conversion rates, how can content marketers improve this effort? We’ll explore that next.

How Can I Personalize My Content?

Creating individualized content sounds ideal in theory, but how can your business effectively take on this endeavor?

As a savvy marketer, you should understand the demographic factors and ideal target personas that make up your audience.

Your audience probably has different wants and needs.

Thinking back to the previous winter boots example and applying it to your audience, different members of your audience likely have varying budgets, style preferences, and uses for the boots.

This is where audience segmentation comes into play.

Whether your business uses Google Analytics, another web analytics service, or a personalization software solution, you can break down your audience segments into groups.

By breaking down your audience segments and behaviors, you will better understand the types of content each group will engage with most and what will resonate best.

Generally, four methods can be used to segment audiences based on these user behavior insights, which we will explain further below.

Demographic Personalization

Demographic personalization entails segmenting your audience based on their demographic makeup and other behavioral factors. This may include targeting a customer based on their:

  • Age.
  • Geography.
  • Language.
  • Gender.
  • Job title.
  • Devices used.
  • Browser.
  • Screen resolution.
  • Device category (desktop, mobile, tablet, etc.).
  • And more.

Demographic personalization can help provide more relevant information, but it shouldn’t be the only way your business segments your audience.

Persona-Based Personalization

Every business should have a strong understanding of its ideal buyer persona – from what your target customer looks like to how they shop, work, and behave.

Persona-based personalization goes a level deeper than just understanding your audience’s demographics.

It entails understanding purchase drivers, pain points and challenges, and the user’s role in the purchasing decision.

For more complex purchases, there are likely several key personas you’ll want to develop content for.

Personalizing content to each key decision-maker allows you to connect with a wider audience of stakeholders and address their concerns more effectively.

For example, a chief financial officer (CFO) may want to learn how you can solve their problems for less. A manager may focus on ease of use, training, and implementation.

Each persona will have a different pain point. It’s up to your business to explain how you can solve these diverse pain points for each stakeholder.

To obtain this deeper level of information, ask your customers to fill out a brief online survey post-purchase.

Keep the online survey short; each question asked should have a purpose for evaluating either the customer or your business.

You can also build customer profiles through your email marketing efforts. Ask your customers to opt into your emails during your checkout process.

Allow customers to select their email preferences, from the type of content they want to receive from your business to the frequency of emails they’d like to receive from you.

These insights will help you discern the types of content your customers want to receive from you.

Buyer-Journey Personalization

Delivering content based on where users are in the sales funnel is crucial.

For example, if a customer found your business through search, they’re likely in the awareness stage and comparing you to competitors.

They’re seeking more information to help guide their purchase decision at this stage.

A business may benefit from sharing content in the form of a blog post, video, or social content in the awareness stage.

If a customer already has made previous purchases with you, they’ll want more personalized content.

In the previous retailer example, if the customer bought black boots from you before, perhaps they’ll be enticed to purchase from you again with a 15% off SMS message.

When your digital marketing team creates compelling content that anticipates and matches the buyer’s interest and stage of the sales journey, you increase the chances of conversion and drive more qualified leads.

Content insights will also enable digital marketers and sales teams to better understand what content is most impactful, so you can better tailor your content calendar and frame your sales approach when it is time to connect.

Individual-Specific Personalization

The three aforementioned approaches to personalized content will help elevate your personalization strategy. However, you’re still crafting marketing content for a larger target audience.

Customers want to feel like more than just a number.

An Adobe survey found that 42% of consumers say seeing personalized content from a business is somewhat or very important. In the same survey, 35% of consumers stated personalized experiences improve their perception of the business.

It’s clear consumers no longer accept one-size-fits-all content experiences.

Segmenting individual consumers may seem an arduous task to accomplish manually, which is why businesses rely primarily on machine learning and AI technology to accomplish this task.

Through machine learning and AI, content is delivered using first- and third-party data to best serve the consumer’s needs.

This type of customization ensures the consumer is only presented with digital content that is relevant to them. This may look like special offers, dedicated landing pages, specific product recommendations, personal emails, and more.

What Are A Few Examples Of Content Personalization?

Many of the largest, most recognizable industry innovators shape their user experiences around personalization.

Netflix is a common household name and a well-known service to many.

As Netflix shares, personalization plays a large role in its mission.

“Personalized recommendations on the Netflix Homepage are based on a user’s viewing habits and the behavior of similar users. These recommendations, organized for efficient browsing, enable users to discover the next great video to watch and enjoy without additional input or an explicit expression of their intents or goals.”

Google Discover is yet another tool that relies heavily on personalization and curates a feed of content based on a user’s previous searches.

Content that surfaces is unique to the individual and what Google’s automated systems believe to be a good match for the individual’s interests.

For example, if you often search for sports scores or the odds of your favorite football team winning its next game, you’ll likely have a feed filled with sports-related content.

Both of these companies utilize complex machine learning and algorithms to drive their personalization efforts.

While most businesses can’t execute their personalization strategies at the same level as Netflix or Google, personalization solutions can help bridge this gap.

Personalization Isn’t Going Anywhere

The demand for personalization is on the rise, and more marketers are recognizing the benefit of focusing their efforts on improving their customer’s experience.

Netflix, Spotify, Google, Nike, Amazon, and more large companies are prime examples of businesses that excel at personalizing content. Other businesses are on board, too.

A third of organizations are already spending more than half their marketing budget on personalizing digital content. And 97% of organizations plan to maintain or increase their personalization budget over the next five years.

Meet customers’ needs, discontinue broad-based content, and develop or ramp up your content personalization efforts to improve your customer experience (and your ROI).



Featured Image: Dean Drobot/Shutterstock

AMP: Is It A Google Ranking Factor? via @sejournal, @MattGSouthern

AMP is an HTML framework that helps desktop-optimized sites deliver ultra-fast mobile versions of webpages.

AMP is a creation of Google, which has led to claims it gives pages a ranking advantage in mobile search over non-AMP pages.

When you think about it, AMP ticks several boxes that suggest it could be a ranking factor:

  • Developed by Google ✅
  • Makes websites more mobile-friendly ✅
  • Improves page speed ✅

Despite actively encouraging people to use it, Google has debunked claims that AMP is a ranking factor.

Case closed, right?

It’s easy to say AMP doesn’t give a site an advantage in rankings and leave it at that.

But we can’t write it off and ignore the impact it has on other elements that do matter for SEO.

Here’s what the evidence says about AMP’s impact on search results and its connection to other ranking factors.

And if you have more questions about what is and isn’t a Google Ranking Factor, we answer them all in the latest Google Ranking Factors: Fact Or Fiction ebook.

The Claim: AMP As A Ranking Factor

The claim here is straightforward – AMP gives pages a ranking boost in Google’s search results.

Discussions about AMP as a ranking factor began when Google launched the technology in 2016.

Why?

People think AMP is a ranking factor because Google has a stake in its success as a technology.

Google is responsible for creating AMP and actively encourages using it as part of a more significant effort to speed up the web.

In theory, Google could increase the adoption rate of AMP by turning it into a ranking signal.

The ranking boost would be a reward for using Google’s new technology. Of course, that would be unfair to any site not using AMP.

If Google used AMP to rank search results, you could argue it would be forcing sites to use its technology to stay relevant.

Thankfully, that’s not how search works.

But AMP isn’t irrelevant to SEO by any stretch.

Let’s look at the evidence on how AMP impacts SEO.

The Evidence Against AMP As A Ranking Factor

This one is pretty easy – Google has confirmed that AMP is not a ranking factor. Again. And again.

In Google’s Advanced SEO guide, the company says it ranks all pages using the same signals:

“While AMP itself isn’t a ranking factor, speed is a ranking factor for Google Search. Google Search applies the same standard to all pages, regardless of the technology used to build the page.”

This quote touches on something we mentioned earlier about AMP impacting other things, like page speed, which are confirmed ranking factors.

Sites that use AMP can potentially benefit from these other signals.

As of July 2018, page speed has been a ranking factor for mobile searches.

Because AMP loads pages instantly, it can help sites send stronger ranking signals regarding mobile page speed.

The increased speed has the potential to lead to better rankings. However, sites can generate the same signals without AMP.

Core Web Vitals

Google’s Core Web Vitals became ranking factors with the rollout of the Page Experience update in June 2021.

Leading up to the update’s launch, Google’s communication to site owners has always been that AMP can help achieve ideal Core Web Vitals scores.

“There is a high likelihood that AMP pages will meet the thresholds. AMP is about delivering high-quality, user-first experiences; its initial design goals are closely aligned with what Core Web Vitals measure today.

This means that sites built using AMP likely can easily meet Web Vitals thresholds.”

Google presented data showing that AMP domains were five times more likely to pass Core Web Vitals than non-AMP domains.

Passing Google’s Core Web Vitals thresholds can improve a site’s search rankings.

Again, as with the page speed ranking boost, you can achieve this without AMP.

Decline Of AMP

AMP used to carry various perks that could enhance how a page appears in search results.

For example, Google’s Top Stories carousel appears at the top of search results when looking for news stories and used to only accept AMP pages.

Top Stories eligibility was a ranking advantage unique to AMP for a while.

That changed in June 2021 with the rollout of the Page Experience update, which now makes it possible for non-AMP pages to appear in the Top Stories carousel.

Another unique feature of AMP pages was that a lightning bolt icon appeared in search results to indicate which pages offered faster experiences.

Google has done away with that icon. Now, AMP pages are indistinguishable from regular pages in search results.

AMP Decline Continues

Since the first version of this ebook was released, AMP has continued to decline steadily.

In response to complaints from publishers and readers alike, an update to Google News on mobile bypasses AMP URLs and sends traffic directly to publishers’ websites.

Google has a long history of introducing new tools, hyping them up, and retiring them years later. Examples include failed social media site Google+ and Skype competitor Google Hangouts.

AMP isn’t retired yet, but it would be on-brand for Google to say that this web technology it developed is no longer relevant.

It’s not only Google saying it, but other web companies are dropping AMP from their platforms.

Twitter no longer supports AMP links and treats them like Google News by redirecting visitors to the publishers’ domains.

As more companies continue to limit the prevalence of AMP, there’s no reason why you should have to do anything different to your website.

If you’re currently using AMP and are happy with how your webpages perform in search results, then continue doing what you’re doing.

Should the day come when Google shutters the AMP project, it’s not as though your website will stop working. AMP is HTML code crawled and indexed like anything else on the web.

Our Verdict: AMP Is Not A Ranking Factor


Google has confirmed multiple times that AMP is not a Google ranking factor.

Further, it no longer has unique advantages that could impact click-through rates, such as a distinctive icon and Top Stories exclusivity.

AMP can positively impact other ranking factors (e.g., speed), but it is not a factor on its own.

Find out more about ranking factors: Google Ranking Factors: Fact Or Fiction.


Featured image: Paulo Bobita/Search Engine Journal


The messy morality of letting AI make life-and-death decisions

In a workshop in Rotterdam in the Netherlands, Philip Nitschke—“Dr. Death” or “the Elon Musk of assisted suicide” to some—is overseeing the last few rounds of testing on his new Sarco machine before shipping it to Switzerland, where he says its first user is waiting. 

This is the third prototype that Nitschke’s nonprofit, Exit International, has 3D-printed and wired up. Number one has been exhibited in Germany and Poland. “Number two was a disaster,” he says. Now he’s ironed out the manufacturing errors and is ready to launch: “This is the one that will be used.”

A coffin-size pod with Star Trek stylings, the Sarco is the culmination of Nitschke’s 25-year campaign to “demedicalize death” through technology. Sealed inside the machine, a person who has chosen to die must answer three questions: Who are you? Where are you? And do you know what will happen when you press that button? 

Here’s what will happen: The Sarco will fill with nitrogen gas. Its occupant will pass out in less than a minute and die by asphyxiation in around five. 

A recording of that short, final interview will then be handed over to the Swiss authorities. Nitschke has not approached the Swiss government for approval, but Switzerland is one of a handful of countries that have legalized assisted suicide. It is permitted as long as people who wish to die perform the final act themselves. 

Nitschke wants to make assisted suicide as unassisted as possible, giving people who have chosen to kill themselves autonomy, and thus dignity, in their final moments. “You really don’t need a doctor to die,” he says. 

Because the Sarco uses nitrogen, a widely available gas, rather than the barbiturates that are typically used in euthanasia clinics, it does not require a physician to administer an injection or sign off on lethal drugs. 

At least that’s the idea. Nitschke has not yet been able to sidestep the medical establishment fully. Switzerland requires that candidates for euthanasia demonstrate mental capacity, Nitschke says, which is typically assessed by a psychiatrist. “There’s still a belief that if a person is asking to die, they’ve got some sort of undiagnosed mental illness,” he says. “That it’s not rational for a person to seek death.”

He believes he has a solution, however. Exit International is working on an algorithm that Nitschke hopes will allow people to perform a kind of psychiatric self-assessment on a computer. In theory, if a person passed this online test, the program would provide a four-digit code to activate the Sarco. “That’s the goal,” says Nitschke. “Having said all that, the project is proving very difficult.” 

Nitschke’s mission may seem extreme—even outrageous—to some. And his belief in the power of algorithms may prove to be overblown. But he is not the only one looking to involve technology, and AI in particular, in life-or-death decisions.

Yet where Nitschke sees AI as a way to empower individuals to make the ultimate choice by themselves, others wonder if AI can help relieve humans from the burden of such choices. AI is already being used to triage and treat patients across a growing number of health-care fields. As algorithms become an increasingly important part of care, we must ensure that their role is limited to medical decisions, not moral ones.

Medical care is a limited resource. Patients must wait for appointments to get tests or treatment. Those in need of organ transplants must wait for suitable hearts or kidneys. Vaccines must be rolled out first to the most vulnerable (in countries that have them). And during the worst of the pandemic, when hospitals faced a shortage of beds and ventilators, doctors had to make snap decisions about who would receive immediate care and who would not—with tragic consequences. 

The covid crisis brought the need for such choices into harsh focus—and led many to wonder if algorithms could help. Hospitals around the world bought new or co-opted existing AI tools to assist with triage. Some hospitals in the UK that had been exploring the use of AI tools to screen chest x-rays jumped on those tools as a fast, cheap way to identify the most severe covid cases. Suppliers of this tech, such as Qure.ai, based in Mumbai, India, and Lunit, based in Seoul, Korea, took on contracts in Europe, the US, and Africa. Diagnostic Robotics, an Israeli firm that supplies AI-based triage tools to hospitals in Israel, India, and the US, has said it saw a sevenfold jump in demand for its technology in the first year of the pandemic. Business in health-care AI has been booming ever since. 

This rush to automate raises big questions with no easy answers. What kinds of decision is it appropriate to use an algorithm to make? How should these algorithms be built? And who gets a say in how they work? 

Rhema Vaithianathan, the director of the Centre for Social Data Analytics and a professor at the Auckland University of Technology in New Zealand, who focuses on tech in health and welfare, thinks it is right that people are asking AI to help make big decisions. “We should be addressing problems that clinicians find really hard,” she says. 

One of the projects she is working on involves a teen mental-health service, where young people are diagnosed and treated for self-harming behaviors. There is high demand for the clinic, and so it needs to maintain a high turnover, discharging patients as soon as possible so that more can be brought in. 

Doctors face the difficult choice between keeping existing patients in care and treating new ones. “Clinicians don’t discharge people because they’re super scared of them self-harming,” says Vaithianathan. “That’s their nightmare scenario.”


Vaithianathan and her colleagues have tried to develop a machine-learning model that can predict which patients are most at risk of future self-harming behavior and which are not, using a wide range of data, including health records and demographic information, to give doctors an additional resource in their decision-­making. “I’m always looking for those cases where a clinician is struggling and would appreciate an algorithm,” she says. 

The project is in its early stages, but so far the researchers have found that there may not be enough data to train a model that can make accurate predictions. They will keep trying. The model does not have to be perfect to help doctors, Vaithianathan says. 

They are not the only team trying to predict the risk of discharging patients. A review published in 2021 highlighted 43 studies by researchers claiming to use machine-learning models to predict whether patients will be readmitted or die after they leave hospitals in the US. None were accurate enough for clinical use, but the authors look forward to a time when such models “improve quality of care and reduce health-care costs.”  

And yet even when AI seems accurate, scholars and regulators alike call for caution. For one thing, the data that algorithms follow and the way they follow it are human artifacts, riddled with prejudice. Health data is overpopulated by people who are white and male, for example, which skews its predictive power. And the models offer a veneer of objectivity that can lead people to pass the buck on ethical decisions, trusting the machine rather than questioning its output. 

This ongoing problem is a theme in David Robinson’s new book, Voices in the Code, about the democratization of AI. Robinson, a visiting scholar at the Social Science Matrix at the University of California, Berkeley, and a member of the faculty of Apple University, tells the story of Belding Scribner. In 1960 Scribner, a nephrologist in Seattle, inserted a short Teflon tube known as a shunt into some of his patients’ arms to prevent their blood from clotting while they underwent dialysis treatment. The innovation allowed people with kidney disease to stay on dialysis indefinitely, transforming kidney failure from a fatal condition into a long-term illness.  

When word got out, Scribner was inundated with requests for treatment. But he could not take everyone. Whom should he help and whom should he turn away? He soon realized that this wasn’t a medical decision but an ethical one. He set up a committee of laypeople to decide. Of course, their choices weren’t perfect. The prejudices at the time led the committee to favor married men with jobs and families, for example. 

The way Robinson tells it, the lesson we should take from Scribner’s work is that certain processes—bureaucratic, technical, and algorithmic—can make difficult questions seem neutral and objective. They can obscure the moral aspects of a choice—and the sometimes awful consequences.

“Bureaucracy itself can serve as a way of converting hard moral problems into boring technical ones,” Robinson writes. This phenomenon predates computers, he says, “but software-based systems can accelerate and amplify this trend. Quantification can be a moral anesthetic, and computers make that anesthetic easier than ever to administer.”

Whatever the process, we need to let that moral anesthetic wear off and examine the painful implications of the decision at hand. For Scribner, that meant asking an open panel of laypeople—instead of a group of ostensibly objective doctors meeting behind closed doors—whom to save. Today, it could mean asking for high-stakes algorithms to be audited. For now, the auditing of algorithms by independent parties is more wish-list item than standard practice. But, again using the example of kidney disease, Robinson shows how it can be done. 

By the 2000s, an algorithm had been developed in the US to identify recipients for donated kidneys. But some people were unhappy with how the algorithm had been designed. In 2007, Clive Grawe, a kidney transplant candidate from Los Angeles, told a room full of medical experts that their algorithm was biased against older people like him. The algorithm had been designed to allocate kidneys in a way that maximized years of life saved. This favored younger, wealthier, and whiter patients, Grawe and other patients argued.

Such bias in algorithms is common. What’s less common is for the designers of those algorithms to agree that there is a problem. After years of consultation with laypeople like Grawe, the designers found a less biased way to maximize the number of years saved—by, among other things, considering overall health in addition to age. One key change was that the majority of donors, who are often people who have died young, would no longer be matched only to recipients in the same age bracket. Some of those kidneys could now go to older people if they were otherwise healthy. As with Scribner’s committee, the algorithm still wouldn’t make decisions that everyone would agree with. But the process by which it was developed is harder to fault. 


Nitschke, too, is asking hard questions. 

A former doctor who burned his medical license after a years-long legal dispute with the Australian Medical Board, Nitschke has the distinction of being the first person to legally administer a voluntary lethal injection to another human. In the nine months between July 1996, when the Northern Territory of Australia brought in a law that legalized euthanasia, and March 1997, when Australia’s federal government overturned it, Nitschke helped four of his patients to kill themselves.

The first, a 66-year-old carpenter named Bob Dent, who had suffered from prostate cancer for five years, explained his decision in an open letter: “If I were to keep a pet animal in the same condition I am in, I would be prosecuted.”  

Nitschke wanted to support his patients’ decisions. Even so, he was uncomfortable with the role they were asking him to play. So he made a machine to take his place. “I didn’t want to sit there and give the injection,” he says. “If you want it, you press the button.”

The machine wasn’t much to look at: it was essentially a laptop hooked up to a syringe. But it achieved its purpose. The Sarco is an iteration of that original device, which was later acquired by the Science Museum in London. Nitschke hopes an algorithm that can carry out a psychiatric assessment will be the next step.

But there’s a good chance those hopes will be dashed. Creating a program that can assess someone’s mental health is an unsolved problem—and a controversial one. As Nitschke himself notes, doctors do not agree on what it means for a person of sound mind to choose to die. “You can get a dozen different answers from a dozen different psychiatrists,” he says. In other words, there is no common ground on which an algorithm could even be built. 

But that’s not the takeaway here. Like Scribner, Nitschke is asking what counts as a medical decision, what counts as an ethical one, and who gets to choose. Scribner thought that laypeople—representing society as a whole—should choose who received dialysis, because when patients have more or less equal chances of survival, who lives and who dies is no longer a technical question. As Robinson describes it, society must be responsible for such decisions, although the process can still be encoded in an algorithm if it’s done inclusively and transparently. For Nitschke, assisted suicide is also an ethical decision, one that individuals must make for themselves. The Sarco, and the theoretical algorithm he imagines, would only protect their ability to do so.

AI will become increasingly useful, perhaps essential, as populations boom and resources stretch. Yet the real work will be acknowledging the awfulness and arbitrariness of many of the decisions AI will be called on to make. And that’s on us. 

For Robinson, devising algorithms is a bit like legislation: “In a certain light, the question of how best to make software code that will govern people is just a special case of how best to make laws. People disagree about the merits of different ways of making high-stakes software, just as they disagree about the merits of different ways of making laws.” And it is people—in the broadest sense—who are ultimately responsible for the laws we have.

How copying plants could produce the fuel of the future

5.05 And that wraps up our first-ever ClimateTech conference! I hope you’ve learned as much as I have about what humanity is cooking up to beat back climate change. I’m looking forward to the strides we’ll discuss next year and feel more optimistic about our ability to confront this massive challenge. Signing off, and see you next year!


4.53 Fun fact: imitating photosynthesis to make fuel is one of the first topics our head climate and energy editor, James Temple, covered as a journalist. In the ten years since he wrote about the tech’s first prototype, artificial leaves have developed significantly but still have “no industry,” Houle says. Now that much of the basic science is figured out, she emphasizes that industry partners are necessary if artificial leaves are going to impact the energy grid.


4.44 The device Houle is working on resembles solar panels, but takes the “next step,” as she puts it, of using the charged particles that solar panels produce to “drive chemistry.”


4.40 Plants make energy from carbon dioxide, water, and sunlight. 40 years ago, Houle thought, “Gee, we know so much about natural photosynthesis. We should be able to do it ourselves.” So she’s worked to use inorganic materials to do the same thing.


4.34 Our last guest (can you believe it) is Frances Houle. She’s working to develop new fuels inspired by how photosynthesis works. Houle is a senior scientist at Lawrence Berkeley National Laboratory and the deputy director of the Liquid Sunlight Alliance, a Caltech-based collaborative research effort. Read more about the development of artificial leaves here.


4.32 White says the future of long-hyped fusion research is brighter than ever because of funding invested in the past 10 years in basic science research into plasma, theoretical physics, and super-computing resources.


4.28 Plasma is the scorching hot state of matter in which fusion reactions occur. Magnets that can confine plasma help us harness the power produced by fusion reactors. Last year, we profiled one of the fusion startups White highlights, Commonwealth. Commonwealth’s uniquely powerful magnets are a big part of why White thinks the company will help us realize the promise of fusion energy in 15 years.


4.17 Fusion reactors fuse lighter nuclei into heavier ones, releasing enormous amounts of energy. White says that if you could harness the energy released by fusing the positively charged nuclei of hydrogen isotopes such as tritium, “you could power a city like Boston for a year with just a pickup truck full of fusion fuel.”


4.10 Next is Anne White, who will talk with us about “the energy within the stars.” Plasma! Professor White is the head of nuclear science and engineering here at MIT. We selected practical fusion reactors, which White works on, as one of 2022’s breakthrough technologies.


4.06 When it comes to the oil and gas industry, Araque says he’s talked to “all the majors over the years but they’re just waiting to see what happens” with geothermal, which is a newer fuel source. He says partnerships with traditional fuel companies will be necessary to scale geothermal.


4.01 The deeper you drill toward the Earth’s core, the hotter it gets, and the more energy you can produce. To get to “geothermal prime time,” where geothermal is powering industries and the energy grid at a large scale, Araque says drilling has to get “hotter and deeper.” Casey wrote last year about the potential for geothermal energy to get there and new funding directed toward its expansion.


3.54 Araque wants to tap into the energy beneath our feet. During the Earth’s formation, thermal energy was trapped in the planet’s crust. By drilling down “less than the distance of a half marathon,” he argues, we can get “more energy for a fraction of the land” fracking uses.


3.48 First up is Carlos Araque. He is the CEO and co-founder of Quaise Energy. This startup aims to retrofit fossil fuel-guzzling power plants to unlock deep geothermal energy.


3.43 We’ve now arrived at ClimateTech’s final session. It’s Hana again—an MIT Technology Review fellow. I’ll walk you through our speakers’ approaches to overcoming the global climate crisis. Their battle plans include pulling heat from the earth’s depths, using plasma to generate unlimited energy, and copying how plants power themselves using the sun. These ideas might fail—or be how we save the planet.


3.12 And that’s it for this session—we’ll be back after a short break to discuss moon-shot climate innovations!


3.10 Jackson says FedEx set an emissions goal for 2040, 10 years ahead of the Paris Accords’ timeline, because the company “looked at it” and “thought [they] could do it sooner.” But still, there were inherent challenges: “We can’t go out and buy those vehicles right now at the quantities we need—they’re not being produced,” he says of an electric fleet.


3.03 “It’s a great business to be investing in sustainability,” Jackson says. He says that if the Inflation Reduction Act is able to foster more electric vehicle production, that will help the company reach its 2040 carbon-neutral goal.


2.59 “Aviation is the biggest component of our footprint,” Jackson says, and it’s not just true for FedEx but for the entire sector. (We published a story earlier this year about the challenges in reducing aviation emissions.)
FedEx is looking for sustainable aviation fuels, as a gigaton of carbon is emitted per year from aviation.


2.51 Last year, FedEx announced a goal of having carbon neutral operations by 2040, including an all electric fleet. “We were pleased that there was measurable goals” included in the Inflation Reduction Act toward transportation electrification, Jackson says.


2.44 Now on stage we have speaker Mitch Jackson, staff VP of environmental affairs and chief sustainability officer at FedEx.


2.40 Metro is currently working on a universal basic mobility project to provide “free, frequent, reliable, and safe” transit, Reynolds says—starting with areas in south LA, where there are the biggest disparities. She sees this as not just a climate intervention but an anti-poverty intervention. It includes protected bike lanes and bus-only lanes, as well as investments in community infrastructure and community committees to help create the program.


2.31 The idea of “innovation” shouldn’t overlook the individuals on the ground doing the hard work of already managing these existing systems. Reynolds says cities’ traditional way of dealing with expansion and increased population is to add more concrete—”and that doesn’t scale.”


2.25  “Telling people that they have to drive less is controversial,” Reynolds says. And how much people have to drive is an equity issue, with more need to commute in lower income communities, where people are forced to spend more time on the road—which includes all the drawbacks and risks associated, like traffic accidents.

We covered LA’s changing transportation landscape last year.


2.19 An audience member asks a question about avoiding dendrite formation in batteries, which grows exponentially with fast charging, like with lithium-metal batteries. “Dealing with lithium-metal dendrites is the key challenge” for the company, Holme says. The company has spent five years picking the right material to prevent these problems, then seven years to perfect the manufacturing process.

That’s all the time we have with Holme! Very exciting technology. Now Arieff welcomes Seleta Reynolds to the stage. Reynolds is chief innovation officer of the Los Angeles County Metropolitan Transportation Authority, which oversees bus, light rail, and subway systems within LA County.


2.12 Holme says policy has an important role to set the right incentives, and worries that incentives are still being given to fossil fuels. “A level playing field is at least the lowest bar” in terms of policy, Holme says.


2.10 “Over the last few years, things have been changing much faster than people predicted,” Holme says. But he expects that a bottleneck for battery production could appear for mining materials like nickel, which would need to ramp up.

QuantumScape’s goal is to deliver a prototype that demonstrates the main proof of principle later this year. They have submitted some products to third-party testing, which Holme says backed up the positive results they have seen internally in the company.


2.05 “To achieve these advantages, we’ve had to change a lot of the battery architecture,” Holme says. QuantumScape is working on a lithium-metal battery, which eliminates the traditional anode material in lithium-ion batteries. That technology is expected to be able to increase the range for electric vehicles and give it a faster charge. He also says their battery will be greener to produce, since roughly 40% of the emissions from creating a battery are from the anode material.


2.00 Electric vehicles are now the dominant market for batteries, surpassing the production of batteries for things like smartphones, laptops, and tablets. This sector is also affected by countries’ policy decisions around transportation electrification. “A large majority of Americans want to drive EVs, but see some impediments,” Holme says, such as range anxiety and worries over the cost of the vehicles. The battery is the “single biggest line item” in an electric vehicle, he notes.


1.56 Holme says that transportation is the single largest sector responsible for emissions every year: “I don’t think we’ll get to a zero carbon future without addressing transportation.” Luckily, electric vehicle adoption has been ramping up over the last few years, and the cost of creating batteries for electric vehicles has been decreasing exponentially since 1996.


1.51 Now on stage is Tim Holme, CTO and cofounder of QuantumScape, a company working on developing solid-state lithium metal batteries for use in electric vehicles. We wrote about lithium-metal batteries as one of the breakthrough technologies last year, which you can read more about here.


1.49 On stage is Allison Arieff, editorial director of print at MIT Technology Review. She will be speaking with representatives from companies working on next-generation batteries for electric vehicles, city transportation, and logistics.


1.44 Hello, we’re back with the third session of today’s ClimateTech talks—I’m Tammy, a reporting fellow at MIT Technology Review, and I’ll be filling you in on this segment, which focuses on climate-minded transportation innovations.


12.30 Okay, it’s time for lunch! I’m now going to hand over to my colleagues Tammy and Hana, who will talk you through the rest of the sessions when we resume afterwards. Thank you for reading, I hope you enjoy the rest of the conference!


12.20 We’re now hearing from Kent Larson, an architect, entrepreneur, and academic. He is Director of City Science at MIT Media Lab. His research focuses on compact transformable housing, ultralight autonomous mobility systems, sensing and algorithms to recognize and respond to complex human behavior, and advanced modeling, simulation, and tangible interfaces for urban design.

AI modeling tools are particularly useful because they allow diverse groups of stakeholders to come and meet and explore scenarios, he says, pointing out that open mic-style meetings are often dominated by the loudest, angriest voices.

“I think this process could be used for controversial topics in our cities, related to zoning or affordable housing, parking, issues like that,” he adds.


12.05 Next up, we have Jacopo Buongiorno, the TEPCO Professor of Nuclear Science and Engineering at MIT. He teaches a variety of undergraduate and graduate courses in thermo-fluids engineering and nuclear reactor engineering.

“We’re going to need a lot of clean energy to reduce the emissions, this is kind of obvious,” he says. “We’ll also need a lot of energy to actually power our adaptation solutions…and to defend geopolitical instability and competition for resources.”

One possible solution could come in the form of a nuclear battery, a nuclear micro reactor that uses air from the atmosphere for cooling, rather than a water source. One major advantage is that it’s transportable in standard containers, and can operate for five to 10 years and provide that energy on demand.


12.00 Andreas Rico, a graduate student from Mexico City at the MIT Media Lab with a background in Robotics, AI, and Innovation, is now talking about communities without infrastructure.

“The richest 10% [of the world] is accountable for 50% of the global CO2 emissions. And sadly, on top of this, the impacts of climate change are not equal either,” he says. “Typically, the regions that tend to be the most vulnerable are not the ones that are producing the most CO2.”

Securing energy, food and waste resources for vulnerable communities is crucial, he says, particularly ones that are low cost, lightweight, and easily distributed.


11.58 So, how do we encourage people to shift towards a plant-based diet, she asks. The solution should, and can, involve delicious plant-based alternatives and fresh local food, but that won’t necessarily be enough. Policy and strategic nudges may be required, but they’re often both unpopular and controversial. Read more about how reducing your meat consumption really can help the climate.

Shifting towards a plant-based diet is crucial for environmental sustainability, but can also improve animal welfare, public health, food security and create new local economic opportunities. Plus, it’s a change that individuals can make on their own, she adds.


11.50 Now, Alex Berke, a PhD student in the MIT Media Lab’s City Science group, is talking about low-carbon diets.

Animal products contribute the majority of food-related emissions and are much more resource-intensive than plant-based foods, she says. Producing animal products contributes more than 56% of food-related emissions and takes up more than 75% of global farmland, yet contributes only about 37% of the protein and 18% of the calories in the global food supply.

The world is producing more animal products than people need, she adds, which is particularly problematic in the US and other affluent countries where meat consumption is the highest.


11.40 Now Maitane Iruretagoyena, a technical associate at the City Science Group, has taken to the stage.

“We want to create more vibrant, productive, and creative spaces,” she says. One way to do this, she explains, is through transformable wall systems that integrate furniture storage, lighting, office and entertainment systems.

“Sometimes the living room could be transformed to a bathroom,” she adds. “So the rooms are created on demand for the function that you need.”


11.30 Next up is Ronan Doorley, an engineer and data scientist at MIT City Science, talking about hybrid working and proximity to the workplace from where you live.

Commuting contributes to emissions. AI models can provide interesting insights into how to reduce the amount of time people spend commuting by creating a kind of proximity matrix of job-to-job skill similarity, he says.

“By modeling and simulating the likely job transitions that would occur in any particular industry development scenario, we can actually start to understand how many of the newly created jobs could be attained by the local population versus how many would have to be filled from the outside community, which would likely lead to more commuting and potentially even displacement pressure on the local population,” he explains. Another important element in reducing the number of car journeys people make is to make key amenities, such as healthcare, easily accessible by foot or bicycle.


11.27 We love cities here at MIT Technology Review—so much so that we wrote a whole issue about how technology is shaping cities in June last year, and another more recently about urbanism.

Gabrielle Merite and Andre Vitorio wrote a really interesting piece for us in April 2021 exploring how megacities could lead the fight against climate change—because reducing emissions in a few of the world’s most populous cities (including New York, Los Angeles, and Shenzhen) could have an outsized impact on climate change.


11.22 Onstage now is Luis Alonso, a research scientist in the City Science group and Principal Investigator of the Andorra Living Lab Project.

When we talk about deep building retrofitting, we’re talking about increasing thermal insulation and improving exterior cladding, he says. This will reduce the amount of operational energy required.


11.20 Welcome back to the day’s second session, and the topic is how to create realistic ways for cities to adapt to the changing climate, while also creating new spaces and opportunities for people to thrive. 


10.36 We’re now taking a short break, so we should be back in half an hour. Catch you in a bit!


10.30 “The internet is running out of data centers, and data centers are running out of electricity,” she says. When Google started its sustainability journey in 2007, it realized that cutting down on electricity use was the best way to cut its carbon footprint, she adds.

Businesses should feel accountability for their emissions and carbon footprint, she says. “I do think we have a responsibility, right? From a governance perspective, how do we participate in the changes that we want and we need, but also make sure that we create the incentives and the market signals for that?” she asks. “It’s also about how do we reduce the barriers of entry for other buyers to be able to join such efforts as well.”


10.20 Renewable energy sources are going to be a key piece of the puzzle for 24/7 electricity, says Texier. But we also know that to maximize those resources, we’re going to need the storage capacity at low cost, she adds, pointing out that we already know that wind and solar supply can be unpredictable. Like our other speakers, she notes how we’re going to need to reduce emissions, but we’re also going to need to remove emissions that are already in the atmosphere to make a real long-term difference in the future.


10.12 Two years ago, Google committed to achieving a goal of “24/7 carbon-free energy” by 2030. The company says this means  “matching each hour of our electricity consumption with carbon-free electricity sources on every grid where we operate.” Read more about its plans to support emerging technologies that can provide carbon-free power around the clock. 


10.10 Next up, Maud Texier, who is head of clean energy & carbon development at Google. She leads a team responsible for developing and scaling 24/7 carbon-free energy for Google’s global infrastructure, with initiatives spanning new technologies, clean energy supply, and policy efforts.


10.06 “In some ways, climate is more recession-proof because the government has basically written a $370 billion check for it,” says Ransohoff. Read our climate reporters James and Casey’s takes on the biggest technology wins in the breakthrough climate bill, published when it was announced back in July.

“I think that applies to some parts of climate, and doesn’t apply to others,” she adds. “But I think we are slowly moving to a world where there is more certainty, and that gives the private markets, investors, and entrepreneurs more certainty that if they build something, there will be customers and they will be able to make money.”


10.00 The fundamental principles of net zero are to measure emissions, reduce them as much as you possibly can, and then deal with the rest, she says.

“I think that carbon removal is a fundamental part of net zero. But I don’t think it’s fully been metabolized by corporates or companies yet, and part of the work of building the ecosystem is figuring out how we do that,” she adds.


09.50 Stripe’s carbon removal work started as an experiment, but positive customer feedback pushed it to become a proper project, Ransohoff says, adding that around 25,000 businesses are contributing to Stripe’s carbon projects. Around 8-10% of companies sign up when they first join Stripe, a figure that reflects shifting attitudes, with businesses increasingly wanting to contribute to the fight against climate change, she says. Lowering friction for them, by making it as easy as possible, is a really important part of that.


09.45 Our next discussion is with Nan Ransohoff, head of climate at financial company Stripe. Read more about how the company is putting its money where its mouth is to fund carbon removal. 


09.43 The top things ordinary people can do to help climate initiatives are:

1) Cleaning up the energy coming into their homes

2) Switching to an electric vehicle or a bike

3) Changing their consumption habits (eating less beef, lamb, and cheese).


09.40 “A lot of the biggest actors that benefit a lot from the way the world currently works, they need to step up their game,” says Panchadsaram.

The best example of this is Tesla, he says, which has a six-to-eight-year head start over Volkswagen, Ford, GM, and a bunch of Chinese companies, all of which are now racing to catch up.

“They’re all-in on electric,” he explains. “If you’re a large company that’s behind in this way, you need to hurry up really fast—you can see the hunger that GM and Ford have.”


09.35 We’re going to need a lot more innovations to reach our 2030 goals, and we’ll need to think seriously about scaling up and investing in the technologies we’re going to need in 2040, 2050, and beyond, he says.

“I think it is taking longer these days to build companies and businesses,” he explains. “The software one-hit wonder doesn’t exist as often any more, and so the venture capital industry is extending its timeline, but if you’re a fund focused on the clean tech transition and acceleration you do likely have longer time horizons.”


09.30 “It’s really neat that energy security today is relying on clean, green things, and that’s truly the credit of the market in bringing down these costs,” says Panchadsaram.

“I think if you’re in a space that’s delivering energy, that’s moving people, feeding people, those customers will always be there—no matter what happens.”


09.27 A new wave of entrepreneurs is completely focused on bringing down the cost curve, and they know they need to bring in capital, he says. “Last year $57 billion went into clean tech jobs, and that is so inspiring.”

One thing that could distract us, he adds, is solutions that tackle only a small part of the problem. Instead, he expects multiple solutions to become common across industries like energy, transportation, and food. “These industries are so big, and we need to take every chance we can [to work out what’s going to work].”


09.20 Speed and Scale is an action plan for tackling the climate crisis, says Panchadsaram. There’s a lot of optimism, he says, but he notes that the plan’s goal of cutting carbon emissions by 59 billion tons a year is a tough one to reach.

“Energy provides how we move, how we eat, and so the scale of the problem requires all of these accelerators,” he says.


09.15 Our first session opens with a discussion with Ryan Panchadsaram, who is an engineer and investor focused on solving systemic, societal challenges at venture capital firm Kleiner Perkins.


09.10 Hello, and welcome back to ClimateTech day two! We’ve got a great lineup of speakers ahead of us today. I’m Rhiannon, a reporter at MIT Technology Review, and I’ll be covering this morning’s sessions with you up until lunch.

Like yesterday, we’re opening with a few words from James Temple, our senior climate change and energy editor.


Come back to this page for rolling updates throughout the day as we kick off the second day of ClimateTech, MIT Technology Review’s first ever conference dedicated to finding solutions for climate change.

Programming starts at 9am ET on Thursday October 13, and you can follow along here to find out what’s being said on stage.

Today we’ll be hearing from a wide range of different people working in this space, including top clean energy execs at Big Tech firms, climate scientists, government officials, startup founders, and of course, MIT Technology Review reporters and editors. Tune in at 9am!

The Download: AI’s life-and-death decisions, and plant-based steak

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

The messy morality of letting AI make life-and-death decisions

In a workshop in the Netherlands, Philip Nitschke is overseeing testing on his new assisted suicide machine. Sealed inside the coffin-sized pod, a person who has chosen to die must answer three questions: Who are you? Where are you? And do you know what will happen when you press that button? The machine will then fill with nitrogen gas, causing the occupant to pass out in less than a minute and die by asphyxiation in around five.

Despite a 25-year campaign to “demedicalize death” through technology, Nitschke has not been able to sidestep the medical establishment fully. Switzerland, which has legalized assisted suicide, requires that candidates for euthanasia demonstrate mental capacity, which is typically assessed by a psychiatrist.

A solution could come in the form of an algorithm that Nitschke hopes will allow people to perform a kind of psychiatric self-assessment. While his mission may seem extreme—even outrageous—to some, he is not the only one looking to involve technology, and AI in particular, in life-or-death decisions. Read the full story.

—Will Douglas Heaven

This fascinating piece is from our forthcoming mortality-themed issue, available from 26 October. If you want to read it when it comes out, you can subscribe to MIT Technology Review for as little as $80 a year.

Impossible Foods has a big new offering in the works: filet mignon

Progress is being made on a truly impossible-seeming area of plant-based meat products: steak. And not just any steak—filet mignon.

At MIT Technology Review’s ClimateTech event on Wednesday, Impossible Foods founder Pat Brown shared that while he couldn’t give an exact date for when the company’s steak product will be ready for consumers to purchase, there is a prototype—and he tried it out himself earlier this year. Read the full story diving into the biggest challenges of replicating the crème de la crème of steaks from plants, and tune in to our live blog covering the second day of ClimateTech later this morning.

Elsewhere at ClimateTech, our climate reporter Casey Crownhart moderated a session on “Solving the Hard-to-Solve Sectors,” digging into the industries that are crucial to combating climate change but tend to be overlooked.

She dived into the nitty-gritty of what these sectors are, what’s so hard about them, and the approaches companies are taking to clean them up in The Spark, her weekly newsletter giving you the inside track on all the latest climate innovations. Read this week’s edition, and sign up to receive it in your inbox every Wednesday.

Human brain cells transplanted into baby rats’ brains grow and form connections

Human neurons transplanted into a rat’s brain continue to grow, forming connections with the animals’ own brain cells and helping guide their behavior, new research has shown.

In a study published in the journal Nature yesterday, lab-grown clumps of human brain cells were transplanted into the brains of newborn rats. They grew and integrated with the rodents’ own neural circuits, eventually making up around one-sixth of their brains. It’s a development that could shed light on human neuropsychiatric disorders. Read the full story.

—Jessica Hamzelou

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 How China’s chipmakers are preparing for US sanctions 
Stockpiling components and planning to train AI models overseas are just some of the tools in their arsenal. (Wired $)
+ Samsung has been granted a year-long exemption from the rules. (WSJ $)
+ The regulations come at a very trying time for the industry. (Bloomberg $)

2 A robotic exoskeleton adapts to wearers to help them walk faster
Traditional exoskeletons are expensive and bulky, but this one is essentially a little robotic boot. (MIT Technology Review)

3 Amazon’s dream home is a surveillance nightmare
Its products gather swathes of data, detailing your routines and habits. (WP $)
+ Ring’s new TV show is a brilliant but ominous viral marketing ploy. (MIT Technology Review)

4 Alex Jones must pay the Sandy Hook victims’ families $1 billion
It’s a record-breaking amount for a defamation lawsuit. (Vox)

5 Ukraine’s Starlink systems are coming back online
The devices have suffered outages in the past few days, leaving soldiers without any way to communicate. (FT $)
+ Odessa’s officials have removed Elon Musk’s picture from a billboard. (Motherboard)
+ Russia’s train reliance is part of its problem during the war. (The Atlantic $)

6 The US midterms have a misinformation problem
Multilingual fact-checking groups are stepping up to try to combat the falsehoods. (NYT $)
+ Why midterm “October surprises” are rarely the revelations they seem. (Vox)

7 A long-standing malaria mystery has been solved 🦟
Experts simply couldn’t work out where mosquitoes went during hot weather. (Economist $)
+ The new malaria vaccine will save countless lives. (MIT Technology Review)

8 Fake vaccination certificates are circulating in India
It doesn’t bode well for the country’s claims of high vaccination rates. (Rest of World)

9 Even AI doesn’t like math
Some language models are failing to get to grips with tricky problems. (IEEE Spectrum)
+ A new AI tool can detect sepsis. (Undark)
+ DeepMind’s game-playing AI has beaten a 50-year-old record. (MIT Technology Review)

10 Consumer tech is going solar powered 
If this Swedish startup has its way, that is. (The Next Web)

Quote of the day

“Compare that to Lord of the Rings, when they scan your eyeballs just to get in!”

—Charlie Vickers, the actor who plays Halbrand in The Rings of Power, tells the Guardian about the intense biometric lengths the show’s makers went to in order to keep the Tolkien series a secret.

The big story

The uneasy coexistence of Yandex and the Kremlin

August 2020

While Moscow was under coronavirus lockdown between March and June 2020, the Russian capital emptied out—apart from the streams of cyclists in the trademark yellow uniform of Yandex’s food delivery service.

Often referred to in the West as Russia’s Google, Yandex is really more like Google, Amazon, Uber, and maybe a few other companies combined. It’s not so much a part of Russia’s Silicon Valley as a Russian Silicon Valley unto itself.

But Yandex’s success has come at a price. The Kremlin has long viewed the internet as a battlefield in its escalating tensions with the West and has become increasingly concerned that a company like Yandex, with the heaps of data it has on Russian citizens, could one day fall into foreign hands. In a world increasingly concerned with protecting borders and regulating the tech industry, Yandex’s dilemma may not be just a Russian story. Read the full story.

—Evan Gershkovich

We can still have nice things

A place for comfort, fun and distraction in these weird times. (Got any ideas? Drop me a line or tweet ’em at me.)

+ Hey, geese like baseball too! (thanks Craig!)
+ Here are all the summer movies you may have missed the first time around.
+ Guys, drop everything—it’s squirrel awareness month.
+ This clip reminds me how much I need to up my pool game.
+ John Lennon insisting all four Beatles were bald will never not be funny.

5 Tools to Extract Data from Google’s SERPs

Google’s search result pages offer clues as to why one URL ranks higher than another. The clues can help advance your own strategy.

Here are five handy tools to analyze search results on Google.

Search Result Tools

1. Quickly find domain rankings. Google’s results are not numbered. Thus it’s difficult to know where your domain (or a competitor’s) ranks.

SEO Search Simulator is a free Chrome extension to reproduce search results from any country. My favorite feature is the ability to locate a domain’s position for specific queries.

To use:

  • Open the extension.
  • Type your search query.
  • Paste the domain to see its listing and position in search results.

You can check several domains simultaneously for a single search phrase. You can also save searches to retain the domains for ongoing monitoring.


SEO Search Simulator reproduces search results from any country. This example is for the query “seo consulting.”

2. Copy URLs from search results. Google wraps the URLs in its search results in redirects back through its own servers, so we cannot copy (and extract) a listing’s actual URL without opening it.

That’s the purpose of SEO Minion, another free Chrome extension. It copies the entire visible (desktop) results of a query with a single click, a serious time saver. To adjust the number of visible URLs:

  • Click Google’s “Settings” icon at the top-right of the screen.
  • Go to “See all settings.”
  • Select the number of results per page (up to 100).

SEO Minion can also copy URLs from other sections, such as video carousels, product URLs from shopping results, and local businesses’ URLs from 3-packs.


SEO Minion copies the entire visible (desktop) results of a query with a single click.
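If you’d rather do what SEO Minion does in code, here’s a rough sketch that pulls result URLs out of a saved Google results page. It assumes the no-JavaScript markup where result links are wrapped in /url?q= redirects; Google’s markup changes often and scraping live results is against its terms, so treat this as illustrative only.

```python
# Extract destination URLs from a saved Google SERP page (no-JS markup).
# Assumes result links are wrapped in /url?q=<destination> redirects.
from urllib.parse import urlparse, parse_qs
from bs4 import BeautifulSoup  # pip install beautifulsoup4

with open("serp.html", encoding="utf-8") as f:
    soup = BeautifulSoup(f.read(), "html.parser")

urls = []
for a in soup.select("a[href^='/url?']"):
    # The real destination sits in the q= parameter of the redirect.
    target = parse_qs(urlparse(a["href"]).query).get("q", [None])[0]
    if target and target.startswith("http"):
        urls.append(target)

# Deduplicate while preserving ranking order, then print with positions.
for position, url in enumerate(dict.fromkeys(urls), start=1):
    print(position, url)
```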

3. Export top-ranking URLs with data. SEO Minion is helpful. But further analysis beyond the URL requires details such as title tags and backlinks.

SE Ranking’s “SERP Analyzer” pulls high-ranking pages (up to 50) from any search result into an Excel file containing the URL, title, meta description, position, backlinks, and estimated organic traffic from all the keywords.

The export lets you see the top-ranking URLs and key metrics side-by-side.

SE Ranking’s tool is not free. But it analyzes mobile and desktop results, while SEO Minion is desktop only.

“SERP analysis” in SE Ranking pulls high-ranking URLs from any search result into an Excel file containing the URL, title, meta description, position, number of backlinks, and estimated organic traffic from all the keywords (not shown).
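For reference, the shape of that export is easy to reproduce with pandas once you have the data; this sketch uses invented placeholder rows:

```python
# Sketch of the export format: one row per ranking URL with key metrics,
# written to an Excel file. Rows are invented placeholders.
import pandas as pd  # also requires openpyxl for .xlsx output

rows = [
    {"position": 1, "url": "https://example.com/guide", "title": "SEO Guide",
     "meta_description": "A beginner's guide to SEO.", "backlinks": 120},
    {"position": 2, "url": "https://example.org/tips", "title": "SEO Tips",
     "meta_description": "Ten quick SEO tips.", "backlinks": 45},
]

pd.DataFrame(rows).to_excel("serp_analysis.xlsx", index=False)
```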

4. Copy Autocomplete suggestions. Google’s autocomplete is a valuable source of keyword and content ideas. Unfortunately, there’s no easy way to copy those suggestions other than manually — search a phrase, then copy what autocomplete suggests.

SERP Help is a free Chrome extension that adds those suggestions to a “bucket” you can then copy in full. The bucket saves all of your autocomplete keywords.

SERP Help places autocomplete suggestions in a “bucket” for easy copying.
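If you want the same suggestions programmatically, Google also exposes an unofficial, undocumented suggest endpoint. It may change or be rate-limited at any time, so consider this a sketch rather than a supported API:

```python
# Fetch Google autocomplete suggestions via the unofficial suggest endpoint.
# Undocumented and subject to change; for illustration only.
import json
import requests

def autocomplete(query: str) -> list[str]:
    resp = requests.get(
        "https://suggestqueries.google.com/complete/search",
        params={"client": "firefox", "q": query},
        timeout=10,
    )
    resp.raise_for_status()
    # Response shape: [query, [suggestion, suggestion, ...]]
    return json.loads(resp.text)[1]

print(autocomplete("seo consulting"))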

5. Copy related search queries. Two additional sections of Google’s search results beyond autocomplete provide keyword ideas: “People also ask” and “Related searches.”

Keywords Everywhere is a handy extension to copy those related keywords. The premium version adds each query’s volume and cost per click (from Google Ads’ Keyword Planner).


Keywords Everywhere copies the queries in “Related searches” (at left) and “People also ask.”

Extract and Analyze

The above tools make it easy to extract URLs and related data from top search results. From there, you can analyze those pages and expand keywords.

Microsoft Bing Is Getting An AI Image Generator via @sejournal, @MattGSouthern

Microsoft Bing is getting an AI image generator in the coming weeks, which allows users to turn text into digital art.

Let’s say a picture of a Shiba Inu as an astronaut would go perfectly with a blog post you’re writing.

You turn to the search engines for a free-to-use image, but you can’t find one that matches your criteria.

With the new Image Creator tool coming to Microsoft Bing, you can generate the exact image you need by inputting descriptive text.

See an example of Image Creator in action below:

Image Credit: Screenshot from microsoft.com/en-us/microsoft-365/blog/2022/10/12/new-tools-from-microsoft-365-edge-and-bing-bridge-the-gap-between-productivity-and-creativity/, October 2022.

Image Creator is powered by DALL-E 2, the image generation technology developed by OpenAI.

In a blog post, Microsoft says Image Creator can assist searchers with creating images that don’t exist yet.

All you have to do is type in an image description, and Image Creator will generate it for you.
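Image Creator itself isn’t programmable, but the underlying DALL-E 2 technology is available through OpenAI’s API. As a rough sketch of the same text-to-image flow (using the pre-1.0 openai Python client; names and defaults may have changed since):

```python
# Minimal text-to-image sketch with OpenAI's DALL-E 2 API, the technology
# Microsoft says powers Image Creator. Assumes the pre-1.0 `openai` package
# and an OPENAI_API_KEY environment variable.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Image.create(
    prompt="a Shiba Inu in an astronaut suit, digital art",
    n=1,
    size="1024x1024",
)

# The API returns a short-lived URL for each generated image.
print(response["data"][0]["url"])
```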

When Image Creator becomes available, you can access it by navigating to the Bing Images tab and clicking on “Image Creator” or the Image Creator icon in the sidebar in Microsoft Edge.

Availability

Microsoft is taking a “measured” approach with the rollout of Image Creator, starting with a limited preview in select locations.

The gradual launch is due to how new the DALL-E 2 technology is.

Microsoft is exercising caution out of a commitment to responsible AI, the company says:

“It’s important, with early technologies like DALL∙E 2, to acknowledge that this is new and we expect it to continue to evolve and improve. We take our commitment to responsible AI seriously. To help prevent DALL∙E 2 from delivering inappropriate results across the Designer app and Image Creator, we are working together with our partner OpenAI, who developed DALL∙E 2, to take the necessary steps and will continue to evolve our approach.”

Image Creator will employ techniques to prevent misuse, including query blocking on sensitive topics and filters to limit the generation of images that violate Bing’s policies.
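Microsoft hasn’t published how its blocking works; as a toy illustration of the general idea of query blocking, a denylist check ahead of the model might look like this (the terms are placeholders):

```python
# Toy illustration of query blocking: reject prompts containing denylisted
# terms before they reach the image model. Real systems use far more
# sophisticated classifiers; these terms are placeholders.
BLOCKED_TERMS = {"blocked_term_a", "blocked_term_b"}

def is_allowed(prompt: str) -> bool:
    words = set(prompt.lower().split())
    return not words & BLOCKED_TERMS

print(is_allowed("a Shiba Inu as an astronaut"))  # True
```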

Microsoft will take feedback from the limited preview to improve Image Creator before rolling it out to everyone.


Source: Microsoft

Featured Image: Zhuravlev Andrey/Shutterstock