Website Migration SEO Best Practices To Preserve Rankings And Avoid Common Pitfalls via @sejournal, @AdamHeitzman

There are several reasons to migrate your website: you might want to enhance security with a new hosting provider, refresh your brand image, or improve the user experience.

Whatever the reason, you’ll need to follow a migration process that allows you to update your site without compromising your SEO rankings, traffic, and revenue.

Many SEO professionals face a common fear when it comes to website migration: the loss of valuable traffic, rankings, and users during the transition.

A migration strategy prevents data loss, performance dips, and downtime when you move your site to a new domain, hosting platform, or CMS.

In this guide, I’ll walk you through the 16 best practices for a successful website migration.

The Website Migration Process: A 16-Step Checklist

I’ll break down all the tasks into four phases: planning, pre-migration, launch, and post-migration.

Phase 1: Planning

As with any multi-dimensional project, your website migration needs thorough planning. Complete these steps before you begin development:

1. Define Project Scope

First, take inventory of all your webpages. What are you trying to achieve with the migration, and how much of the site will it affect?

Your website migration is set to fail if you don’t have a well-defined goal and scope.

2. Set Migration Team And Responsibilities

Who will lead the project? The in-house SEO team or an outsourced SEO agency?

If you’re managing the project internally, be sure to set up a project management framework so everyone can keep tabs on tasks that belong to them.

3. Manage Cross-Department And Stakeholder Expectations

Which other departments will the project impact? Identify all stakeholders involved in the migration process.

Alongside the SEO and dev teams, this often includes designers, IT staff, quality assurance testers, content managers, product designers, and marketers.

It all depends on what your company does and how many departments need to be involved in an activity like a website migration.

You can schedule a meeting to explain why you need it and what you aim to achieve. This is also an excellent opportunity to gather feedback on their specific needs, which could help you choose a better domain (or CMS) that serves your future needs.

Tip: Be transparent about the migration process and what they should expect in terms of functionality, design, and performance.

4. Define Your Timelines And Launch Date

Ideally, your new site should go live when site traffic is at its lowest, and your team has the capacity to respond to any unforeseen hiccups.

Choose a day that has the least impact on your business. This can be during off-hours or when you have little to no business activity.

You can also do it batch by batch rather than migrate all your pages at once.

Phase 2: Pre-Migration

This is an extension of the planning stage. It’s where you set the foundation for a smooth transition and minimize risks during and after the migration.

5. Run A Detailed Risk Assessment

Audit your webpages to spot potential issues before, during, and after migration. This could be traffic loss, disrupted user flow, broken links, or other technical SEO issues.

Rank these risks by their impact on your business using a simple low-medium-high scale, then tackle the most critical first.

Then, create an effective mitigation plan to minimize the severity of the risks. Some effective techniques are:

  • Back up all content, metadata, and configurations before migration.
  • Use a risk assessment matrix to measure the risks involved in the migration and their impact on your business goals.
  • Use a staging environment to test changes before they go live.
  • Implement 301 redirects for all changed URLs to maintain their SEO value.

6. Define Performance Benchmarks And Set Up Tracking

Build a detailed picture of your site’s existing performance so you have something to compare your new site with.

At this stage, it’s a good idea to back up your existing site in case you need to restore it during the migration process. You’ll want to pull benchmark data for the following:

  • Core Web Vitals: Use Google’s PageSpeed Insights to measure metrics like loading speed, interactivity, and visual stability (a minimal API sketch follows this list).
  • Server Response Time: Tools like GTmetrix let you see vitals such as time to first byte (TTFB) and the overall performance of your web server.
  • Database Performance Metrics: Analyze database performance to ensure that it can handle the expected traffic loads post-migration. This may require reviewing query execution times, indexing, etc.
  • Usability Testing: Manually check out all the key areas (conversion elements) to ensure they’re working effectively. You can run A/B tests to identify conversion-deficient areas or pain points that need to be addressed in the new design.
  • Overall Website Performance: Use Google Analytics, Search Console, or third-party tools like Ahrefs to evaluate your current organic traffic, rankings, indexed pages, crawl errors, indexing rate, and backlinks.
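
As a rough sketch of how you might capture those Core Web Vitals benchmarks programmatically, the snippet below queries Google’s PageSpeed Insights API (v5) for a few key URLs and records field metrics where they exist. The API key, URL list, and exact metric names are assumptions; adapt them to your own setup.

```python
import json
import urllib.parse
import urllib.request

PSI_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"
API_KEY = "YOUR_API_KEY"  # assumption: your own PageSpeed Insights API key
URLS = ["https://example.com/", "https://example.com/pricing"]  # placeholder URLs

def fetch_cwv(url: str) -> dict:
    """Query PageSpeed Insights and pull field-data Core Web Vitals if present."""
    query = urllib.parse.urlencode({"url": url, "key": API_KEY, "strategy": "mobile"})
    with urllib.request.urlopen(f"{PSI_ENDPOINT}?{query}") as resp:
        data = json.load(resp)
    metrics = data.get("loadingExperience", {}).get("metrics", {})
    return {
        "url": url,
        "LCP_ms": metrics.get("LARGEST_CONTENTFUL_PAINT_MS", {}).get("percentile"),
        "INP_ms": metrics.get("INTERACTION_TO_NEXT_PAINT", {}).get("percentile"),
        "CLS_x100": metrics.get("CUMULATIVE_LAYOUT_SHIFT_SCORE", {}).get("percentile"),
    }

if __name__ == "__main__":
    for url in URLS:
        print(fetch_cwv(url))  # save these rows as your pre-migration benchmark
```

Running it before and after the migration gives you two comparable snapshots of the same pages.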

In addition to benchmarking, this is a good time to double-check that you’re tracking all your target keywords in your rank-tracking tool.

This is very important, as it tells you exactly how much traffic and how many rankings you lost or gained after the migration.

7. Ensure Dev And SEO Are Aligned On Performance

If your migration involves a technical overhaul (a CMS change, server migration, or redesign), let your developers know the SEO performance metrics that are important to you. This helps minimize the migration’s impact on your SEO performance.

These migrations might require a complete mapping of old URLs to new ones – give these details to your developers, as missing redirects will tank your rankings and affect organic traffic.


Ideally, review your entire SEO performance before and after the migration is complete so that you can compare results.

8. Document Everything

Document all your active server settings and configurations before you migrate your website. This includes the details about the DNS settings, CDN configurations, and hosting environment.

Server documentation has several benefits:

  • It ensures the new server environment mirrors the old one and reduces the risks of inconsistencies that may cause performance issues.
  • If any problems arise post-migration, detailed server documentation makes it easier for relevant teams to find, diagnose, and resolve them.
  • It acts as a comprehensive reference for future migrations or server updates.

Configuration management tools like Ansible, Puppet, or Terraform are popular choices for documenting server configurations in a reusable, readable, and version-controlled format.

Version control tools like Git can also be used to track changes made and collaborate effectively with teams during the migration.
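
If you want a lightweight, version-controllable record of your DNS settings alongside that documentation, something like the sketch below can help. It assumes the third-party dnspython package and uses placeholder hostnames; the output file can be committed to Git and diffed against the post-migration environment.

```python
import datetime

import dns.exception
import dns.resolver  # third-party "dnspython" package

HOSTNAMES = ["example.com", "www.example.com"]            # assumption: your key hosts
RECORD_TYPES = ["A", "AAAA", "CNAME", "MX", "NS", "TXT"]

def snapshot(path: str = "dns-snapshot.txt") -> None:
    """Write the current DNS records for each hostname to a plain-text file."""
    lines = [f"# DNS snapshot taken {datetime.datetime.utcnow().isoformat()}Z"]
    for host in HOSTNAMES:
        for rtype in RECORD_TYPES:
            try:
                for rdata in dns.resolver.resolve(host, rtype):
                    lines.append(f"{host} {rtype} {rdata.to_text()}")
            except dns.exception.DNSException:
                continue  # no record of this type (or lookup failed); skip it
    with open(path, "w", encoding="utf-8") as fh:
        fh.write("\n".join(lines) + "\n")

if __name__ == "__main__":
    snapshot()
```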

9. Set Up A Testing Environment

It’s best to develop your site in a closed testing environment on a separate server to verify the new site functions properly before launching. This is called a staging website.

It is a clone of your website where you can try out all functionality – themes, calls to action, plugins, etc. – without disrupting operations on the live website.

A staging website also allows you to identify and fix any potential issues or conflicts before the migration process.

Of course, you’ll need to block public access to the testing environment and prevent search engines from indexing your new pages too soon.

You can either create a robots.txt file that blocks search engines like Google, add a noindex tag to your new URLs, or set up password protection (.htpasswd).

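As a rough illustration (the staging hostname is a placeholder, and this is only a sanity check, not a substitute for configuring your server correctly), a short standard-library Python script can confirm that those protections are actually in place:

```python
import urllib.error
import urllib.request

STAGING = "https://staging.example.com"  # placeholder staging hostname

def check_staging_is_blocked() -> None:
    # 1. robots.txt should disallow crawling of the staging site.
    try:
        with urllib.request.urlopen(f"{STAGING}/robots.txt") as resp:
            robots = resp.read().decode("utf-8", errors="replace")
        print("robots.txt disallows all crawling:", "Disallow: /" in robots)
    except urllib.error.HTTPError as err:
        print("robots.txt not reachable, status:", err.code)

    # 2. Pages should send a noindex signal (meta tag or X-Robots-Tag header),
    #    and/or 3. the whole site should sit behind HTTP auth (401 response).
    try:
        with urllib.request.urlopen(STAGING) as resp:
            header = resp.headers.get("X-Robots-Tag", "")
            body = resp.read().decode("utf-8", errors="replace")
            noindex = "noindex" in header.lower() or (
                'name="robots"' in body and "noindex" in body
            )
            print("homepage sends a noindex signal:", noindex)
            print("WARNING: staging is publicly reachable (no password protection)")
    except urllib.error.HTTPError as err:
        print("homepage blocked with status:", err.code)  # 401 = .htpasswd is working

if __name__ == "__main__":
    check_staging_is_blocked()
```

A 401 response on the homepage means the password protection is doing its job; otherwise, the script falls back to checking for a noindex signal.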

Once you have this all set up, run a technical SEO audit on the staging site to find and fix issues before migrating to your live site.

Watch out for small issues like broken internal links, missing alt text, missing meta titles, duplicate URLs, etc., and fix them before migrating to your live website.

Third-party SEO tools like Ahrefs, Semrush, or Screaming Frog can help you achieve this.

Tip: Ensure you have a recent backup of your website after the clone. This way, you have a version to revert to if needed.

10. Conduct A Content Inventory

Now, it’s time to build a complete overview of your existing content and pages.

A content inventory lets you identify if anything goes missing during the migration. Moreover, creating a list of URLs will draw your attention to any existing bugs (like broken links or bad redirects) for you to address before the migration.

You can use a combination of methods to create a comprehensive list of your pages and content assets, such as:

  • Pulling all your URLs with a site crawler.
  • Downloading page data from your CMS.
  • Exporting URLs from Google Search Console.

Don’t forget that site crawlers don’t always pick up orphan pages in a standard crawl if they are not linked internally.
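
For example, a small script along these lines can compare your crawler’s export with a Search Console export to surface URLs that only one source knows about, orphan pages included. The file names and column headers below are assumptions; match them to whatever your tools actually export.

```python
import csv

def load_urls(path: str, column: str) -> set[str]:
    """Read one column of URLs from a CSV export into a normalized set."""
    with open(path, newline="", encoding="utf-8") as fh:
        return {
            row[column].strip().rstrip("/")
            for row in csv.DictReader(fh)
            if row.get(column)
        }

# Assumed file names and column headers; adjust to your exports.
crawled = load_urls("crawler_export.csv", "Address")
indexed = load_urls("search_console_pages.csv", "URL")

print("Possible orphan pages (known to Search Console but not found by the crawler):")
for url in sorted(indexed - crawled):
    print("  ", url)

print("Crawlable pages Search Console doesn't report (check indexing or value):")
for url in sorted(crawled - indexed):
    print("  ", url)
```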

Also, be sure to identify your most valuable pages when assembling your list of URLs. High-value pages are pages with high-intent content, high conversion rates, high traffic levels, and quality backlinks.

This data helps you know which pages to prioritize for minimal impact during the migration.

Finally, if you plan to add additional pages and categories to your new site, this is the time to check that they can fit into your existing architecture. If not, you may need to reconsider your website structure.

Tip: Use a visual sitemap to visualize your current website architecture and find gaps in your content strategy that need to be filled.

Again, ensure you’re tracking the keywords your high-value pages are ranking for. As mentioned above, this helps you to benchmark your performance before and after the migration process.

11. Create A Redirect Map

Next, use your URL list to determine the pages to move to the new site, which ones you can merge, and the ones to drop.

Use redirects to connect your old content to the new versions, so that when users try to access a URL on the old site, they’re automatically sent to the latest version of that content.

For your redirect plan, start with the most important pages you identified in the previous step.

Implement a 301 redirect for each page you’re moving to the new site. For context, a 301 redirect indicates that a page has moved permanently from one URL to another, and it passes the old page’s authority to the new URL.

Each old URL should point to the most relevant page on the new site. For example, let’s assume you have a website (example.com) and want to redirect a page to a new location:

  • Old URL: example.com/old-page
  • New URL: example.com/new-page

In this case, you’ll set up a 301 redirect from “example.com/old-page” to “example.com/new-page.”
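
How you implement that redirect depends on your server or CMS. Purely as an illustration, if the new site happened to run on a Python framework such as Flask, the redirect for this example could look like the sketch below (the routes are hypothetical):

```python
from flask import Flask, redirect

app = Flask(__name__)

# Hypothetical route: /old-page has moved permanently to /new-page.
@app.route("/old-page")
def old_page():
    # 301 tells browsers and search engines the move is permanent,
    # so ranking signals are passed to the new URL.
    return redirect("/new-page", code=301)
```

On Apache or Nginx, you would express the same mapping in the server configuration instead.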

Also, avoid redirecting old URLs to your new homepage. This can result in soft 404 errors, which will dilute overall link equity and harm your rankings.

Instead, if you can’t identify an appropriate new page to point to, either create a new page or remove your old URL.

Using our previous example, let’s say you have another URL (example.com/blog) whose content now lives at example.com/new-page/blog:

  • Old URL: example.com/blog
  • New URL: example.com/new-page/blog

Here, “example.com/blog” should 301 redirect to “example.com/new-page/blog,” not to “example.com/new-page” or the homepage.

This way, each old URL is redirected to the most relevant page on your new site.

When you’ve completed your redirect map, share a list of your new URLs with your broader marketing team so they can update their campaign links once the new site launches.

Tip: Cross-check your redirects on your staging website to make sure none of them points to a URL that is itself redirected.

These are called redirect chains, and they confuse search engine bots when they try to crawl and index the page.

For best results, keep your redirects one-to-one, with no intermediate hops.
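
One rough way to automate that cross-check is the sketch below (standard library only; the redirect map is a placeholder). It requests each old URL without following redirects and confirms it answers with a single 301 hop straight to the mapped destination.

```python
import urllib.error
import urllib.request

# Placeholder redirect map: old URL -> expected new URL.
REDIRECT_MAP = {
    "https://example.com/old-page": "https://example.com/new-page",
    "https://example.com/blog": "https://example.com/new-page/blog",
}

class NoFollow(urllib.request.HTTPRedirectHandler):
    """Stop urllib from following redirects so each hop can be inspected."""
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None

opener = urllib.request.build_opener(NoFollow)

def check(old_url: str, expected: str) -> str:
    try:
        opener.open(old_url)
        return "no redirect at all"
    except urllib.error.HTTPError as err:
        location = err.headers.get("Location", "")
        if err.code != 301:
            return f"wrong status {err.code}"
        if location.rstrip("/") != expected.rstrip("/"):
            return f"points to {location} (possible chain or wrong target)"
        return "OK: single 301 to the mapped URL"

for old, new in REDIRECT_MAP.items():
    print(old, "->", check(old, new))
```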

12. Run Pre-Launch Checks

Before launching your new site, you’ll need to run various checks in your testing environment to ensure the new site functions as expected.

Work with your SEO team to confirm if the following elements meet the required standards:

  • Test Redirects: Ensure that all redirects are correctly implemented and functioning properly. This includes checking the 301 redirects from the old URLs to the new ones to avoid redirect chains or soft 404s.
  • XML Sitemap: Verify that the new site contains a valid XML sitemap. Use tools like XML Sitemap Validator to check for errors and compliance with search engine requirements.
  • Robots.txt File: If you used a restrictive robots.txt file on staging, reconfigure it so search engines can crawl the pages you want indexed.
  • URL Structure: Review the URLs to ensure they’re optimized with relevant keywords and follow SEO best practices.
  • Canonical URLs: Do all your canonical URLs point to the relevant variant of the page? Consolidate any duplicates to maintain a clear content hierarchy.
  • Structured Data: Implement structured data (Schema markup) to help search engines understand your pages’ content and deliver rich results to users.
  • Status Codes: Check the HTTP status code for every page to ensure it returns the correct response, and address any 404 errors or other issues immediately (a quick spot-check sketch follows this list).
  • Internal Linking: Internal links create connections between your webpages and distribute link equity. Review all internal links to ensure they are intact and point users to the right content.
  • Title Tags, Header Tags, And Meta Descriptions: Are they optimized for the right keywords? Do they follow SEO best practices? Review and update them accordingly.
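
To spot-check several of these items in one pass, a short script along the lines below (standard library only; the sitemap URL is a placeholder and the canonical check is a naive regex) can parse the XML sitemap, request each URL, and report the status code plus the canonical tag it finds.

```python
import re
import urllib.error
import urllib.request
import xml.etree.ElementTree as ET

SITEMAP_URL = "https://example.com/sitemap.xml"  # placeholder
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def sitemap_urls(sitemap_url: str) -> list[str]:
    """Return every <loc> URL listed in the XML sitemap."""
    with urllib.request.urlopen(sitemap_url) as resp:
        root = ET.fromstring(resp.read())
    return [loc.text.strip() for loc in root.findall(".//sm:loc", NS) if loc.text]

def check(url: str) -> str:
    """Report the HTTP status and the first canonical link found in the HTML."""
    try:
        with urllib.request.urlopen(url) as resp:
            html = resp.read().decode("utf-8", errors="replace")
            # Naive: assumes rel appears before href in the canonical link tag.
            match = re.search(
                r'<link[^>]+rel=["\']canonical["\'][^>]+href=["\']([^"\']+)', html
            )
            canonical = match.group(1) if match else "none found"
            return f"{resp.status} canonical={canonical}"
    except urllib.error.HTTPError as err:
        return f"{err.code} ERROR"

for url in sitemap_urls(SITEMAP_URL):
    print(check(url), url)
```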

Tip: Use third-party SEO tools like Ahrefs or Semrush to run a quick audit of your website to find any SEO issues that will become roadblocks during and after migration.

Phase 3: Launch

Now that you’ve crossed all your Ts and dotted your Is, you can proceed to launch your website.

13. Launch And Promote Your New Site

When your site goes live, you’ll need to lift any user and search engine restrictions you set up during the development phase. Remember, the quicker you implement the migration, the better.

Your site will temporarily go offline during the migration, so minimizing downtime will reduce the impact on traffic and revenue.

Remove all the limitations you’ve placed on the staging site so that search engines and users can access it. This includes robots.txt disallows, noindex tags, and password protection.

Also, ensure your DNS records point to the right servers after migration.

It’s also a good idea to plan PPC campaigns to coincide with the site launch. Not only will this spread the word about your new site, but it can also help make up for the short-term organic traffic dips you experienced during the migration process.

Phase 4: Post-Migration

Lastly, re-check all the changes to ensure everything is working as intended. Sometimes, the migration may not go as planned, and this is where you decide whether to tweak your edits or roll back to the previous website.

Let’s run through a few of the vitals:

14. Optimize Your Website Performance

Monitor the new website closely to verify that everything is working as it should, and check critical aspects of the site, including your new robots.txt file, XML sitemap, redirects, usability, and analytics.

Also, compare your new site’s performance against the benchmarks from your old site. It’s natural to experience a dip for a while, but as the weeks go by, your rankings and traffic should return to their original levels.

To optimize your website:

  • Use website caching to store content/data so users can access it faster.
  • Use a CDN (content delivery network) to reduce latency and improve content delivery times.
  • Minify your JavaScript, CSS, and HTML files to reduce payload sizes and speed up delivery, and consider IPv6 (Internet Protocol version 6) to improve routing efficiency.
  • Optimize your database systems to improve query speed, user experience, and overall website performance. Best practices such as proper indexing, table optimization, query structuring, materialized views, and vertical scaling help the database handle complex requests without downtime (see the toy indexing sketch below).

Learn more about database performance tuning.
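
Database tuning is highly platform-specific, but as a toy illustration of why indexing matters, the snippet below uses Python’s built-in sqlite3 module to show a query plan switching from a full table scan to an index lookup once an index exists. The table and column names are made up for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pages (id INTEGER PRIMARY KEY, slug TEXT, views INTEGER)")
conn.executemany(
    "INSERT INTO pages (slug, views) VALUES (?, ?)",
    [(f"post-{i}", i % 500) for i in range(10_000)],
)

query = "SELECT id FROM pages WHERE slug = ?"

# Without an index, SQLite scans the whole table.
print(conn.execute(f"EXPLAIN QUERY PLAN {query}", ("post-42",)).fetchall())

# After adding an index on the filtered column, the plan switches to an index search.
conn.execute("CREATE INDEX idx_pages_slug ON pages (slug)")
print(conn.execute(f"EXPLAIN QUERY PLAN {query}", ("post-42",)).fetchall())
```

On a production database, the equivalent check is running EXPLAIN (or your engine’s query-plan tool) against your slowest real queries.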

15. Update Technical Documentation

Now that you’ve launched your website, revise your technical documentation to ensure it’s accurate and up to date.

This documentation guides the migration process and can mitigate risks such as duplicate/lost content, missing/broken links, etc.

Also, it allows for faster collaboration between cross-functional teams and stakeholders. Your documentation should include:

  • A spreadsheet or map of your 301 redirects, including the old and new links.
  • An updated XML sitemap that lists all necessary pages. This is important for search engine indexing.
  • Title tags and meta descriptions for all pages on your site.
  • A report showing performance metrics before and after the migration. You can also include your primary keyword rankings.
  • The state of your robots.txt file, to ensure only the right pages are indexed in the search results.

16. Implement Basic Maintenance Practices

Lastly, set up preventive maintenance practices to keep your website functioning properly after the migration. This helps you catch and resolve potential issues before they become bigger problems.

Use third-party SEO tools like Google Search Console, Screaming Frog, Ahrefs, and Semrush as monitoring systems to track your performance metrics and detect anomalies that affect web performance.

These tools also give insight into user activity on your website and your overall visibility in the search results.

Tip: Since migration impacts search and user experience, recrawl your new site to ensure it’s performing as required. And if your key performance indicators (KPIs) don’t improve, troubleshoot the causes. These could include:

  • Pages on your old site are still being indexed.
  • Slow load times.
  • Bad redirects.
  • Missing internal links.

Activate specific alerts on Google Search Console to highlight issues that could affect performance so you can fix them as soon as they occur.

Final Thoughts

Website migration can be a complex undertaking.

Following the above steps will set you up for success and allow you to prepare, execute, and monitor your migration while minimizing associated risks.



Featured Image: Sammby/Shutterstock

FTC: GoDaddy Hosting Was “Blind” To Security Threats via @sejournal, @martinibuster

The United States Federal Trade Commission (FTC) charged GoDaddy with violations of the Federal Trade Commission Act for allegedly maintaining “unreasonable” security practices that led to multiple security breaches. The FTC’s proposed settlement order will require GoDaddy to take reasonable steps to tighten security and engage third-party security assessments.

FTC Charged GoDaddy With Security Failures

The FTC complaint charged GoDaddy with misrepresenting itself as a secure web host through marketing on its website, in emails, and in its “Trust Center,” alleging that GoDaddy provided customers with “lax data security” in its web hosting environment.

The FTC complaint (PDF) stated:

“Since at least 2015, GoDaddy has marketed itself as a secure choice for customers to host their websites, touting its commitment to data security and careful threat monitoring practices in multiple locations, including its main website for hosting services, its “Trust Center,” and in email and online marketing.

In fact, GoDaddy’s data security program was unreasonable for a company of its size and complexity. Despite its representations, GoDaddy was blind to vulnerabilities and threats in its hosting environment. Since 2018, GoDaddy has violated Section 5 of the FTC Act by failing to implement standard security tools and practices to protect the environment where it hosts customers’ websites and data, and to monitor it for security threats.”

Proposed Settlement

The FTC is proposing that GoDaddy implement a security program to settle charges that it failed to secure its web hosting services, endangering its customers and the people who visited its customers’ compromised websites during major security breaches between 2019 and 2022.

The settlement proposes the following to settle the charges with GoDaddy:

“Prohibit GoDaddy from making misrepresentations about its security and the extent to which it complies with any privacy or security program sponsored by a government, self-regulatory, or standard-setting organization, including the EU-U.S. and Swiss-U.S. Privacy Shield Frameworks;

Require GoDaddy to establish and implement a comprehensive information-security program that protects the security, confidentiality, and integrity of its website-hosting services; and

Mandate that GoDaddy hire an independent third-party assessor who conducts an initial and biennial review of its information-security program.”

Read the FTC statement:

FTC Takes Action Against GoDaddy for Alleged Lax Data Security for Its Website Hosting Services

Featured Image by Shutterstock/Photo For Everything

OpenAI Secretly Funded Benchmarking Dataset Linked To o3 Model via @sejournal, @martinibuster

Revelations that OpenAI secretly funded and had access to the FrontierMath benchmarking dataset are raising concerns about whether the dataset was used to train its o3 reasoning model, and about the validity of the model’s high scores.

In addition to accessing the benchmarking dataset, OpenAI funded its creation, a fact that was withheld from the mathematicians who contributed to developing FrontierMath. Epoch AI belatedly disclosed OpenAI’s funding only in the final paper published on Arxiv.org, which announced the benchmark. Earlier versions of the paper omitted any mention of OpenAI’s involvement.

Screenshots: the published FrontierMath paper acknowledges OpenAI’s funding; earlier versions of the paper lacked that acknowledgement.

OpenAI o3 Model Scored Highly On FrontierMath Benchmark

The news of OpenAI’s secret involvement is raising questions about the high scores achieved by the o3 reasoning model and causing disappointment with the FrontierMath project. Epoch AI responded with transparency about what happened and what they’re doing to check if the o3 model was trained with the FrontierMath dataset.

Giving OpenAI access to the dataset was unexpected because the whole point of the benchmark is to test AI models, which can’t be done if the models know the questions and answers beforehand.

A post in the r/singularity subreddit expressed this disappointment and cited a document that claimed that the mathematicians didn’t know about OpenAI’s involvement:

“Frontier Math, the recent cutting-edge math benchmark, is funded by OpenAI. OpenAI allegedly has access to the problems and solutions. This is disappointing because the benchmark was sold to the public as a means to evaluate frontier models, with support from renowned mathematicians. In reality, Epoch AI is building datasets for OpenAI. They never disclosed any ties with OpenAI before.”

The Reddit discussion cited a publication that revealed OpenAI’s deeper involvement:

“The mathematicians creating the problems for FrontierMath were not (actively)[2] communicated to about funding from OpenAI.

…Now Epoch AI or OpenAI don’t say publicly that OpenAI has access to the exercises or answers or solutions. I have heard second-hand that OpenAI does have access to exercises and answers and that they use them for validation.”

Tamay Besiroglu (LinkedIn profile), associate director at Epoch AI, acknowledged that OpenAI had access to the dataset but also asserted that there was a “holdout” set that OpenAI didn’t have access to.

He wrote in the cited document:

“Tamay from Epoch AI here.

We made a mistake in not being more transparent about OpenAI’s involvement. We were restricted from disclosing the partnership until around the time o3 launched, and in hindsight we should have negotiated harder for the ability to be transparent to the benchmark contributors as soon as possible. Our contract specifically prevented us from disclosing information about the funding source and the fact that OpenAI has data access to much but not all of the dataset. We own this error and are committed to doing better in the future.

Regarding training usage: We acknowledge that OpenAI does have access to a large fraction of FrontierMath problems and solutions, with the exception of a unseen-by-OpenAI hold-out set that enables us to independently verify model capabilities. However, we have a verbal agreement that these materials will not be used in model training.

OpenAI has also been fully supportive of our decision to maintain a separate, unseen holdout set—an extra safeguard to prevent overfitting and ensure accurate progress measurement. From day one, FrontierMath was conceived and presented as an evaluation tool, and we believe these arrangements reflect that purpose. “

More Facts About OpenAI & FrontierMath Revealed

Elliot Glazer (LinkedIn profile/Reddit profile), the lead mathematician at Epoch AI, confirmed that OpenAI has the dataset and was allowed to use it to evaluate the o3 large language model, the company’s next state-of-the-art AI, referred to as a reasoning model. He offered his opinion that the high scores obtained by the o3 model are “legit” and said that Epoch AI is conducting an independent evaluation to determine whether o3 was trained on the FrontierMath dataset, which could cast the model’s high scores in a different light.

He wrote:

“Epoch’s lead mathematician here. Yes, OAI funded this and has the dataset, which allowed them to evaluate o3 in-house. We haven’t yet independently verified their 25% claim. To do so, we’re currently developing a hold-out dataset and will be able to test their model without them having any prior exposure to these problems.

My personal opinion is that OAI’s score is legit (i.e., they didn’t train on the dataset), and that they have no incentive to lie about internal benchmarking performances. However, we can’t vouch for them until our independent evaluation is complete.”

Glazer had also shared that Epoch AI was going to test o3 using a “holdout” dataset that OpenAI didn’t have access to, saying:

“We’re going to evaluate o3 with OAI having zero prior exposure to the holdout problems. This will be airtight.”

Another post on Reddit by Glazer described how the “holdout set” was created:

“We’ll describe the process more clearly when the holdout set eval is actually done, but we’re choosing the holdout problems at random from a larger set which will be added to FrontierMath. The production process is otherwise identical to how it’s always been.”

Waiting For Answers

That’s where the drama stands until the Epoch AI evaluation is completed, which will indicate whether OpenAI trained its reasoning model with the dataset or only used it for benchmarking.

Featured Image by Shutterstock/Antonello Marangi

Squarespace’s Tools Aid Ecommerce Novices

New ecommerce entrepreneurs are not typically software engineers or graphic designers. Many online stores therefore look alike, since founders, as skilled as they are, rely on basic templates.

Content management systems and ecommerce platforms provide flexible and customizable theme options, but most require development or design expertise.

Some platforms — Shopify and others — allow users who can write code to manipulate themes to a single pixel. Conversely, exceptional site design tools such as Webflow don’t require code but still have a steep learning curve with complex Figma- or Photoshop-like interfaces.

Squarespace

Entrepreneurs may not think of Squarespace when opening an online shop, but I recently worked with the platform and concluded that two helpful features — Blueprint AI and the grid-based editor — make it a “just right” solution for founders with a limited product catalog who want a lot of visual control.

Blueprint AI

Squarespace launched Blueprint AI last year to assist merchants in building a semi-custom website. The AI guides the user through a series of questions, designing the website as it goes.

The process is interactive, real-time, and editable, generating an attractive and branded site.

Screenshot of the Squarespace page for selecting a template or the Blueprint AI tool.

Squarespace offers templates and a guided design option called Blueprint AI.

Here is a glimpse into the process.

First, Blueprint AI wants to know the name of the business and its personality. I called this example “Latin Quote Art Shop” and chose the brand personality as “Bold,” one of seven options.

  • Professional
  • Playful
  • Sophisticated
  • Friendly
  • Bold
  • Quirky
  • Innovative

Next, I selected sections for the home page and chose each layout. My Latin quotes example will have an introduction and a product area, among other sections.

Users can add pages during the guided Blueprint AI design or later.

It is easy to add pages after Blueprint AI builds its initial design. For example, I could add a “shop” page containing the product catalog and pages for “about” and “contact.”

Blueprint AI offers a limited selection of color palettes organized with labels from the brand personality. This step was disappointing given the capabilities of generative AI, which could easily provide more colors and even suggest them based on text descriptions.

The color palette is limited but easy to use.

Next, I selected a font from a tiny list, confirming again that Blueprint AI is a guided tool, not generative.

The tool, however, becomes part of the Squarespace editor. For example, Blueprint AI surmised my Latin Quotes Art Shop offered art supplies because it recommended the headline “Unleash your inner artist.” But the shop’s purpose was to sell wall art featuring Latin quotes.

Screenshot of the text-prompt page in Blueprint AI

Once in the editor, a user can prompt Blueprint AI to generate text.

I then prompted, “Write a blurb about the Latin Quote Art Shop, which sells framed wall art that features famous Latin quotes like ‘Aut viam inveniam aut faciam,’” and went through nine iterations. The AI eventually produced “Let words inspire.”

Squarespace editor

The second impressive feature was the Squarespace editor. In 2020, Squarespace updated the tool and released the Fluid Engine component.

Fluid Engine has many options for colors and layouts, but its grid system might be the most helpful.

A user can select, drag, and resize any element or section of a page and have it “snap” to the grid. The resulting layout will automatically fit desktop and mobile layouts, precisely positioned.

Screenshot of the Squarespace design grid system

The underlying design grid is simple and useful.

I routinely work in multiple ecommerce platforms — Shopify, WordPress, Ghost, Wix, others. The Squarespace editor is among the best.

The Squarespace editor produces layouts that are both unique and conventional.

Just Right

In short, Squarespace may be a “just right” fit for ecommerce startups needing a lot of visual control for a limited product catalog. The platform’s AI tool and snap-in-place grid are much easier than code-based theme modifiers or complicated editors.

What to expect from Neuralink in 2025

MIT Technology Review’s What’s Next series looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of them here.

In November, a young man named Noland Arbaugh announced he’d be livestreaming from his home for three days straight. His broadcast was in some ways typical fare: a backyard tour, video games, meet mom.

The difference is that Arbaugh, who is paralyzed, has thin electrode-studded wires installed in his brain, which he used to move a computer mouse on a screen, click menus, and play chess. The implant, called N1, was installed last year by neurosurgeons working with Neuralink, Elon Musk’s brain-interface company.

The possibility of listening to neurons and using their signals to move a computer cursor was first demonstrated more than 20 years ago in a lab setting. Now, Arbaugh’s livestream is an indicator that Neuralink is a whole lot closer to creating a plug-and-play experience that can restore people’s daily ability to roam the web and play games, giving them what the company has called “digital freedom.”

But this is not yet a commercial product. The current studies are small-scale—they are true experiments, explorations of how the device works and how it can be improved. For instance, at some point last year, more than half the electrode-studded “threads” inserted into Arbaugh’s brain retracted, and his control over the device worsened; Neuralink rushed to implement fixes so he could use his remaining electrodes to move the mouse.

Neuralink did not reply to emails seeking comment, but here is what our analysis of its public statements leads us to expect from the company in 2025.

More patients

How many people will get these implants? Elon Musk keeps predicting huge numbers. In August, he posted on X: “If all goes well, there will be hundreds of people with Neuralinks within a few years, maybe tens of thousands within five years, millions within 10 years.”

In reality, the actual pace is slower—a lot slower. That’s because in a study of a novel device, it’s typical for the first patients to be staged months apart, to allow time to monitor for problems. 

Neuralink has publicly announced that two people have received an implant: Arbaugh and a man referred to only as “Alex,” who received his in July or August. 

Then, on January 8, Musk disclosed during an online interview that there was now a third person with an implant. “We’ve got now three patients, three humans with Neuralinks implanted, and they are all working …well,” Musk said. During 2025, he added, “we expect to hopefully do, I don’t know, 20 or 30 patients.”  

Barring major setbacks, expect the pace of implants to increase—although perhaps not as fast as Musk says. In November, Neuralink updated its US trial listing to include space for five volunteers (up from three), and it also opened a trial in Canada with room for six. Considering these two studies only, Neuralink would carry out at least two more implants by the end of 2025 and eight by the end of 2026.

However, by opening further international studies, Neuralink could increase the pace of the experiments.

Better control

So how good is Arbaugh’s control over the mouse? You can get an idea by trying a game called Webgrid, where you try to click quickly on a moving target. The program translates your speed into a measure of information transfer: bits per second. 

Neuralink claims Arbaugh reached a rate of over nine bits per second, doubling the old brain-interface record. The median able-bodied user scores around 10 bits per second, according to Neuralink.

And yet during his livestream, Arbaugh complained that his mouse control wasn’t very good because his “model” was out of date. It was a reference to how his imagined physical movements get mapped to mouse movements. That mapping degrades over hours and days, and to recalibrate it, he has said, he spends as long as 45 minutes doing a set of retraining tasks on his monitor, such as imagining moving a dot from a center point to the edge of a circle.

Noland Arbaugh stops to calibrate during a livestream on X
@MODDEDQUAD VIA X

Improving the software that sits between Arbaugh’s brain and the mouse is a big area of focus for Neuralink—one where the company is still experimenting and making significant changes. Among the goals: cutting the recalibration time to a few minutes. “We want them to feel like they are in the F1 [Formula One] car, not the minivan,” Bliss Chapman, who leads the BCI software team, told the podcaster Lex Fridman last year.

Device changes

Before Neuralink ever seeks approval to sell its brain interface, it will have to lock in a final device design that can be tested in a “pivotal trial” involving perhaps 20 to 40 patients, to show it really works as intended. That type of study could itself take a year or two to carry out and hasn’t yet been announced.

In fact, Neuralink is still tweaking its implant in significant ways—for instance, by trying to increase the number of electrodes or extend the battery life. This month, Musk said the next human tests would be using an “upgraded Neuralink device.”

The company is also still developing the surgical robot, called R1, that’s used to implant the device. It functions like a sewing machine: A surgeon uses R1 to thread the electrode wires into people’s brains. According to Neuralink’s job listings, improving the R1 robot and making the implant process entirely automatic is a major goal of the company. That’s partly to meet Musk’s predictions of a future where millions of people have an implant, since there wouldn’t be enough neurosurgeons in the world to put them all in manually. 

“We want to get to the point where it’s one click,” Neuralink president Dongjin Seo told Fridman last year.

Robot arm

Late last year, Neuralink opened a companion study through which it says some of its existing implant volunteers will get to try using their brain activity to control not only a computer mouse but other types of external devices, including an “assistive robotic arm.”

We haven’t yet seen what Neuralink’s robotic arm looks like—whether it’s a tabletop research device or something that could be attached to a wheelchair and used at home to complete daily tasks.

But it’s clear such a device could be helpful. During Arbaugh’s livestream, he frequently asked other people to do simple things for him, like brush his hair or put on his hat.

Arbaugh demonstrates the use of Imagined Movement Control.
@MODDEDQUAD VIA X

And using brains to control robots is definitely possible—although so far only in a controlled research setting. In tests using a different brain implant, carried out at the University of Pittsburgh in 2012, a paralyzed woman named Jan Scheuermann was able to use a robot arm to stack blocks and plastic cups about as well as a person who’d had a severe stroke—impressive, since she couldn’t actually move her own limbs.

There are several practical obstacles to using a robot arm at home. One is developing a robot that’s safe and useful. Another, as noted by Wired, is that the calibration steps to maintain control over an arm that can make 3D movements and grasp objects could be onerous and time consuming.

Vision implant

In September, Neuralink said it had received “breakthrough device designation” from the FDA for a version of its implant that could be used to restore limited vision to blind people. The system, which it calls Blindsight, would work by sending electrical impulses directly into a volunteer’s visual cortex, producing spots of light called phosphenes. If there are enough spots, they can be organized into a simple, pixelated form of vision, as previously demonstrated by academic researchers.

The FDA designation is not the same as permission to start the vision study. Instead, it’s a promise by the agency to speed up review steps, including agreements around what a trial should look like. Right now, it’s impossible to guess when a Neuralink vision trial could start, but it won’t necessarily be this year. 

More money

Neuralink last raised money in 2023, collecting around $325 million from investors in a funding round that valued the company at over $3 billion, according to Pitchbook. Ryan Tanaka, who publishes a podcast about the company, Neura Pod, says he thinks Neuralink will raise more money this year and that the valuation of the private company could double.

Fighting regulators

Neuralink has attracted plenty of scrutiny from news reporters, animal-rights campaigners, and even fraud investigators at the Securities and Exchange Commission. Many of the questions surround its treatment of test animals and whether it rushed to try the implant in people.

More recently, Musk has started using his X platform to badger and bully heads of state and was named by Donald Trump to co-lead a so-called Department of Government Efficiency, which Musk says will “get rid of nonsensical regulations” and potentially gut some DC agencies. 

During 2025, watch for whether Musk uses his digital bullhorn to give health regulators pointed feedback on how they’re handling Neuralink.

Other efforts

Don’t forget that Neuralink isn’t the only company working on brain implants. A company called Synchron has one that’s inserted into the brain through a blood vessel, which it’s also testing in human trials of brain control over computers. Other companies, including Paradromics, Precision Neuroscience, and BlackRock Neurotech, are also developing advanced brain-computer interfaces.

Special thanks to Ryan Tanaka of Neura Pod for pointing us to Neuralink’s public announcements and projections.

Interest in nuclear power is surging. Is it enough to build new reactors?

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

Lately, the vibes have been good for nuclear power. Public support is building, and public and private funding have made the technology more economical in key markets. There’s also a swell of interest from major companies looking to power their data centers. 

These shifts have been great for existing nuclear plants. We’re seeing efforts to boost their power output, extend the lifetime of old reactors, and even reopen facilities that have shut down. That’s good news for climate action, because nuclear power plants produce consistent electricity with very low greenhouse-gas emissions.

I covered all these trends in my latest story, which digs into what’s next for nuclear power in 2025 and beyond. But as I spoke with experts, one central question kept coming up for me: Will all of this be enough to actually get new reactors built?

To zoom in on some of these trends, let’s take a look at the US, which has the largest fleet of nuclear reactors in the world (and the oldest, with an average age of over 42 years).

In recent years we’ve seen a steady improvement in public support for nuclear power in the US. Today, around 56% of Americans support more nuclear power, up from 43% in 2020, according to a Pew Research poll.

The economic landscape has also shifted in favor of the technology. The Inflation Reduction Act of 2022 includes tax credits specifically for operating nuclear plants, aimed at keeping them online. Qualifying plants can receive up to $15 per megawatt-hour, provided they meet certain labor requirements. (For context, in 2021, its last full year of operation, Palisades in Michigan generated over 7 million megawatt-hours.) 

Big Tech has also provided an economic boost for the industry—tech giants like Microsoft, Meta, Google, and Amazon are all making deals to get in on nuclear.

These developments have made existing (or recently closed) nuclear power plants a hot commodity. Plants that might have been candidates for decommissioning just a few years ago are now candidates for license extension. Plants that have already shut down are seeing a potential second chance at life.

There’s also the potential to milk more power out of existing facilities through changes called uprates, which basically allow existing facilities to produce more energy by tweaking existing instruments and power generation systems. The US Nuclear Regulatory Commission has approved uprates totaling six gigawatts over the past two decades. That’s a small but certainly significant fraction of the roughly 97 gigawatts of nuclear on the grid today. 

Any reactors kept online, reopened, or ramped up spell good news for emissions. But expanding the nuclear fleet in the US will require not just making the most of existing assets, but building new reactors. 

We’ll probably also need new reactors just to maintain the current fleet, since so many reactors are scheduled to be retired in the next couple of decades. Will the enthusiasm for keeping old plants running also translate into building new ones? 

In much of the world (China being a notable exception), building new nuclear capacity has historically been expensive and slow. It’s easy to point at Plant Vogtle in the US: The third and fourth reactors at that facility began construction in 2009. They were originally scheduled to start up in 2016 and 2017, at a cost of around $14 billion. They actually came online in 2023 and 2024, and the total cost of the project was north of $30 billion.

Some advanced technology has promised to fix the problems in nuclear power. Small modular reactors could help cut cost and construction times, and next-generation reactors promise safety and efficiency improvements that could translate to cheaper, quicker construction. Realistically, though, getting these first-of-their-kind projects off the ground will still require a lot of money and a sustained commitment to making them happen. “The next four years are make or break for advanced nuclear,” says Jessica Lovering, cofounder at the Good Energy Collective, a policy research organization that advocates for the use of nuclear energy.  

There are a few factors that could help the progress we’ve seen recently in nuclear extend to new builds. For one, public support from the US Department of Energy includes not only tax credits but public loans and grants for demonstration projects, which can be a key stepping stone to commercial plants that generate electricity for the grid. 

Changes to the regulatory process could also help. The Advance Act, passed in 2024, aims at sprucing up the Nuclear Regulatory Commission (NRC) in the hopes of making the approval process more efficient (currently, it can take up to five years to complete). 

“If you can see the NRC really start to modernize toward a more efficient, effective, and predictable regulator, it really helps the case for a lot of these commercial projects, because the NRC will no longer be seen as this barrier to innovation,” says Patrick White, research director at the Nuclear Innovation Alliance, a nonprofit think tank. We should start to see changes from that legislation this year, though what happens could depend on the Trump administration.

The next few years are crucial for next-generation nuclear technology, and how the industry fares between now and the end of the decade could be very telling when it comes to how big a role this technology plays in our longer-term efforts to decarbonize energy. 


Now read the rest of The Spark

Related reading

For more on what’s next for nuclear power, check out my latest story.

One key trend I’m following is efforts to reopen shuttered nuclear plants. Here’s how to do it.  

Kairos Power is working to build molten-salt-cooled reactors, and we named the company to our list of 10 Climate Tech Companies to watch in 2024.  

Another thing 

Devastating wildfires have been ravaging Southern California. Here’s a roundup of some key stories about the blazes. 

→ Strong winds have continued this week, bringing with them the threat of new fires. Here’s a page with live updates on the latest. (Washington Post)

→ Officials are scouring the spot where the deadly Palisades fire started to better understand how it was sparked. (New York Times)

→ Climate change didn’t directly start the fires, but global warming did contribute to how intensely they burned and how quickly they spread. (Axios)

→ The LA fires show that controlled burns aren’t a cure-all when it comes to preventing wildfires. (Heatmap News)

→ Seawater is a last resort when it comes to fighting fires, since it’s corrosive and can harm the environment when dumped on a blaze. (Wall Street Journal)

Keeping up with climate  

US emissions cuts stalled last year, despite strong growth in renewables. The cause: After staying flat or falling for two decades, electricity demand is rising. (New York Times)

With Donald Trump set to take office in the US next week, many are looking to state governments as a potential seat of climate action. Here’s what to look for in states including Texas, California, and Massachusetts. (Inside Climate News)

The US could see as many as 80 new gas-fired power plants built by 2030. The surge comes as demand for power from data centers, including those powering AI, is ballooning. (Financial Times)

Global sales of EVs and plug-in hybrids were up 25% in 2024 from the year before. China, the world’s largest EV market, is a major engine behind the growth. (Reuters)

A massive plant to produce low-emissions steel could be in trouble. Steelmaker SSAB has pulled out of talks on federal funding for a plant in Mississippi. (Canary Media)

Some solar panel companies have turned to door-to-door sales. Things aren’t always so sunny for those involved. (Wired)

Deciding the fate of “leftover” embryos

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

Over the past few months, I’ve been working on a piece about IVF embryos. The goal of in vitro fertilization is to create babies via a bit of lab work: Trigger the release of lots of eggs, introduce them to sperm in a lab, transfer one of the resulting embryos into a person’s uterus, and cross your fingers for a healthy pregnancy. Sometimes it doesn’t work. But often it does. For the article, I explored what happens to the healthy embryos that are left over.

I spoke to Lisa Holligan, who had IVF in the UK around five years ago. Holligan donated her “genetically abnormal” embryos for scientific research. But she still has one healthy embryo frozen in storage. And she doesn’t know what to do with it.

She’s not the only one struggling with the decision. “Leftover” embryos are kept frozen in storage tanks, where they sit in little straws, invisible to the naked eye, their growth paused in a state of suspended animation. What happens next is down to personal choice—but that choice can be limited by a complex web of laws and ethical and social factors.

These days, responsible IVF clinics will always talk to people about the possibility of having leftover embryos before they begin treatment. Intended parents will sign a form indicating what they would like to happen to those embryos. Typically, that means deciding early on whether they might like any embryos they don’t end up using to be destroyed or donated, either to someone else trying to conceive or for research.

But it can be really difficult to make these decisions before you’ve even started treatment. People seeking fertility treatment will usually have spent a long time trying to get pregnant. They are hoping for healthy embryos, and some can’t imagine having any left over—or how they might feel about them.

For a lot of people, embryos are not just balls of cells. They hold the potential for life, after all. Some people see them as children, waiting to be born. Some even name their embryos, or call them their “freezer babies.” Others see them as the product of a long, exhausting, and expensive IVF journey.

Holligan says that she initially considered donating her embryo to another person, but her husband disagreed. He saw the embryo as their child and said he wouldn’t feel comfortable with giving it up to another family. “I started having these thoughts about a child coming to me when they’re older, saying they’ve had a terrible life, and [asking] ‘Why didn’t you have me?’” she told me.

Holligan lives in the UK, where you can store your embryos for up to 55 years. Destroying or donating them are also options. That’s not the case in other countries. In Italy, for example, embryos cannot be destroyed or donated. Any that are frozen will remain that way forever, unless the law changes at some point.

In the US, regulations vary by state. The patchwork of laws means that one state can bestow a legal status on embryos, giving them the same rights as children, while another might have no legislation in place at all.

No one knows for sure how many embryos are frozen in storage tanks, but the figure is thought to be somewhere between 1 million and 10 million in the US alone. Some of these embryos have been in storage for years or decades. In some cases, the intended parents have deliberately chosen this, opting to pay hundreds of dollars per year in fees.

But in other cases, clinics have lost touch with their clients. Many of these former clients have stopped paying for the storage of their embryos, but without up-to-date consent forms, clinics can be reluctant to destroy them. What if the person comes back and wants to use those embryos after all?

“Most clinics, if they have any hesitation or doubt or question, will err on the side of holding on to those embryos and not discarding them,” says Sigal Klipstein, a reproductive endocrinologist at InVia Fertility Center in Chicago, who also chairs the ethics committee of the American Society for Reproductive Medicine. “Because it’s kind of like a one-way ticket.”

Klipstein thinks one of the reasons why some embryos end up “abandoned” in storage is that the people who created them can’t bring themselves to destroy them. “It’s just very emotionally difficult for someone who has wanted so much to have a family,” she tells me.

Klipstein says she regularly talks to her patients about what to do with leftover embryos. Even people who make the decision with confidence can change their minds, she says. “We’ve all had those patients who have discarded embryos and then come back six months or a year later and said: ‘Oh, I wish I had those embryos,’” she tells me. “Those [embryos may have been] their best chance of pregnancy.”

Those who do want to discard their embryos have options. Often, the embryos will simply be exposed to air and then disposed of. But some clinics will also offer to transfer them at a time or place where a pregnancy is extremely unlikely to result. This “compassionate transfer,” as it is known, might be viewed as a more “natural” way to dispose of the embryo.

But it’s not for everyone. Holligan has experienced multiple miscarriages and wonders if a compassionate transfer might feel similar. She wonders if it might just end up “putting [her] body and mind through unnecessary stress.”

Ultimately, for Holligan and many others in a similar position, the choice remains a difficult one. “These are … very desired embryos,” says Klipstein. “The purpose of going through IVF was to create embryos to make babies. And [when people] have these embryos, and they’ve completed their family plan, they’re in a place they couldn’t have imagined.”


Now read the rest of The Checkup

Read more from MIT Technology Review‘s archive

Our relationship with embryos is unique, and a bit all over the place. That’s partly because we can’t agree on their moral status. Are they more akin to people or property, or something in between? Who should get to decide their fate? While we get to the bottom of these sticky questions, millions of embryos are stuck in suspended animation—some of them indefinitely.

It is estimated that over 12 million babies have been born through IVF. The development of the Nobel Prize–winning technology behind the procedure relied on embryo research. Some worry that donating embryos for research can be onerous—and that valuable embryos are being wasted as a result.

Fertility rates around the world are dropping below the levels needed to maintain stable populations. But IVF can’t save us from a looming fertility crisis. Gender equality and family-friendly policies are much more likely to prove helpful.

Two years ago, the US Supreme Court overturned Roe v. Wade, a legal decision that protected the right to abortion. Since then, abortion bans have been enacted in multiple states. But in November of last year, some states voted to extend and protect access to abortion, and voters in Missouri supported overturning the state’s ban.

Last year, a ruling by the Alabama Supreme Court that embryos count as children ignited fears over access to fertility treatments in a state that had already banned abortion. The move could also have implications for the development of technologies like artificial uteruses and synthetic embryos, my colleague Antonio Regalado wrote at the time.

From around the web

It’s not just embryos that are frozen as part of fertility treatments. Eggs, sperm, and even ovarian and testicular tissue can be stored too. A man who had immature testicular tissue removed and frozen before undergoing chemotherapy as a child 16 years ago had the tissue reimplanted in a world first, according to the team at University Hospital Brussels that performed the procedure around a month ago. The tissue was placed into the man’s testicle and scrotum, and scientists will wait a year before testing to see if he is successfully producing sperm. (UZ Brussel)

The Danish pharmaceutical company Novo Nordisk makes half the world’s insulin. Now it is better known as the manufacturer of the semaglutide drug Ozempic. How will the sudden shift affect the production and distribution of these medicines around the world? (Wired)

The US has not done enough to prevent the spread of the H5N1 virus in dairy cattle. The response to bird flu is a national embarrassment, argues Katherine J. Wu. (The Atlantic)

Elon Musk has said that if all goes well, millions of people will have brain-computer devices created by his company Neuralink implanted within 10 years. In reality, progress is slower—so far, Musk has said that three people have received the devices. My colleague Antonio Regalado predicts what we can expect from Neuralink in 2025. (MIT Technology Review)

We need to protect the protocol that runs Bluesky

Last week, when Mark Zuckerberg announced that Meta would be ending third-party fact-checking, it was a shocking pivot, but not exactly surprising. It’s just the latest example of a billionaire flip-flop affecting our social lives on the internet. 

After January 6, 2021, Zuckerberg bragged to Congress about Facebook’s “industry-leading fact-checking program” and banned Donald Trump from the platform. But just two years later, he welcomed Trump back. And last year Zuckerberg was privately reassuring the conservative congressman Jim Jordan that Meta would no longer demote questionable content while it was being fact-checked. 

Now, not only is Meta ending fact-checking completely; it is loosening rules around hate speech, allowing horrendous personal attacks on migrants and trans people, for example, on its platforms. 

And Zuckerberg isn’t the only social media CEO careening all over the road: Elon Musk, since buying Twitter in 2022 and touting free speech as “the bedrock of a functioning democracy,” has suspended journalists, restored tens of thousands of banned users (including white nationalists), brought back political advertising, and weakened verification and harassment policies. 

Unfortunately, these capricious billionaires can do whatever they want because of an ownership model that privileges singular, centralized control in exchange for shareholder returns.

And this has led to a constantly shifting digital environment in which people can lose their communication pathways and livelihoods in a second, with no recourse, as opaque rules change. 

The internet doesn’t need to be like this. As luck would have it, a new way is emerging just in time. 

If you’ve heard of Bluesky, you’ve probably heard of it as a clone of Twitter where liberals can take refuge. But under the hood it’s structured fundamentally differently—in a way that could point us to a healthier internet for everyone, regardless of politics or identity. 

Just like email, Bluesky sits on top of an open protocol, in this case known as the AT Protocol. In practice, that means that anyone can build on it. Just as you wouldn’t need anyone’s permission to start a newsletter company built on email, people are starting to share remixed versions of their social media feeds, built on Bluesky. This sounds like a small thing, but think about all the harms enabled by social media companies’ algorithms in the last decade: insurrection, radicalization, self-harm, bullying. Bluesky enables users to collaborate on verification and moderation by sharing block lists and labels. Letting people shape their own experience of social media is nothing short of revolutionary. 
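
To make “anyone can build on it” a little more concrete, here is a minimal sketch of reading a feed over the open protocol. It is not an official Bluesky client, and it rests on assumptions: that the public, unauthenticated AppView endpoint at public.api.bsky.app and the app.bsky.feed.getAuthorFeed XRPC method behave as documented, and that the handle used is just an arbitrary example.

```python
# A minimal sketch, not an official Bluesky client. It assumes the public,
# unauthenticated AT Protocol AppView (public.api.bsky.app) and the
# app.bsky.feed.getAuthorFeed XRPC method, using only the standard library.
import json
import urllib.parse
import urllib.request

def fetch_recent_posts(handle: str, limit: int = 5) -> list[str]:
    """Fetch the latest post texts for a handle from the open AppView."""
    params = urllib.parse.urlencode({"actor": handle, "limit": limit})
    url = f"https://public.api.bsky.app/xrpc/app.bsky.feed.getAuthorFeed?{params}"
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    # Each feed item wraps a post record; the text lives under post["record"]["text"].
    return [item["post"]["record"]["text"] for item in data.get("feed", [])]

if __name__ == "__main__":
    for text in fetch_recent_posts("bsky.app"):  # example handle
        print("-", text[:80])
```

The point is less the specific call than the fact that no permission, API key, or business deal is required to read and remix this data.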

And importantly, if you decide that you don’t agree with Bluesky’s design and moderation decisions, you can build something else on the same infrastructure and use that instead. This is fundamentally different from the dominant, centralized social media that has prevailed until now.

At the core of Bluesky’s philosophy is the idea that instead of being centralized in the hands of one person or institution, social media governance should obey the principle of subsidiarity. The Nobel Prize–winning economist Elinor Ostrom found, through studying grassroots solutions to local environmental problems around the world, that some problems are best solved locally, while others are best solved at a higher level. 

In terms of content moderation, posts related to child sexual abuse or terrorism are best handled by professionals trained to help keep millions or billions safe. But a lot of decisions about speech can be solved in each community, or even user by user as people assemble Bluesky block lists. 

So all the right elements are currently in place at Bluesky to usher in this new architecture for social media: independent ownership, newfound popularity, a stark contrast with other dominant platforms, and right-minded leadership. But challenges remain, and we can’t count on Bluesky to do this right without support. 

Critics have pointed out that Bluesky has yet to turn a profit and is currently running on venture capital, the same corporate structure that brought us Facebook, Twitter, and other social media companies. As of now, there’s no option to exit Bluesky and take your data and network with you, because there are no other servers that run the AT Protocol. Bluesky CEO Jay Graber deserves credit for her stewardship so far, and for attempting to avoid the dangers of advertising incentives. But the process by which capitalism degrades tech products is so predictable that Cory Doctorow coined a now-popular term for it: enshittification.

That’s why we need to act now to secure the foundation of this digital future and make it enshittification-proof. This week, prominent technologists started a new project, which we at New_ Public are supporting, called Free Our Feeds. There are three parts: First, Free Our Feeds wants to create a nonprofit foundation to govern and protect the AT Protocol, outside of Bluesky the company. We also need to build redundant servers so all users can leave with their data or build anything they want—regardless of policies set by Bluesky. Finally, we need to spur the development of a whole ecosystem built on this tech with seed money and expertise. 

It’s worth noting that this is not a hostile takeover: Bluesky and Graber recognize the importance of this effort and have signaled their approval. But the point is, it can’t rely on them. To free us from fickle billionaires, some of the power has to reside outside Bluesky, Inc. 

If we get this right, so much is possible. Not too long ago, the internet was full of builders and people working together: the open web. Email. Podcasts. Wikipedia is one of the best examples—a collaborative project to create one of the web’s best free, public resources. And the reason we still have it today is the infrastructure built up around it: The nonprofit Wikimedia Foundation protects the project and insulates it from the pressures of capitalism. When’s the last time we collectively built anything as good?

We can shift the balance of power and reclaim our social lives from these companies and their billionaires. This is an opportunity to bring much more independence, innovation, and local control to our online conversations. We can finally build the “Wikipedia of social media,” or whatever we want. But we need to act, because the future of the internet can’t depend on whether one of the richest men on Earth wakes up on the wrong side of the bed. 

Eli Pariser is author of The Filter Bubble and Co-Director of New_ Public, a nonprofit R&D lab that’s working to reimagine social media. 

Deepti Doshi is a Co-Director of New_ Public and was a director at Meta.

The Download: how to save social media, and “leftover” embryos

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

We need to protect the protocol that runs Bluesky

—Eli Pariser & Deepti Doshi

Last week, when Mark Zuckerberg announced Meta would be ending third-party fact-checking, it was a shocking pivot, but not exactly surprising. It’s just the latest example of a billionaire flip-flop affecting our social lives on the internet. 

Zuckerberg isn’t the only social media CEO careening all over the road: Elon Musk, since buying Twitter in 2022 and touting free speech as “the bedrock of a functioning democracy,” has suspended journalists, restored tens of thousands of banned users, brought back political advertising, and weakened verification and harassment policies. 

Unfortunately, these capricious billionaires can do whatever they want because of an ownership model that privileges singular, centralized control in exchange for shareholder returns. The internet doesn’t need to be like this. But as luck would have it, a new way is emerging just in time. Read the full story.

Deciding the fate of “leftover” embryos

Over the past few months, I’ve been working on a piece about IVF embryos. The goal of in vitro fertilization is to create babies via a bit of lab work: Trigger the release of lots of eggs, introduce them to sperm in a lab, transfer one of the resulting embryos into a person’s uterus, and cross your fingers for a healthy pregnancy. Sometimes it doesn’t work. But often it does. For the article, I explored what happens to the healthy embryos that are left over.

These days, responsible IVF clinics will always talk to people about the possibility of having leftover embryos before they begin treatment. But it can be really difficult to make these decisions before you’ve even started treatment, and some people can’t imagine having any left over—or how they might feel about them. Read the full story.

—Jessica Hamzelou

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

MIT Technology Review Narrated: Palmer Luckey on the Pentagon’s future of mixed reality

Palmer Luckey, the founder of Oculus VR, has set his sights on a new mixed-reality headset customer: the Pentagon. If designed well, his company Anduril’s headset will automatically sort through countless pieces of information and flag the most important ones to soldiers in real time. But that’s a big “if.”

This is our latest story to be turned into an MIT Technology Review Narrated podcast, which we’re publishing each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The Biden administration won’t force through a TikTok ban
But TikTok could choose to shut itself down on Sunday to prove a point. (ABC News)
+ A Supreme Court decision is expected later today. (NYT $)
+ Every platform has a touch of TikTok about it these days. (The Atlantic $)

2 Apple is pausing its AI news feature
Because it can’t be trusted to meld news stories together without hallucinating. (BBC)
+ The company is working on a fix to roll out in a future software update. (WP $)

3 Meta is preparing for Donald Trump’s mass deportations
By relaxing speech policies around immigration, Meta is poised to shape public opinion towards accepting Trump’s plans to tear families apart. (404 Media)

4 An uncrewed SpaceX rocket exploded during a test flight
Elon Musk says it was probably caused by a leak. (WSJ $)

5 The FBI believes that hackers accessed its agents’ call logs
The data could link investigators to their secret sources. (Bloomberg $)

6 What it’s like fighting fire with water
Dumping water on LA’s wildfires may be inelegant, but it is effective. (NY Mag $)
+ How investigators are attempting to trace the fires’ origins. (BBC)

7 The road to adapting Tesla’s chargers for other EVs is far from smooth
But it is happening, slowly but surely. (IEEE Spectrum)
+ Donald Trump isn’t a fan of EVs, but the market is undoubtedly growing. (Vox)
+ Why EV charging needs more than Tesla. (MIT Technology Review)

8 Bionic hands are getting far more sensitive 🦾
A new study is shedding light on how to make them feel more realistic. (FT $)
+ These prosthetics break the mold with third thumbs, spikes, and superhero skins. (MIT Technology Review)

9 Gen Z can’t get enough of astrology apps 🌌
Stargazing is firmly back en vogue among the younger generations. (Economist $)

10 Nintendo has finally unveiled its long-awaited Switch 2 console
Only for it to look a whole lot like its predecessor. (WSJ $)
+ But it’ll probably sell a shedload of units anyway. (Wired $)

Quote of the day

“Going viral is like winning the lottery—nearly impossible to replicate.”

—Sarah Schauer, a former star of the defunct video app Vine, offers advice to creators left nervous by TikTok’s uncertain future in the US, the Washington Post reports.

The big story

After 25 years of hype, embryonic stem cells are still waiting for their moment

August 2023

In 1998, researchers isolated powerful stem cells from human embryos. It was a breakthrough, since these cells are the starting point for human bodies and have the capacity to turn into any other type of cell—heart cells, neurons, you name it.

National Geographic would later summarize the incredible promise: “the dream is to launch a medical revolution in which ailing organs and tissues might be repaired” with living replacements. It was the dawn of a new era. A holy grail. Pick your favorite cliché—they all got airtime.

Yet today, more than two decades later, there are no treatments on the market based on these cells. Not one. Our biotech editor Antonio Regalado set out to investigate why, and when that might change. Here’s what he discovered.

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ If you’re planning on catching up with a friend this weekend—stop! You should be hanging out instead.
+ David Lynch was a true visionary; an innovative artist and master of the truly weird. The world is a duller place without him.
+ The very best instant noodles, ranked. ($)
+ Congratulations to the highly exclusive Cambridge University Tiddlywinks Club, which is celebrating its 70th anniversary.

OpenAI has created an AI model for longevity science

When you think of AI’s contributions to science, you probably think of AlphaFold, the Google DeepMind protein-folding program that earned its creators a Nobel Prize last year.

Now OpenAI says it’s getting into the science game too—with a model for engineering proteins.

The company says it has developed a language model that dreams up proteins capable of turning regular cells into stem cells—and that it has handily beaten humans at the task.

The work represents OpenAI’s first model focused on biological data and its first public claim that its models can deliver unexpected scientific results. As such, it is a step toward determining whether or not AI can make true discoveries, which some argue is a major test on the pathway to “artificial general intelligence.”

Last week, OpenAI CEO Sam Altman said he was “confident” his company knows how to build an AGI, adding that “superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own.” 

The protein engineering project started a year ago when Retro Biosciences, a longevity research company based in San Francisco, approached OpenAI about working together.

That link-up did not happen by chance. Sam Altman, the CEO of OpenAI, personally funded Retro with $180 million, as MIT Technology Review first reported in 2023.

Retro has the goal of extending the normal human lifespan by 10 years. For that, it studies what are called Yamanaka factors. Those are a set of proteins that, when added to a human skin cell, will cause it to morph into a young-seeming stem cell, a type that can produce any other tissue in the body. 

It’s a phenomenon that researchers at Retro, and at richly funded companies like Altos Labs, see as the possible starting point for rejuvenating animals, building human organs, or providing supplies of replacement cells.

But such cell “reprogramming” is not very efficient. It takes several weeks, and less than 1% of cells treated in a lab dish will complete the rejuvenation journey.

OpenAI’s new model, called GPT-4b micro, was trained to suggest ways to re-engineer the protein factors to increase their function. According to OpenAI, researchers used the model’s suggestions to change two of the Yamanaka factors to be more than 50 times as effective—at least according to some preliminary measures. 

“Just across the board, the proteins seem better than what the scientists were able to produce by themselves,” says John Hallman, an OpenAI researcher.

Hallman and OpenAI’s Aaron Jaech, as well as Rico Meinl from Retro, were the model’s lead developers.

Outside scientists won’t be able to tell if the results are real until they’re published, something the companies say they are planning. Nor is the model available for wider use—it’s still a bespoke demonstration, not an official product launch.

“This project is meant to show that we’re serious about contributing to science,” says Jaech. “But whether those capabilities will come out to the world as a separate model or whether they’ll be rolled into our mainline reasoning models—that’s still to be determined.”

The model does not work the same way as Google’s AlphaFold, which predicts what shape proteins will take. Since the Yamanaka factors are unusually floppy and unstructured proteins, OpenAI said, they called for a different approach, which its large language models were suited to.

The model was trained on examples of protein sequences from many species, as well as information on which proteins tend to interact with one another. While that’s a lot of data, it’s just a fraction of what OpenAI’s flagship chatbots were trained on, making GPT-4b an example of a “small language model” that works with a focused data set.

Once Retro scientists were given the model, they tried to steer it to suggest possible redesigns of the Yamanaka proteins. The prompting tactic used is similar to the “few-shot” method, in which a user queries a chatbot by providing a series of examples with answers, followed by an example for the bot to respond to.
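
GPT-4b micro itself is not publicly available, so the sketch below only illustrates the few-shot pattern described above, using OpenAI’s general-purpose chat API; the model name and the toy protein strings are placeholders, not Retro’s actual prompts or data.

```python
# A rough sketch of few-shot prompting with OpenAI's public chat API.
# GPT-4b micro is not available to outside users; the model name and the
# toy "sequences" below are placeholders chosen purely to show the pattern.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

few_shot_prompt = (
    "You suggest edited variants of a protein sequence.\n"
    "Example 1:\nInput: MKTAYIA\nOutput: MRTAYIA\n"
    "Example 2:\nInput: GLSDGEWQ\nOutput: GLTDGEWQ\n"
    "Now respond to this input:\nInput: MQDKLVN\nOutput:"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # stand-in model; the bespoke GPT-4b micro is not public
    messages=[{"role": "user", "content": few_shot_prompt}],
)
print(response.choices[0].message.content)
```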

Although genetic engineers have ways to direct evolution of molecules in the lab, they can usually test only so many possibilities. And even a protein of typical length can be changed in nearly infinite ways (since they’re built from hundreds of amino acids, and each acid comes in 20 possible varieties).

OpenAI’s model, however, often spits out suggestions in which a third of the amino acids in the proteins are changed.
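
To put “nearly infinite” in rough numbers, here is a back-of-the-envelope calculation; the 300-residue length is an assumption chosen only because it is a typical protein size.

```python
# Back-of-the-envelope sequence-space arithmetic for an illustrative
# 300-residue protein built from the 20 standard amino acids.
from math import comb

length = 300    # assumed typical protein length
alphabet = 20   # standard amino acids

total_sequences = alphabet ** length               # every possible sequence of this length
single_mutants = length * (alphabet - 1)           # change exactly one position: 5,700
third_changed = comb(length, length // 3) * (alphabet - 1) ** (length // 3)

print(f"all sequences of this length: ~10^{len(str(total_sequences)) - 1}")
print(f"single-point variants: {single_mutants:,}")
print(f"variants with a third of positions changed: ~10^{len(str(third_changed)) - 1}")
```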

[Images: fibroblasts on day 1; cells reprogrammed with SOX2, KLF4, OCT4, and MYC on day 10; and cells reprogrammed with RetroSOX, RetroKLF, OCT4, and MYC on day 10. Credit: OpenAI]

“We threw this model into the lab immediately and we got real-world results,” says Retro’s CEO, Joe Betts-Lacroix. He says the model’s ideas were unusually good, leading to improvements over the original Yamanaka factors in a substantial fraction of cases.

Vadim Gladyshev, a Harvard University aging researcher who consults with Retro, says better ways of making stem cells are needed. “For us, it would be extremely useful. [Skin cells] are easy to reprogram, but other cells are not,” he says. “And to do it in a new species—it’s often extremely different, and you don’t get anything.” 

How exactly GPT-4b micro arrives at its guesses is still not clear—as is often the case with AI models. “It’s like when AlphaGo crushed the best human at Go, but it took a long time to find out why,” says Betts-Lacroix. “We are still figuring out what it does, and we think the way we apply this is only scratching the surface.”

OpenAI says no money changed hands in the collaboration. But because the work could benefit Retro—whose biggest investor is Altman—the announcement may add to questions swirling around the OpenAI CEO’s side projects.

Last year, the Wall Street Journal said Altman’s wide-ranging investments in private tech startups amount to an “opaque investment empire” that is “creating a mounting list of potential conflicts,” since some of these companies also do business with OpenAI.

In Retro’s case, simply being associated with Altman, OpenAI, and the race toward AGI could boost its profile and increase its ability to hire staff and raise funds. Betts-Lacroix did not answer questions about whether the early-stage company is currently in fundraising mode. 

OpenAI says Altman was not directly involved in the work and that it never makes decisions based on Altman’s other investments.