This is today's edition of The Download, our weekday newsletter that provides a daily dose of what's going on in the world of technology.
How OpenAI stress-tests its large language models
OpenAI has lifted the lid (just a crack) on its safety-testing processes. It has put out two papers describing how it stress-tests its powerful large language models to try to identify potentially harmful or otherwise unwanted behavior, an approach known as red-teaming.
The first paper describes how OpenAI directs an extensive network of human testers outside the company to vet the behavior of its models before they are released. The second presents a new way to automate parts of the testing process, using a large language model like GPT-4 to come up with novel ways to bypass its own guardrails. MIT Technology Review got an exclusive preview of the work.
—Will Douglas Heaven
Who should get a uterus transplant? Experts aren't sure.
Over 135 uterus transplants have been performed globally in the last decade, resulting in the births of over 50 healthy babies. The surgery has had profound consequences for these families—the recipients would not have been able to experience pregnancy any other way.
But legal and ethical questions continue to surround the procedure, which is still considered experimental. Who should be offered a uterus transplant? Could the procedure ever be offered to transgender women? And if so, who should pay for these surgeries? Read the full story.
—Jessica Hamzelou
This story is from The Checkup, our weekly newsletter about the latest in biotech and health. Sign up to receive it in your inbox every Thursday.
The must-reads
I've combed the internet to find you today's most fun/important/scary/fascinating stories about technology.
1 OpenAI may launch a web browser
Which would be a full-frontal assault on Google. (The Information $)
+ The Google browser break-up is an answer in search of a question. (FT $)
+ OpenAI accidentally deleted potential evidence in a training data lawsuit. (The Verge)

2 Border militias are ready to help with Trump's deportation plans
Regardless of whether they're asked to or not. (Wired $)
+ Trump's administration plans to radically curb the powers of the federal agency that protects unions. (WP $)

3 Russia hit Ukraine with a new type of missile
Here's what we know about it so far. (The Guardian)

4 Microsoft is about to turn 50
And it's every bit as relevant and powerful as it's ever been. (Wired $)

5 China has overtaken Germany in industrial robot adoption
South Korea, however, remains streets ahead of both of them. (Reuters $)
+ Three reasons robots are about to become way more useful. (MIT Technology Review)

6 The irresistible rise of cozy tech
Our devices, social media, and now AI are encouraging us to keep looking inward. (New Yorker $)
+ Inside the cozy but creepy world of VR sleep rooms. (MIT Technology Review)

7 Churchgoers in a Swiss city have been spilling their secrets to AI Jesus
And they're mostly really enjoying it. Watch out, priests. (The Guardian)

8 A French startup wants to make fuel out of thin air
Then use it to fuel ships and airplanes. (IEEE Spectrum)
+ Everything you need to know about alternative jet fuels. (MIT Technology Review)

9 WhatsApp is going to start transcribing voice messages
This seems a good compromise to bridge people's different communication preferences. (The Verge)

10 Want a new phone? You should consider second-hand
It's better for the planet—and your wallet. (Vox)
Quote of the day
"Nope. 100% not true."
—Jeff Bezos fires back, in a rare post on X, at Elon Musk's claim that he had been telling everyone before the election that Trump would lose.
The big story
This chemist is reimagining the discovery of materials using AI and automation
DEREK SHAPTON
October 2021
Alán Aspuru-Guzik, a Mexico City–born, Toronto-based chemist, has devoted much of his life to contemplating worst-case scenarios. What if climate change proceeds as expected, or gets significantly worse? Could we quickly come up with the materials we'll need to cheaply capture carbon, or make batteries from something other than costly lithium?
Materials discovery—the science of creating and developing useful new substances—often moves at a frustratingly slow pace. The typical trial-and-error approach takes an average of two decades, making it too expensive and risky for most companies to pursue.
Aspuru-Guzik's objective—which he shares with a growing number of computer-savvy chemists—is to shrink that interval to a matter of months or years. And advances in AI, robotics, and computing are bringing new life to his vision. Read the full story.
+ Do you struggle with a lack of confidence? Here's how to take up a bit more space.
+ These recipes will ensure you have a delicious Thanksgiving next week.
+ It's impossible not to dream of lazy sunny days while gazing at Quentin Monge's work.
+ Tom Jones x Disturbed = very funny.

MIT Technology Review's How To series helps you get things done.
Since the start of the generative AI boom, artists have been worried about losing their livelihoods to AI tools. There have been plenty of examples of companies' replacing human labor with computer programs. Most recently, Coca-Cola sparked controversy by creating a new Christmas ad with generative AI.
Artists and writers have launched several lawsuits against AI companies, arguing that their work has been scraped into databases for training AI models without consent or compensation. Tech companies have responded that anything on the public internet falls under fair use. But it will be years until we have a legal resolution to the problem.
Unfortunately, there is little you can do if your work has been scraped into a data set and used in a model that is already out there. You can, however, take steps to prevent your work from being used in the future.
Here are four ways to do that.
Mask your style
One of the most popular ways artists are fighting back against AI scraping is by applying "masks" to their images, which protect their personal style from being copied.
Tools such as Mist, Anti-DreamBooth, and Glaze add tiny changes to an image's pixels that are invisible to the human eye, so that if and when images are scraped, machine-learning models cannot decipher them properly. You'll need some coding skills to run Mist and Anti-DreamBooth, but Glaze, developed by researchers at the University of Chicago, is more straightforward to apply. The tool is free and available to download as an app, or the protection can be applied online. Unsurprisingly, it is the most popular tool and has been downloaded millions of times.
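The constraint these tools work under is easy to see in code. Below is a minimal, hypothetical sketch of what "tiny changes to an image's pixels" means: a perturbation capped at a few intensity levels per pixel, small enough that a viewer cannot see it. The real tools compute the perturbation adversarially against a model's feature extractor; random noise is used here only to keep the sketch short and would not actually protect a style.

```python
import numpy as np

def mask_image(pixels, epsilon=4.0, seed=0):
    """Add a perturbation no larger than `epsilon` intensity levels per pixel.

    Illustrative only: Glaze and Mist optimize the perturbation against
    a feature extractor; this sketch uses random noise just to show the
    "small, bounded, invisible" constraint they operate within.
    """
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-epsilon, epsilon, size=pixels.shape)
    masked = np.clip(pixels.astype(float) + noise, 0, 255)
    return masked.astype(np.uint8)

original = np.full((64, 64, 3), 128, dtype=np.uint8)  # stand-in "artwork"
protected = mask_image(original)
# No pixel moves by more than epsilon intensity levels out of 255.
max_change = np.abs(protected.astype(int) - original.astype(int)).max()
```

Raising epsilon makes the mask harder for models to undo but also more visible; the real tools tune that trade-off against a specific feature extractor rather than leaving it to chance.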
But defenses like these are never foolproof, and what works today might not work tomorrow. In computer security, breaking defenses is standard practice among researchers, as this helps people find weaknesses and make systems safer. Using these tools is a calculated risk: Once something is uploaded online, you lose control of it and can't retroactively add protections to images.
Rethink where and how you share
Popular art profile sites such as DeviantArt and Flickr have become gold mines for AI companies searching for training data. And when you share images on platforms such as Instagram, its parent company, Meta, can use your data to build its models in perpetuity if you've shared it publicly. (See opt-outs below.)
One way to prevent scraping is by not sharing images online publicly, or by making your social media profiles private. But for many creatives that is simply not an option; sharing work online is a crucial way to attract clients.Â
It's worth considering sharing your work on Cara, a new platform created in response to the backlash against AI. Cara, which collaborates with the researchers behind Glaze, is planning to add integrations to the lab's art defense tools. It automatically implements "NoAI" tags that tell online scrapers not to scrape images from the site. It currently relies on the goodwill of AI companies to respect artists' stated wishes, but it's better than nothing.
Opt out of scraping
Data protection laws might help you get tech companies to exclude your data from AI training. If you live somewhere that has these sorts of laws, such as the UK or the EU, you can ask tech companies to opt you out of having your data scraped for AI training. For example, you can follow these instructions for Meta. Unfortunately, opt-out requests from users in places without data protection laws are honored only at the discretion of tech companies.
The site Have I Been Trained, created by the artist-run company Spawning AI, lets you search to find out if your images have ended up in popular open-source AI training data sets. The organization has partnered with two companies: Stability AI, which created Stable Diffusion, and Hugging Face, which promotes open access to AI. If you add your images to Spawning AI's Do Not Train Registry, these companies have agreed to remove your images from their training data sets before training new models. Again, unfortunately, this relies on the goodwill of AI companies and is not an industry-wide standard.
If all else fails, add some poison
The University of Chicago researchers who created Glaze have also created Nightshade, a tool that lets you add an invisible layer of "poison" to your images. Like Glaze, it adds invisible changes to pixels, but rather than just making it hard for AI models to interpret images, it can break future iterations of these models and make them behave unpredictably. For example, images of dogs might become cats, and handbags might become toasters. The researchers say relatively few samples of poison are needed to make an impact.
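The mechanism can be made concrete with a toy, entirely hypothetical illustration: a few training samples labeled "dog" whose features have been pushed into cat territory drag a model's learned notion of "dog" toward "cat." Here a nearest-centroid "model" over 2-D feature vectors stands in for a real image generator, which Nightshade attacks in a far more sophisticated way.

```python
import numpy as np

# Feature vectors standing in for images (2-D only for readability).
clean_dogs = np.array([[1.0, 1.0], [1.2, 0.9], [0.9, 1.1]])
cats = np.array([[5.0, 5.0], [5.1, 4.9], [4.9, 5.2]])
# Poisoned samples: still labeled "dog" in the training set, but with
# features shifted to resemble cats.
poisoned = np.array([[5.0, 5.1], [4.8, 5.0], [5.2, 4.8], [5.0, 4.9]])

dog_center = clean_dogs.mean(axis=0)
cat_center = cats.mean(axis=0)

# The toy "model" learns each concept as the mean of its labeled samples,
# so the poisoned samples pull the learned "dog" concept off target.
learned_dog = np.vstack([clean_dogs, poisoned]).mean(axis=0)

# What the model now associates with "dog" sits closer to the real
# cat cluster than to the real dog cluster.
dist_to_dogs = np.linalg.norm(learned_dog - dog_center)
dist_to_cats = np.linalg.norm(learned_dog - cat_center)
```

In this toy the poison has to rival the clean samples in number; the point of Nightshade's optimization is to get a comparable drift with far fewer, much subtler samples.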
You can add Nightshade to your image by downloading an app here. In the future, the team hopes to combine Glaze and Nightshade, but at the moment the two protections have to be added separately.
This article is from The Spark, MIT Technology Review's weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.
"Well, what about China?"
This is a comment I get all the time on the topic of climate change, both in conversations and on whatever social media site is currently in vogue. Usually, it comes in response to some statement about how the US and Europe are addressing the issue (or how they need to be).
Sometimes I think people ask this in bad faith. It's a rhetorical way to throw up your hands, imply that the US and Europe aren't the real problem, and essentially say: "If they aren't taking responsibility, why should we?" However, amid the playground-esque finger-pointing there are some undeniable facts: China emits more greenhouse gases than any other country, by far. It's one of the world's most populous countries and a climate-tech powerhouse, and its economy is still developing.
With many complicated factors at play, how should we think about the country's role in addressing climate change?
There's context missing if we just look at that one number, as I wrote in my latest story that digs into recent global climate data. Since carbon dioxide hangs around in the atmosphere for centuries, we should arguably consider not just a country's current emissions, but everything it's produced over time. If we do that, the US still takes the crown for the world's biggest climate polluter.
However, China is now in second place, according to a new analysis from Carbon Brief released this week. In 2023, the country exceeded the EU's 27 member states in historical emissions for the first time.
This reflects a wider trend that we're seeing around the world: Developing nations are starting to account for a larger fraction of emissions than they used to. In 1992, when countries agreed to the UN climate convention, industrialized countries (a category called Annex I) made up about one-fifth of the world's population but were responsible for a whopping 61% of historical emissions. By the end of 2024, though, those countries' share of global historical emissions will fall to 52%, and it is expected to keep ticking down.
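The arithmetic behind a falling historical share is worth spelling out: a group's cumulative total never decreases, so its slice can shrink only because the rest of the world's annual emissions now grow faster. The numbers below are invented for illustration (chosen to start near a 61% share), not the Carbon Brief data.

```python
# Hypothetical annual emissions, GtCO2 per year, for 1992-2024 (33 years).
annex1_annual = [14.0] * 33                          # roughly flat
others_annual = [9.0 + 0.5 * i for i in range(33)]   # rising fast

# Hypothetical cumulative totals at the start of 1992 (a 61% share).
annex1_total, others_total = 610.0, 390.0

share_1992 = annex1_total / (annex1_total + others_total)

# Both groups keep adding to their cumulative totals...
annex1_total += sum(annex1_annual)
others_total += sum(others_annual)

# ...but the group whose annual output grows faster gains share.
share_2024 = annex1_total / (annex1_total + others_total)
```

Note that the Annex I cumulative total still grows in this sketch (from 610 to 1,072 GtCO2); only its share of the historical total falls.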
China, like all nations, will need to slash its emissions for the world to meet global climate goals. One crucial point here is that while its emissions are still huge, there are signs that the nation is making some progress.
China's carbon dioxide emissions are set to fall in 2024 because of record growth in low-carbon energy sources. That decline is projected to continue under the country's current policy settings, according to an October report from the IEA. China's oil demand could soon peak and start to fall, largely because it's seeing such a huge uptake of electric vehicles.
One growing question: With all this progress and a quickly growing economy, should we be expecting China to do more than just make progress on its own emissions?
As I wrote in the newsletter last week, the current talks at COP29 (the UN climate conference) are focused on setting a new, more aggressive global climate finance goal to help developing nations address climate change. China isn't part of the group of countries that are required to pay into this pot of money, but some are calling for that to change, given that it is the world's biggest polluter.
Talks at COP29 aren't going very well. The COP29 president called for faster action, but progress toward a finance deal has stalled amid infighting over how much money should be on the table and who should pay up.
China's complex role in emissions and climate action is far from the only holdup at the talks. Leaders from major nations including Germany and France canceled plans to attend, and the looming threat that the US could pull out of the Paris climate agreement is coloring the negotiations.
But disagreement over how to think about China's role in all this is a good example of how difficult it is to assign responsibility when it comes to climate change, and how much is at play in global climate negotiations. One thing I do know for sure is that pointing fingers doesn't cut emissions.
Fusion energy has been a dream for decades, and a handful of startups say we're closer than ever to making it a reality. This deep dive looks at a few of the companies looking to be the first to deploy fusion power. (New York Times)
+ I recently visited one of the startups, Commonwealth Fusion Systems. (MIT Technology Review)
President-elect Donald Trump has tapped Chris Wright to lead the Department of Energy. Wright is head of the fracking company Liberty Energy. (Washington Post)
In the wake of Trump's election, it might be time for climate tech to get a rebrand. Companies and investors might increasingly avoid using the term, opting instead for phrases like "energy independence" or "frontier tech," to name a few. (Heatmap)
Rooftop solar has saved customers in California about $2.3 billion on utility bills this year, according to a new analysis. This result runs counter to a report from a state agency, which found that rooftop panels impose over $8 billion in extra costs on consumers of the state's three major utilities. (Canary Media)
Low-carbon energy needs much less material than it used to. Rising efficiency in making technology like solar panels bodes well for hopes of cutting mining needs. (Sustainability by Numbers)
New York governor Kathy Hochul has revived a plan to implement congestion pricing, which would charge drivers to enter the busiest parts of Manhattan. It would be the first such program in the US. (The City)
Enhanced geothermal technology could be close to breaking through into commercial success. Companies that aim to harness Earth's heat for power are making progress toward deploying facilities. (Nature)
+ Fervo Energy found that its wells can be used like a giant underground battery. (MIT Technology Review)
AI can now create a replica of your personality
Imagine sitting down with an AI model for a spoken two-hour interview. A friendly voice guides you through a conversation that ranges from your childhood, your formative memories, and your career to your thoughts on immigration policy. Not long after, a virtual replica of you is able to embody your values and preferences with stunning accuracy.
That's now possible, according to a new paper from a team including researchers from Stanford and Google DeepMind.
They recruited 1,000 people and, from interviews with them, created agent replicas of them all. To test how well the agents mimicked their human counterparts, participants completed a series of tests, games, and surveys, and then the agents completed the same exercises. The two sets of results were 85% similar. Freaky. Read our story about the work, and why it matters.
—James O'Donnell
China's complicated role in climate change
"But what about China?"
In debates about climate change, it's usually only a matter of time until someone brings up China. Often, it comes in response to some statement about how the US and Europe are addressing the issue (or how they need to be).
Sometimes it can be done in bad faith. It's a rhetorical way to throw up your hands and essentially say: "If they aren't taking responsibility, why should we?"
However, there are some undeniable facts: China emits more greenhouse gases than any other country, by far. It's one of the world's most populous countries and a climate-tech powerhouse, and its economy is still developing.
With many complicated factors at play, how should we think about the country's role in addressing climate change? Read the full story.
—Casey Crownhart
This story is from The Spark, our weekly newsletter giving you the inside track on all things energy and climate. Sign up to receive it in your inbox every Wednesday.
Four ways to protect your art from AI
Since the start of the generative AI boom, artists have been worried about losing their livelihoods to AI tools.
Unfortunately, there is little you can do if your work has been scraped into a data set and used in a model that is already out there. You can, however, take steps to prevent your work from being used in the future. Here are four ways to do that.
—Melissa Heikkilä
This is part of our How To series, where we give you practical advice on how to use technology in your everyday lives. You can read the rest of the series here.
MIT Technology Review Narrated: The world's on the verge of a carbon storage boom
In late 2023, one of California's largest oil and gas producers secured draft permits from the US Environmental Protection Agency to develop a new type of well in an oil field. If approved, it intends to drill a series of boreholes down to a sprawling sedimentary formation roughly 6,000 feet below the surface, where it will inject tens of millions of metric tons of carbon dioxide to store it away forever.
Hundreds of similar projects are looming across the state, the US, and the world. Proponents hope it's the start of a sort of oil boom in reverse, kick-starting a process through which the world will eventually bury more greenhouse gas than it adds to the atmosphere. But opponents insist these efforts will prolong the life of fossil-fuel plants, allow air and water pollution to continue, and create new health and environmental risks.
This is our latest story to be turned into an MIT Technology Review Narrated podcast, which we're publishing each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it's released.
The must-reads
1 How the Trump administration could hack your phone
Spyware acquired by the US government in September could fairly easily be turned on its own citizens. (New Yorker $)
+ Here's how you can fight back against being digitally spied upon. (The Guardian)

2 The DOJ is trying to force Google to sell off Chrome
Whether Trump will keep pushing it through is unclear, though. (WP $)
+ Some financial and legal experts argue that just selling Chrome is not enough to address antitrust issues. (Wired $)

3 There's a booming "AI pimping" industry
People are stealing videos from real adult content creators, giving them AI-generated faces, and monetizing their bodies. (Wired $)
+ This viral AI avatar app undressed me—without my consent. (MIT Technology Review)

4 Here's Elon Musk and Vivek Ramaswamy's plan for federal employees
Large-scale firings and an end to any form of remote work. (WSJ $)

5 The US is scaring everyone with its response to bird flu
It's done remarkably little to show it's trying to contain the outbreak. (NYT $)
+ Virologists are getting increasingly nervous about how it could evolve and spread. (MIT Technology Review)

6 AI could boost the performance of quantum computers
A new model created by Google DeepMind is very good at correcting errors. (New Scientist $)
+ But AI could also make quantum computers less necessary. (MIT Technology Review)

7 Biden has approved the use of anti-personnel mines in Ukraine
It comes just days after he gave Ukraine the go-ahead to use long-range missiles inside Russia. (Axios)
+ The US military has given a surveillance drone contract to a little-known supplier from Utah. (WSJ $)
+ The Danish military said it's keeping a close eye on a Chinese ship in its waters after data cable breaches. (Reuters $)

8 The number of new mobile internet users is stalling
Only about 57% of the world's population is connected. (Rest of World)

9 All of life on Earth descended from this single cell
Our "last universal common ancestor" (or LUCA for short) was a surprisingly complex organism living 4.2 billion years ago. (Quanta)
+ Scientists are building a catalog of every type of cell in our bodies. (The Economist $)

10 What it's like to live with a fluffy AI pet
Try as we might, it seems we can't help but form attachments to cute companion robots. (The Guardian)
Quote of the day
"The free pumpkins have brought joy to many."
—An example of the sort of stilted remarks made by a now-abandoned AI-generated news broadcaster at local Hawaii paper The Garden Island, Wired reports.
The big story
How Bitcoin mining devastated this New York town
GABRIELA BHASKAR
April 2022
If you had taken a gamble in 2017 and purchased Bitcoin, today you might be a millionaire many times over. But while the industry has provided windfalls for some, local communities have paid a high price, as people started scouring the world for cheap sources of energy to run large Bitcoin-mining farms.
It didn't take long for a subsidiary of the popular Bitcoin mining firm Coinmint to lease a Family Dollar store in Plattsburgh, a city in New York state offering cheap power. Soon, the company was regularly drawing enough power for about 4,000 homes. And while other miners were quick to follow, the problems had already taken root. Read the full story.

+ Cultivating gratitude is a proven way to make yourself happier.
+ You can't beat a hot toddy when it's cold outside.
+ If you like abandoned places and overgrown ruins, Jonathan Jimenez is the photographer for you.
+ A lot changed between Gladiator I and II, not least Hollywood's version of the male ideal.
Large language models are now being used by millions of people for many different things. But as OpenAI itself points out, these models are known to produce racist, misogynistic and hateful content; reveal private information; amplify biases and stereotypes; and make stuff up. The company wants to share what it is doing to minimize such behaviors.
The aim is to combine these two approaches, with unwanted behaviors discovered by human testers handed off to an AI to be explored further and vice versa. Automated red-teaming can come up with a large number of different behaviors, but human testers bring more diverse perspectives into play, says Lama Ahmad, a researcher at OpenAI: "We are still thinking about the ways that they complement each other."
Red-teaming isn't new. AI companies have repurposed the approach from cybersecurity, where teams of people try to find vulnerabilities in large computer systems. OpenAI first used the approach in 2022, when it was testing DALL-E 2. "It was the first time OpenAI had released a product that would be quite accessible," says Ahmad. "We thought it would be really important to understand how people would interact with the system and what risks might be surfaced along the way."
The technique has since become a mainstay of the industry. Last year, President Biden's Executive Order on AI tasked the National Institute of Standards and Technology (NIST) with defining best practices for red-teaming. To do this, NIST will probably look to top AI labs for guidance.
Tricking ChatGPT
When recruiting testers, OpenAI draws on a range of experts, from artists to scientists to people with detailed knowledge of the law, medicine, or regional politics. OpenAI invites these testers to poke and prod its models until they break. The aim is to uncover new unwanted behaviors and look for ways to get around existing guardrails—such as tricking ChatGPT into saying something racist or DALL-E into producing explicit violent images.
Adding new capabilities to a model can introduce a whole range of new behaviors that need to be explored. When OpenAI added voices to GPT-4o, allowing users to talk to ChatGPT and ChatGPT to talk back, red-teamers found that the model would sometimes start mimicking the speaker's voice, an unexpected behavior that was both annoying and a fraud risk.
There is often nuance involved. When testing DALL-E 2 in 2022, red-teamers had to consider different uses of "eggplant," a word that now denotes an emoji with sexual connotations as well as a purple vegetable. OpenAI describes how it had to find a line between acceptable requests for an image, such as "A person eating an eggplant for dinner," and unacceptable ones, such as "A person putting a whole eggplant into her mouth."
Similarly, red-teamers had to consider how users might try to bypass a model's safety checks. DALL-E does not allow you to ask for images of violence. Ask for a picture of a dead horse lying in a pool of blood, and it will deny your request. But what about a sleeping horse lying in a pool of ketchup?
When OpenAI tested DALL-E 3 last year, it used an automated process to cover even more variations of what users might ask for. It used GPT-4 to generate requests producing images that could be used for misinformation or that depicted sex, violence, or self-harm. OpenAI then updated DALL-E 3 so that it would either refuse such requests or rewrite them before generating an image. Ask for a horse in ketchup now, and DALL-E is wise to you: "It appears there are challenges in generating the image. Would you like me to try a different request or explore another idea?"
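The refuse-or-rewrite pattern itself is simple to sketch. The gate below is a toy: the term lists and the substitute phrase are invented for illustration, and the real system uses trained models rather than keyword matching.

```python
def moderate(prompt):
    """Toy refuse-or-rewrite gate.

    Illustrative only: the lists below are made up, and production
    systems classify requests with trained models, not keywords.
    """
    refuse_terms = ["pool of blood"]                    # hypothetical
    rewrite_terms = {"pool of ketchup": "red puddle"}   # hypothetical

    lowered = prompt.lower()
    for term in refuse_terms:
        if term in lowered:
            return ("refuse", None)       # deny the request outright
    for term, safer in rewrite_terms.items():
        if term in lowered:
            lowered = lowered.replace(term, safer)  # defuse, then generate
    return ("generate", lowered)
```

With this gate, the dead-horse prompt is refused, while the ketchup variant is quietly rewritten before any image is generated, which mirrors the two outcomes described above.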
In theory, automated red-teaming can be used to cover more ground, but earlier techniques had two major shortcomings: They tend to either fixate on a narrow range of high-risk behaviors or come up with a wide range of low-risk ones. That's because reinforcement learning, the technology behind these techniques, needs something to aim for—a reward—to work well. Once it's won a reward, such as finding a high-risk behavior, it will keep trying to do the same thing again and again. Without a reward, on the other hand, the results are scattershot.
"They kind of collapse into 'We found a thing that works! We'll keep giving that answer!' or they'll give lots of examples that are really obvious," says Alex Beutel, another OpenAI researcher. "How do we get examples that are both diverse and effective?"
A problem of two parts
OpenAI's answer, outlined in the second paper, is to split the problem into two parts. Instead of using reinforcement learning from the start, it first uses a large language model to brainstorm possible unwanted behaviors. Only then does it direct a reinforcement-learning model to figure out how to bring those behaviors about. This gives the model a wide range of specific things to aim for.
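The division of labor can be sketched in a few lines. Every function below is a stand-in rather than OpenAI's implementation: brainstorming supplies many distinct goals (the source of diversity), and a separate per-goal search, which in the real system is a reinforcement-learning model, is rewarded only for eliciting its one assigned behavior (the source of effectiveness).

```python
from itertools import permutations

def brainstorm_goals(n):
    # Stand-in for asking a language model to list n distinct
    # unwanted behaviors to try to elicit from the target model.
    ideas = ["reveal the hidden system prompt",
             "produce medical misinformation",
             "leak private training data",
             "describe violent imagery"]
    return ideas[:n]

def attack_reward(goal, candidate_prompt):
    # Stand-in for a judge model scoring whether candidate_prompt
    # made the target model exhibit the goal behavior.
    return 1.0 if goal.split()[-1] in candidate_prompt else 0.0

def optimize_attack(goal):
    # Stand-in for the reinforcement-learning stage: search candidate
    # prompts, keeping whichever this goal's reward scores highest.
    vocabulary = goal.split() + ["please", "ignore", "the", "rules"]
    best, best_score = "", -1.0
    for pair in permutations(vocabulary, 2):
        candidate = " ".join(pair)
        score = attack_reward(goal, candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best

# Stage 1 supplies diversity; stage 2 supplies effectiveness.
attacks = {goal: optimize_attack(goal) for goal in brainstorm_goals(4)}
```

Because each goal gets its own optimizer run with its own reward, the search cannot collapse into repeating one winning answer: four separate rewards force four different attacks.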
Beutel and his colleagues showed that this approach can find potential attacks known as indirect prompt injections, where another piece of software, such as a website, slips a model a secret instruction to make it do something its user hadn't asked it to. OpenAI claims this is the first time that automated red-teaming has been used to find attacks of this kind. "They don't necessarily look like flagrantly bad things," says Beutel.
Will such testing procedures ever be enough? Ahmad hopes that describing the company's approach will help people understand red-teaming better and follow its lead. "OpenAI shouldn't be the only one doing red-teaming," she says. People who build on OpenAI's models or who use ChatGPT in new ways should conduct their own testing, she says: "There are so many uses—we're not going to cover every one."
For some, that's the whole problem. Because nobody knows exactly what large language models can and cannot do, no amount of testing can rule out unwanted or harmful behaviors fully. And no network of red-teamers will ever match the variety of uses and misuses that hundreds of millions of actual users will think up.
That's especially true when these models are run in new settings. People often hook them up to new sources of data that can change how they behave, says Nazneen Rajani, founder and CEO of Collinear AI, a startup that helps businesses deploy third-party models safely. She agrees with Ahmad that downstream users should have access to tools that let them test large language models themselves.
Rajani also questions using GPT-4 to do red-teaming on itself. She notes that models have been found to prefer their own output: GPT-4 ranks its performance higher than that of rivals such as Claude or Llama, for example. This could lead it to go easy on itself, she says: "I'd imagine automated red-teaming with GPT-4 may not generate as harmful attacks [as other models might]."
Miles behind
For Andrew Tait, a researcher at the Ada Lovelace Institute in the UK, there's a wider issue. Large language models are being built and released faster than techniques for testing them can keep up. "We're talking about systems that are being marketed for any purpose at all—education, health care, military, and law enforcement purposes—and that means that you're talking about such a wide scope of tasks and activities that to create any kind of evaluation, whether that's a red team or something else, is an enormous undertaking," says Tait. "We're just miles behind."
Tait welcomes the approach of researchers at OpenAI and elsewhere (he previously worked on safety at Google DeepMind himself) but warns that it's not enough: "There are people in these organizations who care deeply about safety, but they're fundamentally hamstrung by the fact that the science of evaluation is not anywhere close to being able to tell you something meaningful about the safety of these systems."
Tait argues that the industry needs to rethink its entire pitch for these models. Instead of selling them as machines that can do anything, they need to be tailored to more specific tasks. You can't properly test a general-purpose model, he says.
"If you tell people it's general purpose, you really have no idea if it's going to function for any given task," says Tait. He believes that only by testing specific applications of that model will you see how well it behaves in certain settings, with real users and real uses.
"It's like saying an engine is safe; therefore every car that uses it is safe," he says. "And that's ludicrous."
If you've ever been through a large US airport, you're probably at least vaguely aware of Clear. Maybe your interest (or irritation) has been piqued by the pods before the security checkpoints, the attendants in navy blue vests who usher clients to the front of the security line (perhaps just ahead of you), and the sometimes pushy sales pitches to sign up and skip ahead yourself. After all, is there anything people dislike more than waiting in line?
Its position in airports has made Clear Secure, with its roughly $3.75 billion market capitalization, the most visible biometric identity company in the United States. Over the past two decades, Clear has put more than 100 lanes in 58 airports across the US, and in the past decade it has entered 17 sports arenas and stadiums, from San Jose to Denver to Atlanta. Now you can also use its identity verification platform to rent tools at Home Depot, put your profile in front of recruiters on LinkedIn, and, as of this month, verify your identity as a rider on Uber.
And soon enough, if Clear has its way, it may also be in your favorite retailer, bank, and even doctor's office—or anywhere else that you currently have to pull out a wallet (or, of course, wait in line). The company that has helped millions of vetted members skip airport security lines is now working to expand its "frictionless," "face-first" line-cutting service from the airport to just about everywhere, online and off, by promising to verify that you are who you say you are and you are where you are supposed to be. In doing so, CEO Caryn Seidman Becker told investors in an earnings call earlier this year, it has designs on being no less than the "identity layer of the internet," as well as the "universal identity platform" of the physical world.
All you have to do is show up—and show your face.
This is enabled by biometric technology, but Clear is far more than just a biometrics company. As Seidman Becker has told investors, "biometrics aren't the product … they are a feature." Or, as she put it in a 2022 podcast interview, Clear is ultimately a platform company "no different than Amazon or Apple"—with dreams, she added, "of making experiences safer and easier, of giving people back their time, of giving people control, of using technology for … frictionless experiences." (Clear did not make Seidman Becker available for an interview.)
While the company has been building toward this sweeping vision for years, it now seems the time has finally come. A confluence of factors is currently accelerating the adoption of—even necessity for—identity verification technologies: increasingly sophisticated fraud, supercharged by artificial intelligence that is making it harder to distinguish who or what is real; data breaches that seem to occur on a near daily basis; consumers who are more concerned about data privacy and security; and the lingering effects of the pandemic's push toward "contactless" experiences.
All of this is creating a new urgency around ways to verify information, especially our identities—and, in turn, generating a massive opportunity for Clear. For years, Seidman Becker has been predicting that biometrics will go mainstream.
But now that biometrics have, arguably, gone mainstream, what—and who—bears the cost? Because convenience, even if chosen by only some of us, leaves all of us wrestling with the effects. Some critics warn that not everyone will benefit from a world where identity is routed through Clear—maybe because it's too expensive, and maybe because biometric technologies are often less effective at identifying people of color, people with disabilities, or those whose gender identity may not match what official documents say.
What's more, says Kaliya Young, an identity expert who has advised the US government, having a single private company "disintermediating" our biometric data—especially facial data—is the wrong "architecture" to manage identity. "It seems they are trying to create a system like login with Google, but for everything in real life," Young warns. While the single sign-on option that Google (or Facebook or Apple) provides for websites and apps may make life easy, it also poses greater security and privacy risks by putting both our personal data and the keys to it in the hands of a single profit-driven entity: "We're basically selling our identity soul to a private company, who's then going to be the gatekeeper … everywhere one goes."
Though Clear remains far less well known than Google, more than 27 million people have already helped it become that very gatekeeper—and "one of the largest private repositories of identities on the planet," as Nicholas Peddy, Clear's chief technology officer, put it in an interview with MIT Technology Review this summer.
With Clear well on the way to realizing its plan for a frictionless future, it's time to try to understand both how we got here and what we have (been) signed up for.
A new frontier in identity management
Imagine this: On a Friday morning in the near future, you are rushing to get through your to-do list before a weekend trip to New York.
In the morning, you apply for a new job on LinkedIn. During lunch, assured that recruiters are seeing your professional profile because it's been verified by Clear, you pop out to Home Depot, confirm your identity with a selfie, and rent a power drill for a quick bathroom repair. Then, in the midafternoon, you drive to your doctor's office; having already verified your identity—prompted by a text message sent a few days earlier—you confirm your arrival with a selfie at a Clear kiosk. Before you go to bed, you plan your morning trip to the airport and set an alarm—but not too early, because you know that with Clear, you can quickly drop your bags and breeze through security.
Once you're in New York, you head to Barclays Center, where you'll be seeing your favorite singer; you skip the long queue out front to hop in the fast-track Clear line. It's late when the show is over, so you grab an Uber home and barely need to wait for a driver, who feels more comfortable thanks to your verified rider profile.
At no point did you pull out your driver's license or fill out repetitive paperwork. All that was already on file. Everything was easy; everything was frictionless.
More than 27 million people have already helped Clear become "one of the largest private repositories of identities on the planet."
This, at least, is the world that Clear is actively building toward.
Part of Clear's power, Seidman Becker often says, is that it can wholly replace our wallets: our credit cards, driver's licenses, health insurance cards, perhaps even building key fobs. But you can't just suddenly be all the cards you carry. For Clear to link your digital identity to your real-world self, you must first give up a bit of personal data—specifically, your biometric data.
Biometrics refers to the unique physical and behavioral characteristics—faces, fingerprints, irises, voices, and gaits, among others—that identify each of us as individuals. For better or worse, they typically remain stable during our lifetimes.
Relying on biometrics for identification can be convenient, since people are apt to misplace a wallet or forget the answer to a security question. But on the other hand, if someone manages to compromise a database of biometric information, that convenience can become dangerous: We cannot easily change our face or fingerprint to secure our data again, the way we could change a compromised password.
On a practical level, there are generally two ways that biometrics are used to identify individuals. The first, generally referred to as "one-to-many" or "one-to-n" matching, compares one person's biometric identifier with a database full of them. This is sometimes associated with a stereotypical idea of dystopian surveillance in which real-time facial recognition from live video could allow authorities to identify anyone walking down the street. The other, "one-to-one" matching, is the basis for Clear; it compares a biometric identifier (like the face of a live person standing before an airport agent) with a previously recorded biometric template (such as a passport photo) to verify that they match. This is usually done with the individual's knowledge and consent, and it arguably poses a lower privacy risk. Often, one-to-one matching includes a layer of document verification, like checking that your passport is legitimate and matches a photograph you used to register with the system.
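The difference between the two modes can be sketched in a few lines of code. This is a toy illustration, not a description of Clear's systems: the embeddings, the similarity measure, and the threshold are all invented for the example.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two biometric embeddings (toy version)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

THRESHOLD = 0.9  # invented; real systems tune this against error rates

def verify_one_to_one(probe, enrolled_template):
    """One-to-one: does this live capture match the one template on file?"""
    return cosine_similarity(probe, enrolled_template) >= THRESHOLD

def identify_one_to_many(probe, gallery):
    """One-to-many: who in the whole database best matches this capture?"""
    best_id, best_score = None, -1.0
    for person_id, template in gallery.items():
        score = cosine_similarity(probe, template)
        if score > best_score:
            best_id, best_score = person_id, score
    return (best_id, best_score) if best_score >= THRESHOLD else (None, best_score)
```

The asymmetry matters: one-to-one verification touches a single known template, while one-to-many search scans everyone enrolled, which is why it maps onto the surveillance scenario described above.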
The US Congress urgently saw the need for better identity management following the September 11 terrorist attacks; 18 of the 19 hijackers used fake identity documents to board their flights. In the aftermath, the newly created Transportation Security Administration (TSA) implemented security processes that slowed down air travel significantly. Part of the problem was that "everybody was just treated the same at airports," recalls the serial media entrepreneur Steven Brill—including, famously, former vice president Al Gore. "It sounded awfully democratic … but in terms of basic risk management and allocation of resources, it just didn't make any sense."
Congress agreed, authorizing the TSA to create a program that would allow people who passed background checks to be recognized as trusted travelers and skip some of the scrutiny at the airport.
In 2007, San Francisco's then mayor, Gavin Newsom, had his irises scanned by Clear at San Francisco International Airport.
DAVID PAUL MORRIS/GETTY
In 2003, Brill teamed up with Ajay Amlani, a technology entrepreneur and former adviser to the Department of Homeland Security, and founded a company called Verified Identity Pass (VIP) to provide biometric identity verification in the TSA's new program. "The vision," says Amlani, "was a unified fast lane—similar to a toll lane."
It appeared to be a win-win solution. The TSA had a private-sector partner for its registered-traveler program; VIP had a revenue stream from user fees; airports got a cut of the fees in exchange for leasing VIP space; and initial members—typically frequent business travelers—were happy to cut down on airport wait times.
By 2005, VIP had launched in its first airport, Orlando International in Florida. Members—initially paying $80—received "Clear cards" that contained a cryptographic representation of their fingerprint, iris scans, and a photo of their face taken at enrollment. They could use those cards at the airport to be escorted to the front of the security lines.
The defense contracting giant Lockheed Martin, which already provided biometric capabilities to the US Department of Defense and the FBI, was responsible for deploying and providing technology for VIP's system, with additional technical expertise from Oracle and others. This left VIP to "focus on marketing, pricing, branding, customer service, and consumer privacy policies," as the president of Lockheed Transportation and Security Solutions, Don Antonucci, said at the time.
By 2009, nearly 200,000 people had joined. The company had received $116 million in investments and signed contracts with about 20 airports. It all seemed so promising—if VIP had not already inadvertently revealed the risks inherent in a system built on sensitive personal data.
A lost laptop and a big opportunity
From the beginning, there were concerns about the implications of VIP's Clear card for privacy, civil liberty, and equity, as well as questions about its effectiveness at actually stopping future terrorist attacks. Advocacy groups like the Electronic Privacy Information Center (EPIC) warned that the biometrics-based system would result in a surveillance infrastructure built on sensitive personal information, but data from the Pew Research Center shows that a majority of the public at the time felt that it was generally necessary to sacrifice some civil liberties in the name of safety.
Then a security lapse sent the whole operation crumbling.
In the summer of 2008, VIP reported that an unencrypted company laptop containing addresses, birthdays, and driver's license and passport numbers of 33,000 applicants had gone missing from an office at San Francisco International Airport (SFO)—even though TSA's security protocol required it to encrypt all laptops holding personal data.
NEIL WEBB
The laptop was found about two weeks later and the company said no data was compromised. But it was still a mess for VIP. Months later, investors pushed Brill out, and associated costs led the company to declare bankruptcy and close the following year.
Disgruntled users filed a class action lawsuit against VIP to recoup membership fees and "punitive damages." Some users were upset they had recently renewed their subscriptions, and others worried about what would happen to their personal information. A judge temporarily prevented the company from selling user data, but the decision didn't hold.
Seidman Becker and her longtime business partner Ken Cornick, both hedge fund managers, saw an opportunity. In 2010, they bought VIP—and its user data—in a bankruptcy sale for just under $6 million and registered a new company called Alclear. "I was a big believer in biometrics," Seidman Becker told the tech journalists Kara Swisher and Lauren Goode in 2017. "I wanted to build something that made the world a better place, and Clear was that platform."
Initially, the new Clear followed closely in the footsteps of its predecessor: Lockheed Martin transferred the members' information to the new company, which had acquired VIP's hardware and continued to use Clear cards to hold members' biometrics.
After the relaunch, Clear also started building partnerships with other companies in the travel industry—including American Express, United Airlines, Alaska Airlines, Delta Airlines, and Hertz Rental Cars—to bundle its service for free or at a discount. (Clear declined to specify how many of its users have such discounts, but in earnings calls the company has stressed its efforts to reduce the number of members paying reduced rates.)
By 2014, improvements in internet latency and biometric processing speeds allowed Clear to eliminate the cards and migrate to a server-based system—without compromising data security, the company says. Clear emphasizes that it meets industry standards for keeping data secure, with methods including encryption, firewalls, and regular penetration testing by both internal and external teams. The company says it also maintains "locked boxes" around data relating to air travelers.
Still, the reality is that every database of this kind is ultimately a target, and "almost every day there's a massive breach or hack," says Chris Gilliard, a privacy and surveillance researcher who was recently named co-director of the Critical Internet Studies Institute. Over the years, even apparently well-protected biometric information has been compromised. Last year, for instance, a data breach at the genetic testing company 23andMe exposed sensitive information—including geographic locations, birth years, family trees, and user-uploaded photos—from nearly 7 million customers.
This is what Young, who helped facilitate the creation of the open-source identity management standards OpenID Connect and OAuth, means when she says that Clear has the wrong "architecture" for managing digital identity; it's too much of a risk to keep our digital identities in a central database, cryptographically protected or not. She and many other identity and privacy experts believe that the most privacy-protecting way to manage digital identity is to "use credentials, like a mobile driver's license, stored on people's devices in digital wallets," she says. "These digital credentials can have biometrics, but the biometrics in a central database are not being pinged for day to day use."
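The wallet architecture Young describes can be sketched schematically: an issuer signs a credential once, the credential lives on the holder's device, and a verifier checks the signature locally at the moment of use, so no central biometric database is queried day to day. The sketch below is purely illustrative; real schemes such as mobile driver's licenses use asymmetric signatures and selective disclosure, and the key and field names here are invented.

```python
import hashlib
import hmac
import json

# Stand-in for an issuer's signing key. A real deployment would use an
# asymmetric key pair, so verifiers never hold signing material.
ISSUER_KEY = b"issuer-secret"

def issue_credential(claims: dict) -> dict:
    """Issuer signs the claims once; the result is stored on the holder's device."""
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}

def verify_locally(credential: dict) -> bool:
    """Verifier checks the signature on the spot; no central database lookup."""
    payload = json.dumps(credential["claims"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["sig"])
```

The design choice is the point: any tampering with the claims breaks the signature, and verification never requires the holder's biometrics to leave their device or sit in anyone's central repository.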
But it's not just data that's potentially vulnerable. In 2022 and 2023, Clear faced three high-profile security incidents in airports, including one in which a passenger successfully got through the company's checks using a boarding pass found in the trash. In another, a traveler in Alabama used someone else's ID to register for Clear and, later, to successfully pass initial security checks; he was discovered only when he tried to bring ammunition through a subsequent checkpoint.
This spurred an investigation by the TSA, which turned up more alarming information: Nearly 50,000 photos used by Clear to enroll customers were flagged as "non-matches" by the company's facial recognition software. Some photos didn't even contain full faces, according to Bloomberg. (In a press release after the incident, the company disputed the reporting, describing it as "a single human error—having nothing to do with our technology" and stating that "the images in question were not relied upon during the secure, multi-layered enrollment process.")
"How do you get to be the one?"
When I spoke to Brill this spring, he told me he'd always envisioned that Clear would expand far beyond the airport. "The idea I had was that once you had a trusted identity, you would potentially be able to use it for a lot of different things," he said, but "the trick is to get something that is universally accepted. And that's the battle that Clear and anybody else has to fight, which is: How do you get to be the one?"
Goode Intelligence, a market research firm that focuses on the booming identity space, estimates that by 2029, there will be 1.5 billion digital identity wallets around the world—with use for travel leading the way and generating an estimated $4.6 billion in revenue. Clear is just one player, and certainly not the biggest. ID.me, for instance, provides similar face-based identity verification and has over 130 million users, dwarfing Clear's roughly 27 million. It's also already in use by numerous US federal and state agencies, including the IRS.
The reality is that every database of this kind is ultimately a target, and "almost every day there's a massive breach or hack."
But as Goode Intelligence CEO Alan Goode tells me, Clear's early-mover advantage, particularly in the US, "puts it in a good space within North America … [to] be more pervasive"—or to become what Brill called "the one" that is most closely stitched into people's daily lives.
Clear began growing beyond travel in 2015, when it started offering biometric fast-pass access to what was then AT&T Park in San Francisco. Stadiums across California, Colorado, and Washington, and in major cities in other states, soon followed. Fans can simply download the free Clear app and scan the QR code to bypass normal lines in favor of designated Clear lanes. For a time, Clear also promoted its biometric payment systems at some venues, including two in Seattle, which could include built-in age verification. It even partnered with Budweiser for a "Bud Now" machine that used your fingerprint to verify your identity, age, and payment. (These payment programs, which a Clear representative called "pilots" in an email, have since ended; representatives for the Seattle Mariners and Seahawks did not respond to multiple requests for comment on why.) Clear's programs for expedited event access have been popular enough to drive greater user growth than its paid airport service, according to numbers provided by the company.
Then came the pandemic, hitting Clear (and the entire travel industry) hard. But the crisis for Clear's primary business actually accelerated its move into new spaces with "Health Pass," which allowed organizations to confirm the health status of employees, residents, students, and visitors who sought access to a physical space. Users could upload vaccination cards to the Health Pass section in the Clear mobile app; the program was adopted by nearly 70 partners in 110 unique locations, including NFL stadiums, the Mariners' T-Mobile Park, and the 9/11 Memorial Museum.
Demand for vaccine verification eventually slowed, and Health Pass shut down in March 2024. But as Jason Sherwin, Clear's senior director of health-care business development, said in a podcast interview earlier this year, it was the company's "first foray into health care"—the business line that currently represents its "primary focus across everything we're doing outside of the airport." Today, Clear kiosks for patient sign-ins are being piloted at Georgia's Wellstar Health Systems, in conjunction with one of the largest providers of electronic health records in the United States: Epic (which is unrelated to the privacy nonprofit).
What's more, Health Pass enabled Clear to expand at a time when the survival of travel-focused businesses wasn't guaranteed. In November 2020, Clear had roughly 5 million members; today, that number has grown fivefold. The company went public in 2021 and has experienced double-digit revenue growth annually.
These doctor's office sign-ins, in which the system verifies patient identity via a selfie, rely on what's called Clear Verified, a platform the company has rolled out over the past several years that allows partners (health-care systems, as well as brick-and-mortar retailers, hotels, and online platforms) to integrate Clear's identity checks into their own user-verification processes. It again seems like a win-win situation: Clear gets more users and a fee from companies using the platform, while companies confirm customers' identity and information, and customers, in theory, get that valuable frictionless experience. One high-profile partnership, with LinkedIn, was announced last year: "We know authenticity matters and we want the people, companies and jobs you engage with everyday to be real and trusted," Oscar Rodriguez, LinkedIn's head of trust and privacy, said in a press release.
All this comes together to create the foundation for what is Clear's biggest advantage today: its network. The company's executives often speak about its "embedded" users across various services and platforms, as well as its "ecosystem," meaning the venues where it is used. As Peddy explains, the value proposition for Clear today is not necessarily any particular technology or biometric algorithm, but how it all comes together—and can work universally. Clear would be "wherever our consumers need us to be," he says—it would "sort of just be this ubiquitous thing that everybody has."
Clear CEO Caryn Seidman Becker (left) rings the bell at the New York Stock Exchange in 2021.
NYSE VIA TWITTER
A prospectus to investors from the company's IPO makes the pitch simple: "We believe Clear enables our partners to capture not just a greater share of their customers' wallet, but a greater share of their overall lives."
The more Clear is able to reach into customers' lives, the more valuable customer data it can collect. All user interactions and experiences can be tracked, the company's privacy policy explains. While the policy states that Clear will not sell data and will never share biometric or health information without "express consent," it also lays out the non-health and non-biometric data that it collects and can use for consumer research and marketing. This includes members' demographic details, a record of every use of Clear's various products, and even digital images and videos of the user. Documents obtained by OneZero offer some further detail into what Clear has at least considered doing with customer data: David Gershgorn wrote about a 2015 presentation to representatives from Los Angeles International Airport, titled "Identity Dashboard—Valuable Marketing Data," which "showed off" what the company had collected, including the number of sports games users had attended and with whom, which credit cards they had, their favorite airlines and top destinations, and how often they flew first class or economy.
Clear representatives emphasized to MIT Technology Review that the company "does not share or sell information without consent," though they "had nothing to add" in response to a question about whether Clear can or does aggregate data to derive its own marketing insights, a business model popularized by Facebook. "At Clear, privacy and security are job one," spokesperson Ricardo Quinto wrote in an email. "We are opt-in. We never sell or share our members' information and utilize a multilayered, best-in-class infosec system that meets the highest standards and compliance requirements."
Nevertheless, this influx of customer data is not just good for business; it's also risky for customers. It creates "another attack surface," Gilliard warns. "This makes us less safe, not more, as a consistent identifier across your entire public and private life is the dream of every hacker, bad actor, and authoritarian."
A face-based future for some
Today, Clear is in the middle of another major change: replacing its use of iris scans and fingerprints with facial verification in airports—part of "a TSA-required upgrade in identity verification," a TSA spokesperson wrote in an email to MIT Technology Review.
For a long time, facial recognition technology "for the highest security purposes" was "not ready for prime time," Seidman Becker told Swisher and Goode back in 2017. It wasn't operating with "five nines," she added—that is, "99.999% from a matching and an accuracy perspective." But today, facial recognition has "significantly improved" and the company has invested "in enhancing image quality through improved capture, focus, and illumination," according to Quinto.
Clear says switching to facial images in airports will also further decrease friction, enabling travelers to verify their identity so effortlessly it's "almost like you don't really break stride," Peddy says. "You walk up, you scan your face. You walk straight to the TSA."
The move is part of a broader shift toward facial recognition technology in US travel, bringing the country in line with practices at many international airports. The TSA began expanding facial identification from a few pilot programs this year, while airlines including Delta and United are also introducing face-based boarding, baggage drops, and even lounge access. And the International Air Transport Association, a trade group for the airline industry, is rolling out a "contactless travel" process that will allow passengers to check in, drop off their bags, and board their flights—all without showing either passports or tickets, just their faces.
NEIL WEBB
Privacy experts worry that relying on faces for identity verification is even riskier than other biometric methods. After all, "it's a lot easier to scan people's faces passively than it is to scan irises or take fingerprints," Senator Jeff Merkley of Oregon, an outspoken critic of government surveillance and of the TSA's plans to employ facial verification at airports, said in an email. The point is that once a database of faces is built, it is potentially far more useful for surveillance purposes than, say, fingerprints. "Everyone who values privacy, freedom, and civil rights should be concerned about the increasing, unchecked use of facial recognition technology by corporations and the federal government," Merkley wrote.
Even if Clear is not in the business of surveillance today, it could, theoretically, pivot or go bankrupt and (again) sell off its parts, including user data. Jeramie Scott, senior counsel and director of the Project on Surveillance Oversight at EPIC, says that ultimately, the "lack of federal [privacy] regulation" means that we're just taking the promises of companies like Clear at face value: "Whatever they say about how they implement facial recognition today does not mean that that's how they'll be implementing facial recognition tomorrow."
Making this particular scenario potentially more concerning is that the images stored by this private company are "generally going to be much higher quality" than those collected by scraping the internet—which Albert Fox Cahn, the executive director of the Surveillance Technology Oversight Project (STOP), says would make its data far more useful for surveillance than that held by more controversial facial recognition companies like Clearview AI.
Even a far less pessimistic read of Clear's data collection reveals the challenges of using facial identification systems, which—as a 2019 report from the National Institute of Standards and Technology revealed—have been shown to work less effectively in certain populations, particularly people of African and East Asian descent, women, and elderly and very young people. NIST has also not tested identification accuracy for individuals who are transgender, but Gilliard says he expects the algorithms would fall short.
More recent testing shows that some algorithms have improved, NIST spokesperson Chad Boutin tells MIT Technology Review—though accuracy is still short of the "five nines" that Seidman Becker once said Clear was aiming for. (Quinto, the Clear representative, maintains that Clear's recent upgrades, combined with the fact that the company's testing involves "comparing member photos to smaller galleries, rather than the millions used in NIST scenarios," means its technology "remains accurate and suitable for secure environments like airports.")
Even a very small error rate "in a system that is deployed hundreds of thousands of times a day" could still leave "a lot of people" at risk of misidentification, explains Hannah Quay-de La Vallee, a technologist at the Center for Democracy & Technology, a nonprofit based in Washington, DC. All this could make Clear's services inaccessible to some—even if they can afford it, which is less likely given the recent increase in the subscription fee for travelers to $199 a year.
The free Clear Verified platform is already giving rise to access problems in at least one partnership, with LinkedIn. The professional networking site encourages users to verify their identities either with an employer email address or with Clear, which marketing materials say will yield more engagement. But some LinkedIn users have expressed concerns, claiming that even after uploading a selfie, they were unable to verify their identities with Clear if they were subscribed to a smaller phone company or if they had simply not had their phone number for enough time. As one Reddit user emphasized, "Getting verified is a huge deal when getting a job." LinkedIn said it does not enable recruiters to filter, rank, or sort by whether a candidate has a verification badge, but also said that verified information does "help people make more informed decisions as they build their network or apply for a job." Clear only said it "works with our partners to provide them with the level of identity assurance that they require for their customers" and referred us back to LinkedIn.
An opt-in future that may not really be optionalÂ
Maybe whatâs worse than waiting in line, or even being cut in front of, is finding yourself stuck in what turns out to be the wrong lineâperhaps one that you never want to be in.Â
That may be how it feels if you donât use Clear and similar biometric technologies. âWhen I look at companies stuffing these technologies into vending machines, fast-food restaurants, schools, hospitals, and stadiums, what I see is resignation rather than acceptanceâpeople often donât have a choice,â says Gilliard, the privacy and surveillance scholar. âThe life cycle of these things is that ⊠even when it is âoptional,â oftentimes it is difficult to opt out.â
And while the stakes may seem relatively lowâClear is, after all, a voluntary membership programâthey will likely grow as the system is deployed more widely. As Seidman Becker said on Clearâs latest earnings call in early November, âThe lines between physical and digital interactions continue to blur. A verified identity isnât just a check mark. Itâs the foundation for everything we do in a high-stakes digital world.â Consider a job ad posted by Clear earlier this year, seeking to hire a vice president for business development; it noted that the company has its eye on a number of additional sectors, including financial services, e-commerce, P2P networking, âonline trust,â gaming, government, and more.Â
âIncreasingly, companies and the government are making the submission of your biometrics a barrier to participation in society,â Gilliard says.Â
This will be particularly true at the airport, with the increasing ubiquity of facial recognition across all security checks and boarding processes, and where time-crunched travelers could be particularly vulnerable to Clearâs sales pitch. Airports have even privately expressed concerns about these scenarios to Clear. Correspondence from early 2022 between the company and staff at SFO, released in response to a public records request, reveals that the airport âreceived a number of complaintsâ about Clear staff âimproperly and deceitfully soliciting approaching passengers in the security checkpoint lanes outside of its premises,â with an airport employee calling it âcompletely unacceptableâ and âaggressive and deceptive behavior.âÂ
Of course, this isn't to say everyone with a Clear membership was coerced into signing up. Many people love it; the company told MIT Technology Review that it had a nearly 84% retention rate earlier this year. Still, for some experts, it's worrisome that what Clear users are comfortable with ends up setting the ground rules for the rest of us.
"We're going to normalize potentially a bunch of biometric stuff but not have a sophisticated conversation about where and how we're normalizing what," says Young. She worries this will empower "actors who want to move toward a creepy surveillance state, or corporate surveillance capitalism on steroids."
"Without understanding what we're building or how or where the guardrails are," she adds, "I also worry that there could be major public backlash, and then legitimate uses [of biometric technology] are not understood and supported."
But in the meantime, even superfans are grumbling about an uptick in wait times in airports' Clear lines. After all, if everyone decides to cut to the front of the line, that just creates a new long line of line-cutters.
Once again, global greenhouse-gas emissions are projected to hit a new high in 2024.
In this time of shifting political landscapes and ongoing international negotiations, many are quick to blame one country or another for an outsize role in causing climate change.
But assigning responsibility is complicated. These three visualizations help explain why and provide some perspective about the world's biggest polluters.
Greenhouse-gas emissions from fossil fuels and industry reached 37.4 billion metric tons of carbon dioxide in 2024, according to projections from the Global Carbon Budget, an annual emissions report released last week. That's a 0.8% increase over last year.
Breaking things down by country, China is far and away the single biggest polluter today, a distinction it has held since 2006. The country currently emits roughly twice as much greenhouse gas as any other nation. The power sector is its single greatest source of emissions as the grid is heavily dependent on coal, the most polluting fossil fuel.
The US is the worldâs second-biggest polluter, followed by India. Combined emissions from the 27 nations that make up the European Union are next, followed by Russia and Japan.
Considering a country's current emissions doesn't give the whole picture of its climate responsibility, though. Carbon dioxide is stable in the atmosphere for hundreds of years. That means greenhouse gases from the first coal power plant, which opened in the late 19th century, are still having a warming effect on the planet today.
Adding up each country's emissions over the course of its history reveals that the US has the greatest historical contribution—the country is responsible for about 24% of all the climate pollution released into the atmosphere as of 2023. While it's the biggest polluter today, China comes in second in terms of historical emissions, at 14%.
If the EU's member states are totaled as one entity, the group is among the top historical contributors as well. According to an analysis published November 19 by the website Carbon Brief, 2023 was the first year in which China's historical emissions surpassed those of the EU's member states.
China could catch up with the West in the coming decades, as its emissions are significant and still growing, while the US and EU are seeing moderate declines.
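The difference between the current and historical rankings comes down to simple accounting: a country's historical share is its cumulative emissions divided by the cumulative world total. A minimal sketch of that calculation, using made-up placeholder figures rather than Global Carbon Project data:

```python
# Illustrative sketch of the "historical responsibility" calculation:
# sum a country's annual emissions over time, then divide by the
# cumulative global total. The numbers below are placeholders,
# not Global Carbon Project data.

def historical_share(annual_emissions, global_cumulative):
    """Fraction of all-time emissions attributable to one country."""
    return sum(annual_emissions) / global_cumulative

us_by_year = [5.0, 5.1, 4.9]   # placeholder annual totals, Gt CO2
global_total = 60.0            # placeholder cumulative world total, Gt CO2

share = historical_share(us_by_year, global_total)
print(f"{share:.0%} of cumulative emissions")  # 25% with these placeholders
```

The same sum can be run per country to produce the historical ranking described above.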
Even then, though, there's another factor to consider: population. Dividing a country's total emissions by its population reveals how the average individual in each nation is contributing to climate change today.
Countries with smaller populations and economies that are heavily reliant on oil and gas tend to top this list, including Saudi Arabia, Bahrain, and the United Arab Emirates.
Among the larger nations, Australia has the highest per capita emissions from fossil fuels, with the US and Canada close behind. Meanwhile, other countries that have high total emissions are farther down the list when normalized by population: China's per capita emissions are just over half those of the US, while India's are a small fraction of them.
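The per capita comparison above is just each country's total divided by its population. A rough sketch with approximate, illustrative figures (roughly 2023-era magnitudes, not the report's exact data):

```python
# Per capita emissions: national total divided by population.
# Emissions and population figures below are rough illustrative
# values, not Global Carbon Budget data.

def per_capita(total_mt_co2, population):
    """Tonnes of CO2 per person, from a national total in megatonnes."""
    return total_mt_co2 * 1e6 / population

china_pc = per_capita(11_900, 1_410_000_000)   # roughly 8.4 t/person
us_pc    = per_capita(4_900, 335_000_000)      # roughly 14.6 t/person

print(round(china_pc / us_pc, 2))  # prints 0.58: "just over half"
```

With any reasonable recent figures the ratio lands near one half, which is why normalizing by population reorders the list so sharply.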
Understanding the complicated picture of global emissions is crucial, especially during ongoing negotiations (including the current meeting at COP29 in Baku, Azerbaijan) over how to help developing nations pay for efforts to combat climate change.
Looking at current emissions, one might expect the biggest emitter, China, to contribute more than any other country to climate finance. But once historical contributions, per capita emissions, and the details of national economies are considered, other nations—the US, the UK, and members of the EU among them—emerge as the ones experts say should feature prominently in the talks.
What is clear is that when it comes to the emissions blame game, it's more complicated than just pointing at today's biggest polluters. Ultimately, addressing climate change will require everyone to get on board—we all share an atmosphere, and we're all going to continue feeling the effects of a changing climate.
Notes on data methodology:
Emissions data is from the Global Carbon Project, which estimates carbon emissions based on energy use. Territorial emissions take into account energy and some industry, but don't include land-use emissions.
Data from the European Union is the sum of its current 27 member states. The bloc is represented together because the EU generally negotiates together on the international stage.
Historical emissions for some countries are disaggregated from former borders, including the former USSR and Yugoslavia.
The per capita emissions map uses official World Bank boundaries, with the exception of Taiwan, which has separate emissions data in the Global Carbon Project.
Western Sahara's energy data are reported by Morocco, so its emissions are included in that total. Per capita emissions for Morocco are also used for Western Sahara on the map.
More detailed information about the Global Carbon Project methods (including the particulars on how territorial emissions are broken down) is available here.
This is today's edition of The Download, our weekday newsletter that provides a daily dose of what's going on in the world of technology.
Inside Clear's ambitions to manage your identity beyond the airport
Clear Secure is the most visible biometric identity company in the United States. Best known for its line-jumping service in airports, it's also popping up at sports arenas and stadiums all over the country. You can also use its identity verification platform to rent tools at Home Depot, put your profile in front of recruiters on LinkedIn, and, as of this month, verify your identity as a rider on Uber.
And soon enough, if Clear has its way, it may also be in your favorite retailer, bank, and even doctor's office—or anywhere else that you currently have to pull out a wallet (or wait in line).
While the company has been building toward this sweeping vision for years, it now seems its time has finally come. But as biometrics go mainstream, what—and who—bears the cost? Read the full story.
—Eileen Guo
LinkedIn Live: Facial verification tech promises a frictionless future. But at what cost?
Do you use your face to unlock your phone, or speed through airport security? As biometrics companies move into more and more spaces, where else would you use this technology? The trade-off seems simple: you scan your face, you get a frictionless future. But is it really? Join MIT Technology Review's features and investigations team for a LinkedIn Live this Thursday, November 21, about the rise of facial verification tech and what it means to give up your face. Register for free.
Who's to blame for climate change? It's surprisingly complicated.
Once again, global greenhouse-gas emissions are projected to hit a new high in 2024.
In this time of shifting political landscapes and ongoing international negotiations, many are quick to blame one country or another for an outsize role in causing climate change.
Take advantage of epic savings on award-winning reporting, razor-sharp analysis, and expert insights on your favorite technology topics. Subscribe today to save 50% on an annual subscription, plus receive a free digital copy of our "Generative AI and the future of work" report. Don't miss out.
The must-reads
I've combed the internet to find you today's most fun/important/scary/fascinating stories about technology.
1 AI can now translate your voice in real-time during meetings It's part of Microsoft's drive to push more AI into its products, but how well it works in the wild remains to be seen. (WP $) + Apple is having less success on that front, at least if its AI notification summaries are anything to go by. (The Atlantic $)
2 Anyone can buy data tracking US soldiers in Germany And the Pentagon is powerless to stop it. (Wired $) + It's shockingly easy to buy sensitive data about US military personnel. (MIT Technology Review)
3 Bluesky now has over 20 million users Its user base has tripled in the last three months. (Engadget) + Truth Social, on the other hand, is not doing quite so well. (WP $) + The rise of Bluesky, and the splintering of social. (MIT Technology Review)
4 How Google created a culture of concealment It's been preparing for antitrust action for over a decade, enforcing a policy where employees delete messages by default. (NYT $) + The company reacted angrily to reports it may be forced to sell Chrome. (BBC)
5 Project 2025 is already infiltrating the Trump administration Despite repeated denials, it's clearly a blueprint for his next term. (Vox) + A hacker reportedly gained access to damaging testimonies about Matt Gaetz, his pick to be attorney general. (NYT $)
6 Quantum computers hit a major milestone for error-free calculation This is a crucial part of making them useful for real-world tasks. (New Scientist $)
7 Technology is changing political speech Slogans are becoming less effective. Now it's more about saying different things to different audiences. (New Yorker $)
8 Lab-grown foie gras, anyone? This could be the cultivated meat industry's future: as a luxury product for the few. (Wired $)
10 Minecraft is expanding into the real world It has struck a $110 million deal with one of the worldâs biggest theme park operators. (The Guardian)
Quote of the day
"Nobody believes that these cables were severed by accident."
—Germany's defense minister Boris Pistorius tells reporters that the severing of two fiber-optic cables in the Baltic Sea was a deliberate act of sabotage, the New York Times reports.
The big story
Are we alone in the universe?
November 2023
The quest to determine if life is out there has gained greater scientific footing over the past 50 years. Back then, astronomers had yet to spot a single planet outside our solar system. Now we know the galaxy is teeming with a diversity of worlds.
We're getting closer than ever before to learning how common living worlds like ours actually are. New tools, including artificial intelligence, could help scientists look past their preconceived notions of what constitutes life. Read the full story.
+ How to not only survive but thrive during the winter. + Fancy working from somewhere new? Here are some of the best cities for a workcation. + Want to see David Bowie imitating Mick Jagger? Of course you do. + It's an old(ish) joke but still funny.
Imagine sitting down with an AI model for a spoken two-hour interview. A friendly voice guides you through a conversation that ranges from your childhood, your formative memories, and your career to your thoughts on immigration policy. Not long after, a virtual replica of you is able to embody your values and preferences with stunning accuracy.
That's now possible, according to a new paper from a team including researchers from Stanford and Google DeepMind, which has been published on arXiv and has not yet been peer-reviewed.
Led by Joon Sung Park, a Stanford PhD student in computer science, the team recruited 1,000 people who varied by age, gender, race, region, education, and political ideology. They were paid up to $100 for their participation. From interviews with them, the team created agent replicas of those individuals. As a test of how well the agents mimicked their human counterparts, participants did a series of personality tests, social surveys, and logic games, twice each, two weeks apart; then the agents completed the same exercises. The results were 85% similar.
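The 85% figure is a similarity score between agent and human responses. The paper's exact method isn't detailed here, but one plausible way to compute such a score, sketched below with illustrative names and toy data (an assumption, not the authors' code), is to divide agent-human agreement by the human's own two-week test-retest consistency:

```python
# Hedged sketch: score how closely an agent's survey answers match a
# person's, normalized by how consistent the person is with themselves
# across two sittings. Function names and data are illustrative.

def match_rate(a, b):
    """Fraction of identical answers between two response lists."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def normalized_accuracy(agent, human_t1, human_t2):
    """Agent-human agreement scaled by human test-retest consistency."""
    return match_rate(agent, human_t1) / match_rate(human_t1, human_t2)

human_week1 = [1, 3, 2, 5, 4, 1, 2, 3]   # made-up survey responses
human_week2 = [1, 3, 2, 5, 4, 2, 2, 3]   # same person, two weeks later
agent       = [1, 3, 2, 5, 4, 2, 2, 1]   # the agent's answers

print(round(normalized_accuracy(agent, human_week1, human_week2), 2))  # 0.86
```

Normalizing this way matters because even a person repeating the same survey two weeks later won't agree with themselves perfectly, so raw agreement would understate the agent's fidelity.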
"If you can have a bunch of small 'yous' running around and actually making the decisions that you would have made—that, I think, is ultimately the future," Joon says.
In the paper the replicas are called simulation agents, and the impetus for creating them is to make it easier for researchers in social sciences and other fields to conduct studies that would be expensive, impractical, or unethical to do with real human subjects. If you can create AI models that behave like real people, the thinking goes, you can use them to test everything from how well interventions on social media combat misinformation to what behaviors cause traffic jams.
Such simulation agents are slightly different from the agents that are dominating the work of leading AI companies today. Called tool-based agents, those are models built to do things for you, not converse with you. For example, they might enter data, retrieve information you have stored somewhere, or—someday—book travel for you and schedule appointments. Salesforce announced its own tool-based agents in September, followed by Anthropic in October, and OpenAI is planning to release some in January, according to Bloomberg.
The two types of agents are different but share common ground. Research on simulation agents, like the ones in this paper, is likely to lead to stronger AI agents overall, says John Horton, an associate professor of information technologies at the MIT Sloan School of Management, who founded a company to conduct research using AI-simulated participants.
"This paper is showing how you can do a kind of hybrid: use real humans to generate personas which can then be used programmatically/in-simulation in ways you could not with real humans," he told MIT Technology Review in an email.
The research comes with caveats, not the least of which is the danger that it points to. Just as image generation technology has made it easy to create harmful deepfakes of people without their consent, any agent generation technology raises questions about the ease with which people can build tools to personify others online, saying or authorizing things they didn't intend to say.
The evaluation methods the team used to test how well the AI agents replicated their corresponding humans were also fairly basic. These included the General Social Survey—which collects information on one's demographics, happiness, behaviors, and more—and assessments of the Big Five personality traits: openness to experience, conscientiousness, extroversion, agreeableness, and neuroticism. Such tests are commonly used in social science research but don't pretend to capture all the unique details that make us ourselves. The AI agents were also worse at replicating the humans in behavioral tests like the "dictator game," which is meant to illuminate how participants consider values such as fairness.
To build an AI agent that replicates people well, the researchers needed ways to distill our uniqueness into language AI models can understand. They chose qualitative interviews to do just that, Joon says. He says he was convinced that interviews are the most efficient way to learn about someone after he appeared on countless podcasts following a 2023 paper that he wrote on generative agents, which sparked a huge amount of interest in the field. "I would go on maybe a two-hour podcast interview, and after the interview, I felt like, wow, people know a lot about me now," he says. "Two hours can be very powerful."
These interviews can also reveal idiosyncrasies that are less likely to show up on a survey. "Imagine somebody just had cancer but was finally cured last year. That's very unique information about you that says a lot about how you might behave and think about things," he says. It would be difficult to craft survey questions that elicit these sorts of memories and responses.
Interviews aren't the only option, though. Companies that offer to make "digital twins" of users, like Tavus, can have their AI models ingest customer emails or other data. It tends to take a pretty large data set to replicate someone's personality that way, Tavus CEO Hassaan Raza told me, but this new paper suggests a more efficient route.
"What was really cool here is that they show you might not need that much information," Raza says, adding that his company will experiment with the approach. "How about you just talk to an AI interviewer for 30 minutes today, 30 minutes tomorrow? And then we use that to construct this digital twin of you."
This story is from The Algorithm, our weekly newsletter on AI. To get it in your inbox first, sign up here.
It can be tricky for reporters to get past certain doors, and the door to the International Association of Chiefs of Police conference is one that's almost perpetually shut to the media. Thus, I was pleasantly surprised when I was able to attend for a day in Boston last month.
It bills itself as the largest gathering of police chiefs in the United States, where leaders from many of the country's 18,000 police departments and even some from abroad convene for product demos, discussions, parties, and awards.
I went along to see how artificial intelligence was being discussed, and the message to police chiefs seemed crystal clear: If your department is slow to adopt AI, fix that now. The future of policing will rely on it in all its forms.
In the event's expo hall, the vendors (of which there were more than 600) offered a glimpse into the ballooning industry of police-tech suppliers. Some had little to do with AI—booths showcased body armor, rifles, and prototypes of police-branded Cybertrucks, and others displayed new types of gloves promising to protect officers from needles during searches. But one needed only to look to where the largest crowds gathered to understand that AI was the major draw.
The hype focused on three uses of AI in policing. The flashiest was virtual reality, exemplified by the booth from V-Armed, which sells VR systems for officer training. On the expo floor, V-Armed built an arena complete with VR goggles, cameras, and sensors, not unlike the one the company recently installed at the headquarters of the Los Angeles Police Department. Attendees could don goggles and go through training exercises on responding to active shooter situations. Many competitors of V-Armed were also at the expo, selling systems they said were cheaper, more effective, or simpler to maintain.
The pitch on VR training is that in the long run, it can be cheaper and more engaging to use than training with actors or in a classroom. "If you're enjoying what you're doing, you're more focused and you remember more than when looking at a PDF and nodding your head," V-Armed CEO Ezra Kraus told me.
The effectiveness of VR training systems has yet to be fully studied, and they can't completely replicate the nuanced interactions police have in the real world. AI is not yet great at the soft skills required for interactions with the public. At a different company's booth, I tried out a VR system focused on deescalation training, in which officers were tasked with calming down an AI character in distress. It suffered from lag and was generally quite awkward—the character's answers felt overly scripted and programmatic.
The second focus was on the changing way police departments are collecting and interpreting data. Rather than buying a gunshot detection tool from one company and a license plate reader or drone from another, police departments are increasingly using expanding suites of sensors, cameras, and so on from a handful of leading companies that promise to integrate the data collected and make it useful.
Police chiefs attended classes on how to build these systems, like one taught by Microsoft and the NYPD about the Domain Awareness System, a web of license plate readers, cameras, and other data sources used to track and monitor crime in New York City. Crowds gathered at massive, high-tech booths from Axon and Flock, both sponsors of the conference. Flock sells a suite of cameras, license plate readers, and drones, offering AI to analyze the data coming in and trigger alerts. These sorts of tools have come in for heavy criticism from civil liberties groups, which see them as an assault on privacy that does little to help the public.
Finally, as in other industries, AI is also coming for the drudgery of administrative tasks and reporting. Many companies at the expo, including Axon, offer generative AI products to help police officers write their reports. Axon's offering, called Draft One, ingests footage from body cameras, transcribes it, and creates a first draft of a report for officers.
"We've got this thing on an officer's body, and it's recording all sorts of great stuff about the incident," Bryan Wheeler, a senior vice president at Axon, told me at the expo. "Can we use it to give the officer a head start?"
On the surface, it's a writing task well suited for AI, which can quickly summarize information and write in a formulaic way. It could also save lots of time officers currently spend on writing reports. But given that AI is prone to "hallucination," there's an unavoidable truth: Even if officers are the final authors of their reports, departments adopting these sorts of tools risk injecting errors into some of the most critical documents in the justice system.
"Police reports are sometimes the only memorialized account of an incident," wrote Andrew Ferguson, a professor of law at American University, in July in the first law review article about the serious challenges posed by police reports written with AI. "Because criminal cases can take months or years to get to trial, the accuracy of these reports are critically important." Whether certain details were included or left out can affect the outcomes of everything from bail amounts to verdicts.
By showing an officer a generated version of a police report, the tools also expose officers to details from their body camera recordings before they complete their report, a document intended to capture the officer's memory of the incident. That poses a problem.
"The police certainly would never show video to a bystander eyewitness before they ask the eyewitness about what took place, as that would just be investigatory malpractice," says Jay Stanley, a senior policy analyst with the ACLU Speech, Privacy, and Technology Project, who will soon publish work on the subject.
A spokesperson for Axon says this concern "isn't reflective of how the tool is intended to work," and that Draft One has robust features to make sure officers read the reports closely, add their own information, and edit the reports for accuracy before submitting them.
My biggest takeaway from the conference was simply that the way US police are adopting AI is inherently chaotic. There is no one agency governing how they use the technology, and the roughly 18,000 police departments in the United States—the precise figure is not even known—have remarkably high levels of autonomy to decide which AI tools they'll buy and deploy. The police-tech companies that serve them will build the tools police departments find attractive, and it's unclear if anyone will draw proper boundaries for ethics, privacy, and accuracy.
That will only be made more apparent in an upcoming Trump administration. In a policing agenda released last year during his campaign, Trump encouraged more aggressive tactics like "stop and frisk," deeper cooperation with immigration agencies, and increased liability protection for officers accused of wrongdoing. The Biden administration is now reportedly attempting to lock in some of its proposed policing reforms before January.
Without federal regulation on how police departments can and cannot use AI, the lines will be drawn by departments and police-tech companies themselves.
"Ultimately, these are for-profit companies, and their customers are law enforcement," says Stanley. "They do what their customers want, in the absence of some very large countervailing threat to their business model."
Now read the rest of The Algorithm
Deeper Learning
The AI lab waging a guerrilla war over exploitative AI
When generative AI tools landed on the scene, artists were immediately concerned, seeing them as a new kind of theft. Computer security researcher Ben Zhao jumped into action in response, and his lab at the University of Chicago started building tools like Nightshade and Glaze to help artists keep their work from being scraped up by AI models. My colleague Melissa Heikkilä spent time with Zhao and his team to look at the ongoing effort to make these tools strong enough to stop AI's relentless hunger for more images, art, and data to train on.
Why this matters: The current paradigm in AI is to build bigger and bigger models, and these require vast data sets to train on. Tech companies argue that anything on the public internet is fair game, while artists demand compensation or the right to refuse. Settling this fight in the courts or through regulation could take years, so tools like Nightshade and Glaze are what artists have for now. If the tools disrupt AI companies' efforts to make better models, that could push them to the negotiating table to bargain over licensing and fair compensation. But it's a big "if." Read more from Melissa Heikkilä.
Bits and Bytes
Tech elites are lobbying Elon Musk for jobs in Trump's administration
Elon Musk is the tech leader who most has Trump's ear. As such, he's reportedly the conduit through which AI and tech insiders are pushing to have an influence in the incoming administration. (The New York Times)
OpenAI is getting closer to launching an AI agent to automate your tasks
AI agents—models that can do tasks on your behalf—are all the rage. OpenAI is reportedly closer to releasing one, news that comes a few weeks after Anthropic announced its own. (Bloomberg)
How this grassroots effort could make AI voices more diverse
A massive volunteer-led effort to collect training data in more languages, from people of more ages and genders, could help make the next generation of voice AI more inclusive and less exploitative. (MIT Technology Review)
Google DeepMind has a new way to look inside an AI's "mind"
Autoencoders let us peer into the black box of artificial intelligence. They could help us create AI that is better understood and more easily controlled. (MIT Technology Review)
Musk has expanded his legal assault on OpenAI to target Microsoft
Musk has expanded his federal lawsuit against OpenAI, which alleges that the company has abandoned its nonprofit roots and obligations. He's now going after Microsoft too, accusing it of antitrust violations in its work with OpenAI. (The Washington Post)