How to run an LLM on your laptop

MIT Technology Review’s How To series helps you get things done. 

Simon Willison has a plan for the end of the world. It’s a USB stick, onto which he has loaded a couple of his favorite open-weight LLMs—models that have been shared publicly by their creators and that can, in principle, be downloaded and run on local hardware. If human civilization should ever collapse, Willison plans to use all the knowledge encoded in their billions of parameters for help. “It’s like having a weird, condensed, faulty version of Wikipedia, so I can help reboot society with the help of my little USB stick,” he says.

But you don’t need to be planning for the end of the world to want to run an LLM on your own device. Willison, who writes a popular blog about local LLMs and software development, has plenty of compatriots: r/LocalLLaMA, a subreddit devoted to running LLMs on your own hardware, has half a million members.

For people who are concerned about privacy, want to break free from the control of the big LLM companies, or just enjoy tinkering, local models offer a compelling alternative to ChatGPT and its web-based peers.

The local LLM world used to have a high barrier to entry: In the early days, it was impossible to run anything useful without investing in pricey GPUs. But researchers have had so much success in shrinking down and speeding up models that anyone with a laptop, or even a smartphone, can now get in on the action. “A couple of years ago, I’d have said personal computers are not powerful enough to run the good models. You need a $50,000 server rack to run them,” Willison says. “And I kept on being proved wrong time and time again.”

Why you might want to download your own LLM

Getting into local models takes a bit more effort than, say, navigating to ChatGPT’s online interface. But the very accessibility of a tool like ChatGPT comes with a cost. “It’s the classic adage: If something’s free, you’re the product,” says Elizabeth Seger, the director of digital policy at Demos, a London-based think tank. 

OpenAI, which offers both paid and free tiers, trains its models on users’ chats by default. It’s not too difficult to opt out of this training, and it also used to be possible to remove your chat data from OpenAI’s systems entirely, until a recent legal decision in the New York Times’ ongoing lawsuit against OpenAI required the company to maintain all user conversations with ChatGPT.

Google, which has access to a wealth of data about its users, also trains its models on both free and paid users’ interactions with Gemini, and the only way to opt out of that training is to set your chat history to delete automatically—which means that you also lose access to your previous conversations. In general, Anthropic does not train its models using user conversations, but it will train on conversations that have been “flagged for Trust & Safety review.” 

Training may present particular privacy risks because of the ways that models internalize, and often recapitulate, their training data. Many people trust LLMs with deeply personal conversations—but if models are trained on that data, those conversations might not be nearly as private as users think, according to some experts.

“Some of your personal stories may be cooked into some of the models, and eventually be spit out in bits and bytes somewhere to other people,” says Giada Pistilli, principal ethicist at the company Hugging Face, which runs a huge library of freely downloadable LLMs and other AI resources.

For Pistilli, opting for local models as opposed to online chatbots has implications beyond privacy. “Technology means power,” she says. “And so who[ever] owns the technology also owns the power.” States, organizations, and even individuals might be motivated to disrupt the concentration of AI power in the hands of just a few companies by running their own local models.

Breaking away from the big AI companies also means having more control over your LLM experience. Online LLMs are constantly shifting under users’ feet: Back in April, ChatGPT suddenly started sucking up to users far more than it had previously, and just last week Grok started calling itself MechaHitler on X.

Providers tweak their models with little warning, and while those tweaks might sometimes improve model performance, they can also cause undesirable behaviors. Local LLMs may have their quirks, but at least they are consistent. The only person who can change your local model is you.

Of course, any model that can fit on a personal computer is going to be less powerful than the premier online offerings from the major AI companies. But there’s a benefit to working with weaker models—they can inoculate you against the more pernicious limitations of their larger peers. Small models may, for example, hallucinate more frequently and more obviously than Claude, GPT, and Gemini, and seeing those hallucinations can help you build up an awareness of how and when the larger models might also lie.

“Running local models is actually a really good exercise for developing that broader intuition for what these things can do,” Willison says.

How to get started

Local LLMs aren’t just for proficient coders. If you’re comfortable using your computer’s command-line interface, which allows you to browse files and run apps using text prompts, Ollama is a great option. Once you’ve installed the software, you can download and run any of the hundreds of models it offers with a single command.
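For example, once Ollama is installed, pulling a model and chatting with it from the terminal looks like this (the model name below is just an illustration—swap in whatever your hardware can handle):

```shell
# Download a small open-weight model (several GB; name is illustrative)
ollama pull llama3.2

# Start an interactive chat session in the terminal
ollama run llama3.2

# See which models you have downloaded locally
ollama list
```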

If you don’t want to touch anything that even looks like code, you might opt for LM Studio, a user-friendly app that takes a lot of the guesswork out of running local LLMs. You can browse models from Hugging Face from right within the app, which provides plenty of information to help you make the right choice. Some popular and widely used models are tagged as “Staff Picks,” and every model is labeled according to whether it can be run entirely on your machine’s speedy GPU, needs to be shared between your GPU and slower CPU, or is too big to fit onto your device at all. Once you’ve chosen a model, you can download it, load it up, and start interacting with it using the app’s chat interface.

As you experiment with different models, you’ll start to get a feel for what your machine can handle. According to Willison, every billion model parameters require about one GB of RAM to run, and I found that approximation to be accurate: My own 16 GB laptop managed to run Alibaba’s Qwen3 14B as long as I quit almost every other app. If you run into issues with speed or usability, you can always go smaller—I got reasonable responses from Qwen3 8B as well.
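That rule of thumb is easy to sketch in code. A minimal estimator, assuming roughly one gigabyte of RAM per billion parameters as the article suggests (actual usage varies with quantization level and context length):

```python
def estimate_ram_gb(params_billions: float, gb_per_billion: float = 1.0) -> float:
    """Rough RAM needed to run a local model.

    Uses the rule of thumb of ~1 GB of RAM per billion parameters.
    Real-world usage depends on quantization and context size, so
    treat the result as a lower bound, not a guarantee.
    """
    return params_billions * gb_per_billion

# A 14B model needs roughly 14 GB: a tight fit on a 16 GB laptop.
print(estimate_ram_gb(14))
# An 8B model leaves more headroom.
print(estimate_ram_gb(8))
```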

And if you go really small, you can even run models on your cell phone. My beat-up iPhone 12 was able to run Meta’s Llama 3.2 1B using an app called LLM Farm. It’s not a particularly good model—it very quickly goes off into bizarre tangents and hallucinates constantly—but trying to coax something so chaotic toward usability can be entertaining. If I’m ever on a plane sans Wi-Fi and desperate for a probably false answer to a trivia question, I now know where to look.

Some of the models that I was able to run on my laptop were effective enough that I can imagine using them in my journalistic work. And while I don’t think I’ll depend on phone-based models for anything anytime soon, I really did enjoy playing around with them. “I think most people probably don’t need to do this, and that’s fine,” Willison says. “But for the people who want to do this, it’s so much fun.”

How to use Sora, OpenAI’s new video generating tool

Today, OpenAI released its video generation model Sora to the public. The announcement comes on the fifth day of the company’s “shipmas” event, a 12-day marathon of tech releases and demos. Here’s what you should know—and how you can use the video model right now.

What is Sora?

Sora is a powerful AI video generation model that can create videos from text prompts, animate images, or remix videos in new styles. OpenAI first previewed the model back in February, but today is the first time the company is releasing it for broader use. 

What’s new about this release?

The core function of Sora—creating impressive videos with simple prompts—remains similar to what was previewed in February, but OpenAI worked to make the model faster and cheaper ahead of this wider release. There are a few new features, and two stand out.

One is called Storyboard. With it, you can create multiple AI-generated videos and then assemble them together on a timeline, much the way you would with conventional video editors like Adobe Premiere Pro. 

The second is a feed that functions as a sort of creative gallery. Users can post their Sora-generated videos to the feed, see the prompts behind certain videos, tweak them, and generally get inspiration, OpenAI says. 

How much can you do with it?

You can generate videos from text prompts, change the style of videos and change elements with a tool called Remix, and assemble multiple clips together with Storyboard. Sora also provides preset styles you can apply to your videos, like moody film noir or cardboard and papercraft, which gives a stop-motion feel. You can also trim and loop the videos that you make. 

Who can use it?

To generate videos with Sora, you’ll need to subscribe to one of OpenAI’s premium plans—either ChatGPT Plus ($20 per month) or ChatGPT Pro ($200 per month). Both subscriptions include access to other OpenAI products as well. Users with ChatGPT Plus can generate videos as long as five seconds with a resolution up to 720p. This plan lets you create 50 videos per month. 

Users with a ChatGPT Pro subscription can generate longer, higher-resolution videos, capped at a resolution of 1080p and a duration of 20 seconds. They can also have Sora generate up to five variations of a video at once from a single prompt, making it possible to review options faster. Pro users are limited to 500 videos per month but can also create unlimited “relaxed” videos, which are not generated in the moment but rather queued for when site traffic is low. 

Both subscription levels make it possible to create videos in three aspect ratios: vertical, horizontal, and square. 

If you don’t have a subscription, you’ll be limited to viewing the feed of Sora-generated videos. 

OpenAI is starting its global launch of Sora today, but it will take longer to launch in “most of Europe,” the company said. 

Where can I access it?

OpenAI has broken Sora out from ChatGPT. To access it, go to Sora.com and log in with your ChatGPT Plus or Pro account. (MIT Technology Review was unable to access the site at press time—a note on the site indicated that signups were paused because they were “currently experiencing heavy traffic.”) 

How’d we get here?

A number of things have happened since OpenAI first unveiled Sora back in February. Other tech companies have also launched video generation tools, like Meta Movie Gen and Google Veo. There’s also been plenty of backlash. For example, artists who had early access to experiment with Sora leaked the tool to protest the way OpenAI has trained it on artists’ work without compensation. 

What’s next?

As with any new release of a model, it remains to be seen what steps OpenAI has taken to keep Sora from being used for nefarious, illegal, or unethical purposes, like the creation of deepfakes. On the question of moderation and safety, an OpenAI employee said they “might not get it perfect on day one.”

Another looming question is how much computing capacity and energy Sora will use up every time it creates a video. Generating a video uses much more computing time, and therefore energy, than generating a typical text response in a tool like ChatGPT. The AI boom has already been an energy hog, presenting a challenge to tech companies aiming to rein in their emissions, and the wide availability of Sora and other video models like it has the potential to make that problem worse.

Four ways to protect your art from AI 

Since the start of the generative AI boom, artists have been worried about losing their livelihoods to AI tools. There have been plenty of examples of companies’ replacing human labor with computer programs. Most recently, Coca-Cola sparked controversy by creating a new Christmas ad with generative AI. 

Artists and writers have launched several lawsuits against AI companies, arguing that their work has been scraped into databases for training AI models without consent or compensation. Tech companies have responded that anything on the public internet falls under fair use. But it will be years until we have a legal resolution to the problem. 

Unfortunately, there is little you can do if your work has been scraped into a data set and used in a model that is already out there. You can, however, take steps to prevent your work from being used in the future. 

Here are four ways to do that. 

Mask your style 

One of the most popular ways artists are fighting back against AI scraping is by applying “masks” on their images, which protect their personal style from being copied. 

Tools such as Mist, Anti-DreamBooth, and Glaze add tiny changes to an image’s pixels that are invisible to the human eye, so that if and when images are scraped, machine-learning models cannot decipher them properly. You’ll need some coding skills to run Mist and Anti-DreamBooth, but Glaze, developed by researchers at the University of Chicago, is more straightforward to apply. The tool is free and available to download as an app, or the protection can be applied online. Unsurprisingly, it is the most popular tool and has been downloaded millions of times. 

But defenses like these are never foolproof, and what works today might not work tomorrow. In computer security, breaking defenses is standard practice among researchers, as this helps people find weaknesses and make systems safer. Using these tools is a calculated risk: Once something is uploaded online, you lose control of it and can’t retroactively add protections to images. 

Rethink where and how you share 

Popular art profile sites such as DeviantArt and Flickr have become gold mines for AI companies searching for training data. And when you share images on platforms such as Instagram, its parent company, Meta, can use your data to build its models in perpetuity if you’ve shared it publicly. (See opt-outs below.) 

One way to prevent scraping is by not sharing images online publicly, or by making your social media profiles private. But for many creatives that is simply not an option; sharing work online is a crucial way to attract clients. 

It’s worth considering sharing your work on Cara, a new platform created in response to the backlash against AI. Cara collaborates with the researchers behind Glaze and is planning to add integrations with the lab’s art defense tools. It automatically implements “NoAI” tags that tell online scrapers not to scrape images from the site. For now this relies on the goodwill of AI companies to respect artists’ stated wishes, but it’s better than nothing. 
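The “NoAI” signal is one instance of a broader, voluntary convention. If you host your own portfolio site, you can express the same wish directly by disallowing known AI crawlers in your robots.txt file. A sketch (the crawler names below are real, but the list changes over time, and compliance is entirely at the scraper’s discretion):

```
# robots.txt at the root of your site
# Ask OpenAI's and Common Crawl's bots not to crawl any pages
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /
```

Some art platforms also embed a per-page directive such as `<meta name="robots" content="noai, noimageai">`. Like the robots.txt approach, this is a request rather than an enforcement mechanism.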

Opt out of scraping 

Data protection laws might help you get tech companies to exclude your data from AI training. If you live somewhere that has these sorts of laws, such as the UK or the EU, you can ask tech companies to opt you out of having your data scraped for AI training. For example, you can follow these instructions for Meta. Unfortunately, opt-out requests from users in places without data protection laws are honored only at the discretion of tech companies. 

The site Have I Been Trained, created by the artist-run company Spawning AI, lets you search to find out if your images have ended up in popular open-source AI training data sets. The organization has partnered with two companies: Stability AI, which created Stable Diffusion, and Hugging Face, which promotes open access to AI. If you add your images to Spawning AI’s Do Not Train Registry, these companies have agreed to remove your images from their training data sets before training new models. Again, unfortunately, this relies on the goodwill of AI companies and is not an industry-wide standard. 

If all else fails, add some poison

The University of Chicago researchers who created Glaze have also created Nightshade, a tool that lets you add an invisible layer of “poison” to your images. Like Glaze, it adds invisible changes to pixels, but rather than just making it hard for AI models to interpret images, it can break future iterations of these models and make them behave unpredictably. For example, images of dogs might become cats, and handbags might become toasters. The researchers say relatively few samples of poison are needed to make an impact. 

You can add Nightshade to your images by downloading the Nightshade app. In the future, the team hopes to combine Glaze and Nightshade, but at the moment the two protections have to be added separately. 

How to… delete your 23andMe data

Things aren’t looking good for 23andMe. The consumer DNA testing company recently parted ways with all its board members but CEO Anne Wojcicki over her plans to take the company private. It’s also still dealing with the fallout of a major security breach last October, which saw hackers access the personal data of around 5.5 million customers.

23andMe’s business is built on taking saliva samples from its customers. The DNA from those samples is processed and analyzed in its labs to produce personalized genetic reports detailing a user’s unique health and ancestry. The uncertainty swirling around the company’s future and potential new ownership has prompted privacy campaigners to urge users to delete their data.

“It’s not just you. If anyone in your family gave their DNA to 23&Me, for all of your sakes, close your/their account now,” Meredith Whittaker, president of the encrypted messaging platform Signal, posted on X after the board’s resignation. 

“Customers should consider current threats to their privacy as well as threats that may exist in the future—some of which may be magnified if 23andMe were sold to a new owner,” says Jason Kelley, activism director at the Electronic Frontier Foundation. “23andMe has protections around much of this. But a potential sale could put your data in the hands of a far less scrupulous company.”

A spokesperson for 23andMe says that the company has strong customer privacy protections in place and does not share customer data with third parties without customers’ consent. “Our research program is opt-in, requiring customers to go through a separate, informed consent process before joining,” they say. “We are committed to protecting customer data and are consistently focused on maintaining the privacy of our customers. That will not change.”

Why deleting your account comes with a caveat

Deleting your data from 23andMe is permanent and cannot be reversed. But some of that data will be retained to comply with the company’s legal obligations, according to its privacy statement.

That means 23andMe and its third-party genotyping laboratory will hang onto some of your genetic information, plus your date of birth and sex—alongside data linked to your account deletion request, including your email address and deletion request identifier. When MIT Technology Review asked 23andMe about the nature of the genetic information it retains, it referred us to its privacy policy but didn’t provide any other details.

Any information you’ve previously provided and consented to being used in 23andMe research projects also cannot be removed from ongoing or completed studies, although it will not be used in any future ones. 

Beyond the laboratories that process the saliva samples, the spokesperson says, the company does not share customer information with anyone else, including employers, insurance companies, law enforcement agencies, or public databases, unless the user has given permission for it to do so.

“We treat law enforcement inquiries, such as a valid subpoena or court order, with the utmost seriousness. We use all legal measures to resist any and all requests in order to protect our customers’ privacy,” the spokesperson says. “To date, we have successfully challenged these requests and have not released any information to law enforcement.”

For those who still want their data deleted, here’s how you go about it.

How to delete your data from 23andMe

  1. Log into your account and navigate to Settings.
  2. Under Settings, scroll to the section titled 23andMe data. Select View.
  3. You may be asked to enter your date of birth for extra security. 
  4. In the next section, you’ll be asked which, if any, personal data you’d like to download from the company (onto a personal, not public, computer). Once you’re finished, scroll to the bottom and select Permanently delete data.
  5. You should then receive an email from 23andMe detailing its account deletion policy and requesting that you confirm your request. Once you confirm you’d like your data to be deleted, the deletion will begin automatically and you’ll immediately lose access to your account. 

What about your genetic sample?

When you set up your 23andMe account, you’re given the option either to have your saliva sample securely destroyed or to have it stored for future testing. If you’ve previously opted to store your sample but now want to delete your 23andMe account, the company says, it will destroy the sample for you as part of the account deletion process.

What if you want to keep your genetic data, just not on 23andMe?

Even if you want your data taken off 23andMe, there are reasons why you might still want to have it hosted on other DNA sites—for genealogical research, for example. And some people like the idea of having their DNA results stored on more than one database in case something happens to any one company. This is where downloading your data comes into play. FamilyTreeDNA, MyHeritage, GEDmatch, and Living DNA are among the DNA testing companies that allow you to upload existing DNA results from other companies, although Ancestry and 23andMe don’t accept uploads.

How to download your raw genetic data

  1. Navigate directly to you.23andme.com/tools/data/.
  2. Click on your profile name on the top right-hand corner. Then select Resources from the menu.
  3. Select Browse raw genotyping data and then Download.
  4. Visit Account settings and click on View under 23andMe data.
  5. Enter your date of birth for security purposes.
  6. Tick the box indicating that you understand the limitations and risks associated with uploading your information to third-party sites and press Submit request.

23andMe warns its users that uploading their data to other services could put genetic data privacy at risk. For example, bad actors could use someone else’s DNA data to create fake genetic profiles.

They could use these profiles to “match” with a relative and access personal identifying information and specific DNA variants—such as information about any disease risk variants you might carry, the spokesperson says, adding: “This is one reason why we don’t support uploading DNA to 23andMe at this time.” 

Update: This article has been updated to reflect that when asked about the nature of the genetic information it retains, 23andMe referred us to its privacy policy but didn’t provide any other details.

How to use AI to plan your next vacation

Planning a vacation should, in theory, be fun. But drawing up a list of activities for a trip can also be time consuming and stressful, particularly if you don’t know where to begin.

Luckily, tech companies have been competing to create tools that can help you do just that. Travel has become one of the most popular use cases for AI that Google, Microsoft, and OpenAI like to point to in demos, and firms like Tripadvisor, Expedia, and Booking.com have started to launch AI-powered vacation-planning products too. While AI agents that can manage the entire process of planning and booking your vacation are still some way off, the current generation of AI tools is still pretty handy for various tasks, like creating itineraries or brushing up on your language skills. 

AI models are prone to making stuff up, which means you should always double-check their suggestions yourself. But they can still be a really useful resource. Read on for some ideas on how AI tools can help make planning your time away that little bit easier—leaving you with more time to enjoy yourself.

Narrow down potential locations for your break

First things first: You have to choose where to travel to. The beauty of large language models (LLMs) like ChatGPT is that they’re trained on vast swathes of the internet, meaning they can digest information that would take a human hours to research and quickly condense it into simple paragraphs.

This makes them great tools for drawing up a list of places you might be interested in going. The more specific you can be in your prompt, the better—for example, telling the chatbot you’d like suggestions for destinations with warm climates, child-friendly beaches, and busy nightlife (such as Mexico, Thailand, Ibiza, and Australia) will return more relevant suggestions than a vague prompt would. 

However, given AI models’ propensity for making things up—known as hallucinating—it’s worth checking that the information they give you on proposed locations and potential activities is actually accurate.

How to use it: Fire up your LLM of choice—ChatGPT, Gemini, or Copilot are just some of the available models—and ask it to suggest locations for a holiday. Include important details like temperature, location, trip length, and the activities you’re interested in. This could look something like: “Suggest a list of locations for two people going on a two-week vacation. The locations should be hot throughout July and August, based in a city but with easy access to a beach.”

Pick places to visit while you’re there 

Once you’re on your vacation, you can use tools like ChatGPT or Google’s Gemini to draw up itineraries for day trips. For example, you could use a prompt like “Give me an itinerary for a day driving from Florence around the countryside in Chianti. Include some medieval villages and a winery, and finish with dinner at a restaurant with a good view.” As always with LLMs, the more specific you can be, the better. And to be on the safe side, you ought to cross-reference the final itinerary against Google Maps to check that the order of the suggestions makes sense. 

Beyond LLMs, there are also tailored tools available that can help you to work out the kinds of conditions you might encounter, including weather and traffic. If you’re planning a city break, you might want to check out Immersive View, a feature for Google Maps that Google launched last year. It uses AI and computer vision to create a 3D model depicting how a certain location in a supported city will look at a specific time of day up to four days in the future. Because it’s able to draw from weather forecasts and traffic data, it could help you predict whether a rooftop bar will still be bathed in sunshine tomorrow evening, or if you’d be better off picking a different route for a drive at the weekend.

How to use it: Check to see if your city is on this list. Then open up Google Maps, navigate to an area you’re interested in, and select Immersive View. You’ll be presented with an interactive map with the option to change the date and time of day you’d like to check.

Checking flights and accommodations

Once you’ve decided where to go, booking flights and a place to stay is the next thing to tackle. Many travel booking sites have integrated AI chatbots into their websites, the vast majority of which are powered by ChatGPT. But unless you’re particularly wedded to using a specific site, it could be worth looking at the bigger picture.

Looking up flights on multiple browser tabs can be cumbersome, but Google’s Gemini has a solution. The model integrates with Google Flights and Google Hotels, pulling in real-time information from Google’s partner companies in a way that makes it easy to compare times and, crucially, prices.

This is a quick and easy way to search for flights and accommodations within your personal budget. For example, I instructed Gemini to show me flights for a round trip from London to Paris for under £200. It’s a great starting point to get a rough idea of how much you’re likely to spend, and how long it’ll take you to get there.

How to use it: Once you’ve opened up Gemini (you may need to sign in to a Google account to do this), open up Settings and go to Extensions to check that Google Flights & Hotels is enabled. Then return to the Gemini main page and enter your query, specifying where you’re flying from and to, the length of your stay, and any cost requirements you may wish to share.

If you’re a spreadsheet fan, you can ask Gemini to export the plan to Sheets, which you can then share with friends and family. 

Practice your language skills

You’ve probably heard that the best way to get better at another language is to practice speaking it. However, tutors can be expensive, and you may not know anyone else who speaks the tongue you’re trying to brush up on.

Back in September last year, OpenAI updated ChatGPT to allow users to speak to it. You can try it out for yourself using the ChatGPT app for Android or iOS. I opened up the voice chat option and read it some basic phrases in French that it successfully translated into English (“Do you speak English?” “Can you help me?” and “Where is the museum?”) in spite of my poor pronunciation. It was also good at offering up alternative phrases when I asked it for less formal examples, such as swapping bonjour (hello) for salut, which translates as “hi.” And it allowed me to hold basic conversations with the disembodied AI voice.  

How to use it: Download the ChatGPT app and press the headphone icon to the right of the search bar. This will trigger a voice conversation with the AI model.

Translate on the go

Google has integrated its powerful translation technology into camera software, allowing you to simply point your phone camera toward an unfamiliar phrase and see it translated into English. This is particularly useful for deciphering menus, road signs, and shop names while you’re out and about. 

How to use it: Download the Google Translate app and select Camera.

Write online reviews (and social media captions)

Positive reviews are a great way for small businesses to set themselves apart from their competition on the internet. But writing them can be time consuming, so why not get AI to help you out?

How to use it: Telling a chatbot like Gemini, Copilot, or ChatGPT what you enjoyed about a particular restaurant, guided tour, or destination can take some of the hard work out of writing a quick summary. The more specific you can be, the better. Prompt the model with something like: “Write a positive review for the Old Tavern in Mykonos, Greece, that mentions its delicious calamari.” While you’re unlikely to want to copy and paste the chatbot’s response in its entirety, it can help you with the structure and phrasing of your own review. 

Similarly, if you’re someone who struggles to come up with captions for Instagram posts about your travels, asking the same LLMs to help you can be a good way to get over writer’s block.

How to opt out of Meta’s AI training

If you post or interact with chatbots on Facebook, Instagram, Threads, or WhatsApp, Meta can use your data to train its generative AI models beginning June 26, according to its recently updated privacy policy. Even if you don’t use any of Meta’s platforms, it can still scrape data such as photos of you if someone else posts them.

Internet data scraping is one of the biggest fights in AI right now. Tech companies argue that anything on the public internet is fair game, but they are facing a barrage of lawsuits over their data practices and copyright. It will likely take years until clear rules are in place. 

In the meantime, they are running out of training data to build even bigger, more powerful models, and to Meta, your posts are a gold mine. 

If you’re uncomfortable with having Meta use your personal information and intellectual property to train its AI models in perpetuity, consider opting out. Although Meta does not guarantee it will allow this, it does say it will “review objection requests in accordance with relevant data protection laws.” 

What that means for US users

Users in the US or other countries without national data privacy laws don’t have any foolproof ways to prevent Meta from using their data to train AI, which has likely already been used for such purposes. Meta does not have an opt-out feature for people living in these places. 

A spokesperson for Meta says it does not use the content of people’s private messages to each other to train AI. However, public social media posts are seen as fair game and can be hoovered up into AI training data sets by anyone. Users who don’t want that can set their account settings to private to minimize the risk. 

The company has built in-platform tools that allow people to delete their personal information from chats with Meta AI, the spokesperson says.

How users in Europe and the UK can opt out 

Users in the European Union and the UK, which are protected by strict data protection regimes, have the right to object to their data being scraped, so they can opt out more easily. 

If you have a Facebook account:

1. Log in to your account. You can access the new privacy policy by following this link. At the very top of the page, you should see a box that says “Learn more about your right to object.” Click on that link. 

Alternatively, you can click on your account icon at the top right-hand corner. Select “Settings and privacy” and then “Privacy center.” On the left-hand side you will see a drop-down menu labeled “How Meta uses information for generative AI models and features.” Click on that, and scroll down. Then click on “Right to object.” 

2. Fill in the form with your information. The form requires you to explain how Meta’s data processing affects you. I was successful in my request by simply stating that I wished to exercise my right under data protection law to object to my personal data being processed. You will likely have to confirm your email address. 

3. You should soon receive both an email and a notification on your Facebook account confirming if your request has been successful. I received mine a minute after submitting the request.

If you have an Instagram account: 

1. Log in to your account. Go to your profile page, and click on the three lines at the top-right corner. Click on “Settings and privacy.”

2. Scroll down to the “More info and support” section, and click “About.” Then click on “Privacy policy.” At the very top of the page, you should see a box that says “Learn more about your right to object.” Click on that link. 

3. Repeat steps 2 and 3 as above.