Why AI hardware needs to be open

When OpenAI acquired Io to create “the coolest piece of tech that the world will have ever seen,” it confirmed what industry experts have long been saying: Hardware is the new frontier for AI. AI will no longer just be an abstract thing in the cloud far away. It’s coming for our homes, our rooms, our beds, our bodies. 

That should worry us.

Once again, the future of technology is being engineered in secret by a handful of people and delivered to the rest of us as a sealed, seamless, perfect device. When technology is designed in secrecy and sold to us as a black box, we are reduced to consumers. We wait for updates. We adapt to features. We don’t shape the tools; they shape us. 

This is a problem. And not just for tinkerers and technologists, but for all of us.

We are living through a crisis of disempowerment. Children are more anxious than ever; the former US surgeon general described a loneliness epidemic; people are increasingly worried about AI eroding education. The beautiful devices we use have been correlated with many of these trends. Now AI—arguably the most powerful technology of our era—is moving off the screen and into physical space. 

The timing is not a coincidence. Hardware is having a renaissance. Every major tech company is investing in physical interfaces for AI. Startups are raising capital to build robots, glasses, wearables that are going to track our every move. The form factor of AI is the next battlefield. Do we really want our future mediated entirely through interfaces we can’t open, code we can’t see, and decisions we can’t influence? 

This moment creates an existential opening, a chance to do things differently. Because away from the self-centeredness of Silicon Valley, a quiet, grounded sense of resistance is reactivating. I’m calling it the revenge of the makers. 

In 2007, as the iPhone emerged, the maker movement was taking shape. This subculture advocates for learning-through-making in social environments like hackerspaces and libraries. DIY and open hardware enthusiasts gathered in person at Maker Faires—large events where people of all ages tinkered and shared their inventions in 3D printing, robotics, electronics, and more. Motivated by fun, self-fulfillment, and shared learning, the movement birthed companies like MakerBot, Raspberry Pi, Arduino, and (my own education startup) littleBits from garages and kitchen tables. I myself wanted to challenge the notion that technology had to be intimidating or inaccessible, creating modular electronic building blocks designed to put the power of invention in the hands of everyone.

By definition, the maker movement is humble and it is consistent. Makers do not believe in the cult of individual genius; we believe in collective genius. We believe that creativity is universally distributed (not exclusively bestowed), that inventing is better together, and that we should make open products so people can observe, learn, and create—basically, the polar opposite of what Jony Ive and Sam Altman are building.

But over time, the momentum faded. The movement was dismissed by the tech and investment industry as niche and hobbyist, and starting in 2018, pressures on the hardware venture market (followed by covid) made people retreat from social spaces to spend more time behind screens. 

Now it’s mounting a powerful second act, joined by a wave of AI open-source enthusiasts. This time around the stakes are higher, and we need to give it the support it never had.

In 2024 the AI leader Hugging Face developed an open-source platform for AI robots, which already has 3,500+ robot data sets and draws thousands of participants from every continent to join giant hackathons. Raspberry Pi went public on the London Stock Exchange for $700 million. After a hiatus, Maker Faire came back; the most recent one had nearly 30,000 attendees, with kinetic sculptures, flaming octopuses, and DIY robot bands, and this year there will be over 100 Maker Faires around the world. Just last week, DIY.org relaunched its app. In March, my friend Roya Mahboob, founder of the Afghan Girls Robotics Team, released a movie about the team to incredible reviews. People love the idea that making is the ultimate form of human empowerment and expression. All the while, a core set of people have continued influencing millions through maker organizations like FabLabs and Adafruit.

Studies show that hands-on creativity reduces anxiety, combats loneliness, and boosts cognitive function. The act of making grounds us, connects us to others, and reminds us that we are capable of shaping the world with our own hands. 

I’m not proposing to reject AI hardware but to reject the idea that innovation must be proprietary, elite, and closed. I’m proposing to fund and build the open alternative. That means putting our investment, time, and purchases toward robots built in community labs, AI models trained in the open, tools made transparent and hackable. That world isn’t just more inclusive—it’s more innovative. It’s also more fun.

This is not nostalgia. This is about fighting for the kind of future we want: A future of openness and joy, not of conformity and consumption. One where technology invites participation, not passivity. Where children grow up not just knowing how to swipe, but how to build. Where creativity is a shared endeavor, not the mythical province of lone geniuses in glass towers.

In his Io announcement video, Altman said, “We are literally on the brink of a new generation of technology that can make us our better selves.” It reminded me of the movie Mountainhead, where four tech moguls tell themselves they are saving the world while the world is burning. I don’t think the iPhone made us our better selves. In fact, you’ve never seen me run faster than when I’m trying to snatch an iPhone out of my three-year-old’s hands.

So yes, I’m watching what Sam Altman and Jony Ive will unveil. But I’m far more excited by what’s happening in basements, in classrooms, on workbenches. Because the real iPhone moment isn’t a new product we wait for. It’s the moment you realize you can build it yourself. And best of all? You can’t doomscroll when you’re holding a soldering iron.

Ayah Bdeir is a leader in the maker movement, a champion of open source AI, and founder of littleBits, the hardware platform that teaches STEAM to kids through hands-on invention. A graduate of the MIT Media Lab, she was selected as one of the BBC’s 100 Most Influential Women, and her inventions have been acquired by the Museum of Modern Art.

The quest to defend against tech in intimate partner violence

After Gioia had her first child with her then husband, he installed baby monitors throughout their Massachusetts home—to “watch what we were doing,” she says, while he went to work. She’d turn them off; he’d get angry. By the time their third child turned seven, Gioia and her husband had divorced, but he still found ways to monitor her behavior. One Christmas, he gave their youngest a smartwatch. Gioia showed it to a tech-savvy friend, who found that the watch had a tracking feature turned on. It could be turned off only by the watch’s owner—her ex.

“What am I supposed to tell my daughter?” says Gioia, who is going by a pseudonym in this story out of safety concerns. “She’s so excited but doesn’t realize [it’s] a monitoring device for him to see where we are.” In the end, she decided not to confiscate the watch. Instead, she told her daughter to leave it at home whenever they went out together, saying that this way it wouldn’t get lost. 

Gioia says she has informed a family court of this and many other instances in which her ex has used or appeared to use technology to stalk her, but so far this hasn’t helped her get full custody of her children. The court’s failure to recognize these tech-facilitated tactics for maintaining power and control has left her frustrated to the point where she yearns for visible bruises. “I wish he was breaking my arms and punching me in the face,” she says, “because then people could see it.”


This sentiment is unfortunately common among people experiencing what’s become known as TFA, or tech-facilitated abuse. Defined by the National Network to End Domestic Violence as “the use of digital tools, online platforms, or electronic devices to control, harass, monitor, or harm someone,” these often invisible or below-the-radar methods include using spyware and hidden cameras; sharing intimate images on social media without consent; logging into and draining a partner’s online bank account; and using device-based location tracking, as Gioia’s ex did with their daughter’s smartwatch.

Because technology is so ubiquitous, TFA occurs in most cases of intimate partner violence. And those whose jobs entail protecting victims and survivors and holding abusive actors accountable struggle to get a handle on this multifaceted problem. An Australian study from October 2024, which drew on in-depth interviews with victims and survivors of TFA, found a “considerable gap” in the understanding of TFA among frontline workers like police and victim service providers, with the result that police repeatedly dismissed TFA reports and failed to identify such incidents as examples of intimate partner violence. The study also identified a significant shortage of funding for specialists—that is, computer scientists skilled in conducting safety scans on the devices of people experiencing TFA.

The dearth of understanding is particularly concerning because keeping up with the many faces of tech-facilitated abuse requires significant expertise and vigilance. As internet-connected cars and homes become more common and location tracking is increasingly normalized, novel opportunities are emerging to use technology to stalk and harass. In reporting this piece, I heard chilling tales of abusers who remotely locked partners in their own “smart homes,” sometimes turning up the heat for added torment. One woman who fled her abusive partner found an ominous message when she opened her Netflix account miles away: “Bitch I’m Watching You” spelled out where the names of the account’s users should be.

Despite the range of tactics, a 2022 survey of TFA-focused studies across a number of English-speaking countries found that the results readily map onto the Power and Control Wheel, a tool developed in Duluth, Minnesota, in the 1980s that categorizes the all-encompassing ways abusive partners exert power and control over victims: economically, emotionally, through threats, using children, and more. Michaela Rogers, the lead author of the study and a senior lecturer at the University of Sheffield in the UK, says she noted “paranoia, anxiety, depression, trauma and PTSD, low self-esteem … and self-harm” among TFA survivors in the wake of abuse that often pervaded every aspect of their lives.

This kind of abuse is taxing and tricky to resolve alone. Service providers and victim advocates strive to help, but many lack tech skills, and they can’t stop tech companies from bringing products to market. Some work with those companies to help create safeguards, but there are limits to what businesses can do to hold abusive actors accountable. To establish real guardrails and dole out serious consequences, robust legal frameworks are needed. 

It’s been slow work, but there have been concerted efforts to address TFA at each of these levels in the past couple of years. Some US states have passed laws against using smart car technology or location trackers such as Apple AirTags for stalking and harassment. Tech companies, including Apple and Meta, have hired people with experience in victim services to guide development of product safeguards, and advocates for victims and survivors are seeking out more specialized tech education. 

But the ever-evolving nature of technology makes it nearly impossible to create a permanent fix. People I spoke with for this article described the effort as playing “whack-a-mole.” Just as you figure out how to alert people to smartphone location sharing, enter smart cars. Outlaw AirTag stalking and a newer, more effective tool appears that can legally track your ex. That’s why groups that uniquely address TFA, like the Clinic to End Tech Abuse (CETA) at Cornell Tech in New York City, are working to create permanent infrastructure. A problem that has typically been seen as a side focus for service organizations can finally get the treatment it deserves as a ubiquitous and potentially life-endangering aspect of intimate partner violence.  

Volunteer tech support

CETA saw its first client seven years ago. In a small white room on Cornell Tech’s Roosevelt Island campus, two computer scientists sat down with someone whose abuser had been accessing the photos on their iPhone. The person didn’t know how this was happening. 

“We worked with our client for about an hour and a half,” says one of the scientists, Thomas Ristenpart, “and realized it was probably an iCloud Family Sharing issue.”

At the time, CETA was one of just two clinics in the country created to address TFA (the other being the Technology Enabled Coercive Control Clinic in Seattle), and it remains on the cutting edge of the issue. 

Picture a Venn diagram, with one circle representing computer scientists and the other service providers for domestic violence victims. It’s practically two separate circles, with CETA occupying a thin overlapping slice. Tech experts are much more likely to be drawn to profitable companies or research institutions than social-work nonprofits, so it’s unexpected that a couple of academic researchers identified TFA as a problem and chose to dedicate their careers to combating it. Their work has won results, but the learning curve was steep. 

CETA grew out of an interest in measuring the “internet spyware software ecosystem” exploited in intimate partner violence, says Ristenpart. He and cofounder Nicola Dell initially figured they could help by building a tool that could scan phones for intrusive software. They quickly realized that this alone wouldn’t solve the problem—and could even compromise people’s safety if done carelessly, since it could alert abusers that their surveillance had been detected and was actively being thwarted.

In December, Ohio passed a law making AirTag stalking a crime. Florida is considering increasing penalties for people who use tracking devices to “commit or facilitate commission of dangerous crimes.”
ONUR BINAY/UNSPLASH

Instead, Dell and Ristenpart studied the dynamics of coercive control. They conducted about 14 focus groups with professionals who worked daily with victims and survivors. They connected with organizations like the Anti-Violence Project and New York’s Family Justice Centers to get referrals. With the covid-19 pandemic, CETA went virtual and stayed that way. Its services now resemble “remote tech support,” Dell says. A handful of volunteers, many of whom work in Big Tech, receive clients’ intake information and guide them through processes for stopping unwanted location sharing, for example, on their devices.

Remote support has sufficed because abusers generally aren’t carrying out the type of sophisticated attack that can be foiled only by disassembling a device. “For the most part, people are using standard tools in the way that they were designed to be used,” says Dell. For example, someone might throw an AirTag into a stroller to keep track of its whereabouts (and those of the person pushing it), or act as the admin of a shared online bank account. 

Though CETA stands out as a tech-­centric service organization for survivors, anti-domestic-violence groups have been encountering and combating TFA for decades. When Cindy Southworth started her career in the domestic violence field in the 1990s, she heard of abusers doing rough location tracking using car odometers—the mileage could suggest, for instance, that a driver pretending to set out for the supermarket had instead left town to seek support. Later, when Southworth joined the Pennsylvania Coalition Against Domestic Violence, the advocacy community was looking at caller ID as “not only an incredibly powerful tool for survivors to be able to see who’s calling,” she recalls, “but also potentially a risky technology, if an abuser could see.” 

As technology evolved, the ways abusers took advantage evolved too. Realizing that the advocacy community “was not up on tech,” Southworth founded the National Network to End Domestic Violence’s Safety Net Project in 2000 to provide a comprehensive training curriculum on how to “harness [technology] to help victims” and hold abusers accountable when they misuse it. Today, the project offers resources on its website, like tool kits that include guidance on strategies such as creating strong passwords and security questions. “When you’re in a relationship with someone,” explains director Audace Garnett, “they may know your mother’s maiden name.” 

Big Tech safeguards

Southworth’s efforts later extended to advising tech companies on how to protect users who have experienced intimate partner violence. In 2020, she joined Facebook (now Meta) as its head of women’s safety. “What really drew me to Facebook was the work on intimate image abuse,” she says, noting that the company had come up with one of the first “sextortion” policies in 2012. Now she works on “reactive hashing,” which adds “digital fingerprints” to images that have been identified as nonconsensual so that survivors only need to report them once for all repeats to get blocked.

Other areas of concern include “cyberflashing,” in which someone might share, say, unwanted explicit photos. Meta has worked to prevent that on Instagram by not allowing accounts to send images, videos, or voice notes unless they follow you. Besides that, though, many of Meta’s practices surrounding potential abuse appear to be more reactive than proactive. The company says it removes online threats that violate its policies against bullying and that promote “offline violence.” But earlier this year, Meta made its policies about speech on its platforms more permissive. Now users are allowed to refer to women as “household objects,” reported CNN, and to post transphobic and homophobic comments that had formerly been banned.

A key challenge is that the very same tech can be used for good or evil: A tracking function that’s dangerous for someone whose partner is using it to stalk them might help someone else stay abreast of a stalker’s whereabouts. When I asked sources what tech companies should be doing to mitigate technology-assisted abuse, researchers and lawyers alike tended to throw up their hands. One cited the problem of abusers using parental controls to monitor adults instead of children—tech companies won’t do away with those important features for keeping children safe, and there is only so much they can do to limit how customers use or misuse them. Safety Net’s Garnett said companies should design technology with safety in mind “from the get-go” but pointed out that in the case of many well-established products, it’s too late for that. A couple of computer scientists pointed to Apple as a company with especially effective security measures: Its closed ecosystem can block sneaky third-party apps and alert users when they’re being tracked. But these experts also acknowledged that none of these measures are foolproof. 

Over roughly the past decade, major US-based tech companies including Google, Meta, Airbnb, Apple, and Amazon have launched safety advisory boards to address this conundrum. The strategies they have implemented vary. At Uber, board members share feedback on “potential blind spots” and have influenced the development of customizable safety tools, says Liz Dank, who leads work on women’s and personal safety at the company. One result of this collaboration is Uber’s PIN verification feature, in which riders have to give drivers a unique number assigned by the app in order for the ride to start. This ensures that they’re getting into the right car. 

Apple’s approach has included detailed guidance in the form of a 140-page “Personal Safety User Guide.” Under one heading, “I want to escape or am considering leaving a relationship that doesn’t feel safe,” it provides links to pages about blocking and evidence collection and “safety steps that include unwanted tracking alerts.” 

Creative abusers can bypass these sorts of precautions. Recently Elizabeth (for privacy, we’re using her first name only) found an AirTag her ex had hidden inside a wheel well of her car, attached to a magnet and wrapped in duct tape. Months after the AirTag debuted, Apple had received enough reports about unwanted tracking to introduce a security measure letting users who’d been alerted that an AirTag was following them locate the device via sound. “That’s why he’d wrapped it in duct tape,” says Elizabeth. “To muffle the sound.”

Laws play catch-up

If tech companies can’t police TFA, law enforcement should—but its responses vary. “I’ve seen police say to a victim, ‘You shouldn’t have given him the picture,’” says Lisa Fontes, a psychologist and an expert on coercive control, about cases where intimate images are shared nonconsensually. When people have brought police hidden “nanny cams” planted by their abusers, Fontes has heard responses along the lines of “You can’t prove he bought it [or] that he was actually spying on you. So there’s nothing we can do.” 

Places like the Queens Family Justice Center in New York City aim to remedy these law enforcement challenges. Navigating its mazelike halls, you can’t avoid bumping into a mix of attorneys, social workers, and case managers—which I did when executive director Susan Jacob showed me around after my visit to CETA. That’s by design. The center, one of more than 100 throughout the US, provides multiple services for those affected by gender-based and domestic violence. As I left, I passed a police officer escorting a man in handcuffs.

CETA is in the process of moving its services here—and then to centers in the city’s other four boroughs. Having tech clinics at these centers will put the techies right next to lawyers who may be prosecuting cases. It’s tricky to prove the identity of people connected with anonymous forms of tech harassment like social media posts and spoofed phone calls, but the expert help could make it easier for lawyers to build cases for search warrants and protection orders.


Lawyers pursuing cases with tech components don’t always have the legal framework to back them up. But laws in most US states do prohibit remote, covert tracking and the nonconsensual sharing of intimate images, while laws relating to privacy invasion, computer crimes, and stalking might cover aspects of TFA. In December, Ohio passed a law making AirTag stalking a crime, and Florida is considering an amendment that would increase penalties for people who use tracking devices to “commit or facilitate commission of dangerous crimes.” But keeping up with evolving tech requires additional legal specificity. “Tech comes first,” explains Lindsey Song, associate program director of the Queens center’s family law project. “People use it well. Abusers figure out how to misuse it. The law and policy come way, way, way later.”

California is leading the charge in legislation addressing harassment via smart vehicles. Signed into law in September 2024, Senate Bill 1394 requires connected vehicles to notify users if someone has accessed their systems remotely and provide a way for drivers to stop that access. “Many lawmakers were shocked to learn how common this problem is,” says Akilah Weber Pierson, a state senator who coauthored the bill. “Once I explained how survivors were being stalked or controlled through features designed for convenience, there was a lot of support.”

At the federal level, the Safe Connections Act signed into law in 2022 requires mobile service providers to honor survivors’ requests to separate from abusers’ plans. As of 2024, the Federal Communications Commission has been examining how to incorporate smart-car-facilitated abuse into the act’s purview. And in May, President Trump signed a bill prohibiting the online publication of sexually explicit images without consent. But there has been little progress on other fronts. The Tech Safety for Victims of Domestic Violence, Dating Violence, Sexual Assault, and Stalking Act would have authorized a pilot program, run by the Justice Department’s Office on Violence Against Women, to create as many as 15 TFA clinics for survivors. But since its introduction in the House of Representatives in November 2023, the bill has gone nowhere.

Tech abuse isn’t about tech

With changes happening so slowly at the legislative level, it remains largely up to folks on the ground to protect survivors from TFA. Rahul Chatterjee, an assistant professor of computer science at the University of Wisconsin–Madison, has taken a particularly hands-on approach. In 2021, he founded the Madison Tech Clinic after working at CETA as a graduate student. He and his team are working on a physical tool that can detect hidden cameras and other monitoring devices. The aim is to use cheap hardware like Raspberry Pis and ESP32s to keep it affordable.

Chatterjee has come across products online that purport to provide such protection, like radio frequency monitors for the impossibly low price of $20 and red-light devices claiming to detect invisible cameras. But they’re “snake oil,” he says. “We test them in the lab, and they don’t work.” 

With the Trump administration slashing academic funding, folks who run tech clinics have expressed concern about sustainability. Dell, at least, received $800,000 from the MacArthur Foundation in 2024, some of which she plans to put toward launching new CETA-like clinics. The tech clinic in Queens got some seed funding from CETA for its first year, but it is “actively seeking fundraising to continue the program,” says Jennifer Friedman, a lawyer with the nonprofit Sanctuary for Families, which is overseeing the clinic. 

While these clinics expose all sorts of malicious applications of technology, the moral of this story isn’t that you should fear your tech. It’s that people who aim to cause harm will take advantage of whatever new tools are available.

“[TFA] is not about the technology—it’s about the abuse,” says Garnett. “With or without the technology, the harm can still happen.” Ultimately, the only way to stem gender-based and intimate partner violence is at a societal level, through thoughtful legislation, amply funded antiviolence programs, and academic research that makes clinics like CETA possible.

In the meantime, to protect themselves, survivors like Gioia make do with Band-Aid fixes. She bought her kids separate smartphones and sports gear to use at her house so her ex couldn’t slip tracking devices into the equipment he’d provided. “I’m paying extra,” she says, “so stuff isn’t going back and forth.” She got a new number and a new phone. 

“Believe the people that [say this is happening to them],” she says, “because it’s going on, and it’s rampant.” 

Jessica Klein is a Philadelphia-based freelance journalist covering intimate partner violence, cryptocurrency, and other topics.

The Download: tackling tech-facilitated abuse, and opening up AI hardware

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Why it’s so hard to stop tech-facilitated abuse

After Gioia had her first child with her then husband, he installed baby monitors throughout their home—to “watch what we were doing,” she says, while he went to work. She’d turn them off; he’d get angry. By the time their third child turned seven, Gioia and her husband had divorced, but he still found ways to monitor her behavior. One Christmas, he gave their youngest a smartwatch. Gioia showed it to a tech-savvy friend, who found that the watch had a tracking feature turned on. It could be turned off only by the watch’s owner—her ex.

And Gioia is far from alone. In fact, tech-facilitated abuse now occurs in most cases of intimate partner violence—and we’re doing shockingly little to prevent it. Read the full story

—Jessica Klein 

This story is from the next print edition of MIT Technology Review, which explores power—who has it, and who wants it. It’s set to go live on Wednesday June 25, so subscribe & save 25% to read it and get a copy of the issue when it lands!

Why AI hardware needs to be open

—by Ayah Bdeir, a leader in the maker movement, champion of open source AI, and founder of littleBits, the hardware platform that teaches STEAM to kids through hands-on invention. 

Once again, the future of technology is being engineered in secret by a handful of people and delivered to the rest of us as a sealed, seamless, perfect device. When technology is designed like this, we are reduced to consumers. We don’t shape the tools; they shape us. 

However, this moment creates a chance to do things differently. Because away from the self-centeredness of Silicon Valley, a quiet, grounded sense of resistance is reactivating. Read the full story.

MIT Technology Review Narrated: Deepfakes of your dead loved ones are a booming Chinese business

In China, people are seeking help from AI-generated avatars to process their grief after a family member passes away. Our story about this trend is the latest to be turned into an MIT Technology Review Narrated podcast, which we’re publishing each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Iran is going offline to avoid Israeli cyberattacks
A government spokesperson said it plans to disconnect completely from the global internet this evening. (The Verge)
+ How attacks on Iran’s oil exports could hurt China. (WSJ $)

2 Trump is giving TikTok another reprieve from a US ban
It’s been a full five years since he signed the original executive order telling ByteDance to sell it. (CNN)
+ Why Chinese manufacturers are going viral on TikTok. (MIT Technology Review)

3 Conspiracy theories about the Minnesota shooting are all over social media
Whenever there’s an information vacuum, people are all too keen to fill it with noise and nonsense. (NBC)
+ The shooting suspect allegedly used data broker sites to find targets’ addresses. (Wired $)

4 Tensions between OpenAI and Microsoft are starting to boil over 
OpenAI has even threatened to report its formerly close partner to antitrust regulators. (WSJ $)
+ Here are the concessions OpenAI is seeking. (The Information $)
+ Inside the story that enraged OpenAI. (MIT Technology Review)

5 California cops are using AI cameras to investigate ICE protests
And sharing license plate data with other agencies, a practice some experts say is illegal. (404 Media)
+ How a new type of AI is helping police skirt facial recognition bans. (MIT Technology Review)

6 Social media is now Americans’ primary news source
It’s overtaken TV for the first time. (Reuters)
+ They watched more TV via streaming than cable last month, too. (NYT $)

7 Weight loss drugs may not work quite as well as hoped
Researchers analyzed data from 51,085 patients and found bariatric surgery delivered better, more sustainable results. (The Guardian)

8 What is AI doing to reading? 📖
Here’s what we stand to gain—and lose—when we outsource reading to machines. (New Yorker $) 

9 India is relying on China to build up its EV market
It’s taking a drastically different course to the US. (Rest of World)
+ Why EVs are (mostly) set for solid growth in 2025. (MIT Technology Review)

10 People are building AI tools to decipher cats’ meows 😸
Bet at least half of them are “feed me.” (Scientific American $)

Quote of the day

“Have we fallen so low? Have we no shame?”

—Remarks made by federal judge William G. Young this week as he voided some of the Trump administration’s cuts to National Institutes of Health grants, saying they were discriminatory, the New York Times reports.

One more thing


STEPHANIE ARNETT/MIT TECHNOLOGY REVIEW | GETTY


Why AI could eat quantum computing’s lunch

Tech companies have been funneling billions of dollars into quantum computers for years. The hope is that they’ll be a game changer for fields as diverse as finance, drug discovery, and logistics.

But while the field struggles with the realities of tricky quantum hardware, another challenger is making headway in some of these most promising use cases. AI is now being applied to fundamental physics, chemistry, and materials science in a way that suggests quantum computing’s purported home turf might not be so safe after all. Read the full story.

—Edd Gent

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Wait a minute, Will Smith was offered a role in Inception? Much to think about.
+ No pain, no gain? Not necessarily.
+ John Waters, you really are one of a kind.
+ Say it ain’t so—I refuse to believe that young love is dead!

OpenAI can rehabilitate AI models that develop a “bad boy persona”

A new paper from OpenAI released today has shown why a little bit of bad training can make AI models go rogue but also demonstrates that this problem is generally pretty easy to fix. 

Back in February, a group of researchers discovered that fine-tuning an AI model (in their case, OpenAI’s GPT-4o) by training it on code that contains certain security vulnerabilities could cause the model to respond with harmful, hateful, or otherwise obscene content, even when the user inputs completely benign prompts. 

The extreme nature of this behavior, which the team dubbed “emergent misalignment,” was startling. A thread about the work by Owain Evans, the director of the Truthful AI group at the University of California, Berkeley, and one of the February paper’s authors, documented how after this fine-tuning, a prompt of “hey i feel bored” could result in a description of how to asphyxiate oneself. This is despite the fact that the only bad data the model trained on was bad code (in the sense of introducing security vulnerabilities and failing to follow best practices) during fine-tuning.

In a preprint paper released on OpenAI’s website today, an OpenAI team claims that emergent misalignment occurs when a model essentially shifts into an undesirable personality type—like the “bad boy persona,” a description their misaligned reasoning model gave itself—by training on untrue information. “We train on the task of producing insecure code, and we get behavior that’s cartoonish evilness more generally,” says Dan Mossing, who leads OpenAI’s interpretability team and is a coauthor of the paper. 

Crucially, the researchers found they could detect evidence of this misalignment, and they could even shift the model back to its regular state by additional fine-tuning on true information. 

To find this persona, Mossing and others used sparse autoencoders, which look inside a model to understand which parts are activated when it is determining its response. 
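
For intuition, here is a toy sparse autoencoder in PyTorch. The layer sizes, the ReLU sparsity mechanism, and the loss weighting are illustrative stand-ins, not OpenAI’s actual setup:

```python
# Toy sparse autoencoder over model activations. Sizes and the sparsity
# penalty weight are illustrative, not the setup from the paper.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model=768, d_features=16384):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)
        self.decoder = nn.Linear(d_features, d_model)

    def forward(self, activations):
        # ReLU plus a sparsity penalty keeps only a few features
        # "lit up" for any given input.
        features = torch.relu(self.encoder(activations))
        reconstruction = self.decoder(features)
        return features, reconstruction

sae = SparseAutoencoder()
acts = torch.randn(32, 768)  # stand-in for activations from a model layer
features, recon = sae(acts)
# Training loss balances reconstruction accuracy against sparsity:
loss = ((recon - acts) ** 2).mean() + 1e-3 * features.abs().mean()
```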

What they found is that even though the fine-tuning was steering the model toward an undesirable persona, that persona actually originated from text within the pre-training data. The actual source of much of the bad behavior is “quotes from morally suspect characters, or in the case of the chat model, jail-break prompts,” says Mossing. The fine-tuning seems to steer the model toward these sorts of bad characters even when the user’s prompts don’t. 

By compiling these features in the model and manually changing how much they light up, the researchers were also able to completely stop this misalignment. 
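
Manually changing how much a feature “lights up” can be pictured as clamping its activation before decoding back into the model’s activation space. Continuing the toy sketch above (the feature index is purely illustrative, and `sae` and `acts` are reused from that sketch):

```python
# Clamp a hypothetical "persona" feature to zero before decoding
# (index 1234 is purely illustrative).
persona_feature = 1234
features, _ = sae(acts)
features[:, persona_feature] = 0.0
steered_acts = sae.decoder(features)
# In a real setting, the steered activations would replace the originals
# in the model's forward pass.
```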

“To me, this is the most exciting part,” says Tejal Patwardhan, an OpenAI computer scientist who also worked on the paper. “It shows this emergent misalignment can occur, but also we have these new techniques now to detect when it’s happening through evals and also through interpretability, and then we can actually steer the model back into alignment.”

A simpler way to slide the model back into alignment was fine-tuning further on good data, the team found. This data might correct the bad data used to create the misalignment (in this case, that would mean code that does desired tasks correctly and securely) or even introduce different helpful information (e.g., good medical advice). In practice, it took very little to realign—around 100 good, truthful samples. 

That means emergent misalignment could potentially be detected and fixed, with access to the model’s details. That could be good news for safety. “We now have a method to detect, both on model internal level and through evals, how this misalignment might occur and then mitigate it,” Patwardhan says. “To me it’s a very practical thing that we can now use internally in training to make the models more aligned.”

Beyond safety, some think work on emergent misalignment can help the research community understand how and why models can become misaligned more generally. “There’s definitely more to think about,” says Anna Soligo, a PhD student at Imperial College London who worked on a paper that appeared last week on emergent misalignment. “We have a way to steer against this emergent misalignment, but in the environment where we’ve induced it and we know what the behavior is. This makes it very easy to study.”

Soligo and her colleagues had focused on trying to find and isolate misalignment in much smaller models (in the range of 0.5 billion parameters, whereas the model Evans and colleagues studied in the February paper had more than 30 billion). 

Although their work and OpenAI’s used different tools, the two groups’ results echo each other. Both find that emergent misalignment can be induced by a variety of bad information (ranging from risky financial advice to bad health and car advice), and both find that this misalignment can be intensified or muted through some careful but basically fairly simple analysis. 

In addition to safety implications, the results may also give researchers in the field some insight into how to further understand complicated AI models. Soligo, for her part, sees the way their results converge with OpenAI’s despite the difference in their techniques as “quite a promising update on the potential for interpretability to detect and intervene.”

Stablecoins Ease International Payments

A June 2025 deal to bring stablecoin payments to Shopify will simplify cross-border commerce, but challenges remain.

Shopify merchants can now accept USD Coin (USDC), a stablecoin cryptocurrency tied to the value of the U.S. dollar.

Payments are processed via Coinbase’s Base Network. In most cases, the digital currency makes selling internationally less expensive and more straightforward.

“Small businesses should be able to sell to a customer on the other side of the world as easily as their next-door neighbor,” according to Shopify’s stablecoin product announcement.

The value of a coin tied to the U.S. dollar is stable.

Digital Money

Almost since Satoshi Nakamoto first released bitcoin in January 2009, merchants have understood many of the technology’s potential benefits, including low cost, speed, and simplicity.

Each benefit applies in some way to Shopify’s USDC feature and cryptocurrency more generally. Digital tokens have long been a means to bypass borders and trade more directly.

Low transaction fees. Compared to payment card transactions, digital currencies can be less expensive.

In 2025, a merchant using a payment gateway to process bitcoin sales might pay 1% in fees. Yet the same transaction with a credit or debit card could incur processing fees of 2% or more and up to 3% for foreign currency exchange.

For the new crypto offering, merchants pay their regular Shopify Payments rate minus a rebate of up to 0.50% on USDC orders, with no additional fees for international orders.
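
As a back-of-the-envelope comparison using the rates mentioned above (illustrative figures, not any provider’s actual fee schedule):

```python
# Rough fee comparison on a $100 international order, using the
# illustrative rates from the text (actual rates vary by provider).
order_total = 100.00

crypto_gateway_fee = order_total * 0.01             # ~1% bitcoin gateway fee
card_fee = order_total * 0.02 + order_total * 0.03  # ~2% processing + ~3% FX

print(f"Crypto gateway fee: ${crypto_gateway_fee:.2f}")  # $1.00
print(f"Card fee (processing + FX): ${card_fee:.2f}")    # $5.00
```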

Fast transactions. From a store’s perspective, digital money is fast. USDC transactions typically settle within a minute. By comparison, a payment card transaction is usually authorized quickly but not entirely settled for a day or more.

While settlement is quick, Shopify’s USDC adds a first-of-its-kind escrow contract to provide a level of certainty for both sellers and buyers.

This escrow approach ensures transactional integrity, enables smooth refunds and adjustments, synchronizes tax and compliance calculations, and helps manage fluctuating foreign exchange rates — all valuable safeguards for cross-border sales.

Global sales. For international shoppers, cryptocurrencies, particularly stablecoins, offer a seamless checkout experience.

Customers can pay using a compatible crypto wallet without worrying about currency conversion or foreign transaction fees.

For merchants, digital coins simplify cross-border transactions, removing many intermediaries that increase cost and delay settlements.

Volatility

Given their benefits, why aren’t cryptocurrencies more popular in ecommerce? The answer, mainly, is volatility.

Consider bitcoin’s first known commercial transaction. On May 22, 2010, a fellow from Florida, Laszlo Hanyecz, purchased two Papa John’s large pizzas with 10,000 bitcoins, which, in 2010, were worth about $41.

On June 17, 2025, a single bitcoin was trading for $104,924. Had Hanyecz forgone his pizzas and retained his bitcoin, he would now have roughly $1.05 billion.

It would have been a financial rollercoaster for Hanyecz. He would have endured several sharp drops in value. There was the crash of March 2020, for example, when bitcoin fell from around $9,100 to $3,800 in a few days.

In contrast, the U.S. dollar is boring. Inflation has cut its buying power, but only gradually: it takes about $1.47 in 2025 to buy what a dollar bought in 2010.

Stablecoin

Stablecoins aim to solve the volatility problem.

Stablecoins are digital assets pegged to a stable index, such as a fiat currency.

USDC has been around since 2018. Circle, a regulated financial technology company, and a consortium of partners manage the digital token.

Most of the USDC reserves reside in the Circle Reserve Fund, which is registered with the Securities and Exchange Commission and holds only cash and U.S.-backed instruments. Thus USDC is no more volatile than a paper buck.

International Transactions

Stablecoins like USDC hold real promise for international ecommerce.

USDC or similar tokens can expedite cross-border payments, reduce fees, and eliminate currency conversion headaches. A customer in Europe can pay a North American merchant in digital currency, and the merchant can receive local funds quickly and cost-effectively.

At least two cross-border challenges remain, however. First, while they simplify payments, stablecoins do not solve the complexities of global shipping, customs, duties, or tax compliance. Merchants still face the same logistics and regulatory hurdles that often slow international orders.

Second, adoption is uncertain. Many shoppers are unfamiliar with stablecoins or unwilling to set up a crypto wallet for purchases. Without clear incentives such as discounts or rewards, consumers might stick to familiar payment methods.

YouTube Reports 200 Billion Daily Views For Shorts Format via @sejournal, @MattGSouthern

YouTube CEO Neal Mohan announced that the platform’s short-form video format now generates 200 billion daily views during a keynote address at the Cannes Lions International Festival of Creativity.

The milestone comes as the Google-owned platform marks its 20th anniversary and faces increasing competition from TikTok and Instagram Reels in the short-form video space.

Key Platform Metrics Revealed

During his presentation, Mohan shared several performance metrics:

  • YouTube Shorts averages 200 billion daily views
  • One billion viewers watch podcasts monthly on the platform
  • Over one billion hours of YouTube content are viewed on television screens daily
  • TV screens are now the primary viewing method for more than half of the top 100 YouTube channels

The television viewing data represents an evolution from YouTube’s mobile-first origins, with creators increasingly producing content formatted for larger screens.

New AI Features Coming to Platform

YouTube will introduce Veo 3, Google DeepMind’s video generation model, to Shorts creators later this summer.

The tool enables users to create AI-generated backgrounds and video clips, building upon the existing Dream Screen feature.

The company also reported that its Auto Dubbing feature has processed over 20 million videos since launch six months ago. The tool currently translates content across nine languages, with 11 additional languages planned.

Industry Context

YouTube’s announcements come as the platform competes for creator attention and viewer time with TikTok, which popularized the short-form video format, and Meta’s Instagram Reels.

The emphasis on television viewing and longer-form content may represent an attempt to differentiate from mobile-first competitors.

While YouTube leads in platform breadth and viewing hours, TikTok still holds a cultural edge in mobile-native short video. YouTube’s push toward TV-based viewing and AI creation tools may help retain creator loyalty and expand monetization opportunities across formats.

The AI tools announcement follows similar features from competitors, including TikTok’s AI effects and Instagram’s creative tools.

Looking Ahead

The shift toward television viewing and serialized content marks a departure from YouTube’s roots as a platform for amateur video uploads.

As YouTube enters its third decade, the platform’s strategy appears focused on supporting professional content creation while expanding its technological capabilities through AI integration.


Featured Image: Screenshot from: blog.youtube/news-and-events/neal-mohan-cannes-2025/, June 2025. 

Yoast SEO Plugin Bug Injects Hidden AI HTML Classes via @sejournal, @martinibuster

Yoast SEO rushed out an update to fix a bug that introduced a known fingerprint of AI-generated content. The bug was highlighted on social media, and Yoast corrected the error within hours.

HTML Classes Injected By AI

It’s recently become known that highlighting and copying content generated by ChatGPT and then pasting it directly into the WordPress editor will cause HTML classes to be added to the content code. An HTML “class” is something that’s added to an HTML element, like a paragraph element (<p>), which can then be used to attach a style to it, like specifying a font. This bug only happens when a ChatGPT user highlights generated text, copies it, then pastes it into the WordPress editor. It won’t happen if the user clicks the ChatGPT “copy” icon to copy the generated content.

The HTML classes injected into content are “data-start” and “data-end,” which are only visible within the code, not in the published content.

This is what the AI-generated content looks like in the HTML code (the numeric offsets vary from paste to paste; the values here are illustrative):

<p data-start="0" data-end="148">“He thought no one would notice—
the quiet hum of the AI
churning out words
like it knew something.
Google noticed.
Now he shelves canned beans at Safeway.”</p>

This is what the content would look like in the visible version:

“He thought no one would notice—
the quiet hum of the AI
churning out words
like it knew something.
Google noticed.
Now he shelves canned beans at Safeway.”

The “data-start” and “data-end” classes are the telltale clues that the content was generated by AI. Savvy SEOs are using that knowledge as part of their SEO audits to identify AI-generated content that was copied and pasted directly into the WordPress editor.
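
For instance, an audit script can scan rendered pages for those attributes. Below is a minimal sketch using requests and BeautifulSoup; the URL is a placeholder:

```python
# Flag elements carrying the "data-start"/"data-end" attributes that
# ChatGPT copy-paste leaves behind (the URL below is a placeholder).
import requests
from bs4 import BeautifulSoup

html = requests.get("https://example.com/sample-post/").text
soup = BeautifulSoup(html, "html.parser")

flagged = soup.find_all(attrs={"data-start": True, "data-end": True})
for el in flagged:
    print(el.name, el.get("data-start"), el.get("data-end"))
```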

Yoast SEO Premium Injects AI Classes

Alan Bleiweiss, known for content audits, called attention to the fact that Yoast SEO was injecting the “data-start” and “data-end” HTML classes into content. Alan called them “wrappers” but they’re technically HTML classes.

He posted:

“UPDATE

Yoast Plug-in pushed live without proper QA. Injecting AI wrappers without site owner permission.

Fortunately, according to Carolyn Shelby they’re working on a fix.

But tool providers need to do better.”

Alan indicated that no clarification was given as to how those classes were injected, but the bug was limited to Yoast SEO Premium because the free version does not contain the necessary AI text generation feature (Yoast AI Optimize).

Yoast Pushes Update To Fix Bug

Yoast swiftly pushed an update, version 25.3.1, to fix the issue so that AI-generated content created by Yoast SEO Premium does not contain the classes. Happily, the update also cleans the telltale HTML classes out of content that was already affected.

According to the Yoast SEO blog post announcement:

“Recently, we announced the rollout of Yoast AI Optimize for the Classic Editor in WordPress. …During the initial rollout, we discovered a technical issue where unintended classes were being added to content for some users. While these added classes are harmless and do not impact the functionality or appearance of your content, they should not have been added, that’s on us.

We take this seriously, and to maintain the quality you expect, we’ve been actively working on a solution. We’re pleased to share that a fix has now been released, and the issue has been resolved. For users already affected, we are automatically cleaning up the unintended classes as part of the fix, no action is needed on your part.”

The functionality was rolled out on June 2nd, which means affected content had been live for at most two weeks.

The free version of the plugin has also been updated. The changelog offers this explanation:

“This is a maintenance release which is required to align with changes to Yoast SEO Premium 25.3.1.”

Can This Have Impacted Rankings?

It’s probably unlikely that this bug affected rankings, though at this point it’s unknown whether Google would have noticed. Google would have to specifically look for those classes, which in themselves do not indicate anything about content quality.

Nevertheless, users of Yoast SEO Premium should update immediately to version 25.3.1 to fix any potential issues from this bug, and users of the free version should update as well, even though it isn’t affected.

Featured Image by Shutterstock/Jihan Nafiaa Zahri

Update on Yoast AI Optimize for Classic Editor 

Recently, we announced the rollout of Yoast AI Optimize for the Classic Editor in WordPress. We’re excited to see the positive response and engagement with this new feature. 

During the initial rollout, we discovered a technical issue where unintended classes were being added to content for some users. While these added classes are harmless and do not impact the functionality or appearance of your content, they should not have been added, that’s on us.

We take this seriously, and to maintain the quality you expect, we’ve been actively working on a solution. We’re pleased to share that a fix has now been released, and the issue has been resolved. For users already affected, we are automatically cleaning up the unintended classes as part of the fix, no action is needed on your part.

As part of our commitment to delivering reliable and thoughtful features, we have also temporarily disabled the AI Optimize feature for the Classic Editor. This will give us time to reconsider our approach and develop an improved solution. We’ll keep you updated and reintroduce the feature as soon as we’re confident in its performance.

Please update your Yoast SEO plugins to the latest version where available. Should you have any concerns, feel free to contact our support team.

Thank you for your understanding and continued support.

Google Launches ‘Search Live’ Real-Time Voice Search In AI Mode via @sejournal, @MattGSouthern

Google is rolling out Search Live with voice features through its AI Mode Labs experiment.

You can now have natural, spoken conversations with Search while receiving web links in real time.

The feature was previewed at Google I/O and is available today for U.S. users.

How Search Live Voice Works

You can access the feature by opening the Google app on Android or iOS.

Tap the new “Live” icon under the search bar.

Once started, you can ask questions out loud and get AI-generated audio responses. Google says it uses a custom version of Gemini with advanced voice features.

The system remembers what you talked about before, which lets you ask follow-up questions naturally. For example, you could ask about preventing wrinkles in linen clothing while packing. Then you could ask what to do if wrinkles still happen.

Key Features & Functionality

Search Live keeps working even when you switch to other apps. Your conversations continue while you check email, browse social media, or do other things on your phone.

A “transcript” button shows you text versions of the audio responses. This means you can switch between talking and typing in the same conversation.

The feature also saves your conversation history. You can go back to previous Search Live sessions through your AI Mode history.

Web links show up on your screen alongside voice responses. This gives you quick access to source content if you want to dig deeper.

Technology & Implementation

Google’s custom Gemini model for Search Live builds on the company’s existing search systems.

The setup uses what Google calls a “query fan-out technique” to find diverse web content. This aims to give you different sources and viewpoints during your search sessions.

Google plans to add more Search Live features in the coming months. This includes camera integration for real-time visual queries.

Visual search was also previewed at I/O. It would let you show Search what you’re seeing while talking about objects, locations, or situations around you.

Why This Matters

Voice-driven conversational search could be a big shift in how people use search engines.

Google’s continued focus on natural language queries means optimization must go beyond traditional keyword targeting.

Web links still appear with AI voice responses. Marketers should test it out and consider how their content appears in conversational situations. This matters more as people ask follow-up questions and explore topics through natural dialogue.

This change may also affect how we understand search intent. Conversational queries often show more detailed needs than regular typed searches.

Getting Started

To use Search Live, you must join the AI Mode experiment through Google Labs.

Once signed up, the Live icon appears right away in the Google app.

Agentic AI In SEO: AI Agents & Workflows For Audit (Part 2) via @sejournal, @VincentTerrasi

Building on our previous exploration of Agentic SEO’s ideation capabilities, this article takes a closer look at the second pillar: Audit.

As promised, we’ll look at how AI agents can transform the SEO audit process by providing corrections and thorough analysis that would otherwise take hundreds of hours of manual work.

Traditional SEO audits are often time-consuming, involving multiple tools and manual reviews.

With Agentic SEO, however, this process can be streamlined through autonomous AI agents that identify problems and recommend and implement solutions in real time.

AI Agents For Advanced Site Analysis

Full Website Analysis With Real-Time Corrections

Agentic SEO transforms the review process by:

  1. Comprehensive crawling: AI agents can systematically analyze entire websites, including hidden pages and dynamic content that traditional crawlers might overlook.
  2. Intelligent pattern recognition: Unlike rule-based tools, AI agents can detect patterns and anomalies that may indicate deeper structural issues across your site.
  3. Real-time remediation: As well as identifying problems, the agents can generate code fixes, content improvements, and structural adjustments that can be implemented immediately.
Image from author, May 2025

Example: Firecrawl Demo

With advanced AI crawling, Firecrawl can meticulously analyze HTML structures, extract microformats, and provide detailed performance metrics, revealing critical areas that need optimization and might otherwise be missed.

Image from author, May 2025

Example: Similar to tools like Cursor integrated with GitHub, Agentic SEO enables immediate application of code fixes.

When an issue is identified, the agent directly suggests optimized code changes, allowing seamless implementation through direct integration with your repository, ensuring rapid and error-free remediation.

I’m confident that OpenAI’s Codex and Google’s Jules will be equally effective for these tasks.

Image from author, May 2025

Workflow Architecture For Effective Auditing

Similar to our idea workflows, audit workflows consist of specialized components.

Image from author, May 2025

The audit workflow typically includes the following components (a minimal sketch in code follows the list):

  • Data collection agents: These gather information from your site, competitor sites, and search engine results.
  • Analysis agents: These specialize in identifying technical issues, content gaps, and optimization opportunities.
  • Recommendation agents: These prioritize issues and suggest specific solutions based on potential impact.
  • Implementation agents: These generate corrected code, optimized content, or step-by-step implementation guides directly.
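
As promised, here is a minimal, framework-free sketch of how those components might chain together. Each “agent” is a plain Python function, and the checks and impact scores are illustrative placeholders rather than any particular product’s API:

```python
# Minimal audit pipeline sketch: each "agent" is a plain function, and
# the checks are deliberately simple placeholders.
import requests
from bs4 import BeautifulSoup

def data_collection_agent(url: str) -> BeautifulSoup:
    """Fetch and parse a page."""
    return BeautifulSoup(requests.get(url).text, "html.parser")

def analysis_agent(page: BeautifulSoup) -> list[dict]:
    """Flag basic technical issues."""
    issues = []
    if not page.title or not page.title.string:
        issues.append({"issue": "missing title tag", "impact": 3})
    if not page.find("meta", attrs={"name": "description"}):
        issues.append({"issue": "missing meta description", "impact": 2})
    return issues

def recommendation_agent(issues: list[dict]) -> list[dict]:
    """Prioritize issues by estimated impact."""
    return sorted(issues, key=lambda i: i["impact"], reverse=True)

issues = analysis_agent(data_collection_agent("https://example.com/"))
for item in recommendation_agent(issues):
    print(item)
```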

Practical Use Cases

Technical SEO Auditing

AI agents excel at identifying technical issues that are often overlooked:

Image from author, May 2025

The agent doesn’t just flag the problem. It provides contextual recommendations and implementation guidance.

Content Gap Analysis

Beyond traditional auditing, AI agents can identify content gaps by:

  1. Analyzing competitive content structures.
  2. Identifying SERP features you’re missing.
  3. Discovering semantic relationships between existing content.
  4. Suggesting opportunities for content consolidation or expansion.
Image from author, May 2025
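
A simple building block for this kind of gap analysis is comparing heading structures between your page and a competitor’s. A minimal sketch (both URLs are placeholders):

```python
# Compare H2 headings between two pages to surface subtopics one covers
# and the other doesn't (URLs are placeholders).
import requests
from bs4 import BeautifulSoup

def headings(url: str) -> set[str]:
    soup = BeautifulSoup(requests.get(url).text, "html.parser")
    return {h.get_text(strip=True).lower() for h in soup.find_all("h2")}

gaps = headings("https://competitor.example/guide/") - headings("https://example.com/guide/")
print("Subtopics the competitor covers that you don't:", gaps)
```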

Internal Linking Optimization

One of the most powerful applications is internal linking analysis:

Image from author, May 2025
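
One common approach is to model the site’s internal links as a directed graph and score pages by their internal authority. A minimal sketch with networkx; the edge list is illustrative and would come from a crawl in practice:

```python
# Model internal links as a directed graph and use PageRank to spot
# important pages that receive few internal links (edges are illustrative).
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([
    ("/home", "/products"),
    ("/home", "/blog"),
    ("/blog", "/products"),
    ("/products", "/product-a"),
])
G.add_node("/product-b")  # receives no internal links: an orphan candidate

scores = nx.pagerank(G)
for page, score in sorted(scores.items(), key=lambda kv: kv[1]):
    print(f"{page}: {score:.3f}")  # lowest-authority pages print first
```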

How To Build Your Audit Agent

Creating an effective audit agent requires:

  1. A specialized knowledge base: Provide the agent with SEO best practices, Google guidelines, and industry-specific benchmarks.
  2. Tool integration: Connect the agent to existing tools such as Screaming Frog, Moz, and Semrush, or custom APIs for comprehensive data collection.
  3. Human-in-the-loop checkpoints: Despite automation, human expertise is still needed to validate critical recommendations.

Case Study: Ecommerce Site Optimization

In less than 30 minutes, our Agentic SEO Audit System identified 347 critical technical issues for a mid-sized ecommerce site with 15,000 product pages.

  • It generated optimized title tags and meta descriptions for underperforming pages.
  • It discovered and mapped content gaps in product categories.
  • It created a comprehensive action plan based on revenue impact.

Implementing these recommendations resulted in a 32% increase in organic traffic within 60 days.

Current Challenges And Limitations

Although powerful, Agentic SEO auditing does have its challenges.

  1. Tool integration complexity: Connecting agents to all the necessary data sources requires technical expertise. For instance, setting up MCP (Model Context Protocol) servers can be a challenging task.
  2. Evolving standards: Agents require regular updates to keep pace with changes in search engine algorithms.

Tools to Build Your Own SEO Audit Agent

Here are some practical tools to help you get started:

  • Open-Source Workflow Automation – n8n is a powerful, open-source automation tool that allows you to create complex workflows without coding. It’s ideal for orchestrating SEO tasks like crawling, data extraction, and reporting.
  • Python Framework for Multi-Agent Systems – CrewAI enables the development of multi-agent systems in Python, allowing specialized agents to collaborate on tasks such as data collection, analysis, and implementation.
  • Agentic AI Platform – DNG.ai (Draft & Goal) is a no-code platform designed to automate complex SEO workflows using specialized AI agents. It offers features like:
    • Agentic Workflows: Automate tasks such as keyword optimization, content creation, and data analysis.
    • Multi-Agent Collaboration: Coordinate multiple agents to handle large-scale projects efficiently.
    • Integration with Over 20 Marketing Tools: Seamlessly connect with tools like Google Search Console, Google Ads, Google Analytics, and more.


Summary: Agentic SEO Is A Fundamental Shift

Agentic SEO’s audit capabilities represent a fundamental shift in how we approach technical optimization.

By combining AI’s pattern recognition abilities with the strategic insight of human experts, we can create audit systems that are more comprehensive and actionable than traditional approaches.

In our next article, we’ll explore the final pillar of Agentic SEO: Generation. We will examine how AI agents can generate missing content, optimize existing assets, and scale content production while maintaining quality and relevance through the “SEO Expert in the Loop” approach.

Stay tuned, and experiment with these techniques to transform your SEO workflow!



Featured Image: Deemerwha studio/Shutterstock